7.0 - 10.0 years
12 - 22 Lacs
Bengaluru
Remote
Role & responsibilities:
- Lead the design and development of new Data & Analytics solutions in various parts of the product
- Modernize existing system components by re-designing them according to the new architecture paradigm
- Integrate developed solutions with other components of the system
- Work closely with the Architecture team to influence the choice of new technologies
- Ensure the non-functional requirements (e.g., stability, scalability, and performance) of services and applications are met
- Mentor and guide other team members in best practices and technical decisions
- Be part of a scrum team, collaborating with other cross-functional teams to identify and solve complex technical problems
- Understand the business domain, customers' needs, and business use cases

Preferred candidate profile:
- 7+ years of overall working experience in Data Engineering
- Strong knowledge of Python, Apache Spark, and Airflow, with a minimum of 3 years of practical experience
- Solid understanding of SQL
- Strong experience with relational and non-relational databases
- Solid understanding of data models and data pipeline design (batch and real-time)
- Strong experience developing solutions in a cloud environment
- Strong understanding of DevOps methodologies
- Working experience in an Agile environment
- A natural interest in modern data processing technologies and the ability to learn new things
- Passion for getting into the development process quickly and delivering good-quality code
- English level B2 or higher
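The batch side of the pipeline-design skills listed above can be illustrated with a minimal extract-transform-load step. This is a plain-Python sketch, not part of the posting: in practice each stage would be a Spark job scheduled by an Airflow DAG, and every record field and function name here is invented for illustration.

```python
# Minimal batch ETL step: extract rows, transform and validate them, then load.
# A pure-Python stand-in for what a Spark job in an Airflow pipeline would do.
from datetime import date

def extract():
    # Stand-in for reading one partition of source data (e.g., from a data lake).
    return [
        {"order_id": 1, "amount": "120.50", "day": "2024-05-01"},
        {"order_id": 2, "amount": "80.00", "day": "2024-05-01"},
    ]

def transform(rows):
    # Cast types and derive fields; malformed rows are filtered, not loaded.
    out = []
    for r in rows:
        try:
            out.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "day": date.fromisoformat(r["day"]),
            })
        except (ValueError, KeyError):
            continue  # a real pipeline would route these to a dead-letter store
    return out

def load(rows, sink):
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract()), sink)
print(loaded)  # number of rows that passed validation
```

The same extract/transform/load boundaries are what an orchestrator schedules and retries independently, which is why batch pipelines are usually decomposed this way.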
Posted 2 weeks ago
4.0 - 6.0 years
9 - 11 Lacs
Hyderabad
Remote
Role: Data Engineer (Azure, Snowflake) - Mid-Level
Duration: 6+ Months
Location: Remote
Working Hours: 12:30pm - 9:30pm IST (3am - 12pm EST)

Job Summary: We are looking for a Data Engineer with solid hands-on experience in Azure-based data pipelines and Snowflake to help build and scale data ingestion, transformation, and integration processes in a cloud-native environment.

Key Responsibilities:
- Develop and maintain data pipelines using ADF, Snowflake, and Azure Storage
- Perform data integration from various sources including APIs, flat files, and databases
- Write clean, optimized SQL and support data modeling efforts in Snowflake
- Monitor and troubleshoot pipeline issues and data quality concerns
- Contribute to documentation and promote best practices across the team

Qualifications:
- 3-5 years of experience in data engineering or a related role
- Strong hands-on knowledge of Snowflake, Azure Data Factory, SQL, and Azure Data Lake
- Proficient in scripting (Python preferred) for data manipulation and automation
- Understanding of data warehousing concepts and ETL/ELT patterns
- Experience with Git, JIRA, and agile delivery environments is a plus
- Strong attention to detail and eagerness to learn in a collaborative team setting
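A core warehouse pattern behind the responsibilities above is landing raw extracts in a staging table and then merging them into a target table in-warehouse. The sketch below is illustrative only: it uses sqlite3 as a stand-in for Snowflake (which would express the same step as a MERGE INTO statement), and the table and column names are hypothetical.

```python
# Staging-then-upsert ("merge") pattern for loading API or flat-file extracts.
# sqlite3 stands in for the warehouse; Snowflake would use MERGE INTO.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, updated_at TEXT)")
con.execute("CREATE TABLE stg_customers (id INTEGER, email TEXT, updated_at TEXT)")

# The raw extract lands in a staging table first (the "EL" of ELT)...
con.executemany("INSERT INTO stg_customers VALUES (?, ?, ?)", [
    (1, "a@example.com", "2024-05-01"),
    (2, "b@example.com", "2024-05-02"),
])
con.execute("INSERT INTO customers VALUES (1, 'old@example.com', '2024-01-01')")

# ...then the transformation runs in-warehouse: upsert staging into the target.
# ("WHERE true" disambiguates the upsert grammar when the source is a SELECT.)
con.execute("""
    INSERT INTO customers (id, email, updated_at)
    SELECT id, email, updated_at FROM stg_customers WHERE true
    ON CONFLICT(id) DO UPDATE SET
        email = excluded.email,
        updated_at = excluded.updated_at
""")

rows = con.execute("SELECT id, email FROM customers ORDER BY id").fetchall()
print(rows)
```

Keeping staging and target separate is what makes loads idempotent: re-running the merge after a failed pipeline run updates rather than duplicates.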
Posted 2 weeks ago
5.0 - 10.0 years
12 - 18 Lacs
Hyderabad
Remote
Role: Senior Data Engineer (Azure/Snowflake)
Duration: 6+ Months
Location: Remote
Working Hours: 12:30pm - 9:30pm IST (3am - 12pm EST)

Job Summary: We are seeking a Senior Data Engineer with advanced hands-on experience in Snowflake and Azure to support the development and optimization of enterprise-grade data pipelines. This role is ideal for someone who enjoys deep technical work and solving complex data engineering challenges in a modern cloud environment.

Key Responsibilities:
- Build and enhance scalable data pipelines using Azure Data Factory, Snowflake, and Azure Data Lake
- Develop and maintain ELT processes to ingest and transform data from various structured and semi-structured sources
- Write optimized and reusable SQL for complex data transformations in Snowflake
- Collaborate closely with analytics teams to ensure clean, reliable data delivery
- Monitor and troubleshoot pipeline performance, data quality, and reliability
- Participate in code reviews and contribute to best practices around data engineering standards and governance

Qualifications:
- 5+ years of data engineering experience in enterprise environments
- Deep hands-on experience with Snowflake, Azure Data Factory, Azure Blob/Data Lake, and SQL
- Proficient in scripting for data workflows (Python or similar)
- Strong grasp of data warehousing concepts and ELT development best practices
- Experience with version control tools (e.g., Git) and CI/CD processes for data pipelines
- Detail-oriented with strong problem-solving skills and the ability to work independently
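Ingesting "semi-structured sources", as the posting puts it, usually means exploding nested JSON into flat rows; Snowflake does this with LATERAL FLATTEN over a VARIANT column. Below is a hedged plain-Python equivalent of that step, with invented field names.

```python
# Flatten one semi-structured order document into one row per line item,
# analogous to Snowflake's LATERAL FLATTEN over a VARIANT column.
import json

raw = json.dumps({
    "order_id": 7,
    "items": [
        {"sku": "A1", "qty": 2},
        {"sku": "B9", "qty": 1},
    ],
})

def flatten_order(doc: str):
    """Explode an order document into one flat record per line item."""
    order = json.loads(doc)
    return [
        {"order_id": order["order_id"], "sku": item["sku"], "qty": item["qty"]}
        for item in order["items"]
    ]

rows = flatten_order(raw)
print(rows)  # parent key repeated onto each child row
```

The point of the transformation is the same in either system: the parent key is repeated onto each child row so the result can be joined and aggregated with ordinary SQL.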
Posted 2 weeks ago
5.0 - 8.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

Let’s do this. Let’s change the world. In this vital role you will, as an expert IS Architect, lead the design and implementation of integration frameworks for pharmacovigilance (PV) systems, spanning both SaaS and internally hosted systems. This role focuses on building secure, compliant, and scalable architectures to ensure seamless data flow between safety databases, external systems, and analytics platforms, without direct access to backend databases. The ideal candidate will work closely with PV system collaborators, SaaS vendors, and internal IT teams to deliver robust and efficient solutions.

Roles & Responsibilities:
- Design hybrid integration architectures to manage data flows between SaaS-based PV systems and internally hosted systems and platforms.
- Implement middleware solutions to bridge on-premise and cloud environments, applying an Application Programming Interface (API)-first integration design pattern and establishing secure data exchange mechanisms to ensure data consistency and compliance.
- Work with SaaS providers and internal IT teams to define the integration approach for Extract Transform Load (ETL), event-driven architecture, and batch processing.
- Design and maintain end-to-end data flow diagrams and blueprints that consider the unique challenges of hybrid environments.
- Define and enforce data governance frameworks to maintain data quality, integrity, and traceability across integrated systems.
- Lead all aspects of data lifecycle management for both cloud and internally hosted systems to ensure consistency and compliance.
- Act as the main point of contact between pharmacovigilance teams, SaaS vendors, internal IT staff, and other parties to align technical solutions with business goals.
- Ensure alignment with the delivery and platform teams so that applications follow Amgen’s approved architectural and development guidelines as well as data/software standards.
- Collaborate with analytics teams to ensure timely access to PV data for signal detection, trending, and regulatory reporting.
- Continuously evaluate and improve integration frameworks to adapt to evolving PV requirements, data volumes, and business needs.
- Provide technical guidance and mentorship to junior developers.

Basic Qualifications:
- Master’s degree with 4 to 6 years of experience in Computer Science, software development or a related field OR
- Bachelor’s degree with 6 to 8 years of experience in Computer Science, software development or a related field OR
- Diploma with 10 to 12 years of experience in Computer Science, software development or a related field

Must-Have Skills:
- Demonstrable experience architecting data pipelines and/or integrations across a technology landscape (SaaS, data lake, internally hosted systems)
- Experience with Application Programming Interface (API) integrations such as MuleSoft, and with Extract Transform Load (ETL) tools such as the Informatica platform, Snowflake, or Databricks.
- Strong problem-solving skills, particularly in hybrid system integrations.
- Superb communication and collaboration skills, with the ability to explain technical concepts to non-technical clients
- Ability to balance technical solutions with business priorities and compliance needs.
- Passion for using technology to improve pharmacovigilance and patient safety.
- Experience with data transfer processes and troubleshooting stuck or delayed data files.
- Knowledge of testing methodologies and quality assurance standard processes.
- Proficiency in working with data analysis and QA tools.
- Understanding of data flows related to regulations such as GDPR and HIPAA.
- Experience with SQL/NoSQL databases, database programming languages, and data modeling concepts.

Good-to-Have Skills:
- Knowledge of the SDLC, including requirements, design, testing, data analysis, and change control
- Knowledge of reporting tools (e.g. Tableau, Power BI)

Professional Certifications:
- SAFe for Architect certification (preferred)

Soft Skills:
- Excellent analytical skills for working through ambiguous scenarios
- Excellent leadership and progressive thinking abilities
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to balance multiple priorities
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
- Ability to influence and drive toward an intended outcome
- Ability to hold team members accountable to commitments

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 2 weeks ago
2.0 - 5.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Data Platform Engineer

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Roles & Responsibilities:
- Work as a member of a Data Platform Engineering team that uses Cloud and Big Data technologies to design, develop, implement and maintain solutions supporting functional areas such as Manufacturing, Commercial, and Research and Development.
- Work closely with the Enterprise Data Lake delivery and platform teams to ensure that applications are aligned with the overall architectural and development guidelines
- Research and evaluate technical solutions, including Databricks and AWS services, NoSQL databases, Data Science packages, platforms and tools, with a focus on enterprise deployment capabilities such as security, scalability, reliability, maintainability, and cost management
- Assist in building and managing relationships with internal and external business stakeholders
- Develop a basic understanding of core business problems and identify opportunities to use advanced analytics
- Assist in reviewing 3rd-party providers for new feature/function/technical fit with EEA's data management needs.
- Work closely with the Enterprise Data Lake ecosystem leads to identify and evaluate emerging providers of data management and processing components that could be incorporated into the data platform.
- Work with platform stakeholders to ensure effective cost observability and control mechanisms are in place for all aspects of data platform management.
- Develop in an Agile environment, and be comfortable with Agile terminology and ceremonies.
- Be keen to adopt new responsibilities, face challenges, and master new technologies

What we expect of you

Basic Qualifications and Experience:
- Master’s degree in a computer science or engineering field and 1 to 3 years of relevant experience OR
- Bachelor’s degree in a computer science or engineering field and 3 to 5 years of relevant experience OR
- Diploma and a minimum of 8 years of relevant work experience

Must-Have Skills:
- Experience with Databricks (or Snowflake), including cluster setup, execution, and tuning
- Experience with common data processing libraries: Pandas, PySpark, SQLAlchemy
- Experience with UI frameworks (Angular.js or React.js)
- Experience with data lake, data fabric and data mesh concepts
- Experience with data modeling and performance tuning on relational databases
- Experience building ETL or ELT pipelines; hands-on experience with SQL/NoSQL
- Programming skills in one or more languages: SQL, Python, Java
- Experience with software engineering best practices, including but not limited to version control (Git, GitLab), CI/CD (GitLab, Jenkins etc.), automated unit testing, and DevOps
- Exposure to Jira or Jira Align

Good-to-Have Skills:
- Knowledge of R will be considered an advantage
- Experience with cloud technologies (AWS preferred)
- Cloud certifications: AWS, Databricks, Microsoft
- Familiarity with the use of AI for development productivity, such as GitHub Copilot, Databricks Assistant, Amazon Q Developer or equivalent
- Knowledge of Agile and DevOps practices
- Skills in disaster recovery planning
- Familiarity with load testing tools (JMeter, Gatling)
- Basic understanding of AI/ML for monitoring
- Knowledge of distributed systems and microservices
- Data visualization skills (Tableau, Power BI)
- Strong communication and leadership skills
- Understanding of compliance and auditing requirements
Soft Skills:
- Excellent analytical and problem-solving skills
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong time and task management skills to estimate and meet project timelines, with the ability to bring consistency and quality assurance across projects

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination
Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 weeks ago
2.0 - 6.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Site Reliability Engineer

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Roles & Responsibilities:
- Ensure high system reliability and uptime
- Develop and maintain monitoring systems
- Lead incident response and root cause analysis
- Automate repetitive tasks for efficiency
- Perform capacity planning and resource scaling
- Lead infrastructure-as-code and orchestration efforts (e.g., Terraform, Kubernetes)
- Collaborate with development and operations teams
- Maintain clear documentation and share knowledge
- Optimize system and application performance
- Ensure security and compliance standards are met
- Define, measure, and monitor Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to align with business goals
- Drive continuous process and system improvements
- Define guidelines, standards, strategies, security policies and organizational change policies to support the Data Lake

What we expect of you

Basic Qualifications and Experience:
- Master’s degree in a computer science or engineering field and 1 to 3 years of relevant experience OR
- Bachelor’s degree in a computer science or engineering field and 3 to 5 years of relevant experience OR
- Diploma and a minimum of 8 years of relevant work experience

Must-Have Skills:
- Proficiency in programming/scripting (Python, Java)
- Experience in Linux/Unix system administration
- Experience with cloud platforms (AWS, Databricks, Azure, Snowflake)
- Proficiency in containerization and orchestration (Docker, Kubernetes)
- Knowledge of Infrastructure as Code (Terraform, Ansible)
- Familiarity with monitoring and logging tools (Prometheus, Grafana)
- Understanding of CI/CD pipelines (Jenkins, GitLab CI/CD)
- Strong networking knowledge and troubleshooting skills
- Understanding of security principles and compliance
- Familiarity with database management (SQL and NoSQL)
- Strong troubleshooting and debugging skills
- Experience in performance optimization
- Experience with backup and storage solutions

Good-to-Have Skills:
- Familiarity with the use of AI for development productivity, such as GitHub Copilot, Databricks Assistant, Amazon Q Developer or equivalent
- Knowledge of Agile and DevOps practices
- Skills in disaster recovery planning
- Familiarity with load testing tools (JMeter, Gatling)
- Basic understanding of AI/ML for monitoring
- Knowledge of distributed systems and microservices
- Data visualization skills (Tableau, Power BI)
- Strong communication and leadership skills
- Understanding of compliance and auditing requirements

Soft Skills:
- Excellent analytical and problem-solving skills
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to handle multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong time and task management skills to estimate and meet project timelines, with the ability to bring consistency and quality assurance across projects

Apply now for a career that defies imagination
Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
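The SLO/SLA responsibilities in the posting above rest on simple error-budget arithmetic. A minimal sketch follows; the 99.9% target and 30-day window are illustrative numbers, not figures from this posting.

```python
# Error-budget arithmetic behind SLO monitoring: how much downtime a target
# allows, and how much of that allowance has been spent.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (minutes) for a given availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime;
# 21.6 minutes of incidents would leave half the budget.
print(round(error_budget_minutes(0.999), 1))
print(round(budget_remaining(0.999, 21.6), 2))
```

Teams typically alert on the burn rate of this remaining fraction rather than on raw uptime, which is what ties the monitoring work above to business goals.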
Posted 2 weeks ago
3.0 - 7.0 years
4 - 7 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
- Master’s degree and 4 to 6 years of Computer Science, IT or related field experience OR
- Bachelor’s degree and 6 to 8 years of Computer Science, IT or related field experience OR
- Diploma and 10 to 12 years of Computer Science, IT or related field experience

Basic Qualifications:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g. SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 2 weeks ago
1.0 - 4.0 years
2 - 5 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
- Master’s degree and 1 to 3 years of Computer Science, IT or related field experience OR
- Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT or related field experience

Basic Qualifications:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g. SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 2 weeks ago
1.0 - 4.0 years
2 - 5 Lacs
Hyderabad
Work from Office
What you will do

Let’s do this. Let’s change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master’s degree and 1 to 3 years of Computer Science, IT or related field experience OR
- Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT or related field experience
Preferred Qualifications:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), AWS, Redshift, Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Proven ability to optimize query performance on big data platforms
Good-to-Have Skills:
- Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores)
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, cloud data platforms
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Professional Certifications: AWS Certified Data Engineer preferred; Databricks Certification preferred
Soft Skills: Excellent critical-thinking and problem-solving skills; strong communication and collaboration skills; demonstrated ability to work in a team setting
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
Posted 2 weeks ago
0.0 - 2.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do
Let’s do this. Let’s change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Bachelor’s degree and 0 to 3 years of Computer Science, IT or related field experience OR
- Diploma and 4 to 7 years of Computer Science, IT or related field experience
Preferred Qualifications:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), AWS, Redshift, Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Good-to-Have Skills:
- Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores)
- Understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, cloud data platforms
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Professional Certifications: AWS Certified Data Engineer preferred; Databricks Certification preferred
Soft Skills: Excellent critical-thinking and problem-solving skills; strong communication and collaboration skills; demonstrated ability to work in a team setting
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
Posted 2 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
ETL Data Engineer - Tech Lead
Bangalore, India | Information Technology | 16748
Overview
We are seeking a skilled and experienced Data Engineer to play a vital role in supporting data discovery, creating design documents, data ingestion/migration, creating data pipelines, creating data marts, and managing and monitoring data using the tech stack Azure, SQL, Python, PySpark, Airflow, and Snowflake.
Responsibilities
1. Data Discovery: Collaborate with source teams, gather complete details of data sources, and create a design diagram.
2. Data Ingestion/Migration: Collaborate with cross-functional teams to ingest/migrate data from various sources to the staging area. Develop and implement efficient data migration strategies, ensuring data integrity and security throughout the process.
3. Data Pipeline Development: Design, develop, and maintain robust data pipelines that extract, transform, and load (ETL) data from different sources into the target platform. Implement data quality checks and ensure scalability, reliability, and performance of the pipelines.
4. Data Management: Build and maintain data models and schemas, ensuring optimal storage, organization, and accessibility of data. Collaborate with the requirements team to understand their data requirements and provide solutions by creating data marts to meet their needs.
5. Performance Optimization: Identify and resolve performance bottlenecks within the data pipelines and data services. Optimize queries, job configurations, and data processing techniques to improve overall system efficiency.
6. Data Governance and Security: Implement data governance policies, access controls, and data security measures to ensure compliance with regulatory requirements and protect sensitive data. Monitor and troubleshoot data-related issues, ensuring high availability and reliability of data systems.
7. Documentation and Collaboration: Create comprehensive technical documentation, including data flow diagrams, system architecture, and standard operating procedures. Collaborate with cross-functional teams, analysts, and software engineers to understand their requirements and provide technical expertise.
Requirements
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Data Engineer technical lead, with a focus on delivering outcomes.
- Strong knowledge and hands-on experience with Azure, SQL, Python, PySpark, Airflow, Snowflake, and related tools.
- Proficiency in data processing and pipeline development. Solid understanding of data modeling, database design, and ETL principles.
- Experience with data migration projects, including data extraction, transformation, and loading.
- Familiarity with data governance, security, and compliance practices.
- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Excellent communication and interpersonal skills, with the ability to articulate technical concepts to non-technical stakeholders.
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai, Guindy
Work from Office
Data Modeller
Chennai - Guindy, India | Information Technology | 17074
Overview
A Data Modeller is responsible for designing, implementing, and managing data models that support the strategic and operational needs of an organization. This role involves translating business requirements into data structures, ensuring consistency, accuracy, and efficiency in data storage and retrieval processes.
Responsibilities
- Develop and maintain conceptual, logical, and physical data models.
- Collaborate with business analysts, data architects, and stakeholders to gather data requirements.
- Translate business needs into efficient database designs.
- Optimize and refine existing data models to support analytics and reporting.
- Ensure data models support data governance, quality, and security standards.
- Work closely with database developers and administrators on implementation.
- Document data models, metadata, and data flows.
Requirements
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- Data modeling tools: ER/Studio, ERwin, SQL Developer Data Modeler, or similar.
- Database technologies: proficiency in SQL and familiarity with databases such as Oracle, SQL Server, MySQL, PostgreSQL.
- Data warehousing: experience with dimensional modeling, star and snowflake schemas.
- ETL processes: knowledge of Extract, Transform, Load processes and tools.
- Cloud platforms: familiarity with cloud data services (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
- Metadata management & data governance: understanding of data cataloging and governance principles.
- Strong analytical and problem-solving skills.
- Excellent communication skills to work with business stakeholders and technical teams.
- Ability to document models clearly and explain complex data relationships.
- 5+ years in data modeling, data architecture, or related roles.
- Experience working in Agile or DevOps environments is often preferred.
- Understanding of normalization/denormalization.
- Experience with business intelligence and reporting tools.
- Familiarity with master data management (MDM) principles.
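The dimensional-modeling requirement above (star and snowflake schemas) can be sketched in a few lines. This is an illustrative, library-free Python stand-in, not any specific tool's API: the table names, keys, and rows are invented to show the pattern of a fact table carrying measures plus surrogate keys, with descriptive attributes pushed into dimensions.

```python
# Hypothetical star schema: dimensions keyed by surrogate key, one fact table.
dim_product = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gadget", "category": "Electronics"}}
dim_date = {20240101: {"year": 2024, "month": 1}}

# Fact table: measures plus foreign keys only, no descriptive attributes.
fact_sales = [
    {"date_key": 20240101, "product_key": 1, "amount": 120.0},
    {"date_key": 20240101, "product_key": 2, "amount": 80.0},
]

def sales_by_category(facts, products):
    """Join facts to a dimension attribute and aggregate a measure by it."""
    totals = {}
    for row in facts:
        category = products[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(sales_by_category(fact_sales, dim_product))
```

A snowflake schema would further normalize `category` out into its own table referenced from `dim_product`; the trade-off is fewer repeated attributes versus an extra join at query time.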
Posted 2 weeks ago
2.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We need a proficient Data Engineer with experience monitoring and fixing jobs for data pipelines written in Azure Data Factory and Python.
- Design and implement data models for Snowflake to support analytical solutions.
- Develop ETL processes to integrate data from various sources into Snowflake.
- Optimize data storage and query performance in Snowflake.
- Collaborate with cross-functional teams to gather requirements and deliver scalable data solutions.
- Monitor and maintain Snowflake environments, ensuring optimal performance and data security.
- Create documentation for data architecture, processes, and best practices.
- Provide support and training for teams utilizing Snowflake services.
Roles and Responsibilities
- Strong experience with Snowflake architecture and data warehousing concepts.
- Proficiency in SQL for data querying and manipulation.
- Familiarity with ETL tools such as Talend, Informatica, or Apache NiFi.
- Experience with data modeling techniques and tools.
- Knowledge of cloud platforms, specifically AWS, Azure, or Google Cloud.
- Understanding of data governance and compliance requirements.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration skills to work effectively within a team.
- Experience with Python or Java for data pipeline development is a plus.
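The ETL work described above typically includes a quality gate before rows reach the warehouse. Below is a minimal, self-contained Python sketch of that idea; the field names (`id`, `amount`) and rules are made up for illustration and do not come from the posting.

```python
# Hypothetical row-level data quality gate for an ETL load step.
def validate_rows(rows, required=("id", "amount")):
    """Split rows into (valid, rejected); rejects carry a reason for triage."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required if row.get(f) is None]
        bad_type = not isinstance(row.get("amount"), (int, float))
        if missing or bad_type:
            rejected.append({"row": row, "errors": missing or ["amount type"]})
        else:
            valid.append(row)
    return valid, rejected

rows = [{"id": 1, "amount": 9.5},
        {"id": 2, "amount": None},      # missing measure -> rejected
        {"id": None, "amount": 3}]      # missing key -> rejected
good, bad = validate_rows(rows)
print(len(good), len(bad))  # 1 2
```

In a real pipeline the rejected rows would land in a quarantine table with their error reasons, so failed loads are debuggable rather than silently dropped.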
Posted 2 weeks ago
3.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Data Engineer - Senior Software Engineer
Bangalore, India | Information Technology | 16750
Overview
We are seeking a skilled and experienced Data Engineer to play a vital role in supporting data ingestion/migration, creating data pipelines, creating data marts, and managing and monitoring data using the tech stack Azure, SQL, Python, PySpark, Airflow, and Snowflake.
Responsibilities
1. Data Ingestion/Migration: Collaborate with cross-functional teams to ingest/migrate data from various sources to the staging area. Develop and implement efficient data migration strategies, ensuring data integrity and security throughout the process.
2. Data Pipeline Development: Design, develop, and maintain robust data pipelines that extract, transform, and load (ETL) data from different sources into the target platform. Implement data quality checks and ensure scalability, reliability, and performance of the pipelines.
3. Data Management: Build and maintain data models and schemas, ensuring optimal storage, organization, and accessibility of data. Collaborate with the requirements team to understand their data requirements and provide solutions by creating data marts to meet their needs.
4. Performance Optimization: Identify and resolve performance bottlenecks within the data pipelines and data services. Optimize queries, job configurations, and data processing techniques to improve overall system efficiency.
5. Data Governance and Security: Implement data governance policies, access controls, and data security measures to ensure compliance with regulatory requirements and protect sensitive data. Monitor and troubleshoot data-related issues, ensuring high availability and reliability of data systems.
6. Documentation and Collaboration: Create comprehensive technical documentation, including data flow diagrams, system architecture, and standard operating procedures. Collaborate with cross-functional teams, analysts, and software engineers to understand their requirements and provide technical expertise.
Requirements
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Data Engineer, with a focus on delivering outcomes.
- Knowledge and hands-on experience with Azure, SQL, Python, PySpark, Airflow, Snowflake, and related tools.
- Proficiency in data processing and pipeline development. Solid understanding of data modeling, database design, and ETL principles.
- Experience with data migration projects, including data extraction, transformation, and loading.
- Familiarity with data governance, security, and compliance practices.
- Good communication and interpersonal skills, with the ability to articulate technical concepts to non-technical stakeholders.
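One common pattern behind the "efficient data migration strategies" called for above is watermark-based incremental loading: each run ingests only rows newer than a stored high-water mark. The sketch below is an assumed, tool-agnostic Python illustration (in practice the watermark would be persisted and the target would be a warehouse table, not a list).

```python
# Hypothetical incremental load keyed on an updated_at column.
state = {"watermark": 0}   # would be persisted between runs in a real pipeline
target = []                # stand-in for the warehouse table

def incremental_load(source_rows):
    """Load rows past the watermark, advance it, return rows loaded."""
    new_rows = [r for r in source_rows if r["updated_at"] > state["watermark"]]
    target.extend(new_rows)
    if new_rows:
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return len(new_rows)

source = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}]
print(incremental_load(source))  # 2  (first run loads everything)
print(incremental_load(source))  # 0  (re-run is a no-op: idempotent)
source.append({"id": 3, "updated_at": 30})
print(incremental_load(source))  # 1  (only the newly arrived row)
```

The same shape works whether the extract comes from a relational source or a staging area; the key design choice is picking a monotonically increasing column to watermark on.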
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai, Guindy
Work from Office
Data ELT Engineer
Chennai - Guindy, India | Information Technology | 17075
Overview
We are looking for a highly skilled Data ELT Engineer to architect and implement data solutions that support our enterprise analytics and real-time decision-making capabilities. This role combines data modeling expertise with hands-on experience building and managing ELT pipelines across diverse data sources. You will work with Snowflake, AWS Glue, and Apache Kafka to ingest, transform, and stream both batch and real-time data, ensuring high data quality and performance across systems. If you have a passion for data architecture and scalable engineering, we want to hear from you.
Responsibilities
- Design, build, and maintain scalable ELT pipelines into Snowflake from diverse sources, including relational databases (SQL Server, MySQL, Oracle) and SaaS platforms.
- Utilize AWS Glue for data extraction and transformation, and Kafka for real-time streaming ingestion.
- Model data using dimensional and normalized techniques to support analytics and business intelligence workloads.
- Handle large-scale batch processing jobs and implement real-time streaming solutions.
- Ensure data quality, consistency, and governance across pipelines.
- Collaborate with data analysts, data scientists, and business stakeholders to align models with organizational needs.
- Monitor, troubleshoot, and optimize pipeline performance and reliability.
Requirements
- 5+ years of experience in data engineering and data modeling.
- Strong proficiency with SQL and data modeling techniques (star, snowflake schemas).
- Hands-on experience with the Snowflake data platform.
- Proficiency with AWS Glue (ETL jobs, crawlers, workflows).
- Experience using Apache Kafka for streaming data integration.
- Experience with batch and streaming data processing.
- Familiarity with orchestration tools (e.g., Airflow, Step Functions) is a plus.
- Strong understanding of data governance and best practices in data architecture.
- Excellent problem-solving skills and communication abilities.
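A core idea behind the streaming ingestion described above is micro-batching: a consumer reading events (for example from a Kafka topic) groups them into small batches before writing to the warehouse, trading a little latency for much cheaper loads. The sketch below is a pure-Python stand-in with no broker client; the batch size and event shape are illustrative.

```python
# Illustrative micro-batching over an event stream.
def micro_batches(events, batch_size=3):
    """Yield fixed-size batches from an event stream, flushing the remainder."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch       # in a real pipeline: bulk-load, then commit offsets
            batch = []
    if batch:
        yield batch           # flush the final partial batch

stream = ({"offset": i} for i in range(7))
sizes = [len(b) for b in micro_batches(stream)]
print(sizes)  # [3, 3, 1]
```

In a real Kafka consumer the commit of offsets would happen only after a batch is durably loaded, which is what makes the load restartable without data loss.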
Posted 2 weeks ago
6.0 - 11.0 years
7 - 11 Lacs
Chennai, Guindy
Work from Office
Lead Engineer
Chennai - Guindy, India | Information Technology | 17056
Overview
Responsible for the design, development, and implementation of Snowflake/Oracle/MySQL for the customer. Hands-on experience in ETL Snowflake data pipeline setup. Good understanding of cloud-based architectures such as AWS, GCP, and Azure.
Responsibilities
- Collaborate with clients and stakeholders to gather and understand technical and business requirements.
- Assess various technologies and tools to recommend the most suitable solutions for different projects.
- Assess data warehouse implementation procedures to ensure they comply with internal and external regulations.
- Prepare accurate data warehouse design and architecture reports for management.
- Oversee the migration of data from legacy systems to new solutions.
- Monitor system performance by performing regular tests, troubleshooting, and integrating new features.
- Recommend solutions to improve new and existing data warehouse solutions.
- Ensure solutions are scalable and adaptable for future modifications, considering the organization's goals.
- Understand and document data flows in and between different systems/applications.
- Act as data domain expert for Snowflake in a collaborative environment, providing a demonstrated understanding of data management best practices and patterns.
- Implement cloud-based enterprise data warehouse solutions with multiple data platforms alongside the Snowflake environment to build a data movement strategy.
- Collaborate with project managers and developers to guide development processes in line with solution requirements.
- Lead the development of project requirements for end-to-end data integration using ETL for structured, semi-structured, and unstructured data.
- Make necessary ongoing updates to modeling principles, processes, solutions, and best practices to ensure that Snowflake remains aligned with the business needs of the environment.
- Guide developers in preparing functional/technical specs to define reporting requirements and the ETL process.
- Educate staff members through training and individual support.
- Offer support by responding to system problems in a timely manner.
Requirements
- Bachelor's degree in Computer Science or a related field
- Proven experience as a Snowflake subject matter expert, with at least 6 years of experience in designing, implementing, and maintaining Snowflake data warehousing solutions
- In-depth knowledge of Snowflake architecture, including Snowflake data warehousing, data sharing, and data integration
- Experience with ETL and data integration tools, such as Talend, Informatica, or Matillion
- Strong SQL skills, with the ability to write complex queries and optimize query performance
- Excellent communication and interpersonal skills, with the ability to work effectively in a team environment
- Strong problem-solving and analytical skills, with the ability to troubleshoot and resolve complex technical issues
- Experience with data governance and data security principles
- Snowflake certification is a plus
Posted 2 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Snowflake Data Warehouse.
- Good-to-Have Skills: Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in performance tuning and optimization of data queries.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 2 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
Mumbai
Work from Office
Entity: Accenture Strategy & Consulting
Team: Strategy & Consulting Global Network
Practice: Marketing Analytics
Title: Data Science Manager
Job location: Gurgaon
About S&C - Global Network:
Accenture's Applied Intelligence practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition.
WHAT'S IN IT FOR YOU?
As part of our Analytics practice, you will join a worldwide network of over 20,000 smart and driven colleagues experienced in leading statistical tools, methods and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance. Accenture will continually invest in your learning and growth. You'll work with MMM experts, and Accenture will support you in growing your own tech stack and certifications. In Applied Intelligence you will understand the importance of sound analytical decision-making and the relationship of tasks to the overall project, and execute projects in the context of a business performance improvement initiative.
What you would do in this role
- Work through the phases of the project
- Define data requirements for creating a model and understand the business problem
- Clean, aggregate, analyze, and interpret data, and carry out quality analysis of it
- 5+ years of advanced experience of Market Mix Modeling and related concepts of optimizing promotional channels and budget allocation
- Experience in working with nonlinear optimization techniques
- Proficiency in statistical and probabilistic methods such as SVM, decision trees, bagging and boosting techniques, and clustering
- Hands-on experience in Python data science and math packages such as NumPy, Pandas, Sklearn, Seaborn, Pycaret, and Matplotlib
- Development of AI/ML models
- Develop and manage data pipelines
- Develop and manage data within different layers of Azure/Snowflake
- Awareness of common design patterns for scalable machine learning architectures, as well as tools for deploying and maintaining machine learning models in production
- Knowledge of cloud platforms and their use for pipelining, deploying, and scaling marketing mix models
- Working knowledge of an MMM optimizer and its intricacies
- Awareness of MMM application development and backend engine integration is preferred
- Work along with the team and consultant/manager
- Well versed with creating insights presentations and client-ready decks
- Able to mentor and guide a team of 10-15 people
- Manage client relationships and expectations, and communicate insights and recommendations effectively
- Capability building and thought leadership
Logical Thinking
Able to think analytically, using a systematic and logical approach to analyze data, problems, and situations. Notices discrepancies and inconsistencies in information and materials.
Task Management
Advanced level of task management knowledge and experience. Should be able to plan own tasks, discuss and work on priorities, and track and report progress.
Qualification
Who we are looking for
- 5+ years of work experience in consulting/analytics with a reputed organization is desirable.
- Master's degree in Statistics/Econometrics/Economics, B.Tech/M.Tech, Master's/M.Tech in Computer Science, or M.Phil/Ph.D. in statistics/econometrics or a related field from a reputed college
- Must have knowledge of SQL and Python and at least one cloud-based technology (Azure, AWS, GCP)
- Must have good knowledge of market mix modeling techniques and optimization algorithms and their applicability to industry data
- Must have data migration experience from cloud to Snowflake (Azure, GCP, AWS)
- Managing sets of XML, JSON, and CSV files from disparate sources
- Manage documentation of data models, architecture, and maintenance processes
- Understanding of econometric/statistical modeling and analysis techniques such as regression analysis, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization techniques, and statistical packages such as R, Python, Java, SQL, Spark, etc.
- Working knowledge of machine learning algorithms such as Random Forest, Gradient Boosting, and Neural Networks
- Proficient in Excel, MS Word, PowerPoint, etc.
- Strong client and team management and planning of large-scale projects with risk assessment
Accenture is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.
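Two building blocks of the Market Mix Modeling work described in this posting are an adstock (carryover) transform of media spend and a regression of sales against the transformed spend. The sketch below is a deliberately minimal, dependency-free illustration: the decay rate, spend series, and response coefficient are invented, and a production MMM would use many channels and a richer model.

```python
def adstock(spend, decay=0.5):
    """Geometric carryover: today's effect = spend + decay * yesterday's effect."""
    effect, out = 0.0, []
    for s in spend:
        effect = s + decay * effect
        out.append(effect)
    return out

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (closed form, one regressor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

spend = [100, 0, 0, 50, 0]              # invented weekly media spend
x = adstock(spend)                       # [100.0, 50.0, 25.0, 62.5, 31.25]
y = [10 + 0.2 * xi for xi in x]          # synthetic sales with known response
a, b = fit_line(x, y)
print(round(a, 6), round(b, 6))          # recovers base ~10 and response ~0.2
```

Because the synthetic sales are generated from a known linear response, the fit recovers the baseline and channel coefficient, which is exactly the decomposition (base vs. media-driven sales) that budget-allocation optimizers consume.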
Posted 2 weeks ago
5.0 - 10.0 years
17 - 20 Lacs
Mumbai
Work from Office
Management Level: Ind&Func AI Decision Science Manager
Location: Gurgaon, Bangalore
Must-Have Skills: Market Mix Modeling (MMM) techniques; optimization algorithms for budget allocation and promotional channel optimization; statistical and probabilistic methods (SVM, decision trees); programming languages and tools (Python, NumPy, Pandas, scikit-learn); AI/ML model development and data pipeline management; data management within Snowflake (data layers, migration); cloud platform experience (Azure, AWS, GCP).
Good-to-Have Skills: Experience with nonlinear optimization techniques; experience in data migration (cloud to Snowflake); proficiency in SQL and cloud-based technologies; understanding of econometrics/statistical modeling (regression, time series, multivariate analysis).
Job Summary
We are seeking a skilled Ind & Func AI Decision Science Manager to join the Accenture Strategy & Consulting team in the Global Network Data & AI practice. This role focuses on Market Mix Modeling (MMM): you will develop AI/ML models, optimize promotional channels, manage data pipelines, and scale marketing mix models across cloud platforms. It offers an exciting opportunity to collaborate with leading financial clients and leverage cutting-edge technology to drive business impact and innovation.
Roles & Responsibilities
Engagement Execution
- Lead MMM engagements involving promotional strategy optimization, budget allocation, and marketing analytics solutions.
- Apply advanced statistical techniques and machine learning models to improve marketing effectiveness.
- Collaborate with clients to develop tailored market mix models, delivering data-driven insights that optimize their marketing budgets and strategies.
- Develop Proofs of Concept (PoCs) for clients, covering scoping, staffing, and execution phases.
Practice Enablement
- Mentor and guide analysts, consultants, and managers to build their expertise in Market Mix Modeling and analytics.
- Contribute to the growth of the Analytics practice through knowledge sharing, staffing initiatives, and the development of new methodologies.
- Promote thought leadership in marketing analytics by publishing research and presenting at industry events.
Opportunity Development
- Identify business development opportunities in marketing analytics and develop compelling business cases for potential clients.
- Work closely with deal teams to provide subject matter expertise in MMM, ensuring high-quality client proposals and responses to RFPs.
Client Relationship Development
- Build and maintain strong, trusted relationships with internal and external clients.
- Serve as a consultant to clients, offering strategic insights to optimize marketing spend and performance.
Professional & Technical Skills
- 5+ years of experience in Market Mix Modeling (MMM) and associated optimization techniques.
- Strong knowledge of nonlinear optimization, AI/ML models, and advanced statistical techniques for marketing.
- Proficiency in Python and libraries such as NumPy, Pandas, scikit-learn, Seaborn, PyCaret, and Matplotlib.
- Experience with cloud platforms such as AWS, Azure, or GCP, and data migration to Snowflake.
- Familiarity with econometrics/statistical modeling techniques (regression, hypothesis testing, time series, multivariate analysis).
- Hands-on experience managing data pipelines and deploying scalable machine learning architectures.
Additional Information
- Master's degree in Statistics, Econometrics, Economics, or related fields from a reputed university; a Ph.D. or M.Tech is a plus.
- Excellent communication and interpersonal skills for effective collaboration with global teams and clients.
- Willingness to travel up to 40% of the time.
- Work on impactful projects that help clients optimize their marketing strategies through advanced data-driven insights.
About Our Company | Accenture
Qualification
Experience:
- 5+ years of advanced experience in Market Mix Modeling (MMM) and related optimization techniques for promotional channels and budget allocation.
- 2+ years (Analysts) or 4+ years (Consultants) of experience in consulting/analytics with reputed organizations.
Educational Qualification:
- Master's degree in Statistics, Econometrics, Economics, or related fields from a reputed institution; a Ph.D. or M.Tech in relevant fields is an advantage.
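As an illustration of the regression-based MMM techniques this role calls for, the sketch below fits a simple market mix model on synthetic data: a geometric adstock transform followed by linear regression. The channel names, decay rates, and coefficients are invented for demonstration, not taken from any client engagement.

```python
# Illustrative market-mix-model fit: geometric adstock + linear regression.
# All data below is synthetic; channel names and decay rates are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

def adstock(spend, decay=0.5):
    """Geometric adstock: each week carries over a fraction of prior effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(0)
tv = rng.uniform(0, 100, 52)       # 52 weeks of TV spend (synthetic)
digital = rng.uniform(0, 50, 52)   # 52 weeks of digital spend (synthetic)
X = np.column_stack([adstock(tv, 0.6), adstock(digital, 0.3)])
sales = 200 + 1.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 5, 52)

model = LinearRegression().fit(X, sales)
print(model.coef_)  # estimated channel effects, close to the true 1.5 and 2.0
```

In practice the decay parameters would themselves be estimated (e.g. by grid search or Bayesian methods) rather than fixed as above.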
Posted 2 weeks ago
3.0 - 5.0 years
13 - 17 Lacs
Gurugram
Work from Office
Management Level: Ind&Func AI Decision Science Consultant
Location: Gurgaon, Bangalore
Must-Have Skills: Market Mix Modeling (MMM) techniques; optimization algorithms for budget allocation and promotional channel optimization; statistical and probabilistic methods (SVM, decision trees); programming languages and tools (Python, NumPy, Pandas, scikit-learn); AI/ML model development and data pipeline management; data management within Snowflake (data layers, migration); cloud platform experience (Azure, AWS, GCP).
Good-to-Have Skills: Experience with nonlinear optimization techniques; experience in data migration (cloud to Snowflake); proficiency in SQL and cloud-based technologies; understanding of econometrics/statistical modeling (regression, time series, multivariate analysis).
Job Summary
We are seeking a skilled Ind & Func AI Decision Science Consultant to join the Accenture Strategy & Consulting team in the Global Network Data & AI practice. This role focuses on Market Mix Modeling (MMM): you will develop AI/ML models, optimize promotional channels, manage data pipelines, and scale marketing mix models across cloud platforms. You will work with leading financial clients, leveraging cutting-edge technology to drive business impact and innovation.
Roles & Responsibilities
Engagement Execution
- Lead MMM engagements involving promotional strategy optimization, budget allocation, and marketing analytics solutions.
- Apply advanced statistical techniques and machine learning models to improve marketing effectiveness.
- Collaborate with clients to develop tailored market mix models, delivering data-driven insights that optimize their marketing budgets and strategies.
- Develop Proofs of Concept (PoCs) for clients, covering scoping, staffing, and execution phases.
Practice Enablement
- Mentor and guide analysts, consultants, and managers to build their expertise in Market Mix Modeling and analytics.
- Contribute to the growth of the Analytics practice through knowledge sharing, staffing initiatives, and the development of new methodologies.
- Promote thought leadership in marketing analytics by publishing research and presenting at industry events.
Opportunity Development
- Identify business development opportunities in marketing analytics and develop compelling business cases for potential clients.
- Work closely with deal teams to provide subject matter expertise in MMM, ensuring high-quality client proposals and responses to RFPs.
Client Relationship Development
- Build and maintain strong, trusted relationships with internal and external clients.
- Serve as a consultant to clients, offering strategic insights to optimize marketing spend and performance.
Professional & Technical Skills
- 3+ years of experience in Market Mix Modeling (MMM) and associated optimization techniques.
- Strong knowledge of nonlinear optimization, AI/ML models, and advanced statistical techniques for marketing.
- Proficiency in Python and libraries such as NumPy, Pandas, scikit-learn, Seaborn, PyCaret, and Matplotlib.
- Experience with cloud platforms such as AWS, Azure, or GCP, and data migration to Snowflake.
- Familiarity with econometrics/statistical modeling techniques (regression, hypothesis testing, time series, multivariate analysis).
- Hands-on experience managing data pipelines and deploying scalable machine learning architectures.
Additional Information
- Master's degree in Statistics, Econometrics, Economics, or related fields from a reputed university; a Ph.D. or M.Tech is a plus.
- Excellent communication and interpersonal skills for effective collaboration with global teams and clients.
- Willingness to travel up to 40% of the time.
- Work on impactful projects that help clients optimize their marketing strategies through advanced data-driven insights.
About Our Company | Accenture
Qualification
Experience:
- 3+ years of experience in Market Mix Modeling (MMM) and optimization techniques.
- 2+ years (Analysts) or 4+ years (Consultants) of experience in consulting/analytics with reputed organizations.
Educational Qualification:
- Master's degree in Statistics, Econometrics, Economics, or related fields from a reputed institution; a Ph.D. or M.Tech in relevant fields is an advantage.
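The budget-allocation optimization this role mentions can be sketched in closed form. Assuming a hypothetical saturating log-response curve per channel, the Lagrange conditions equalize marginal returns across channels; the lift coefficients and budget below are illustrative only.

```python
# Closed-form allocation maximizing sum(a_i * log(1 + x_i)) subject to
# sum(x_i) = budget. At the optimum the marginal returns a_i / (1 + x_i)
# are equal across channels. Lift coefficients here are hypothetical.
import numpy as np

lifts = np.array([3.0, 2.0, 1.0])  # hypothetical per-channel lift
budget = 100.0
lam = lifts.sum() / (budget + len(lifts))  # common marginal return
alloc = lifts / lam - 1                    # valid when every x_i > 0, as here
print(alloc.round(2))  # roughly 50.5, 33.33, 16.17 - higher lift, more budget
```

With more realistic response curves (Hill or adstocked saturation) there is no closed form and a numerical optimizer would be used instead, but the equal-marginal-return intuition carries over.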
Posted 2 weeks ago
6.0 - 9.0 years
15 - 17 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Job Description (JD shared by client):
- 6+ years of experience in data engineering.
- Strong knowledge of SQL.
- Expertise in Snowflake, dbt, and Python (minimum 3+ years).
- SnapLogic or Fivetran tool knowledge is an added advantage; must automate manual work using SnapLogic.
- Good communication and interpersonal skills are a must, as the role collaborates with the data team and business analysts.
Primary (non-negotiable) skills: Snowflake, dbt, Python, and SQL.
Location: flexible.
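To make the Snowflake/dbt/Python stack above concrete without assuming warehouse access, here is a minimal stand-in using Python's bundled sqlite3 to show the staging-to-mart ELT pattern that a dbt model expresses as a SELECT. Table and column names are hypothetical.

```python
# ELT-style transform sketch: raw staging table -> cleaned mart table.
# sqlite3 stands in for Snowflake here; all table/column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, amount TEXT, status TEXT);
    INSERT INTO stg_orders VALUES
        (1, '100.50', 'complete'),
        (2, 'n/a',    'cancelled'),
        (3, '75.25',  'complete');
    -- Transform: cast amounts, keep only completed orders. A dbt model
    -- would express this same SELECT in a versioned .sql file.
    CREATE TABLE fct_orders AS
    SELECT order_id, CAST(amount AS REAL) AS amount
    FROM stg_orders
    WHERE status = 'complete';
""")
total = conn.execute("SELECT SUM(amount) FROM fct_orders").fetchone()[0]
print(total)  # 175.75
```

In a real Snowflake project the transform would run in-warehouse via `dbt run`, with Python reserved for orchestration and ingestion.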
Posted 2 weeks ago
7.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
GitHub and CI/CD Pipeline Management; Cloud Integration and Automation
- Oversee GitHub branching strategies, code merging, and repository management to ensure code quality and integrity.
- Automate the import and export of IICS assets and execute other IICS commands via the CLI as part of CI/CD processes.
- Develop and implement Python scripts to automate infrastructure provisioning, deployment processes, monitoring, and other operational tasks.
- Ensure IICS integrations are performed and automated primarily through CLI-based methods.
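The CLI-based automation described above is typically wrapped in a small Python helper that a CI/CD job calls. The actual IICS command names and flags are product-specific and not shown here; the demonstration below substitutes a portable command so the pattern itself is runnable.

```python
# Generic CLI-automation helper of the kind used to script asset
# import/export commands from a CI/CD pipeline. Any real command list
# (e.g. the vendor's CLI with its own flags) would be passed to run_cli.
import subprocess
import sys

def run_cli(cmd, timeout=60):
    """Run a CLI command; return its stdout, raising on a non-zero exit."""
    result = subprocess.run(
        cmd, capture_output=True, text=True, timeout=timeout, check=True
    )
    return result.stdout.strip()

# Portable demonstration using the Python interpreter as the "CLI":
output = run_cli([sys.executable, "-c", "print('export complete')"])
print(output)  # export complete
```

The `check=True` flag makes a failed command raise `CalledProcessError`, which is what fails the CI/CD stage instead of silently continuing.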
Posted 2 weeks ago
Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.
Cities such as Bengaluru, Hyderabad, Gurugram, and Mumbai, which recur throughout the listings above, are known for their thriving tech industries and have a high demand for Snowflake professionals.
The average salary range for Snowflake professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
A typical career path in Snowflake may include roles such as:
- Junior Snowflake Developer
- Snowflake Developer
- Senior Snowflake Developer
- Snowflake Architect
- Snowflake Consultant
- Snowflake Administrator
In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:
- SQL
- Data warehousing concepts
- ETL tools
- Cloud platforms (AWS, Azure, GCP)
- Database management
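To make the SQL and data-warehousing items in that list concrete, here is a small, self-contained example of a pattern interviewers often ask about: deduplicating loaded records with `ROW_NUMBER()`, keeping the latest row per key. sqlite3 stands in for a cloud warehouse, and all names are invented.

```python
# Warehouse-style deduplication: keep the most recent row per id using a
# ROW_NUMBER() window function. Table and column names are hypothetical;
# the same SQL runs on Snowflake and most cloud warehouses.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_loads (id INTEGER, email TEXT, loaded_at TEXT);
    INSERT INTO customer_loads VALUES
        (1, 'old@example.com',  '2024-01-01'),
        (1, 'new@example.com',  '2024-02-01'),
        (2, 'only@example.com', '2024-01-15');
""")
rows = conn.execute("""
    SELECT id, email FROM (
        SELECT id, email,
               ROW_NUMBER() OVER (
                   PARTITION BY id ORDER BY loaded_at DESC
               ) AS rn
        FROM customer_loads
    ) WHERE rn = 1
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'new@example.com'), (2, 'only@example.com')]
```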
As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!