
22 Workflow Orchestration Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

8 - 12 Lacs

Pune

Work from Office


Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
We are seeking a Data Engineer with advanced expertise in Databricks SQL, PySpark, Spark SQL, and workflow orchestration using Airflow. The successful candidate will lead critical projects, including migrating SQL Server stored procedures to Databricks notebooks, designing incremental data pipelines, and orchestrating workflows in Azure Databricks.

What will your job look like
- Migrate SQL Server stored procedures to Databricks notebooks, leveraging PySpark and Spark SQL for complex transformations.
- Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
- Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
- Implement incremental data transformation workflows to update Silver and Gold layer datasets in near real time, adhering to Delta Lake best practices (see the incremental-load sketch after this posting).
- Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
- Understand business and technical requirements, translating them into scalable Databricks solutions.
- Optimize Spark jobs and queries for performance, scalability, and cost efficiency in a distributed environment.
- Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
- Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.

All you need is...
- Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 4+ years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL.
- Proven experience with incremental data loading into Databricks, leveraging Delta Lake features (e.g., time travel, MERGE INTO).
- Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
- Proficiency in T-SQL and experience migrating SQL Server stored procedures to Databricks.
- Solid knowledge of Azure cloud services, particularly Azure Databricks and Azure Data Lake Storage.
- Expertise in Airflow integration for workflow orchestration, including designing and managing DAGs.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines for data engineering workflows.
- Excellent analytical and problem-solving skills with a focus on detail-oriented development.

Preferred Qualifications
- Advanced knowledge of Delta Lake optimizations, such as compaction, Z-ordering, and vacuuming.
- Experience with real-time streaming data pipelines using tools like Kafka or Azure Event Hubs.
- Familiarity with advanced Airflow features, such as SLA monitoring and external task dependencies.
- Certifications such as Databricks Certified Associate Developer for Apache Spark or equivalent.
- Experience in Agile development methodologies.

Why you will love this job
- You will be able to use your specific insights to lead business change on a large scale and drive transformation within our organization.
- You will be a key member of a global, dynamic, and highly collaborative team with various possibilities for personal and professional development.
- You will have the opportunity to work in a multinational environment for the global market leader in its field.
- We offer a wide range of stellar benefits, including health, dental, vision, and life insurance, as well as paid time off, sick time, and parental leave.
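To illustrate the incremental-load pattern this role centers on, here is a minimal sketch of a Delta Lake MERGE INTO upsert in a Databricks notebook (where `spark` is predefined). The table names, key column, and watermark column are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: watermark-based incremental upsert from Bronze to Silver
# with Delta Lake MERGE INTO. Names (bronze.orders, silver.orders, order_id,
# updated_at) are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Read only rows newer than the Silver table's current high-water mark.
wm = spark.sql("SELECT MAX(updated_at) AS wm FROM silver.orders").first()["wm"]
incoming = spark.table("bronze.orders")
if wm is not None:
    incoming = incoming.where(F.col("updated_at") > F.lit(wm))

# MERGE INTO the Silver table: update matched keys, insert new ones.
silver = DeltaTable.forName(spark, "silver.orders")
(
    silver.alias("t")
          .merge(incoming.alias("s"), "t.order_id = s.order_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)
```

The watermark filter keeps each run proportional to the new data rather than the full source, which is the point of the incremental pattern the posting describes.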

Posted 5 days ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office


What you will do
In this vital role, we are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale datasets (a partitioned-write sketch follows this posting).
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of the Enterprise Data Fabric architecture.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 6 to 8 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred. Databricks certification preferred. Scaled Agile (SAFe) certification preferred.

Preferred Qualifications:
Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and be organized and detail-oriented.
- Strong presentation and public speaking skills.
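As a concrete illustration of the partitioning responsibility above, here is a minimal PySpark sketch of a batch ETL step that writes date- and region-partitioned output so downstream queries can prune files. Paths, columns, and table layout are hypothetical assumptions, not details from the posting.

```python
# Minimal sketch of a batch ETL step with partitioned output for query
# pruning. The S3 paths and columns (event_id, event_ts, region) are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fabric-batch-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("region").isNotNull())
)

# Partitioning by date and region lets the engine skip irrelevant files
# when queries filter on those columns.
(
    cleaned.write.mode("overwrite")
           .partitionBy("event_date", "region")
           .parquet("s3://example-bucket/curated/events/")
)
```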

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 14 Lacs

Bengaluru

Work from Office


- Extensive experience with composable infrastructure such as Liqid or GigaIO.
- Advanced experience with Cisco UCS-X and UCS-C Series infrastructure, UCS Central, and Cisco Intersight.
- Advanced experience with Dell R-Series servers and Dell OME.
- Advanced knowledge of workflow orchestration platforms and container technologies, including Kubernetes and Docker.
- Proficient with Bash, Python, JavaScript, Ansible, Terraform, Redfish, or similar scripting and automation tooling (a Redfish query sketch follows this posting).
- Advanced knowledge of Robot Framework for automating QA testing and robotic processes.

Primary Skills
- Cisco Certified Network Professional Data Center (CCNP)
- Linux Foundation Certified Engineer (LFCE)
- VMware Certified Professional (VCP-DCV)
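As an example of the server-automation scripting this role calls for, here is a minimal sketch of querying a BMC over the DMTF Redfish REST API with Python. The host, credentials, and use of basic auth with certificate checks disabled are hypothetical shortcuts for illustration; production code would use Redfish sessions and proper TLS validation.

```python
# Minimal sketch: list systems behind a BMC via Redfish and print each
# system's power state. BMC address and credentials are hypothetical.
import requests

BMC = "https://bmc.example.com"   # hypothetical BMC address
AUTH = ("admin", "password")      # hypothetical credentials

# /redfish/v1/Systems is the standard Redfish collection of computer systems.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Id"), system.get("PowerState"))
```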

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design, development, and maintenance of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms (see the ingestion sketch after this posting).
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Apply expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Minimum 5 to 8 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and be organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
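Since the role centers on ingesting relational sources into Databricks, here is a minimal sketch of a JDBC pull from PostgreSQL into a Bronze Delta table, run from a Databricks notebook (where `spark` and `dbutils` are predefined). The host, database, table names, and secret scope are hypothetical placeholders, and the PostgreSQL JDBC driver is assumed to be installed on the cluster.

```python
# Minimal sketch: ingest a PostgreSQL table into a Bronze Delta table.
# Connection details and names are hypothetical placeholders.
from pyspark.sql import functions as F

src = (
    spark.read.format("jdbc")
         .option("url", "jdbc:postgresql://db.example.com:5432/sales")
         .option("dbtable", "public.orders")
         .option("user", "etl_user")
         .option("password", dbutils.secrets.get("etl", "pg-password"))
         .load()
)

# Stamp ingestion metadata, then append to the Bronze layer.
(
    src.withColumn("_ingested_at", F.current_timestamp())
       .write.format("delta")
       .mode("append")
       .saveAsTable("bronze.orders")
)
```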

Posted 1 week ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Apache Airflow
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and functionality while adapting to any changes in project scope or requirements.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: Proficiency in Apache Airflow (a minimal DAG sketch follows this posting).
- Good to have: Experience with cloud platforms such as AWS or Azure.
- Strong understanding of workflow orchestration and scheduling.
- Experience with data pipeline development and management.
- Familiarity with containerization technologies like Docker.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Airflow.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
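For reference, here is a minimal Airflow 2.x DAG sketch showing the scheduling and task-dependency basics the role revolves around. The DAG id, schedule, and task callables are hypothetical placeholders.

```python
# Minimal Apache Airflow DAG sketch: three Python tasks in a linear chain,
# scheduled daily. All names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract step")

def transform():
    print("transform step")

def load():
    print("load step")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```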

Posted 2 weeks ago

Apply

3.0 - 7.0 years

12 - 17 Lacs

Bengaluru

Work from Office


Job Summary:
Synechron is seeking an experienced Senior Data Engineer with expertise in AWS, Apache Airflow, and dbt to design and implement scalable, reliable data pipelines. The role involves collaborating with data teams and business stakeholders to develop data solutions that enable actionable insights and support organizational decision-making. The ideal candidate will bring strong data engineering experience, technical depth, strategic thinking, and the ability to work in a fast-paced, evolving environment.

Software Requirements:
Required:
- Strong proficiency in AWS services, including S3, Redshift, Lambda, and Glue, with proven hands-on experience.
- Expertise in Apache Airflow for workflow orchestration and pipeline management.
- Extensive experience with dbt for data transformation and modeling.
- Solid knowledge of SQL for data querying and manipulation.
Preferred:
- Familiarity with Hadoop, Spark, or other big data technologies.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra).
- Knowledge of data governance and security best practices within cloud environments.

Overall Responsibilities:
- Lead the design, development, and maintenance of scalable, efficient data pipelines and workflows using AWS, Airflow, and dbt (see the orchestration sketch after this posting).
- Collaborate with data scientists, analysts, and business teams to gather requirements and translate them into technical solutions.
- Optimize extract, transform, load (ETL) processes to enhance data quality, integrity, and timeliness.
- Monitor pipeline performance, troubleshoot issues, and implement improvements to ensure operational excellence.
- Enforce data management, governance, and security protocols across all data flows.
- Mentor junior data engineers and promote best practices within the team.
- Stay current with emerging data technologies and industry trends, recommending innovations for the data ecosystem.

Technical Skills (by category):
- Programming languages: SQL and Python are essential (Python preferred for scripting and automation); Spark, Scala, and Java are preferred for big data integration.
- Databases/data management: extensive experience with data warehousing (Redshift, Snowflake, or similar) and relational databases (MySQL, PostgreSQL); familiarity with NoSQL databases such as DynamoDB or Cassandra is a plus.
- Cloud technologies: the AWS platform, leveraging services such as S3, Lambda, Glue, Redshift, and IAM security features.
- Frameworks and libraries: Apache Airflow, dbt, and related data orchestration and transformation tools.
- Development tools and methodologies: Git, Jenkins, CI/CD pipelines, and Agile/Scrum environment experience.
- Security protocols: knowledge of data encryption, access control, and compliance standards in cloud data engineering.

Experience Requirements:
- At least 8 years of professional experience in data engineering or related roles, with a focus on cloud ecosystems and big data pipelines.
- Demonstrated experience designing and managing end-to-end data workflows in AWS environments.
- Proven success collaborating with cross-functional teams and translating business requirements into technical solutions.
- Prior experience mentoring junior engineers and leading data projects is highly desirable.

Day-to-Day Activities:
- Develop, deploy, and monitor scalable data pipelines using AWS, Airflow, and dbt.
- Collaborate regularly with data scientists, analysts, and business stakeholders to refine data requirements and deliver impactful solutions.
- Troubleshoot production data pipeline issues to resolve data quality or performance bottlenecks.
- Conduct code reviews, optimize existing workflows, and implement automation to improve efficiency.
- Document data architecture, pipelines, and governance practices for knowledge sharing and compliance.
- Keep abreast of emerging data tools and industry best practices, proposing enhancements to existing systems.

Qualifications:
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related field; Master's degree preferred.
- Professional certifications such as AWS Certified Data Analytics – Specialty or related credentials are advantageous.
- Commitment to continuous professional development and staying current with industry trends.

Professional Competencies:
- Strong analytical, problem-solving, and critical-thinking skills.
- Excellent communication abilities to liaise effectively with technical and business teams.
- Proven leadership in mentoring team members and managing project deliverables.
- Ability to work independently, prioritize tasks, and adapt to changing business needs.
- Innovative mindset focused on scalable, efficient, and sustainable data solutions.
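Here is a minimal sketch of one way Airflow can orchestrate dbt: a daily DAG that builds models and then runs dbt's data tests via shell commands. The project path and DAG id are hypothetical placeholders; many teams use a dedicated dbt provider package instead of raw bash.

```python
# Minimal sketch: orchestrate dbt from Airflow with BashOperator.
# DBT_DIR and the DAG id are hypothetical placeholders, and a working
# dbt profile is assumed to be configured on the worker.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )

    # Build the models first, then validate them with dbt's tests.
    dbt_run >> dbt_test
```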

Posted 3 weeks ago

Apply

6 - 8 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring (a data-quality check sketch follows this posting).
- Apply expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Any degree and 6 to 8 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and be organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
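Given the posting's emphasis on data integrity and validation frameworks, here is a minimal hedged PySpark sketch of simple quality gates (row count, null keys, duplicates) that could guard a pipeline stage. The table and column names are hypothetical placeholders, not the employer's framework.

```python
# Minimal sketch of pipeline data-quality gates in PySpark. The table name
# (silver.patients) and key column are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("silver.patients")

total = df.count()
null_ids = df.filter(F.col("patient_id").isNull()).count()
dup_ids = total - df.dropDuplicates(["patient_id"]).count()

# Fail fast so the orchestrator marks the run failed and halts downstream tasks.
assert total > 0, "table is empty"
assert null_ids == 0, f"{null_ids} rows have a null patient_id"
assert dup_ids == 0, f"{dup_ids} duplicate patient_id values"
print(f"DQ checks passed: {total} rows validated")
```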

Posted 1 month ago

Apply

1 - 4 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and maintain data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms (an incremental file-ingestion sketch follows this posting).
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Apply expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Minimum 5 to 8 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and be organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
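For the file-based side of the ingestion work described above, here is a minimal hedged sketch using Databricks Auto Loader (the `cloudFiles` streaming source) to incrementally pick up newly landed JSON files, run from a Databricks notebook where `spark` is predefined. Paths and table names are hypothetical placeholders.

```python
# Minimal sketch: incremental file ingestion with Databricks Auto Loader.
# S3 paths and the target table are hypothetical placeholders.
stream = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/events")
         .load("s3://example-bucket/landing/events/")
)

# Append newly arrived files to a Bronze Delta table; the checkpoint lets
# restarts resume exactly where the last run stopped.
query = (
    stream.writeStream.format("delta")
          .option("checkpointLocation", "s3://example-bucket/_checkpoints/events")
          .trigger(availableNow=True)  # process the current backlog, then stop
          .toTable("bronze.events")
)
query.awaitTermination()
```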

Posted 1 month ago

Apply

1 - 4 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Apply expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps (a unit-test sketch follows this posting).

Education and Professional Certifications
- Any degree and 6 to 8 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and be organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
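On the automated-unit-testing practice named above, here is a minimal hedged sketch of testing a PySpark transformation with pytest and a local SparkSession. The transformation (`dedupe_latest`) and the test data are hypothetical examples, not code from the posting.

```python
# Minimal sketch: unit-testing a PySpark transformation with pytest.
# The transformation and data are hypothetical illustrations.
import pytest
from pyspark.sql import SparkSession, Window, functions as F

def dedupe_latest(df, key, order_col):
    """Keep only the most recent row per key, ordered by order_col."""
    w = Window.partitionBy(key).orderBy(F.col(order_col).desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "version", "payload"],
    )
    out = dedupe_latest(df, "id", "version").orderBy("id").collect()
    assert [(r.id, r.payload) for r in out] == [("a", "new"), ("b", "only")]
```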

Posted 1 month ago

Apply

7 - 9 years

12 - 15 Lacs

Hyderabad

Work from Office


What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills and provides administration support for the Master Data Management (MDM) and Data Quality platform, including solution architecture, inbound/outbound data integration (ETL), Data Quality (DQ), and maintenance and tuning of match rules.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Collaborate and communicate with MDM developers, data architects, product teams, business SMEs, and data scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code and components.
- Participate in sprint planning meetings and provide estimates for technical implementation.
- As an SME, work with the team on MDM-related product installation, configuration, customization, and optimization.
- Own the understanding, documentation, maintenance, and creation of master-data-related data models (conceptual, logical, and physical) and database structures.
- Review technical model specifications and participate in data quality testing.
- Collaborate with the Data Quality & Governance Analyst and the Data Governance Organization to monitor and preserve master data quality (a match-and-survivorship sketch follows this posting).
- Create and maintain system-specific master data dictionaries for in-scope domains.
- Architect MDM solutions, including data modeling and data source integrations, from proof of concept through development and delivery.
- Develop the architectural design for MDM domain development, base-object integration with other systems, and general MDM solutions.
- Develop and deliver solutions individually or as part of a development team.
- Approve code reviews and technical work.
- Maintain compliance with change control, the SDLC, and development standards.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Preferred Qualifications:
- Expertise in architecting and designing Master Data Management (MDM) solutions.
- Practical experience with AWS Cloud, Databricks, Apache Spark, workflow orchestration, and optimizing big data processing performance.
- Familiarity with enterprise source and consumer systems for master and reference data, such as CRM, ERP, and data warehouse/business intelligence platforms.
- At least 2 to 3 years of experience as an MDM developer using Informatica MDM or Reltio MDM, along with strong proficiency in SQL.

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark and various Python packages for data processing and machine learning model development.
- Good understanding of data modeling, data warehousing, and data integration concepts.
- Development experience with Python, React JS, and cloud data platforms.
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments).

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Good communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
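To make the match-rule idea concrete, here is a minimal hedged PySpark sketch of a deterministic match-and-survivorship step of the kind MDM match rules encode. The match key (normalized name plus postal code) and all columns are hypothetical placeholders; MDM platforms such as Informatica MDM or Reltio configure this declaratively rather than in hand-written Spark.

```python
# Minimal sketch: deterministic matching and survivorship in PySpark.
# All names and the match rule are hypothetical illustrations.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("mdm-match-sketch").getOrCreate()

records = spark.createDataFrame(
    [(1, "Acme Corp.", "560001", "2024-01-01"),
     (2, "ACME CORP", "560001", "2024-03-01"),
     (3, "Globex", "110001", "2024-02-01")],
    ["source_id", "name", "postal_code", "updated_at"],
)

# Match rule: same normalized name + postal code -> same golden record.
matched = records.withColumn(
    "match_key",
    F.concat_ws("|",
                F.regexp_replace(F.lower("name"), r"[^a-z0-9]", ""),
                F.col("postal_code")),
)

# Survivorship: keep the most recently updated record per match key.
w = Window.partitionBy("match_key").orderBy(F.col("updated_at").desc())
golden = matched.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")
golden.show()
```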

Posted 2 months ago

Apply

3 - 7 years

4 - 7 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do
Role Description:
We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge of graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements (a Cypher sketch follows this posting).
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you
Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and be detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients.
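As an illustration of the graph query logic this role involves, here is a minimal sketch of running a Cypher query with the official Neo4j Python driver. The connection details, node labels, and relationship type are hypothetical placeholders.

```python
# Minimal sketch: one-hop Cypher traversal via the Neo4j Python driver.
# URI, credentials, and the Person/KNOWS schema are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find who a given person is directly connected to.
query = """
MATCH (p:Person {name: $name})-[:KNOWS]->(friend:Person)
RETURN friend.name AS friend
"""

with driver.session() as session:
    for record in session.run(query, name="Alice"):
        print(record["friend"])

driver.close()
```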

Posted 2 months ago

Apply

1 - 3 years

3 - 5 Lacs

Gurgaon

Work from Office


Role Objective:
To bill out medical accounts accurately within defined timelines and reduce rejections from payers.

Essential Duties and Responsibilities:
- Process accounts accurately per US medical billing standards within the defined TAT.
- Process payer rejections accurately within the defined TAT.
- 24x7 environment; open to night shifts.
- Good analytical skills and proficiency with MS Word, Excel, and PowerPoint.

Qualifications:
- Graduate in any discipline from a recognized educational institute.
- Good analytical skills and proficiency with MS Word, Excel, and PowerPoint.
- Good communication skills (both written and verbal).

Skill Set:
- Good healthcare knowledge.
- Knowledge of Medicare and Medicaid.
- Ability to interact positively with team members, peers, and seniors.

Posted 2 months ago

Apply

4 - 6 years

6 - 10 Lacs

Bengaluru

Work from Office


We are seeking a skilled and experienced Cloud Engineer with a strong background in Terraform and Apache Airflow to design, build, and maintain cloud infrastructure and data orchestration workflows. The ideal candidate will have extensive experience with cloud platforms (AWS, Azure, or GCP), Infrastructure as Code (IaC), and workflow automation to support scalable and efficient cloud solutions. The candidate should also have hands-on expertise in AWS and in cybersecurity concepts and tools, including Splunk, Qualys, CrowdStrike, and more.

Roles and Responsibilities:
- Design, deploy, and manage cloud infrastructure using Terraform.
- Develop and optimize Apache Airflow Directed Acyclic Graphs (DAGs) for workflow automation.
- Implement Infrastructure as Code (IaC) best practices to ensure repeatability, scalability, and disaster recovery.
- Collaborate with DevOps, data engineering, and software development teams to support cloud-native applications.
- Monitor, troubleshoot, and optimize cloud infrastructure and workflow automation processes to ensure high availability.
- Ensure security, compliance, and performance optimization of cloud resources, with a focus on cloud security best practices.
- Automate cloud deployment and operations using CI/CD pipelines.
- Stay up to date with emerging cloud technologies and best practices, particularly in cloud security, automation, and monitoring.
- Support cybersecurity initiatives by integrating tools such as Splunk, Qualys, CrowdStrike, and other security monitoring systems.
- Implement best practices for security monitoring and incident response.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of IT experience, with at least 4 years of relevant experience in cloud technologies.
- 4+ years of hands-on experience with AWS cloud services, including EC2, S3, Lambda, and RDS.
- Strong experience with Terraform for cloud infrastructure provisioning and management.
- Proficiency in Apache Airflow for designing and managing data workflows.
- Strong knowledge of cloud security, networking, and monitoring best practices, including experience with security tools such as Splunk, Qualys, CrowdStrike, or similar.
- Solid experience with containerization technologies like Docker and Kubernetes.
- Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and orchestration (a boto3 sketch follows this posting).
- Familiarity with CI/CD pipelines, GitOps practices, and version control systems like Git.
- Deep understanding of cybersecurity concepts and tools to ensure a secure cloud environment.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer) or similar certifications in Azure or GCP.
- Hands-on experience with serverless computing (e.g., AWS Lambda, Google Cloud Functions).
- Knowledge of big data tools such as Apache Spark, Apache Kafka, or Databricks.
- Experience working in Agile or DevOps environments.
- Strong troubleshooting skills for cloud infrastructure, security incidents, and workflow orchestration.
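As an example of the Python-based AWS automation and cloud-security scripting this role describes, here is a minimal hedged boto3 sketch that audits S3 buckets for default encryption. Credentials are assumed to come from the environment; this is an illustration, not a complete compliance tool.

```python
# Minimal sketch: flag S3 buckets without default server-side encryption.
# Assumes AWS credentials are available via the usual environment/config chain.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        # Raises a ClientError if the bucket has no encryption configuration.
        s3.get_bucket_encryption(Bucket=name)
        print(f"OK      {name}: default encryption enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"WARNING {name}: no default encryption configured")
        else:
            raise
```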

Posted 2 months ago

Apply

3 - 5 years

2 - 5 Lacs

Hyderabad

Work from Office


Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Apply expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience.
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing (a tuning sketch follows this posting).
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Preferred Qualifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and be detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
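On the big-data performance tuning named in the qualifications, here is a minimal hedged PySpark sketch of two routine tuning moves: sizing shuffle partitions and broadcasting a small dimension table to avoid a shuffle join. Tables, columns, and the chosen partition count are hypothetical placeholders.

```python
# Minimal sketch: two common Spark tuning moves. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("tuning-sketch")
    # Fewer shuffle partitions than the default 200 for a modest dataset.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

facts = spark.table("silver.transactions")   # hypothetical large table
dims = spark.table("silver.product_dim")     # hypothetical small table

# Broadcasting the small side lets each executor join locally, skipping
# an expensive shuffle of the large fact table.
joined = facts.join(F.broadcast(dims), "product_id")
joined.groupBy("category").agg(F.sum("amount").alias("total")).show()
```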

Posted 2 months ago

Apply

4 - 6 years

4 - 7 Lacs

Hyderabad

Work from Office


Senior Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role you will contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a solid background in data architecture, big data processing, and Graph technologies and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions. . Roles & Responsibilities: Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows. Lead the implementation of graph-based data models, capturing complex relationships and hierarchies across domains. Build and optimize Graph Databases such as Stardog, Neo4j, Marklogic or similar to support query performance, scalability, and reliability. Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements. Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures. Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases. Develop metadata-driven pipelines and lineage tracking for graph and relational data processing. Ensure data quality, governance, and security standards are met across all graph data initiatives. Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies. Stay up to date with the latest developments in graph technology, graph ML, and network analytics. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master’s degree and 4 to 6 years of Computer Science, IT or related field experience Bachelor’s degree and 6 to 8 years of Computer Science, IT or related field experience Preferred Qualifications: Must-Have Skills: Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development. Hands-on experience with graph database platforms such as Stardog, Neo4j, Marklogic etc. Good understanding of graph theory, graph modeling, and traversal algorithms Proficiency in workflow orchestration, performance tuning on big data processing Good understanding of AWS services Ability to quickly learn, adapt and apply new technologies with strong problem-solving and analytical skills Excellent collaboration and communication skills, with experience working with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. 
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred

Good-to-Have Skills:
Deep expertise in the biotech and pharma industries
Experience in writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly; organized and detail-oriented
Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
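To make the graph-query expectation in this role concrete, here is a minimal sketch of running a Cypher traversal from Python. It assumes a reachable Neo4j instance and the official neo4j driver; the URI, credentials, and the Person/AUTHORED schema are illustrative assumptions, not part of the posting.

# A minimal Cypher-over-Python sketch; connection details and the
# (:Person)-[:AUTHORED]->(:Paper) schema are hypothetical.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")    # assumed credentials

# Find the people who co-authored the most papers with a given author.
query = (
    "MATCH (a:Person {name: $name})-[:AUTHORED]->(p:Paper)"
    "<-[:AUTHORED]-(b:Person) "
    "RETURN b.name AS coauthor, count(p) AS shared_papers "
    "ORDER BY shared_papers DESC LIMIT 5"
)

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        for record in session.run(query, name="Ada Lovelace"):
            print(record["coauthor"], record["shared_papers"])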

Posted 2 months ago

Apply

4 - 6 years

4 - 7 Lacs

Hyderabad

Work from Office


Senior Data Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
Ensure data security, compliance, and role-based access control (RBAC) across data environments.
Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master’s degree and 4 to 6 years of Computer Science, IT or related field experience OR
Bachelor’s degree and 6 to 8 years of Computer Science, IT or related field experience

Preferred Qualifications:
Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning of big data processing
Strong understanding of AWS services
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Good communication and teamwork skills
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred

Good-to-Have Skills:
Deep expertise in the biotech and pharma industries
Experience in writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly; organized and detail-oriented
Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
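The "metadata-driven pipelines" responsibility above is a recurring pattern: the pipeline reads a config describing sources and targets rather than hard-coding them. Below is a minimal PySpark sketch of that idea; the config entries, S3 paths, and table names are illustrative assumptions only.

# A sketch of a metadata-driven batch ingestion loop; a real system would
# externalize PIPELINES to a control table or config store.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fabric-ingest").getOrCreate()

PIPELINES = [  # hypothetical pipeline metadata
    {"source": "s3://raw/orders/", "fmt": "json", "target": "bronze_orders"},
    {"source": "s3://raw/customers/", "fmt": "csv", "target": "bronze_customers"},
]

for cfg in PIPELINES:
    df = (spark.read.format(cfg["fmt"])
                .option("header", "true")  # ignored for JSON, needed for CSV
                .load(cfg["source"]))
    # Appending to a catalog table keeps lineage queryable per target.
    df.write.mode("append").saveAsTable(cfg["target"])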

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Karnataka, Uttar Pradesh

Work from Office


Role: Advanced Data Engineer
Location: Hyderabad
Hire Type: Fulltime

Tech Stack: Proficiency in Python, SQL, Databricks, Snowflake, data modeling, ETL processes, Apache Spark and PySpark, data integration and workflow orchestration, real-time data processing experience/frameworks, and cloud experience (Azure preferred)

About The Role
Minimum 5 years of experience in a Data Engineering role that involves analyzing and organizing raw data and building data systems and pipelines on a cloud platform (Azure)
Experienced in migrating data from on-prem to cloud-based solution architectures (Azure)
Extensive experience with Python, Spark, and SQL
Experience in developing ETL processes
Proficient with Azure Data Lake, Azure Data Factory, Azure SQL, Azure Databricks, Azure Synapse Analytics, or equivalent tools and technologies
Experience building data lakes and data warehouses to support operational intelligence and business intelligence
Excellent written and verbal communication skills
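A common building block behind "migrating data from on-prem to cloud" and ETL in this stack is a watermark-based incremental load. The sketch below assumes PySpark on Azure; the ADLS paths, the updated_at column, and the stored watermark value are illustrative assumptions.

# A watermark-based incremental load sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

last_watermark = "2024-01-01 00:00:00"  # in practice, read from a control table

source = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/orders")

# Only rows changed since the last run are carried forward.
changed = source.filter(F.col("updated_at") > F.lit(last_watermark))

changed.write.mode("append").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders"
)

# The next run's watermark is the max timestamp just processed.
new_watermark = changed.agg(F.max("updated_at")).first()[0]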

Posted 2 months ago

Apply

5 - 10 years

35 - 40 Lacs

Pune

Work from Office


Skilled in web technologies and core architecture, such as Java 8+, Spring Boot microservices (RESTful web services), Kafka, and Spark for distributed data, event-driven architecture, and workflow orchestration
Proficient in CI/CD using Jenkins and GitLab
Strong knowledge of JavaScript/React.JS/Angular, JUnit, Swagger
Familiar with tools like Grafana, Splunk, AppDynamics, Enterprise Architect, SonarQube, SQL Developer
Working knowledge of Oracle, PL/SQL, and Cloud/Docker containers
Some experience with Snowflake if possible (not mandatory)

Posted 2 months ago

Apply

7 - 10 years

8 - 12 Lacs

Bengaluru

Hybrid


Notice Period: Immediate to 30 Days

Responsibilities:
Monitors and reacts to alerts in real time and triages issues
Executes runbook instructions to resolve routine problems and user requests
Escalates complex or unresolved issues to L2
Documents new findings to improve runbooks and the knowledge base
Participates in shift handovers to ensure seamless coverage
Participates in ceremonies to share operational status

Technical Skills:
System Administration: Basic troubleshooting, monitoring, and operational support
Cloud Platforms: Familiarity with AWS services (e.g., EC2, S3, Lambda, IAM)
Infrastructure as Code (IaC): Exposure to Terraform, CloudFormation, or similar tools
CI/CD Pipelines: Understanding of Git, Jenkins, GitHub Actions, or similar tools
Linux Fundamentals: Command-line proficiency, scripting, process management
Programming & Data: Python, SQL, and Spark (nice to have, but not mandatory)
Data Engineering Awareness: Understanding of data pipelines, ETL processes, and workflow orchestration (e.g., Airflow)
DevOps Practices: Observability, logging, alerting, and automation

Soft Skills (Most Important):
Strong Communication: Ability to articulate technical concepts clearly to different audiences
Collaboration & Teamwork: Works well with engineering, operations, and business teams
Problem-Solving: Logical thinking and a troubleshooting mindset
Documentation & Knowledge Sharing: Contributes to runbooks and operational guides

Posted 3 months ago

Apply

5 - 9 years

7 - 11 Lacs

Bengaluru

Work from Office


About The Role:
Manage and optimize Control-M and Apache Airflow workflows for production operations.
Design and implement Airflow infrastructure based on industry best practices.
Troubleshoot workflow issues and define new job scheduling processes in Control-M and Airflow DAGs.
Collaborate with system administrators, platform engineers, and networking teams to deploy and maintain Airflow infrastructure.
Develop and maintain Python-based automation scripts to support ETLs, file transfers, and operational tasks.

Primary Skills:
Expertise in Python for automation, scripting, and workflow orchestration.
Strong experience with Apache Airflow, including DAG creation, scheduling, and troubleshooting.
Hands-on experience with BMC Control-M for job scheduling and workload automation.
Proficiency in Linux system administration, including shell scripting and OS-level configurations.
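Since DAG creation and scheduling sit at the core of this role, here is a minimal Airflow DAG sketch. It assumes Airflow 2.x; the dag_id, task bodies, and schedule are illustrative stand-ins, not anything the posting prescribes.

# A minimal Airflow 2.x DAG sketch with two dependent tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull files from the landing zone")  # placeholder task body

def load():
    print("load into the warehouse")           # placeholder task body

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # newer releases prefer the `schedule` argument
    catchup=False,               # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task    # load runs only after extract succeeds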

Posted 3 months ago

Apply

5 - 10 years

10 - 20 Lacs

Chennai

Work from Office


Overview
We are looking for an experienced Python ETL Developer to design, develop, and optimize data pipelines. The ideal candidate should have expertise in Python, PySpark, Airflow, and data processing frameworks, along with the ability to work independently and communicate effectively in English.

Roles & Responsibilities:
Develop and maintain ETL pipelines using Python, NumPy, Pandas, PySpark, and Apache Airflow.
Work with large-scale data processing and transformation workflows.
Optimize and enhance ETL performance and scalability.
Collaborate with data engineers and business teams to ensure efficient data flow.
Troubleshoot and debug ETL-related issues to ensure data integrity and reliability.

Qualifications & Skills:
8+ years of Python experience, with 5+ years dedicated to Python ETL development.
Proficiency in PySpark, Apache Airflow, NumPy, and Pandas.
Experience with SQLAlchemy and FastAPI (an added advantage).
Strong problem-solving skills and the ability to work independently.
Good English communication skills to collaborate with global teams.

Preferred Qualifications:
Experience with cloud-based ETL solutions (AWS, GCP, Azure).
Knowledge of big data technologies like Hadoop, Spark, or Kafka.
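For scale, a Pandas-level ETL of the kind this role describes is often a short extract-transform-load script. The sketch below assumes a local orders.csv with order_date and amount columns; file names and columns are illustrative assumptions.

# A minimal Pandas ETL sketch: extract a CSV, aggregate, load a result file.
import pandas as pd

raw = pd.read_csv("orders.csv", parse_dates=["order_date"])     # extract
daily = (
    raw.dropna(subset=["amount"])                               # transform
       .assign(day=lambda d: d["order_date"].dt.date)
       .groupby("day", as_index=False)["amount"].sum()
       .rename(columns={"amount": "daily_revenue"})
)
daily.to_csv("daily_revenue.csv", index=False)                  # load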

Posted 3 months ago

Apply

5 - 10 years

15 - 30 Lacs

Bengaluru, Hyderabad

Work from Office


Data Pipelines & ETL Processes
Snowflake Data Modeling
Cloud & On-Prem Solutions: Design and manage cloud-based data infrastructure (AWS, Azure, GCP)
Real-Time Data Processing

Posted 3 months ago

Apply