
6882 Performance Tuning Jobs - Page 39

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

13 - 15 Lacs

Hyderabad

Work from Office

We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own the development of complex ETL/ELT data pipelines that process large-scale datasets (a minimal pipeline sketch follows this listing)
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Explore and implement new tools and technologies to enhance the ETL platform and pipeline performance
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Learn the biotech/pharma domain and build highly efficient data pipelines to migrate and deploy complex data across systems
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices across the data engineering lifecycle
- Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions

What we expect of you

Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies
- Strong understanding of data processing and transformation in big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts)
- Strong, demonstrable understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Exposure to APIs and full-stack development
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Bachelor's degree and 2 to 5+ years of Computer Science, IT, or related field experience, OR Master's degree and 1 to 4+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills
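The ETL/ELT pipeline work this posting centers on can be made concrete with a small example. Below is a minimal PySpark sketch, not this employer's actual code: it reads raw JSON, applies basic quality checks, and writes a partitioned Delta table. It assumes a Databricks/Delta Lake runtime; the paths and column names (raw_path, order_id, amount, order_ts) are hypothetical.

```python
# Minimal ETL sketch: raw JSON -> cleaned, partitioned Delta table.
# Assumes a Databricks / Delta Lake runtime; all paths and column
# names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw_path = "/mnt/raw/orders/"          # hypothetical landing zone
curated_path = "/mnt/curated/orders/"  # hypothetical curated zone

orders = (
    spark.read.format("json").load(raw_path)
    # Basic quality checks: drop rows missing required keys, dedupe.
    .dropna(subset=["order_id", "order_ts"])
    .dropDuplicates(["order_id"])
    # Normalize types and derive a partition column.
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
)

(orders.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save(curated_path))
```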

Posted 3 weeks ago

Apply

9.0 - 12.0 years

15 - 19 Lacs

Hyderabad

Work from Office

We are seeking a Data Solutions Architect with deep expertise in biotech/pharma to design, implement, and optimize scalable, high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role focuses on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. The position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.

Roles & Responsibilities:
- Design and implement scalable, modular, and future-proof data architectures that support enterprise data initiatives
- Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across business domains
- Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms
- Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms
- Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities (a small maintenance sketch follows this listing)
- Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms
- Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions
- Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency
- Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies to ensure iterative delivery of enterprise data capabilities
- Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals
- Act as a trusted advisor on emerging data technologies and trends, ensuring the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability

Must-Have Skills:
- Experience in data architecture, enterprise data management, and cloud-based analytics solutions
- Well versed in the biotech/pharma domain, with a record of solving complex problems for it through data strategy
- Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks
- Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization
- Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions
- Deep understanding of data governance, security, metadata management, and access control frameworks
- Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC)
- Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives
- Strong problem-solving, strategic thinking, and technical leadership skills
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Good-to-Have Skills:
- Experience with Data Mesh architectures and federated data governance models
- Certification in cloud data platforms or enterprise architecture frameworks
- Knowledge of AI/ML pipeline integration within enterprise data architectures
- Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting

Education and Professional Certifications:
- 9 to 12 years of experience in Computer Science, IT, or a related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills
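The query-performance responsibility above maps to routine table maintenance on lakehouse platforms. A minimal sketch, assuming a Databricks/Delta Lake runtime and a hypothetical sales.orders table; OPTIMIZE/ZORDER and VACUUM are Delta Lake commands, with Z-ordering standing in for the index tuning an OLTP system would use.

```python
# Query-performance housekeeping on a Delta table.
# Assumes a Databricks / Delta Lake runtime; the table name and
# filter column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# Compact small files and co-locate rows on a frequently filtered
# column (Z-ordering plays the role an index would in an OLTP system).
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table
# (default retention is 7 days).
spark.sql("VACUUM sales.orders")
```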

Posted 3 weeks ago

Apply

4.0 - 9.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance (a metadata-driven ingestion sketch follows this listing)
- Ensure data security, compliance, and role-based access control (RBAC) across data environments
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience, OR Diploma and 10 to 12 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning of big data processing
- Strong understanding of AWS services
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Preferred Qualifications:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
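The metadata-driven architecture mentioned above usually means pipelines whose behavior is described by configuration rather than per-source code. A minimal sketch under that assumption; the SOURCES registry, paths, and bronze schema are invented for illustration, and a real system would read the registry from a control table or catalog.

```python
# Sketch of a metadata-driven ingestion loop: each source is described
# by a config record, and one generic job handles them all.
# The registry, formats, paths, and target schema are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# In practice this registry would live in a control table or catalog.
SOURCES = [
    {"name": "labs",     "fmt": "csv",     "path": "/mnt/raw/labs/",     "keys": ["sample_id"]},
    {"name": "clinical", "fmt": "parquet", "path": "/mnt/raw/clinical/", "keys": ["subject_id", "visit"]},
]

for src in SOURCES:
    df = (spark.read.format(src["fmt"])
          .option("header", "true")   # relevant for CSV; ignored otherwise
          .load(src["path"])
          .dropDuplicates(src["keys"]))
    # One generic sink per source, named from the metadata record.
    df.write.format("delta").mode("overwrite").saveAsTable(f"bronze.{src['name']}")
```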

Posted 3 weeks ago

Apply

4.0 - 6.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Collaborate and communicate effectively with product teams

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 4 to 6 years of experience in Computer Science, IT, or a related field, OR Bachelor's degree with 6 to 8 years of experience, OR Diploma with 10 to 12 years of experience

Functional Skills:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training (a short EDA sketch follows this listing)
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and standard methodologies
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark and various Python packages for data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
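The EDA and feature-engineering work named in the must-have skills can be illustrated with a short pandas/scikit-learn sketch. The dataset, file name, and columns (patients.csv, age, site, outcome) are hypothetical.

```python
# Minimal EDA / feature-engineering sketch; file and columns are invented.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("patients.csv")  # hypothetical dataset

# Quick profiling: distributions and missingness.
print(df.describe())
print(df.isna().mean().sort_values(ascending=False))

# Simple feature engineering: fill gaps, encode a category.
df["age"] = df["age"].fillna(df["age"].median())
df = pd.get_dummies(df, columns=["site"], drop_first=True)

# Split and scale for downstream model training.
X = df.drop(columns=["outcome"])
y = df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train_scaled = StandardScaler().fit_transform(X_train)
```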

Posted 3 weeks ago

Apply

2.0 - 4.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. This role supports the day-to-day operation and maintenance of virtual and physical server systems and infrastructure. The engineer will perform tasks such as system monitoring, security patching, automation, and troubleshooting under guidance from senior team members, contributing to uptime and compliance, with primary responsibility for managing all aspects of VMware/ESXi and Nutanix administration within a 24x7 regulated environment. The role emphasizes reliability, security, and automation, and involves working with virtualization servers and related technologies introduced into the Amgen ecosystem. The ideal candidate will have a consistent record in virtualization and will serve within the global enterprise administration group, supporting both physical and virtual hosting environments, including private and public cloud platforms. The responsibilities span operating systems, databases, middleware, storage, and backup services. The selected candidate will play a key role in configuration and operations management, performing daily systems administration tasks within a complex and dynamic environment. This role demands the ability to drive and deliver against key organizational critical initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Roles & Responsibilities:
- Maintain and support VMware environments including ESXi, vCenter, vSAN, and VMware Tools across enterprise infrastructure
- Perform regular patching and updates of VMware components, operating systems, and firmware in alignment with company change management policies
- Implement automation using PowerShell, Python, or Ansible (a hypothetical Python example follows this listing)
- Monitor systems and proactively address performance bottlenecks
- Own the day-to-day operations and strategic evolution of VMware infrastructure, including VMware ESXi, vSAN, vCenter, Nutanix, and Dell VxRail platforms
- Contribute to Root Cause Analysis (RCA) efforts for critical incidents
- Support business continuity and disaster recovery efforts, including HA and DRS configurations
- Participate in an on-call rotation for 24x7 support
- Collaborate with multi-functional teams on infrastructure needs
- Document system configurations and operational procedures

Basic Qualifications and Experience:
- Bachelor's degree with 2 to 4 years of experience, OR Master's degree with 1 to 2 years of experience, OR Diploma with 5+ years of relevant experience

Functional Skills:

Must-Have Skills:
- VMware certifications (VCP, VCAP, or equivalent)
- Proven track record managing complex virtual infrastructure
- Familiarity with ITIL processes and regulated environments (pharma preferred)
- Expertise in performance tuning, capacity planning, and HA/DR configurations
- Exceptional troubleshooting and problem-solving capabilities
- Hands-on expertise in scripting and automation tools

Good-to-Have Skills:
- Experience with cloud services (AWS, Azure, GCP)
- Experience with ITIL processes and frameworks
- Experience with CI/CD and DevOps practices
- Understanding of configuration management and automation tools (Red Hat Satellite Server, Ansible)
- Strong interpersonal, negotiation, and mentoring skills

Professional Certifications:
- VMware certifications (VCP, VCAP, or equivalent) (mandatory)
- Red Hat Certified Engineer (RHCE) (preferred)
- ITIL Foundation (preferred)

Soft Skills:
- Excellent troubleshooting and analytical abilities
- Strong communication skills, both written and verbal
- Ability to work in a fast-paced environment
- Strong organizational and time management skills
- Problem-solving and critical thinking capabilities
- Team collaboration and knowledge sharing
- Adaptability to changing priorities and technologies
- Ability to follow procedures accurately
- Willingness to learn and grow

Shift Information: This position requires you to be onsite, participate in a 24/5 and weekend on-call rotation, and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
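For the Python automation item above, here is a hedged sketch using pyVmomi, VMware's Python SDK: it connects to vCenter and reports each VM's power state. Host and credentials are placeholders, the SmartConnect keyword arguments vary slightly across pyVmomi versions, and certificate verification is disabled only for the sketch.

```python
# Hypothetical vCenter inventory report using pyVmomi.
# Host/credentials are placeholders; do not disable TLS verification
# in production.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="svc-report",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Recursive view over all virtual machines in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Name, power state, and guest-tools status per VM.
        print(vm.name, vm.runtime.powerState, vm.guest.toolsStatus)
finally:
    Disconnect(si)
```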

Posted 3 weeks ago

Apply

8.0 - 10.0 years

13 - 17 Lacs

Hyderabad

Work from Office

We are seeking an experienced Senior Manager, Data Engineering to lead and scale a strong team of data engineers. This role blends technical depth with strategic oversight and people leadership. The ideal candidate will oversee the execution of data engineering initiatives, collaborate with business analysts and multi-functional teams, manage resource capacity, and ensure delivery aligned to business priorities. In addition to technical competence, the candidate will be adept at managing agile operations and driving continuous improvement.

Roles & Responsibilities:
- Possess strong rapid prototyping skills; quickly translate concepts into working code
- Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies
- Design, develop, and implement robust data architectures and platforms to support business objectives
- Oversee the development and optimization of data pipelines and data integration solutions
- Establish and maintain data governance policies and standards to ensure data quality, security, and compliance
- Architect and manage cloud-based data solutions, leveraging AWS or other preferred platforms
- Lead and motivate a strong data engineering team to deliver exceptional results
- Identify, analyze, and resolve complex data-related challenges
- Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions
- Stay abreast of emerging data technologies and explore opportunities for innovation
- Lead and manage a team of data engineers, ensuring appropriate workload distribution, goal alignment, and performance management
- Work closely with business analysts and product collaborators to prioritize and align engineering output with business objectives

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 8 to 10 years of experience (computer science and engineering preferred; other engineering fields considered), OR Bachelor's degree and 10 to 14 years of experience, OR Diploma and 14 to 18 years of experience
- Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions
- Strong understanding of cloud architecture principles and cost optimization strategies
- Proficient in Python, PySpark, and SQL
- Hands-on experience with big data ETL performance tuning
- Proven ability to lead and develop strong data engineering teams
- Strong problem-solving, analytical, and critical thinking skills to address complex data challenges
- Strong communication skills for collaborating with business and technical teams alike

Preferred Qualifications:
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP, or Azure cloud services

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

40 - 45 Lacs

Hyderabad

Work from Office

We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. The role combines core data engineering skills with specialized knowledge of graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements (a small Cypher example follows this listing)
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing
- Ensure data quality, governance, and security standards are met across all graph data initiatives
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics

What we expect of you

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic
- Strong understanding of graph theory, graph modeling, and traversal algorithms
- Proficiency in workflow orchestration and performance tuning of big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills
- Excellent collaboration and communication skills, with experience working in Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
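The graph query item above can be illustrated with Cypher via the official neo4j Python driver. The URI, credentials, and graph model (Gene and Disease nodes linked by ASSOCIATED_WITH) are assumptions for the sketch.

```python
# Illustrative Cypher traversal using the official neo4j Python driver.
# URI, credentials, and the graph model are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

query = """
MATCH (g:Gene)-[:ASSOCIATED_WITH]->(d:Disease {name: $disease})
RETURN g.symbol AS gene
ORDER BY gene
"""

with driver.session() as session:
    # Parameterized queries avoid string interpolation and enable
    # plan caching on the server.
    for record in session.run(query, disease="asthma"):
        print(record["gene"])

driver.close()
```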

Posted 3 weeks ago

Apply

2.0 - 7.0 years

40 - 45 Lacs

Hyderabad

Work from Office

In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, build, and support data ingestion, transformation, and delivery pipelines across structured and unstructured sources within enterprise data engineering
- Manage and monitor day-to-day operations of the data engineering environment, ensuring high availability, performance, and data integrity
- Collaborate with data architects, data governance, platform engineering, and business teams to support data integration use cases across R&D, Clinical, Regulatory, and Commercial functions
- Integrate data from laboratory systems, clinical platforms, regulatory systems, and third-party data sources into enterprise data repositories
- Implement and maintain metadata capture, data lineage, and data quality checks across pipelines to meet governance and compliance requirements
- Support real-time and batch data flows using technologies such as Databricks, Kafka, Delta Lake, or similar (a streaming sketch follows this listing)
- Work within GxP-aligned environments, ensuring compliance with data privacy, audit, and quality control standards
- Partner with data stewards and business analysts to support self-service data access, reporting, and analytics enablement
- Maintain operational documentation, runbooks, and process automation scripts for continuous improvement of data fabric operations
- Participate in incident resolution and root cause analysis, ensuring timely and effective remediation of data pipeline issues
- Create documentation, playbooks, and best practices for metadata ingestion, data lineage, and catalog usage
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Experience building and maintaining data pipelines that ingest and update metadata into enterprise data catalog platforms in biotech, life sciences, or pharma
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning of big data processing
- Experience in data engineering, data operations, or related roles, with at least 2+ years in life sciences, biotech, or pharmaceutical environments
- Experience with cloud platforms (e.g., AWS, Azure, or GCP) for data pipeline and storage solutions
- Understanding of data governance frameworks, metadata management, and data lineage tracking
- Strong problem-solving skills, attention to detail, and ability to manage multiple priorities in a dynamic environment
- Effective communication and collaboration skills across technical and business stakeholders
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Preferred Qualifications:
- Data engineering experience in the biotechnology or pharma industry
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Basic Qualifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience, OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent verbal and written communication skills
- High degree of professionalism and interpersonal skills
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
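The real-time flow described above (Kafka into Delta) might look like the following Structured Streaming sketch. The broker, topic, schema, and paths are hypothetical, and it assumes a runtime with the Spark Kafka connector and Delta Lake available.

```python
# Sketch: Kafka topic -> Delta table via Spark Structured Streaming.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("lab-events-stream").getOrCreate()

schema = StructType([
    StructField("sample_id", StringType()),
    StructField("result", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "lab-events")
    .load()
    # Kafka values arrive as bytes; parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/lab-events")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/silver/lab_events"))
```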

Posted 3 weeks ago

Apply

2.0 - 7.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.

Roles & Responsibilities:
- Collaborate with data scientists to develop, train, and evaluate machine learning models
- Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring (a minimal tracking sketch follows this listing)
- Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment
- Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency
- Develop and implement monitoring systems to track model performance and identify issues
- Conduct A/B testing and experimentation to optimize model performance
- Work closely with data scientists, engineers, and product teams to deliver ML solutions
- Guide and mentor junior engineers on the team
- Stay updated with the latest trends and advancements

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Doctorate degree and 2 years of Computer Science, Statistics, Data Science, or Machine Learning experience, OR Master's degree and 8 to 10 years, OR Bachelor's degree and 10 to 14 years, OR Diploma and 14 to 18 years

Must-Have Skills:
- Strong foundation in machine learning algorithms and techniques
- Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow) and DevOps tools (e.g., Docker, Kubernetes, CI/CD)
- Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn)
- Outstanding analytical and problem-solving skills; ability to learn quickly
- Excellent communication and interpersonal skills

Good-to-Have Skills:
- Experience with big data technologies (e.g., Spark) and performance tuning in query and data processing
- Experience with data engineering and pipeline development
- Experience with statistical techniques and hypothesis testing, including regression analysis, clustering, and classification
- Knowledge of NLP techniques for text analysis and sentiment analysis
- Experience analyzing time-series data for forecasting and trend analysis
- Familiarity with AWS, Azure, or Google Cloud, and with the Databricks platform for data analytics and MLOps

Professional Certifications:
- Cloud computing and Databricks certifications preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
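The MLOps pipeline responsibilities in this listing can be made concrete with a minimal MLflow tracking sketch: train a model, then log its parameters, metric, and artifact so every run is reproducible. The experiment name and synthetic data are invented; it assumes an MLflow backend (a tracking server or the local ./mlruns directory).

```python
# Minimal MLflow tracking sketch: every run records its params,
# metrics, and model artifact. Experiment name and data are invented.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    # Params, metrics, and the model artifact are versioned per run.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```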

Posted 3 weeks ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Hyderabad

Work from Office

This is a technical role responsible for managing complex server environments, including Oracle and PostgreSQL databases. The role includes planning, implementation, performance tuning, and maintenance of enterprise relational database platforms with a focus on reliability, security, and automation. The ideal candidate will have a consistent track record in database infrastructure operations and a passion for fostering innovation and excellence in the biotechnology industry. Additionally, collaboration with multi-functional and global teams is required to ensure seamless integration and operational excellence. The ideal candidate will have a solid background in database service delivery and operations, coupled with leadership and transformation experience. This role demands the ability to drive and deliver against key organizational pivotal initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an on-site role based in Hyderabad.

Roles & Responsibilities:
- Provide database administration support for development, test, and production environments, including installation, upgrades, performance optimization, and decommissions, as well as managing requests and incidents for Oracle databases
- Administer security access controls following all standard operating procedures
- Recover databases as part of disaster recovery support
- Follow, review, and recommend updates to support documentation as needed
- Recommend automation opportunities for operational work and process improvements (a hypothetical health-check script follows this listing)
- Collaborate with technical leads on database design decisions to ensure applications have high levels of performance, security, scalability, maintainability, and reliability
- Investigate and resolve technical database issues
- Participate in rotational 24x7 on-call support and assist with root cause analysis when applicable
- Perform necessary security patch implementations to ensure ongoing database security
- Understand how to support and use NAS storage and cluster technologies

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications:
- Master's degree and 1 to 3 years of Information Systems and Database Administration experience, OR Bachelor's degree and 3 to 5 years, OR Diploma with 7 to 9 years
- Experience administering Oracle Database and related services and monitoring systems
- Demonstrable experience automating database provisioning, patching, and administration
- Experience with various DB toolsets to review performance, monitor, and resolve issues
- Understanding of ITIL frameworks and standard processes
- Understanding of operating system tools for performance analysis and issue resolution
- Excellent data-driven problem-solving and analytical skills
- Demonstrable experience as part of a high-performance team

Preferred Qualifications:
- Experience supporting Oracle RAC and Data Guard
- Experience working on regulated systems (preferably in the pharmaceutical sector)
- Good communication skills
- Change management knowledge
- Experience using Ansible for automation
- Experience supporting PostgreSQL databases

Professional Certifications:
- Database certifications (OCA) (preferred)

Soft Skills:
- Detail-oriented and organized
- Effective communicator
- Ability to follow procedures accurately
- Willingness to learn and grow

Shift Information: This position is required to be onsite and to participate in a 24/5 and weekend on-call rotation, and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
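As one example of the automation this role recommends, a hypothetical health-check using the python-oracledb driver: it flags tablespaces over a usage threshold. The DSN and credentials are placeholders, and querying dba_tablespace_usage_metrics requires suitable privileges.

```python
# Hypothetical Oracle tablespace health-check via python-oracledb.
# DSN and credentials are placeholders; the view queried requires
# appropriate privileges.
import oracledb

conn = oracledb.connect(user="monitor", password="secret",
                        dsn="dbhost.example.com/ORCLPDB1")
with conn.cursor() as cur:
    # Tablespace usage as a simple capacity signal.
    cur.execute("""
        SELECT tablespace_name, ROUND(used_percent, 1)
        FROM dba_tablespace_usage_metrics
        ORDER BY used_percent DESC
    """)
    for name, pct in cur:
        flag = "ALERT" if pct > 85 else "ok"
        print(f"{flag:5} {name} {pct}%")
conn.close()
```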

Posted 3 weeks ago

Apply

12.0 - 17.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Principal Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Architect and maintain robust, scalable data pipelines using Databricks, Spark, and Delta Lake, enabling efficient batch and real-time processing
- Lead efforts to evaluate, adopt, and integrate emerging technologies and tools that enhance productivity, scalability, and data delivery capabilities
- Drive performance optimization efforts, including Spark tuning, resource utilization, job scheduling, and query improvements (a tuning sketch follows this listing)
- Identify and implement innovative solutions that streamline data ingestion, transformation, lineage tracking, and platform observability
- Build frameworks for metadata-driven data engineering, enabling reusability and consistency across pipelines
- Foster a culture of technical excellence, experimentation, and continuous improvement within the data engineering team
- Collaborate with platform, architecture, analytics, and governance teams to align platform enhancements with enterprise data strategy
- Define and uphold SLOs, monitoring standards, and data quality KPIs for production pipelines and infrastructure
- Partner with cross-functional teams to translate business needs into scalable, governed data products
- Mentor engineers across the team, promoting knowledge sharing and adoption of modern engineering patterns and tools
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning of big data processing
- Strong understanding of AWS services
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- 12 to 17 years of experience in Computer Science, IT, or a related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills
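A few of the Spark tuning levers implied by the performance-optimization responsibility, shown as session settings in a minimal sketch. The values are illustrative only; appropriate numbers depend on data volume and cluster size, and the input path is hypothetical.

```python
# Illustrative Spark tuning levers as session settings.
# Values are examples, not recommendations.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("tuned-job")
    # Adaptive Query Execution: re-plans shuffles at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Shuffle parallelism for the non-adaptive fallback path.
    .config("spark.sql.shuffle.partitions", "400")
    # Broadcast joins for dimension tables under ~64 MB.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate())

df = spark.read.parquet("/mnt/facts/")  # hypothetical input
# Cache only data that is reused across several actions.
df.cache().count()
```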

Posted 3 weeks ago

Apply

8.0 - 10.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms while mentoring and guiding a team of data engineers.

Roles & Responsibilities:
- Possess strong rapid prototyping skills; quickly translate concepts into working code
- Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies
- Design, develop, and implement robust data architectures and platforms to support business objectives
- Oversee the development and optimization of data pipelines and data integration solutions
- Establish and maintain data governance policies and standards to ensure data quality, security, and compliance
- Architect and manage cloud-based data solutions, using AWS or other preferred platforms
- Lead and motivate an impactful data engineering team to deliver exceptional results
- Identify, analyze, and resolve complex data-related challenges
- Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions
- Stay abreast of emerging data technologies and explore opportunities for innovation

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 8 to 10 years of experience (computer science and engineering preferred; other engineering fields considered), OR Bachelor's degree and 10 to 14 years of experience, OR Diploma and 14 to 18 years of experience
- Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions
- Strong understanding of cloud architecture principles and cost optimization strategies
- Proficient in Python, PySpark, and SQL
- Hands-on experience with big data ETL performance tuning
- Proven ability to lead and develop impactful data engineering teams
- Strong problem-solving, analytical, and critical thinking skills to address complex data challenges

Preferred Qualifications:
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP, or Azure cloud services

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office

This is a technical role responsible for managing complex database environments, including SQL Server and MySQL databases. The role includes planning, implementation, performance tuning, and maintenance of enterprise relational database platforms with a focus on reliability, security, and automation. The ideal candidate will have a consistent track record in database infrastructure operations and a passion for fostering innovation and excellence in the biotechnology industry. Additionally, collaboration with multi-functional and global teams is required to ensure seamless integration and operational excellence. The ideal candidate will have a solid background in database service delivery and operations, coupled with leadership and transformation experience. This role demands the ability to drive and deliver against key organizational critical initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an on-site role based in Hyderabad.

Roles & Responsibilities:
- Perform database administration for all database lifecycle stages, including installation, upgrade, optimization, and decommission of SQL Server databases
- Administer security access controls; as needed, recover databases during disaster recovery, develop and update documentation, automate routine operational work, and implement process improvements
- Plan the implementation and configuration of database software and related services to support specific database business requirements (OLTP, decision support, standby DB, replication) while following database security requirements, reliability and performance goals, and standard processes
- Provide database administration support for development, test, and production environments
- Investigate and resolve technical database issues; participate in a 24x7 on-call support rotation and assist or lead root cause analysis reviews as needed
- Provide technical leadership for less experienced personnel, including training on installation and upgrades of RDBMS software, backup/recovery strategies, and high availability configurations
- Develop and document standards, procedures, and work instructions that increase operational productivity
- Perform necessary security patch implementations to ensure ongoing database security
- Understand SAN storage and how to support and provision databases in the AWS and Azure public clouds

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of Information Systems and Database Administration experience, OR Bachelor's degree and 6 to 8 years, OR Diploma with 10 to 12 years
- Experience administering and monitoring SQL Server databases and systems
- Demonstrable experience automating database provisioning, patching, and administration
- Demonstrable experience with MSSQL Always On Availability Groups (AAG) (a hypothetical health probe follows this listing)
- Experience with DB tools to review performance, monitor, and resolve issues
- Understanding of ITIL frameworks and standard processes
- Understanding of operating system tools for performance analysis and issue resolution
- Excellent data-driven problem-solving and analytical skills
- Demonstrable experience as part of a high-performance team

Preferred Qualifications:
- Experience working on regulated systems (preferably in the pharmaceutical sector)
- Superb communication skills
- Organisational change expertise
- Skill in persuasion and negotiation
- Experience using Ansible for automation
- Experience supporting MySQL databases

Soft Skills:
- Partner communication and expectation management
- Crisis management capabilities

Shift Information: This position is required to be onsite and to participate in a 24/5 and weekend on-call rotation, and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
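The Always On Availability Groups experience above often involves scripted health checks. A hypothetical probe using pyodbc against SQL Server's hadr DMVs; the connection string details are placeholders.

```python
# Hypothetical Always On AG health probe via pyodbc, reading
# SQL Server's hadr DMVs. Connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlprod01.example.com;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)
cur = conn.cursor()
cur.execute("""
    SELECT ar.replica_server_name,
           rs.role_desc,
           rs.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states rs
    JOIN sys.availability_replicas ar
      ON rs.replica_id = ar.replica_id
""")
for server, role, health in cur.fetchall():
    # HEALTHY / PARTIALLY_HEALTHY / NOT_HEALTHY per replica.
    print(server, role, health)
conn.close()
```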

Posted 3 weeks ago

Apply

4.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office

This is a technical role responsible for managing complex database environments, including Oracle and PostgreSQL databases. The role covers planning, implementation, performance tuning, and maintenance of enterprise relational database platforms with a focus on reliability, security, and automation. The ideal candidate will have a proven track record in database infrastructure operations, a passion for fostering innovation and excellence in the biotechnology industry, and a solid background in database service delivery and operations, coupled with leadership and transformation experience. Collaboration with multi-functional and global teams is required to ensure seamless integration and operational excellence. This role demands the ability to drive and deliver against key organizational initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an on-site role based in Hyderabad.
Provide database administration support for development, test, and production environments, including installation, upgrades, performance optimization, and decommissions, as well as managing requests and incidents for Oracle databases
Administer security access controls following standard operating procedures
Recover databases in disaster recovery scenarios
Follow, review, and recommend updates to support documentation as needed
Recommend automation opportunities for operational work and process improvements
Collaborate with technical leads on database design decisions to ensure applications have high levels of performance, security, scalability, maintainability, and reliability
Investigate and resolve technical database issues; participate in a rotational 24x7 on-call support schedule and assist with root cause analysis when applicable
Perform necessary security patch implementations to ensure ongoing database security
Understanding of how to support and use NAS storage and cluster technologies
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree and 4 to 6 years of Information Systems and Database Administration experience OR Bachelor's degree and 6 to 8 years of Information Systems and Database Administration experience OR Diploma and 10 to 12 years of Information Systems and Database Administration experience
Experience administering Oracle Database, related services, and monitoring systems
Demonstrable experience automating database provisioning, patching, and administration
Experience with various toolsets to monitor and verify database and server performance metrics and to resolve server and storage issues
Understanding of ITIL frameworks and standard processes
Understanding of operating system tools for performance analysis and troubleshooting
Excellent data-driven problem-solving and analytical skills
Demonstrable experience as part of a high-performance team
Preferred Qualifications:
Experience provisioning and supporting Oracle RAC and Data Guard (a minimal status-check sketch follows below)
Experience working on regulated systems (preferably in the pharmaceutical sector)
Change management knowledge
Experience leveraging Ansible for automation
Experience supporting PostgreSQL databases
Professional Certifications: Database certifications (OCP) preferred, not required
Soft Skills:
Superb communication skills
Partner negotiation and expectation management
Crisis management capabilities
Shift Information: This position is required to be onsite and to participate in a 24/5 and weekend on-call rotation, and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
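To make the Data Guard experience concrete, here is a minimal status-check sketch using the python-oracledb driver. Connection details are placeholders, and the queries are a basic illustration rather than an operational runbook.

```python
import oracledb  # assumes: pip install oracledb

# Placeholder credentials/DSN -- substitute your own environment.
conn = oracledb.connect(user="dba_ro", password="***",
                        dsn="dbhost.example.com:1521/ORCLPDB1")

with conn.cursor() as cur:
    # v$database reports the role this instance currently plays.
    cur.execute(
        "SELECT database_role, protection_mode, open_mode FROM v$database"
    )
    role, protection, open_mode = cur.fetchone()
    print(f"role={role} protection={protection} open_mode={open_mode}")

    # v$dataguard_stats exposes apply/transport lag on a standby.
    cur.execute(
        "SELECT name, value FROM v$dataguard_stats "
        "WHERE name IN ('apply lag', 'transport lag')"
    )
    for name, value in cur:
        print(f"{name}: {value}")
```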

Posted 3 weeks ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for the development and maintenance of software in support of target/biomarker discovery at Amgen.
Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
Optimize large datasets for query performance
Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Maintain documentation of processes, systems, and solutions
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The role requires proficiency in scientific software development (e.g., Python, R, R Shiny, Plotly Dash) and some knowledge of CI/CD processes and cloud computing technologies (e.g., AWS, Google Cloud).
Basic Qualifications: Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or related field experience
Preferred Qualifications: 5+ years of experience in designing and supporting biopharma scientific research data analytics software platforms
Functional Skills:
Must-Have Skills:
Proficiency with SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks (a short pytest sketch follows below)
Hands-on experience with big data technologies and platforms, such as Databricks (or equivalent) and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Excellent problem-solving skills and the ability to work with large, complex datasets
Good-to-Have Skills:
Experience with Git, CI/CD, and the software development lifecycle
Experience with SQL and relational databases (e.g., PostgreSQL, MySQL, Oracle) or Databricks
Experience with cloud computing platforms and infrastructure (AWS preferred)
Experience using and adopting an Agile framework
A passion for tackling complex challenges in drug discovery with technology and data
Basic understanding of data modeling, data warehousing, and data integration concepts
Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
Experience writing and maintaining technical documentation in Confluence
Professional Certifications: Databricks Certified Data Engineer Professional preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
High degree of initiative and self-motivation
Demonstrated presentation skills
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
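For illustration, this is the kind of pytest check such a role might write around a small PySpark transformation. It is a minimal sketch assuming pyspark and pytest are installed; the function and column names are invented for the example.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def normalize_gene_symbols(df):
    """Hypothetical transform: trim and upper-case a gene_symbol column."""
    return df.withColumn("gene_symbol", F.upper(F.trim("gene_symbol")))

@pytest.fixture(scope="module")
def spark():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield spark
    spark.stop()

def test_normalize_gene_symbols(spark):
    df = spark.createDataFrame([(" brca1 ",), ("TP53",)], ["gene_symbol"])
    result = [r.gene_symbol for r in normalize_gene_symbols(df).collect()]
    assert result == ["BRCA1", "TP53"]
```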

Posted 3 weeks ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Hyderabad

Work from Office

We are seeking a Data Solutions Architect to design, implement, and optimize scalable, high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role focuses on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.
Roles & Responsibilities:
Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics
Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains
Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms
Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms
Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities (a small partitioning-and-join sketch follows below)
Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms
Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions
Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency
Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities
Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals
Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability
What we expect of you
Must-Have Skills:
Experience in data architecture, enterprise data management, and cloud-based analytics solutions
Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks
Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization
Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions
Deep understanding of data governance, security, metadata management, and access control frameworks
Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC)
Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives
Strong problem-solving, strategic thinking, and technical leadership skills
Experienced with SQL/NoSQL databases and vector databases for large language models
Experienced with data modeling and performance tuning for both OLAP and OLTP databases
Experienced with Apache Spark
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Good-to-Have Skills:
Deep expertise in the Biotech and Pharma industries
Experience with Data Mesh architectures and federated data governance models
Certification in cloud data platforms or enterprise architecture frameworks
Knowledge of AI/ML pipeline integration within enterprise data architectures
Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting
Education and Professional Certifications
Doctorate degree and 6 to 8+ years of experience in Computer Science, IT, or a related field OR Master's degree and 8 to 10+ years of experience in Computer Science, IT, or a related field OR Bachelor's degree and 10 to 12+ years of experience in Computer Science, IT, or a related field
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly, be organized and detail-oriented
Strong presentation and public speaking skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
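As one concrete example of the storage and query optimizations named above, the PySpark sketch below partitions a large table on write and broadcasts a small dimension on join. Paths and column names are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("layout-demo").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
events = spark.read.parquet("s3://example-bucket/raw/events/")
sites = spark.read.parquet("s3://example-bucket/ref/sites/")

# Partitioning on a common filter column lets later reads prune files.
(events
 .withColumn("event_date", F.to_date("event_ts"))
 .write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-bucket/curated/events/"))

# Broadcasting the small side avoids shuffling the large table.
enriched = events.join(F.broadcast(sites), on="site_id", how="left")
enriched.explain()  # inspect the plan: expect BroadcastHashJoin
```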

Posted 3 weeks ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Hyderabad

Work from Office

We are seeking a Data Solutions Architect with deep R&D expertise in Biotech/Pharma to design, implement, and optimize scalable, high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role focuses on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with R&D and engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.
Roles & Responsibilities:
Design and implement scalable, modular, and future-proof data architectures that support enterprise R&D initiatives
Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains
Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms
Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms (a minimal orchestration sketch follows below)
Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities
Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms
Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions
Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency
Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities
Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals
Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability
What we expect of you
Must-Have Skills:
Experience in data architecture, enterprise data management, and cloud-based analytics solutions
Well versed in the R&D domain of the Biotech/Pharma industry, with a track record of solving complex problems there through data strategy
Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks
Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization
Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions
Deep understanding of data governance, security, metadata management, and access control frameworks
Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC)
Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives
Strong problem-solving, strategic thinking, and technical leadership skills
Experienced with SQL/NoSQL databases and vector databases for large language models
Experienced with data modeling and performance tuning for both OLAP and OLTP databases
Experienced with Apache Spark and Apache Airflow
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Good-to-Have Skills:
Experience with Data Mesh architectures and federated data governance models
Certification in cloud data platforms or enterprise architecture frameworks
Knowledge of AI/ML pipeline integration within enterprise data architectures
Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting
Education and Professional Certifications
Doctorate degree and 3 to 5+ years of experience in Computer Science, IT, or a related field OR Master's degree and 6 to 8+ years of experience in Computer Science, IT, or a related field OR Bachelor's degree and 8 to 10+ years of experience in Computer Science, IT, or a related field
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly, be organized and detail-oriented
Strong presentation and public speaking skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
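Since Apache Airflow appears in the skills list, here is a minimal batch-pipeline DAG sketch (Airflow 2.x style). The task names and stage callables are hypothetical placeholders for the extract/transform/load stages described above.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical stage functions -- real pipelines would call Spark jobs, APIs, etc.
def extract(): ...
def transform(): ...
def load(): ...

with DAG(
    dag_id="example_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```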

Posted 3 weeks ago

Apply

5.0 - 9.0 years

15 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be a key contributor to the Clinical Trial Data & Analytics (CTDA) team, driving the development of robust data pipelines and platforms to enable advanced analytics and decision-making. Operating within a SAFe Agile product team, this role ensures system performance, minimizes downtime through automation, and supports the creation of actionable insights from clinical trial data. Collaborating with product owners, architects, and engineers, the Data Engineer will implement and enhance analytics capabilities. Ideal candidates are diligent professionals with strong technical skills, a problem-solving approach, and a passion for advancing clinical operations through data engineering and analytics.
Roles & Responsibilities:
Proficiency in developing interactive dashboards and visualizations using Spotfire, Power BI, and Tableau to provide actionable insights
Expertise in creating dynamic reports and visualizations that support data-driven decision-making and meet collaborator requirements
Ability to analyze complex datasets and translate them into meaningful KPIs, metrics, and trends (a small KPI-aggregation sketch follows below)
Strong knowledge of data visualization standard methodologies, including user-centric design, accessibility, and responsiveness
Experience in integrating data from multiple sources (databases, APIs, data warehouses) into visualizations
Skilled in performance tuning of dashboards and reports to optimize responsiveness and usability
Ability to work with end users to define reporting requirements, develop prototypes, and implement final solutions
Familiarity with integrating real-time and predictive analytics within dashboards to enhance forecasting capabilities
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's or Bachelor's degree and 5 to 9 years of experience in Computer Science, IT, or a related field
Must-Have Skills:
Proven hands-on experience with cloud platforms such as AWS, Azure, and GCP
Proficiency in Python, PySpark, and SQL, with practical experience in ETL performance tuning
Development knowledge of Databricks
Strong analytical and problem-solving skills to tackle complex data challenges, with expertise in analytical tools like Spotfire, Power BI, and Tableau
Preferred Qualifications:
Good-to-Have Skills:
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Familiarity with SQL/NoSQL databases and vector databases for large language models
Familiarity with prompt engineering and model fine-tuning
Professional Certifications
AWS Certified Data Engineer (preferred)
Databricks Certification (preferred)
Any SAFe Agile certification (preferred)
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
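To sketch the KPI work this listing describes: a PySpark aggregation that turns raw clinical-trial visit rows into dashboard-ready metrics. The table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ctda-kpis").getOrCreate()

# Hypothetical source: one row per subject visit across trial sites.
visits = spark.read.table("clinical.subject_visits")

# Dashboard-ready KPIs per study and month: enrollment, sites, visit delays.
kpis = (visits
        .withColumn("month", F.date_trunc("month", "visit_date"))
        .groupBy("study_id", "month")
        .agg(F.countDistinct("subject_id").alias("enrolled_subjects"),
             F.countDistinct("site_id").alias("active_sites"),
             F.avg("days_late").alias("avg_visit_delay_days")))

# Persist for the BI layer (Spotfire/Power BI/Tableau) to consume.
kpis.write.mode("overwrite").saveAsTable("analytics.ctda_monthly_kpis")
```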

Posted 3 weeks ago

Apply

1.0 - 3.0 years

16 - 18 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a crucial team member that assists in the design and development of the data pipeline
Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal quality-check sketch follows below)
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT, or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience
Preferred Qualifications:
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
Excellent problem-solving skills and the ability to work with large, complex datasets
Solid understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements
Good-to-Have Skills:
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Good understanding of data modeling, data warehousing, and data integration concepts
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Professional Certifications
Certified Data Engineer / Data Analyst (preferred, on Databricks or cloud environments)
Soft Skills:
Excellent critical-thinking and problem-solving skills
Good communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
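A minimal sketch of the "ensure data quality within ETL" idea, assuming PySpark; the checks, paths, and column names are illustrative, not a prescribed framework:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-quality").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/landing/orders/")

# Quality gate 1: required keys must be present.
null_keys = raw.filter(F.col("order_id").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows missing order_id; aborting load")

# Quality gate 2: no duplicate business keys after deduplication.
clean = raw.dropDuplicates(["order_id"])

# Quality gate 3: row-count sanity check against the source.
assert clean.count() <= raw.count()

clean.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```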

Posted 3 weeks ago

Apply

1.0 - 3.0 years

14 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT, or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL)
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores (a short windowed-deduplication sketch follows below)
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms
Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure as code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, and cloud data platforms
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Professional Certifications:
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
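As a taste of the SQL-for-transformation skill above, a common pattern is keeping only the latest record per key with a window function, sketched here via Spark SQL with invented table and column names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-dedupe").getOrCreate()

# Hypothetical staging table: multiple versions per customer_id.
spark.sql("""
    CREATE OR REPLACE TEMP VIEW latest_customers AS
    SELECT * FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM staging.customers
    )
    WHERE rn = 1
""")

spark.table("latest_customers").show(5)
```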

Posted 3 weeks ago

Apply

6.0 - 9.0 years

15 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Basic Qualifications: Minimum experience of 6 to 9 years
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL)
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms (a brief partition-pruning sketch follows below)
Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure as code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, and cloud data platforms
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Professional Certifications:
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
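One way the query-optimization skill above shows up in practice: filtering on a partition column so Spark reads only the matching files, verified with explain(). Paths and columns are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pruning-demo").getOrCreate()

# Assumes the table was written with .partitionBy("event_date").
events = spark.read.parquet("s3://example-bucket/curated/events/")

# Filtering on the partition column lets Spark prune entire directories
# instead of scanning the full dataset.
recent = events.filter(F.col("event_date") >= "2024-01-01")

# PartitionFilters in the physical plan confirm pruning is happening.
recent.explain(True)
```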

Posted 3 weeks ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Hyderabad

Work from Office

The role is responsible for the performance monitoring, maintenance, and reliable operation of BI platforms, BI servers, and databases. This involves managing BI servers and user administration across different environments, ensuring data is stored and retrieved efficiently, safeguarding sensitive information, and ensuring the uptime, performance, and security of IT infrastructure and software. We are seeking a skilled BI Platform Administrator to manage, maintain, and optimize our enterprise Power BI and Tableau platforms. The ideal candidate will ensure seamless performance, governance, user access, platform upgrades, troubleshooting, and best practices across our BI environments.
Roles & Responsibilities:
Administer and maintain Power BI Service, Power BI Report Server, and Tableau Server/Cloud on cloud platforms (AWS, Azure, or GCP); AWS experience preferred
Configure, monitor, and optimize performance, capacity, and availability of BI platforms
Set up and manage user roles, permissions, and security policies
Manage BI platform upgrades, patches, and migrations
Monitor scheduled data refreshes and troubleshoot failures
Implement governance frameworks to ensure compliance with data policies
Collaborate with BI developers, data engineers, and business users for efficient platform usage
Automate routine administrative tasks using scripts (PowerShell, Python, etc.); a minimal REST-API sketch follows below
Create and maintain documentation of configurations and operational procedures
Install, configure, and maintain BI tools on different operating systems, servers, and applications to ensure their reliability and performance
Monitor platform performance and uptime, addressing any issues promptly to prevent service interruptions
Implement and maintain security measures to protect platforms from unauthorized access, vulnerabilities, and other threats
Manage backup procedures and ensure data is securely backed up and recoverable in case of system failures
Provide technical support to users, troubleshooting and resolving issues related to system access, performance, and software
Apply operating system updates, patches, and configuration changes as necessary
Maintain detailed documentation of platform configurations, procedures, and change management
Work closely with network administrators, database administrators, and other IT professionals to ensure that platforms are integrated and functioning optimally
Install, configure, and maintain BI database management platforms, ensuring services are reliable and perform optimally
Monitor and optimize database performance, including query tuning, indexing, and resource allocation
Maintain detailed documentation of platform configurations, procedures, and policies
Work closely with developers, data engineers, system administrators, and other IT staff to support database-related needs and ensure optimal platform performance
Basic Qualifications and Experience:
Overall, 5+ years of experience in BI platform administration preferred
3+ years of experience administering Power BI Service and/or Power BI Report Server
2+ years of experience administering Tableau Server or Tableau Cloud
Strong knowledge of Active Directory, SSO/SAML, and role-based access control (RBAC)
Experience with platform monitoring and troubleshooting (Power BI Gateway logs, Tableau logs, etc.)
Scripting experience (e.g., PowerShell, DAX, or Python) for automation and monitoring
Strong understanding of data governance, row-level security, and compliance practices
Experience working with enterprise data sources (SQL Server, Snowflake, Oracle, etc.)
Familiarity with capacity planning, load balancing, and scaling strategies for BI tools
Functional Skills:
Should Have:
Knowledge of Power BI Premium capacity management and Tableau resource management
Experience integrating BI platforms with CI/CD pipelines and DevOps tools
Hands-on experience in user adoption tracking, audit logging, and license management
Ability to conduct health checks and implement performance tuning recommendations
Understanding of multi-tenant environments and large-scale deployments
Good to Have:
Experience with the Power BI REST API or Tableau REST API for automation
Familiarity with AWS services and/or their Azure/GCP equivalents
Background in data visualization or report development for better user collaboration
Exposure to other BI tools (e.g., Looker, Qlik, MicroStrategy)
Knowledge of ITIL practices or experience working in a ticket-based support environment
Experience in a regulated industry (finance, healthcare, etc.) with strong compliance requirements
Education & Experience:
Master's degree and 1-2+ years of experience in Business, Engineering, IT, or a related field OR Bachelor's degree and 2-3+ years of experience in Business, Engineering, IT, or a related field OR Diploma and 5+ years of experience in Business, Engineering, IT, or a related field
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
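For flavor, an automation sketch against the Power BI REST API (which the listing names): listing workspaces with Python's requests library. Token acquisition is out of scope and stubbed as a placeholder; this is illustrative, not an operational script.

```python
import requests

# Placeholder: obtain an Azure AD access token for the Power BI API
# (e.g., via msal's client-credentials flow) -- omitted here.
ACCESS_TOKEN = "<aad-access-token>"

# Documented endpoint: list workspaces (groups) visible to the caller.
resp = requests.get(
    "https://api.powerbi.com/v1.0/myorg/groups",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for ws in resp.json().get("value", []):
    print(ws["id"], ws["name"])
```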

Posted 3 weeks ago

Apply

0.0 - 3.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates strong technical skills, has experience with big data technologies, and understands data architecture and ETL processes.
Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
Optimize large datasets for query performance
Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions (a small data lake validation sketch follows below)
Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Maintain documentation of processes, systems, and solutions
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor's degree and 0 to 3 years of Computer Science, IT, or related field experience OR Diploma and 4 to 7 years of Computer Science, IT, or related field experience
Preferred Qualifications: 1+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms)
Functional Skills:
Must-Have Skills:
Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Excellent problem-solving skills and the ability to work with large, complex datasets
Good-to-Have Skills:
A passion for tackling complex challenges in drug discovery with technology and data
Strong understanding of data modeling, data warehousing, and data integration concepts
Strong experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
Knowledge of cloud data platforms (AWS preferred)
Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
Experience writing and maintaining technical documentation in Confluence
Professional Certifications: Databricks Certified Data Engineer Professional preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
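A minimal sketch of an AWS-side data lake check such a role might script, assuming boto3 and an illustrative bucket/prefix (not from the posting):

```python
import boto3  # assumes: pip install boto3 + configured AWS credentials

s3 = boto3.client("s3")

# Hypothetical landing zone for instrument output.
bucket, prefix = "example-research-lake", "landing/assay_results/"

# Page through objects and flag zero-byte files before ingestion.
paginator = s3.get_paginator("list_objects_v2")
total, empties = 0, []
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        total += 1
        if obj["Size"] == 0:
            empties.append(obj["Key"])

print(f"{total} objects under s3://{bucket}/{prefix}; {len(empties)} empty")
```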

Posted 3 weeks ago

Apply

0.0 - 3.0 years

13 - 15 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor's degree and 0 to 3 years of Computer Science, IT, or related field experience OR Diploma and 4 to 7 years of Computer Science, IT, or related field experience
Preferred Qualifications:
Functional Skills:
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), AWS, Redshift, Snowflake, workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Good-to-Have Skills:
Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores); a tiny RDF query sketch follows below
Understanding of data modeling, data warehousing, and data integration concepts
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Experience with software engineering best practices, including but not limited to version control, infrastructure as code, CI/CD, and automated testing
Professional Certifications:
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
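To illustrate the graph-database item, a tiny RDF query using the rdflib package; the triples and vocabulary are invented for the example, and a production triplestore (MarkLogic, Stardog, etc.) would instead be queried over its own SPARQL endpoint.

```python
from rdflib import Graph

# A few illustrative triples in Turtle syntax.
data = """
@prefix ex: <http://example.org/> .
ex:aspirin   ex:targets ex:COX1 .
ex:aspirin   ex:targets ex:COX2 .
ex:celecoxib ex:targets ex:COX2 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# SPARQL: which compounds target COX2?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?compound WHERE { ?compound ex:targets ex:COX2 . }
""")
for row in results:
    print(row.compound)
```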

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Gurugram

Work from Office

Job Profile Summary
Create, maintain, and use Standard Operating Procedures (SOPs) for migration execution, and ensure the long-term technical viability and optimization of production deployments and administration. Engage, consult, and deliver based on interactive customer communications, streamlining project deliverables and scope of work.
Capacity Planning: Forecast future database growth based on usage trends and plan hardware and storage requirements accordingly to ensure scalability and optimal performance (a simple trend-forecast sketch follows below)
Plan, create, manage, and deploy effective high availability and disaster recovery strategies/runbooks
Patch Management and Upgrades: Plan and execute database software upgrades, patches, and service packs
Troubleshooting and Issue Resolution: Investigate and resolve complex database-related issues, including data corruption, performance problems, and connectivity challenges
Automation and Scripting: Contribute to automation scripts and tools to streamline repetitive tasks, improve efficiency, and reduce the risk of human error
Monitoring and Alerting: Set up monitoring and alerting systems to proactively identify and address potential database issues before they become critical
Performance Analysis and Reporting: Generate performance reports and analysis for stakeholders and management to provide insight into the health and performance of the database environment
Documentation: Maintain up-to-date documentation of database configurations, procedures, and troubleshooting steps
Ticket Handling: Work to resolve incidents, changes, and service requests under the agreed client SLA
Problem Management: Responsible for resolving problem tickets by creating detailed RCA reports
Cloud: Understand cloud basics and perform duties such as security management, storage management, backup vaults, key vaults, and server/DB monitoring
Cost Optimization: Compute and workload analysis, license enhancements and features
Skills List
Experience working in automation with Python/Shell/PLSQL
Ability to deploy, manage, and troubleshoot HA/DR configurations in the Oracle tech bucket (RAC, Data Guard, RMAN, Data Pump, ASM, GoldenGate)
Experience troubleshooting performance issues, with the ability to provide development teams with complete analysis
Hands-on experience setting up standbys using Oracle Data Guard and configuring DG Broker
Proficient skills in SQL Server architecture, installation and configuration, performance tuning, high availability and disaster recovery (HADR), and monitoring and troubleshooting
Database Migrations and Upgrades: Experience planning and executing database migrations and upgrades, including version compatibility, testing, and minimizing downtime
Ability to deploy, manage, and troubleshoot HA/DR configurations in one of the following tech buckets: SQL Server (Always On, FCI, log shipping, replication); MySQL or PostgreSQL (master-slave replication, InnoDB ClusterSet)
Education
High school diploma or equivalent required. Bachelor's degree in Computer Science, Computer Information Systems, Management Information Systems, or a directly related field
Certifications List
Database Specialty Certifications in AWS/Azure
Cloud Associate/Professional Certification
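As a toy illustration of the capacity-planning duty above, a linear growth forecast over monthly database size samples, using only the Python standard library (the figures are made up):

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical monthly size samples in GB (month index, observed size).
months = [1, 2, 3, 4, 5, 6]
size_gb = [410, 438, 463, 495, 521, 552]

slope, intercept = linear_regression(months, size_gb)

# Project a year ahead to drive storage procurement decisions.
horizon = 18
projected = slope * horizon + intercept
print(f"growth ~{slope:.1f} GB/month; projected month {horizon}: {projected:.0f} GB")
```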

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies