2.0 - 5.0 years
6 - 9 Lacs
Hyderabad
Work from Office
We are seeking an MDM Admin/Infrastructure Resource with 2-5 years of experience to support and maintain our enterprise Master Data Management (MDM) platforms using Informatica MDM and IDQ. This role is critical in ensuring the reliability, availability, and performance of master data solutions across the organization, using modern tools like Databricks and AWS for automation, backup, recovery, and preventive maintenance. The ideal candidate will have strong experience in server maintenance, data recovery, data backup, and MDM software support.

Roles & Responsibilities:
- Administer and maintain customer, product, and study master data using Informatica MDM and IDQ solutions.
- Perform data recovery and data backup processes to ensure master data integrity.
- Conduct server maintenance and preventive maintenance activities to ensure system reliability.
- Leverage Unix/Linux, Python, and Databricks for scalable data processing and automation.
- Collaborate with business and data engineering teams on continuous improvement of MDM solutions.
- Implement automation for data backup, recovery, and preventive maintenance.
- Utilize AWS cloud services for MDM-related data storage and compute.
- Support MDM software maintenance and upgrades.
- Track and manage data issues in JIRA and document processes in Confluence.
- Apply Life Sciences/Pharma industry context to ensure data standards and compliance.
Functional Skills:

Must-Have Skills:
- Strong experience administering and maintaining Informatica MDM and IDQ configurations
- Strong experience in data recovery and data backup processes
- Strong experience in server maintenance and preventive maintenance activities (hands-on Linux/Unix and server upgrade experience)
- Expertise in handling data backups, server backups, MDM product upgrades, and server upgrades
- Good understanding of, and hands-on experience with, access control
- Experience with IDQ, data modeling, and approval workflows (DCR)
- Advanced SQL expertise and data wrangling
- Knowledge of MDM, data governance, stewardship, and profiling practices

Good-to-Have Skills:
- Familiarity with Databricks and AWS architecture
- Background in Life Sciences/Pharma industries
- Familiarity with project tools like JIRA and Confluence
- Basics of data engineering concepts

Basic Qualifications and Experience:
- Master's degree with 1-3 years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 2-5 years of experience in Business, Engineering, IT, or a related field, OR
- Diploma with 6-8 years of experience in Business, Engineering, IT, or a related field

Professional Certifications (preferred):
- Any ETL certification (e.g., Informatica)
- Any data analysis certification (SQL)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and meet project timelines, with the ability to bring consistency and quality assurance across projects
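The backup and preventive-maintenance automation this role owns is usually scripted. As a minimal, hypothetical sketch (the function name and retention policy are illustrative, not an actual Amgen process), a Python helper that decides which daily backups fall outside a retention window:

```python
from datetime import date, timedelta

def backups_to_prune(backup_dates, keep_days=7, today=None):
    """Return backup dates older than the retention window, oldest first.

    backup_dates: iterable of datetime.date objects, one per daily backup.
    keep_days:    number of most recent days to retain (hypothetical policy).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=keep_days)
    # Anything strictly older than the cutoff is a candidate for deletion.
    return sorted(d for d in backup_dates if d < cutoff)
```

A scheduled job would feed this the dates parsed from backup file names and delete (or archive) whatever it returns; the pruning itself is deliberately left out of the sketch.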
Posted 1 week ago
10.0 - 15.0 years
9 - 14 Lacs
Hyderabad
Work from Office
We are seeking an accomplished and visionary Data Scientist/GenAI Lead to join Amgen's Enterprise Data Management team. As MDM Data Science Manager, you will lead the design, development, and deployment of Generative AI and ML models to power data-driven decisions across business domains. This role is ideal for an AI practitioner who thrives in a collaborative environment and brings a strategic mindset to applying advanced AI techniques to real-world problems. To succeed in this role, the candidate must have strong AI/ML, Data Science, and GenAI experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have AI/ML, data science, and GenAI experience with technologies such as PySpark, PyTorch, TensorFlow, LLMs, Autogen, Hugging Face, vector databases, embeddings, and RAG, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Drive development of enterprise-level GenAI applications using LLM frameworks such as Langchain, Autogen, and Hugging Face.
- Architect intelligent pipelines using PySpark, TensorFlow, and PyTorch within Databricks and AWS environments.
- Implement embedding models and manage vector stores for retrieval-augmented generation (RAG) solutions.
- Integrate and leverage MDM platforms like Informatica and Reltio to supply high-quality structured data to ML systems.
- Utilize SQL and Python for data engineering, data wrangling, and pipeline automation.
- Build scalable APIs and services to serve GenAI models in production.
- Lead cross-functional collaboration with data scientists, engineers, and product teams to scope, design, and deploy AI-powered systems.
- Ensure model governance, version control, and auditability aligned with regulatory and compliance expectations.
Basic Qualifications and Experience:
- Master's degree with 8-10 years of experience in Data Science, Artificial Intelligence, Computer Science, or related fields, OR
- Bachelor's degree with 10-14 years of experience in Data Science, Artificial Intelligence, Computer Science, or related fields, OR
- Diploma with 14-16 years of hands-on experience in Data Science, AI/ML technologies, or related technical domains

Functional Skills:

Must-Have Skills:
- 10+ years of experience in AI/ML or Data Science roles, including designing and implementing GenAI solutions
- Extensive hands-on experience with LLM frameworks and tools such as Langchain, Autogen, Hugging Face, OpenAI APIs, and embedding models
- Strong programming background in Python and PySpark, with experience building scalable solutions using TensorFlow, PyTorch, and scikit-learn
- Proven track record of building and deploying AI/ML applications in cloud environments such as AWS
- Expertise in developing APIs, automation pipelines, and serving GenAI models using frameworks like Django, FastAPI, and Databricks
- Solid experience integrating and managing MDM tools (Informatica/Reltio) and applying data governance best practices
- Ability to guide the team on development activities and lead solution discussions
- Core technical capabilities in the GenAI and Data Science space

Good-to-Have Skills:
- Prior experience in data modeling, ETL development, and data profiling to support AI/ML workflows
- Working knowledge of Life Sciences or Pharma industry standards and regulatory considerations
- Proficiency in tools like JIRA and Confluence for Agile delivery and project collaboration
- Familiarity with MongoDB, vector stores, and modern architecture principles for scalable GenAI applications

Professional Certifications:
- Any ETL certification (e.g., Informatica)
- Any data analysis certification (SQL)
- Any cloud certification (AWS or Azure)
- Data Science and ML certification

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
- Ability to work effectively with global, virtual teams
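The retrieval step of a RAG solution like the one described above can be sketched without any LLM framework: embed documents, then rank them by cosine similarity to the query embedding. A toy version with hand-written 3-dimensional vectors standing in for a real embedding model and vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    """Return the ids of the top_k most similar documents in the store.

    store: list of (doc_id, embedding) pairs -- a stand-in for a real
    vector database such as the VectorStores named in the posting.
    """
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]
```

In a production RAG pipeline the retrieved documents would then be stuffed into the LLM prompt; frameworks like Langchain wrap exactly this retrieve-then-generate loop.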
Posted 1 week ago
10.0 - 14.0 years
6 - 9 Lacs
Hyderabad
Work from Office
We are seeking an experienced MDM Manager with 10-14 years of experience to lead strategic development and operations of our Master Data Management (MDM) platforms, with hands-on experience in Informatica or Reltio. This role involves managing a team of data engineers, architects, and quality experts to deliver high-performance, scalable, and governed MDM solutions that align with enterprise data strategy. To succeed in this role, the candidate must have strong MDM experience along with Data Governance, DQ, and Data Cataloging implementation knowledge; candidates must have a minimum of 6-8 years of core MDM technical experience (within a total of 10-14 years).

Roles & Responsibilities:
- Lead the implementation and optimization of MDM solutions using Informatica or Reltio platforms.
- Define and drive enterprise-wide MDM architecture, including IDQ, data stewardship, and metadata workflows.
- Own match/merge and survivorship strategy and implementation.
- Design and deliver MDM processes and data integrations using Unix, Python, and SQL.
- Collaborate with the backend data engineering team and the frontend custom UI team for strong integrations and a seamless user experience.
- Manage cloud-based infrastructure using AWS and Databricks to ensure scalability and performance.
- Coordinate with business and IT stakeholders to align MDM capabilities with organizational goals.
- Establish data quality metrics and monitor compliance using automated profiling and validation tools.
- Promote data governance and contribute to enterprise data modeling and approval workflows (DCRs).
- Ensure data integrity, lineage, and traceability across MDM pipelines and solutions.
- Provide mentorship and technical leadership to junior team members and ensure project delivery timelines.
- Lead custom UI design for a better data stewardship user experience.

Basic Qualifications and Experience:
- Master's degree with 8-10 years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 10-14 years of experience in Business, Engineering, IT, or a related field, OR
- Diploma with 14-16 years of experience in Business, Engineering, IT, or a related field

Functional Skills:

Must-Have Skills:
- Deep knowledge of MDM tools (Informatica, Reltio) and data quality frameworks (IDQ), from configuring data assets to building end-to-end data pipelines and integrations for data mastering and orchestration of ETL pipelines
- Very good understanding of reference data, hierarchies, and their integration with MDM
- Hands-on experience with custom workflows (AVOS, Eclipse, etc.)
- Strong experience with external data enrichment services such as D&B and Address Doctor
- Strong experience with match/merge and survivorship rules strategy and implementation
- Strong experience with group fields, cross-reference data, and UUIDs
- Strong understanding of AWS cloud services and Databricks architecture
- Proficiency in Python, SQL, and Unix for data processing and orchestration
- Experience with data modeling, governance, and DCR lifecycle management
- Proven leadership and project management in large-scale MDM implementations
- Able to implement end-to-end integrations, including API-based, batch, and flat-file-based integrations
- Must have worked on at least 3 end-to-end MDM implementations

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights
- Exposure to Agile practices and tools (JIRA, Confluence)
- Prior experience in Pharma/Life Sciences
- Understanding of compliance and regulatory considerations in master data

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Any data analysis certification (SQL)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
- Ability to work effectively with global, virtual teams
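Match/merge and survivorship, a core skill in this posting, boils down to choosing which source's value "survives" into the golden record for each field. A simplified sketch (the trust scores and field names are hypothetical; platforms such as Informatica MDM or Reltio configure this declaratively rather than in code):

```python
def survivorship(records, source_trust):
    """Merge matched duplicate records into one golden record.

    Each field survives from the most-trusted source that supplies a
    non-empty value for it. `records` is a list of dicts that each carry
    a "source" key; `source_trust` maps source name -> trust score.
    """
    fields = {f for r in records for f in r if f != "source"}
    golden = {}
    for field in fields:
        candidates = [r for r in records if r.get(field)]  # non-empty values only
        if candidates:
            best = max(candidates, key=lambda r: source_trust.get(r["source"], 0))
            golden[field] = best[field]
    return golden
```

Real survivorship rules also weigh recency, completeness, and per-field overrides; source trust is just the simplest strategy to illustrate the idea.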
Posted 1 week ago
6.0 - 9.0 years
50 - 55 Lacs
Hyderabad
Work from Office
We are seeking an experienced MDM Senior Data Engineer with 6-9 years of experience and backend engineering expertise to work closely with the business on development and operations of our Master Data Management (MDM) platforms, with hands-on experience in Informatica or Reltio plus data engineering experience. This role also involves guiding junior data engineers/analysts and quality experts to deliver high-performance, scalable, and governed MDM solutions that align with enterprise data strategy. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Develop MDM backend solutions and implement ETL and data engineering pipelines using Databricks, AWS, Python/PySpark, SQL, etc.
- Lead the implementation and optimization of MDM solutions using Informatica or Reltio platforms.
- Perform data profiling and identify the DQ rules needed.
- Define and drive enterprise-wide MDM architecture, including IDQ, data stewardship, and metadata workflows.
- Manage cloud-based infrastructure using AWS and Databricks to ensure scalability and performance.
- Ensure data integrity, lineage, and traceability across MDM pipelines and solutions.
- Provide mentorship and technical leadership to junior team members and ensure project delivery timelines.
- Help the custom UI team integrate with backend data via APIs or other integration methods for a better data stewardship user experience.

Basic Qualifications and Experience:
- Master's degree with 4-6 years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 6-9 years of experience in Business, Engineering, IT, or a related field, OR
- Diploma with 10-12 years of experience in Business, Engineering, IT, or a related field

Functional Skills:

Must-Have Skills:
- Strong understanding of, and hands-on experience with, Databricks and AWS cloud services
- Proficiency in Python, PySpark, SQL, and Unix for data processing and orchestration
- Deep knowledge of MDM tools (Informatica, Reltio) and data quality frameworks (IDQ)
- Knowledge of customer master data (HCP, HCO, etc.)
- Experience with data modeling, governance, and DCR lifecycle management
- Able to implement end-to-end integrations, including API-based, batch, and flat-file-based integrations
- Strong experience with external data enrichment such as D&B
- Strong experience with match/merge and survivorship rules implementations
- Very good understanding of reference data and its integration with MDM
- Hands-on experience with custom workflows and building data pipelines/orchestrations

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights
- Exposure to or knowledge of Data Science and GenAI capabilities
- Exposure to Agile practices and tools (JIRA, Confluence)
- Prior experience in Pharma/Life Sciences
- Understanding of compliance and regulatory considerations in master data

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Databricks certification (Data Engineer or Architect)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
- Ability to work effectively with global, virtual teams
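The data-profiling responsibility in this role (profiling data to identify needed DQ rules) typically starts with per-column completeness and uniqueness statistics, from which rules like "null rate must stay below X%" are derived. A minimal, framework-free sketch:

```python
def profile_column(values):
    """Basic completeness/uniqueness profile for one column of data.

    Treats None and the empty string as nulls -- a simplifying
    assumption; real profilers (e.g., IDQ) have configurable null rules.
    """
    total = len(values)
    nulls = sum(1 for v in values if v is None or v == "")
    distinct = len({v for v in values if v not in (None, "")})
    return {
        "total": total,
        "null_rate": nulls / total if total else 0.0,
        "distinct": distinct,
    }
```

Running this over each column of a source extract gives the numbers a data engineer would review with stewards before proposing DQ rules.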
Posted 1 week ago
10.0 - 14.0 years
10 - 14 Lacs
Hyderabad
Work from Office
We are seeking an experienced MDM Engineering Manager with 10-14 years of experience to lead strategic development and operations of our Master Data Management (MDM) platforms, with hands-on experience in Informatica or Reltio plus data engineering experience. This role involves managing a team of data engineers, architects, and quality experts to deliver high-performance, scalable, and governed MDM solutions that align with enterprise data strategy. This is a technical role that requires hands-on work. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Lead the implementation and optimization of MDM solutions using Informatica or Reltio platforms.
- Define and drive enterprise-wide MDM architecture, including IDQ, data stewardship, and metadata workflows.
- Design the solution and deliver enhanced MDM processes and data integrations using Unix, Python, and SQL.
- Manage cloud-based infrastructure using AWS and Databricks to ensure scalability and performance.
- Coordinate with business and IT stakeholders to align MDM capabilities with organizational goals.
- Establish data quality metrics and monitor compliance using automated profiling and validation tools.
- Promote data governance and contribute to enterprise data modeling and approval workflows (DCRs).
- Ensure data integrity, lineage, and traceability across MDM pipelines and solutions.
- Provide mentorship and technical leadership to junior team members and ensure project delivery timelines.
- Lead custom UI design for a better data stewardship user experience.

Basic Qualifications and Experience:
- Master's degree with 8-10 years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 10-14 years of experience in Business, Engineering, IT, or a related field, OR
- Diploma with 14-16 years of experience in Business, Engineering, IT, or a related field

Functional Skills:

Must-Have Skills:
- Strong understanding of, and hands-on experience with, Databricks and AWS cloud services
- Proficiency in Python, PySpark, SQL, and Unix for data processing and orchestration
- Deep knowledge of MDM tools (Informatica, Reltio) and data quality frameworks (IDQ)
- Strong experience with data modeling, governance, and DCR lifecycle management
- Proven leadership and project management in large-scale MDM implementations
- Able to implement end-to-end integrations, including API-based, batch, and flat-file-based integrations
- Strong experience with external data enrichment such as D&B
- Strong experience with match/merge and survivorship rules implementations
- Must have worked on at least 3 end-to-end MDM implementations
- Very good understanding of reference data and its integration with MDM
- Hands-on experience with custom workflows

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights
- Exposure to Agile practices and tools (JIRA, Confluence)
- Prior experience in Pharma/Life Sciences
- Understanding of compliance and regulatory considerations in master data

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Any data analysis certification (SQL, Python, PySpark, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
- Ability to work effectively with global, virtual teams
Posted 1 week ago
12.0 - 16.0 years
11 - 15 Lacs
Hyderabad
Work from Office
The External Data Assets Lead will be responsible for optimizing spend on, and reuse of, external data. This role maintains a data catalog with harmonized metadata across functions to increase visibility, promote reuse, and lower annual spend. The External Data Assets Lead will assess investments in external data and provide recommendations to the Enterprise Data Council to inform investment approval. This role works with Global Strategic Sourcing and the Cyber Security team to standardize contracting of data purchases, and works closely with the data engineering team and external data providers to manage the lifecycle of data assets. This role is responsible for co-defining and operationalizing the business process to capture metadata related to the forecast of data purchases. The person in this role coordinates activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level deliverables and actions to maximize data investments.

Roles & Responsibilities:
- Catalog all external data assets, including the harmonization of metadata, to increase reuse and inform future data acquisitions.
- Co-develop and maintain the process to consistently capture the external data purchase forecast, focusing on generating the metadata required to support KPIs and reporting.
- Work with Global Strategic Sourcing and Cyber Security teams to standardize data contracts, enabling the reuse of data assets across functions.
- In partnership with functional data SMEs, develop internal expertise on the content of external data to increase reuse across teams, including participating in data seminars that bring together data SMEs from all functions to increase data literacy.
- In partnership with the Data Engineering team, design data standardization rules to make external data FAIR from the start.
- Manage a team of Data Specialists and Data Stewards, directly or in a matrix organization structure, to maintain the quality of data.
- In partnership with the Data Privacy and Policy team, develop and operationalize data access controls that adhere to the terms of the data contracts, ensuring access control, compliance, and security requirements are enforced.
- Maintain policies and ensure compliance with data privacy, security, and contractual policies.
- Publish metrics to measure the effectiveness of data reuse and data literacy and the reduction in data spend.

Functional Skills:

Must-Have Skills:
- Experience managing external data assets used in the life sciences industry (e.g., claims, EHR)
- Experience working with data providers, supporting negotiations and vendor management activities
- Technical data management skills with in-depth knowledge of Pharma data standards and regulations
- Awareness of industry trends and priorities, and the ability to apply them to governance and policies
- Experience with the data product development life cycle, including enabling data dictionaries and a business glossary to increase data product reusability and data literacy

Good-to-Have Skills:
- Ability to execute complex projects in a fast-paced environment and manage multiple priorities effectively
- Ability to manage project or departmental budgets
- Experience with modelling tools (e.g., Visio)
- Basic programming skills; experience with data visualization and data modeling tools
- Experience working with agile development methodologies such as Scaled Agile

Soft Skills:
- Ability to build business relationships and understand end-to-end data use and needs
- Excellent interpersonal skills (team player)
- People management skills, either in a matrix or direct-line function
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Good presentation and public speaking skills
- Strong attention to detail, quality, time management, and customer focus

Basic Qualifications:
- 12 to 15 years of Information Systems experience
- 4 years of managerial experience directly managing people, and leadership experience leading teams, projects, or programs
Posted 1 week ago
8.0 - 13.0 years
7 - 11 Lacs
Hyderabad
Work from Office
The Data Quality Lead will be responsible for defining, operationalizing, and monitoring data quality capabilities to increase the quality and trust of data across Amgen. This role delivers strategic and tactical data quality and stewardship services, and is vital to supporting Amgen's aspirations for a FAIR data ecosystem that conforms with business needs. This role interacts with Amgen's data owners and product teams worldwide to monitor and improve data-related KPIs and remediation plans.

Roles & Responsibilities:
- Develop and implement data quality standards, metrics, and governance frameworks to ensure consistency, accuracy, and reliability of enterprise data across systems and domains.
- Lead root cause analysis and resolution of data quality issues by collaborating with data stewards, business stakeholders, and technology teams to identify, prioritize, and remediate data anomalies.
- Establish data quality monitoring and reporting processes, including dashboards and KPIs, to track progress, highlight trends, and drive continuous improvement initiatives.

Functional Skills:

Must-Have Skills:
- Experience managing commercial data quality platforms
- Technical data management skills with in-depth knowledge of Pharma data standards
- Awareness of industry trends and priorities, and the ability to apply them to governance and policies
- In-depth knowledge of, and experience with, data masking, data access controls, and technologies that enable a scalable operating model

Good-to-Have Skills:
- Experience managing industry external data assets (e.g., claims, EHR)
- Ability to execute complex projects in a fast-paced environment and manage multiple priorities effectively
- Ability to manage project or departmental budgets
- Experience with modelling tools (e.g., Visio)
- Basic programming skills; experience with data visualization and data modeling tools
- Experience working with agile development methodologies such as Scaled Agile
Soft Skills:
- Ability to build business relationships and understand end-to-end data use and needs
- Excellent interpersonal skills (team player)
- People management skills, either in a matrix or direct-line function
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Good presentation and public speaking skills
- Strong attention to detail, quality, time management, and customer focus

Basic Qualifications:
- Doctorate degree and 2 years of Information Systems experience, OR
- Master's degree and 6 years of Information Systems experience, OR
- Bachelor's degree and 8 years of Information Systems experience, OR
- Associate's degree and 10 years of Information Systems experience
- 4 years of managerial experience directly managing people, and leadership experience leading teams, projects, or programs
Posted 1 week ago
8.0 - 12.0 years
35 - 40 Lacs
Hyderabad
Work from Office
The External Data Analyst will be responsible for optimizing spend on, and reuse of, external data. This role maintains a data catalog with harmonized metadata across functions to increase visibility, promote reuse, and lower annual spend. The External Data Analyst will assess investments in external data and provide recommendations to the Enterprise Data Council to inform investment approval. This role works with Global Strategic Sourcing and the Cyber Security team to standardize contracting of data purchases, and works closely with the data engineering team and external data providers to manage the lifecycle of data assets. This role is responsible for co-defining and operationalizing the business process to capture metadata related to the forecast of data purchases. The person in this role coordinates activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level deliverables and actions to maximize data investments.

Roles & Responsibilities:
- Catalog all external data assets, including the harmonization of metadata, to increase reuse and inform future data acquisitions.
- Co-develop and maintain the process to consistently capture the external data purchase forecast, focusing on generating the metadata required to support KPIs and reporting.
- Work with Global Strategic Sourcing and Cyber Security teams to standardize data contracts, enabling the reuse of data assets across functions.
- In partnership with functional data SMEs, develop internal expertise on the content of external data to increase reuse across teams, including participating in data seminars that bring together data SMEs from all functions to increase data literacy.
- In partnership with the Data Engineering team, design data standardization rules to make external data FAIR from the start.
- Maintain the quality of data.
- In partnership with the Data Privacy and Policy team, develop and operationalize data access controls that adhere to the terms of the data contracts, ensuring access control, compliance, and security requirements are enforced.
- Maintain policies and ensure compliance with data privacy, security, and contractual policies.
- Publish metrics to measure the effectiveness of data reuse and data literacy and the reduction in data spend.

Functional Skills:

Must-Have Skills:
- Experience managing external data assets used in the life sciences industry (e.g., claims, EHR)
- Experience working with data providers, supporting negotiations and vendor management activities
- Technical data management skills with in-depth knowledge of Pharma data standards and regulations
- Awareness of industry trends and priorities, and the ability to apply them to governance and policies
- Experience with the data product development life cycle, including enabling data dictionaries and a business glossary to increase data product reusability and data literacy

Good-to-Have Skills:
- Ability to execute complex projects in a fast-paced environment and manage multiple priorities effectively
- Ability to manage project or departmental budgets
- Experience with modelling tools (e.g., Visio)
- Basic programming skills; experience with data visualization and data modeling tools
- Experience working with agile development methodologies such as Scaled Agile

Soft Skills:
- Ability to build business relationships and understand end-to-end data use and needs
- Excellent interpersonal skills (team player)
- People management skills, either in a matrix or direct-line function
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Good presentation and public speaking skills
- Strong attention to detail, quality, time management, and customer focus

Basic Qualifications:
- Master's degree with 8-10 years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 8-12 years of experience in Business, Engineering, IT, or a related field, OR
- Diploma with 14-16 years of experience in Business, Engineering, IT, or a related field
Posted 1 week ago
12.0 - 15.0 years
13 - 18 Lacs
Hyderabad
Work from Office
We are seeking a seasoned Senior Engineering Manager (Data Engineering) to drive the development and maintenance of data pipelines built by data engineering teams, with deep domain expertise in HR/Finance data. This role leads a team of data engineers who maintain the data pipelines and operational frameworks that support enterprise-wide data solutions. The ideal candidate will drive best practices in data engineering, cloud technologies, and Agile development, ensuring robust governance, data quality, and efficiency. The role requires technical expertise, operational excellence, and a deep understanding of data solutions to optimize data-driven decision-making.

Roles & Responsibilities:
- Lead and mentor a high-performing team of data engineers developing and maintaining complex data pipelines.
- Drive the development of data tools and frameworks for managing and accessing data efficiently across the organization.
- Oversee the implementation of performance monitoring protocols across data pipelines, ensuring real-time visibility, alerts, and automated recovery mechanisms.
- Coach engineers in building dashboards and aggregations to monitor pipeline health and detect inefficiencies, ensuring optimal performance and cost-effectiveness.
- Lead the implementation of self-healing solutions, reducing failure points and improving pipeline stability and efficiency across multiple product features.
- Oversee data governance strategies, ensuring compliance with security policies, regulations, and data accessibility best practices.
- Guide engineers in data modeling, metadata management, and access control, ensuring structured data handling across business use cases.
- Collaborate with business leaders, product owners, and cross-functional teams to align data architecture with product requirements and business objectives.
Prepare team members for stakeholder discussions by helping assess data costs, access requirements, dependencies, and availability for business scenarios. Drive Agile and Scaled Agile (SAFe) methodologies, managing sprint backlogs, prioritization, and iterative improvements to enhance team velocity and project delivery. Stay up-to-date with emerging data technologies, industry trends, and best practices, ensuring the organization leverages the latest innovations in data engineering and architecture. Functional Skills: Must-Have Skills: Experience managing a team of data engineers in biotech/pharma domain companies. Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems. Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions. Proficiency in Python, PySpark, and SQL. Experience with dimensional data modeling. Experience working with Apache Spark and Apache Airflow. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Experienced with AWS, GCP, or Azure cloud services. Understanding of the end-to-end project/product life cycle. Well-versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools. Strong analytical and problem-solving skills to address complex data challenges. Effective communication and interpersonal skills to collaborate with cross-functional teams. Good-to-Have Skills: Data Engineering Management experience in Biotech/Life Sciences/Pharma Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.
Education and Professional Certifications 12 - 15 years of experience in Computer Science, IT or related field AWS Certified Data Engineer preferred Databricks Certificate preferred Scaled Agile SAFe certification preferred Project Management certifications preferred Soft Skills: Excellent analytical and troubleshooting skills Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to manage multiple priorities successfully Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills
Posted 1 week ago
5.0 - 8.0 years
16 - 18 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks, with deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets. Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems. Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments. Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms. Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Apply expertise in data quality, data validation, and verification frameworks. Innovate, explore, and implement new tools and technologies to enhance efficient data processing. Proactively identify and implement opportunities to automate tasks and develop reusable frameworks. Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value. Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle. Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions. Must-Have Skills: Deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma. Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning for big data processing. Strong understanding of AWS services. Ability to quickly learn, adapt, and apply new technologies. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Good-to-Have Skills: Data Engineering experience in the Biotechnology or pharma industry. Experience in writing APIs to make data available to consumers. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications: Master's degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor's degree and 5 to 8+ years of Computer Science, IT or related field experience. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Scaled Agile SAFe certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills.
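The "rigorous quality checks" this Data Engineer posting calls for can be illustrated with a minimal validation step that flags nulls and duplicate keys before a load (pure-Python sketch; the column and key names are invented for illustration, and a real pipeline would run this inside Databricks/Spark):

```python
def validate_rows(rows, key, required):
    """Return a list of data-quality issues found in `rows` (a list of dicts):
    missing required fields and duplicate business keys."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append(f"row {i}: missing required field '{col}'")
        k = row.get(key)
        if k in seen:
            issues.append(f"row {i}: duplicate key {k!r}")
        seen.add(k)
    return issues
```

A pipeline would typically fail fast (or quarantine records) when this returns a non-empty list, and publish the counts to monitoring.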
Posted 1 week ago
1.0 - 3.0 years
1 - 4 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Associate IS Bus Sys Analyst to join our Quality Control Product team. You will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs and ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automate operations, monitor system health, and respond to incidents to minimize downtime. Roles & Responsibilities: Maintain existing code and configuration: Support and maintain Quality Control applications. Development & Deployment: Develop, test, and deploy code based on designs created with the guidance of senior team members. Implement solutions following standard methodologies for code structure and efficiency. Documentation: Generate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference. Collaborative Design: Work closely with team members and collaborators to understand project requirements and translate them into functional technical designs. Code Reviews & Quality Assurance: Participate in peer code reviews, providing feedback on adherence to standard methodologies, and ensuring high code quality and maintainability. Testing & Debugging: Assist in writing unit and integration tests to validate new features and functionalities. Support bug-fixing and debugging efforts for existing systems to resolve defects and performance issues. Perform application support and administration tasks such as periodic review, incident response and resolution, and security reviews. Continuous Learning: Stay up-to-date with the newest technologies and standard methodologies, with a focus on expanding knowledge in cloud services, automation, and secure software development.
What we expect of you Must-Have Skills: Solid technical background, including understanding of software development processes, databases, and cloud-based systems. Experience configuring Quality Control Platforms (LIMS, ELN, Empower CDS). Experience working with databases, data modeling, and data warehousing (Oracle, MySQL). Strong foundational knowledge of testing methodologies. Good-to-Have Skills: Understanding of the Quality Control process. Curiosity about the modern technology domain and learning agility. Experience with the following technologies: AWS (Amazon Web Services) services (DynamoDB, EC2, S3, etc.), Application Programming Interface (API) integration and Structured Query Language (SQL), remote desktop application access management (Citrix Workspace), middleware experience (MuleSoft, Datadobi). Superb communication skills, with the ability to convey complex technical concepts. Soft Skills: Excellent analytical and problem-solving skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Basic Qualifications: Bachelor's degree and 1 to 3 years of Computer Science, IT or related field experience OR Diploma and 4 to 7 years of Computer Science, IT or related field experience
Posted 1 week ago
3.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets. Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems. Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments. Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms. Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Apply expertise in data quality, data validation, and verification frameworks. Innovate, explore, and implement new tools and technologies to enhance efficient data processing. Proactively identify and implement opportunities to automate tasks and develop reusable frameworks. Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value. Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle. Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions. Must-Have Skills: Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning for big data processing. Strong understanding of AWS services. Ability to quickly learn, adapt, and apply new technologies. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Good-to-Have Skills: Data Engineering experience in the Biotechnology or pharma industry. Experience in writing APIs to make data available to consumers. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications: Any degree and 6-8 years of experience. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Scaled Agile SAFe certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills.
Posted 1 week ago
3.0 - 6.0 years
2 - 6 Lacs
Hyderabad
Work from Office
We are looking for an experienced SAP Master Data Governance (MDG) consultant who can support the implementation of SAP Master Data Governance solutions. In this role, you will design innovative systems and provide expert technical guidance, configuration, development, and maintenance that align with Amgen's strategic objectives. You will collaborate closely with Product Owners, Functional, Technical, and other SAP S/4 functional and technical architects and other functional MDG teams to implement, enhance, and optimize MDG master data governance, quality, replications, and integrations, ensuring SAP MDG delivers maximum value across the organization. Roles & Responsibilities: Collaborate with Product Owners and functional and technical teams to understand and deliver data governance requirements. Build and implement SAP MDG solutions. Configure and customize SAP MDG on SAP S/4HANA in accordance with the MDG strategy. Develop and maintain data models, workflows, and business rules within the MDG framework. Collaborate with multi-functional teams to integrate MDG with other SAP modules and external systems. Create comprehensive technical documentation, including design specifications, architecture diagrams, and user guides. Follow Agile software development methods to design, build, implement, and deploy. Functional Skills: Must-Have Skills: Experience in at least 2 SAP MDG implementations. Experience with at least 2 of the MDG data models, and preferably custom data models. Functional understanding of SAP Master Data and the MDG out-of-the-box solution. Technical expertise to build and develop workflows, validations, replication, etc. Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical collaborators. Effective problem-solving skills to address data-related issues and implement scalable solutions.
Ability to work effectively with global, virtual teams. Basic Qualifications: 3 to 6 years of Business, Engineering, IT or related field experience. Expertise in the implementation of SAP MDG solutions (configuration, design, build, test, and deploy). Deep understanding of key SAP MDG concepts - data modeling, UI modeling, process modeling, governance process, mass processing, DRF, DIF, BRF+, and Consolidation features plus DQM. Experience in configuring rule-based workflows (serial, parallel, and combinations) and user interface modeling.
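BRF+ validations live inside SAP, but the shape of the rule-based validation step this posting describes can be sketched language-agnostically (illustrative Python only; the rule names and record fields are invented and bear no relation to any actual MDG data model):

```python
def evaluate_rules(record, rules):
    """Run each (name, predicate) rule against a master-data record.

    Returns the names of rules that failed, mimicking the outcome of a
    rule-based validation/derivation step before a change request is approved."""
    return [name for name, predicate in rules if not predicate(record)]
```

In MDG proper, failed rules would block or route the change request through the governance workflow rather than simply returning a list.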
Posted 1 week ago
2.0 - 5.0 years
13 - 15 Lacs
Hyderabad
Work from Office
We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Own the development of complex ETL/ELT data pipelines to process large-scale datasets. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Explore and implement new tools and technologies to enhance the ETL platform and the performance of the pipelines. Proactively identify and implement opportunities to automate tasks and develop reusable frameworks. Be eager to understand the biotech/pharma domains and build highly efficient data pipelines to migrate and deploy complex data across systems. Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value. Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle. Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions. What we expect of you Must-Have Skills: Experience in Data Engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in and a strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts). Strong, demonstrable understanding of AWS services. Ability to quickly learn, adapt, and apply new technologies. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Experience with Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices. Good-to-Have Skills: Data Engineering experience in the Biotechnology or pharma industry. Exposure to APIs and full-stack development. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications: Bachelor's degree and 2 to 5+ years of Computer Science, IT or related field experience OR Master's degree and 1 to 4+ years of Computer Science, IT or related field experience. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Scaled Agile SAFe certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills.
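The "reusable frameworks" theme in this Associate Data Engineer posting can be illustrated with a minimal metadata-driven ETL runner (a sketch under assumed naming; real pipelines of this kind would run on Databricks/Spark rather than plain Python, but the extract/transform/load decomposition is the same):

```python
from typing import Callable, Iterable

def run_pipeline(extract: Callable[[], Iterable[dict]],
                 transforms: list[Callable[[dict], dict]],
                 load: Callable[[list[dict]], None]) -> int:
    """Apply each transform, in order, to every extracted record,
    then hand the full batch to `load`; return the record count."""
    records = []
    for record in extract():
        for step in transforms:
            record = step(record)
        records.append(record)
    load(records)
    return len(records)
```

Because the three stages are plain callables, the same runner can be reused across sources and sinks, which is what makes a framework like this "metadata-driven" once the callables are chosen from configuration.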
Posted 1 week ago
3.0 - 5.0 years
1 - 4 Lacs
Hyderabad
Work from Office
As an Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior BI engineers, analysts, and stakeholders to build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment. Roles & Responsibilities: Assist in designing and maintaining dashboards and reports using tools like Power BI, Tableau, or Cognos. Perform basic data analysis to identify trends and support business decisions. Collaborate with team members to gather requirements and translate them into technical specifications. Support data validation, testing, and documentation efforts. Learn and apply best practices in data modeling, visualization, and BI development. Participate in Agile ceremonies and contribute to sprint planning and backlog grooming. Basic Qualifications and Experience: Bachelor's degree and 0 to 3 years of Computer Science, IT or related field experience OR Diploma and 3 to 5 years of Computer Science, IT or related field experience Functional Skills: Exposure to data visualization tools such as Power BI, Tableau, or QuickSight.
Basic proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis Familiarity with data modeling, warehousing, and ETL pipelines Understanding of data structures and reporting concepts Strong analytical and problem-solving skills Good-to-Have Skills: Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2) Understanding of Agile methodologies (Scrum, SAFe) Knowledge of DevOps, CI/CD practices Familiarity with scientific or healthcare data domains Soft Skills: Strong verbal and written communication skills Willingness to learn and take initiative Ability to work effectively in a team environment Attention to detail and commitment to quality Ability to manage time and prioritize tasks effectively
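The SQL-plus-Python combination this BI role asks for can be exercised with nothing more than the standard library, e.g. a trend aggregation over an in-memory SQLite table (the table and column names are invented for illustration; a real report would query a warehouse such as Redshift):

```python
import sqlite3

def monthly_totals(rows):
    """Aggregate (month, amount) pairs into per-month totals using SQL."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (month TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    result = con.execute(
        "SELECT month, SUM(amount) FROM sales GROUP BY month ORDER BY month"
    ).fetchall()
    con.close()
    return result
```

The same GROUP BY shape is what ultimately feeds a trend chart in Power BI, Tableau, or QuickSight.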
Posted 1 week ago
9.0 - 12.0 years
15 - 19 Lacs
Hyderabad
Work from Office
We are seeking a Data Solutions Architect with deep expertise in Biotech/Pharma to design, implement, and optimize scalable and high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies. Roles & Responsibilities: Design and implement scalable, modular, and future-proof data architectures that support enterprise initiatives. Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains. Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms. Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms. Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities. Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms. Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions. Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency.
Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities. Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals. Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability. Must-Have Skills: Experience in data architecture, enterprise data management, and cloud-based analytics solutions. Well-versed in the Biotech/Pharma domain, with a record of solving complex industry problems through data strategy. Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks. Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization. Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions. Deep understanding of data governance, security, metadata management, and access control frameworks. Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC). Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives. Strong problem-solving, strategic thinking, and technical leadership skills.
Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with Apache Spark and Apache Airflow. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Good-to-Have Skills: Experience with Data Mesh architectures and federated data governance models. Certification in cloud data platforms or enterprise architecture frameworks. Knowledge of AI/ML pipeline integration within enterprise data architectures. Familiarity with BI & analytics platforms for enabling self-service analytics and enterprise reporting. Education and Professional Certifications: 9 to 12 years of experience in Computer Science, IT or related field. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills.
Posted 1 week ago
8.0 - 13.0 years
20 - 25 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. In this vital role you will be responsible for designing, developing, and deploying Generative AI and ML models to power data-driven decisions across business domains. This role is ideal for an AI practitioner who thrives in a collaborative environment and brings a strategic approach to applying advanced AI techniques to solve real-world problems. To succeed in this role, the candidate must have strong AI/ML, Data Science, and GenAI experience along with MDM (Master Data Management) knowledge; candidates having only MDM experience are not eligible for this role. Roles & Responsibilities: Develop enterprise-level GenAI applications using LLM frameworks. Design and develop robust pipelines within Databricks and AWS environments. Implement embedding models and manage VectorStores for retrieval-augmented generation (RAG) solutions. Integrate and leverage MDM platforms like Informatica and Reltio to supply high-quality structured data to ML systems. Use SQL and Python for data engineering, data wrangling, and pipeline automation. Build scalable APIs and services to serve GenAI models in production. Lead multi-functional collaboration with data scientists, engineers, and product teams to scope, design, and deploy AI-powered systems. Ensure model governance, version control, and auditability aligned with regulatory and compliance expectations. Possess strong rapid-prototyping skills and quickly translate concepts into working code. Stay updated with the latest trends, advancements, and standard practices for the Veeva Vault Platform ecosystem. Design, develop, and implement applications and modules, including custom reports, SDKs, interfaces, and enhancements. Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
Develop and implement unit tests, integration tests, and other testing strategies to ensure the quality of the software, following the IS change control and GxP Validation process while exhibiting expertise in Risk-Based Validation methodology. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes. Work on integrating with other systems and platforms to ensure seamless data flow and functionality. Stay up to date on Veeva Vault features, new releases, and standard processes around Veeva Platform governance. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of Computer Science, IT or related field experience Preferred Qualifications: Functional Skills: Must-Have Skills: 6+ years of experience working in AI/ML or Data Science roles, including designing and implementing GenAI solutions. Extensive hands-on experience with LLM frameworks, OpenAI APIs, and embedding models. Consistent record of building and deploying AI/ML applications in cloud environments such as AWS. Expertise in developing APIs, automation pipelines, and serving GenAI models using frameworks like Django, FastAPI, and Databricks. Solid experience integrating and managing MDM tools (Informatica/Reltio) and applying data governance standard methodologies. Ability to guide the team on development activities and lead solution discussions. Core technical capabilities in the GenAI and Data Science space. Experience with the Veeva Vault platform and its applications, including Veeva configuration settings and custom builds. 6-8 years of experience working in the global pharmaceutical industry. Experience with version control systems such as Git.
Good-to-Have Skills: Prior experience in data modeling, ETL development, and data profiling to support AI/ML workflows. Working knowledge of Life Sciences or Pharma industry standards and regulatory considerations. Proficiency in tools like JIRA and Confluence for Agile delivery and project collaboration. Familiarity with relational databases (such as MySQL, SQL Server, PostgreSQL, etc.). Proficiency in programming languages such as Python or JavaScript. Outstanding written and verbal communication skills, and ability to translate technical concepts for non-technical audiences. Experience with ETL tools (Informatica, Databricks). Experience with API integrations such as MuleSoft. Solid understanding of and proficiency in writing SQL queries. Hands-on experience with reporting tools such as Tableau, Spotfire & Power BI. Professional Certifications: Any ETL certification (e.g., MuleSoft) Veeva Vault Platform Administrator or Equivalent Vault Certification (Mandatory) Any Data Analysis certification (SQL) Any cloud certification (AWS or Azure) Data Science and ML Certification SAFe for Teams (Preferred) Soft Skills: Excellent analytical and problem-solving skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. Equal opportunity statement We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 1 week ago
8.0 - 10.0 years
20 - 25 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. In this vital role you will operationalize the Enterprise Data Council vision across specific domains (Research, Clinical Trials, Commercial, etc.). You will coordinate activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level impact deliverables and actions to build data foundations in specific domains. The Data Strategy and Governance Lead will partner with senior leadership and other Data Governance functional leads to align data initiatives with business goals. You will establish and enforce data governance policies and standards to deliver high-quality data that is easy to reuse and connect, accelerating innovative AI solutions to better serve patients. Roles & Responsibilities: Responsible for data governance and data management for a given domain of expertise (Research, Development, Supply Chain, etc.). Manage a team of Data Governance Specialists and Data Stewards for a specific domain. Responsible for operationalizing the Enterprise data governance framework and aligning the broader collaborator community with their data governance needs, including data quality, data access controls, compliance with privacy and security regulations, foundational master data management, data sharing, communication and change management. Works with Enterprise MDM and Reference Data to enforce standards and data reusability. Drives multi-functional alignment in their domain(s) of expertise to ensure adherence to Data Governance principles. Provides expert guidance on business process and system design to support data governance and data/information modelling objectives. Maintain documentation and act as an expert on data definitions, data standards, data flows, legacy data structures / hierarchies, common data models, data harmonization etc. for assigned domains.
Ensure compliance with data privacy, security, and regulatory policies for the assigned domains. Publish metrics to measure effectiveness and drive adoption of Data Governance policy and standards, which will be applied to mitigate identified risks across the data lifecycle (e.g., capture / production, aggregation / processing, reporting / consumption). Establish enterprise-level standards on the nomenclature, content, and structure of information (structured and unstructured data), metadata, glossaries, and taxonomies. Jointly with Technology teams, business functions, and enterprise teams (e.g., MDM, Enterprise Data Fabric, etc.), define the specifications shaping the development and implementation of data foundations. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree and 8 to 10 years of Information Systems experience OR Bachelor's degree and 10 to 14 years of Information Systems experience, OR Diploma and 14 to 18 years of Information Systems experience 4 years of managerial experience directly managing people and leadership experience leading teams, projects, or programs. Technical skills with in-depth knowledge of Pharma processes, with preferred specialization in a domain (e.g., Research, Clinical Trials, Commercial, etc.). Aware of industry trends and priorities and able to apply them to governance and policies. In-depth knowledge and experience with data governance principles and technology; can design and implement Data Governance operating models to drive Amgen's transformation to be a data-driven organization. In-depth knowledge of data management, common data models, metadata management, data quality, master data management, data stewardship, data protection, etc. Experience with the data products development life cycle, including the enablement of data dictionaries and business glossaries to increase data product reusability and data literacy.
Preferred Qualifications: Co-develop the data foundations and data products in collaboration with functions and Digital teams. Ability to successfully implement complex projects in a fast-paced environment and to manage multiple priorities effectively. Ability to manage projects or departmental budgets. Experience with modelling tools (e.g., Visio). Basic programming skills, experience in data visualization and data modeling tools. Experience working with agile development methodologies such as Scaled Agile. Soft Skills: Ability to build business relationships and understand end-to-end data use and needs. Excellent interpersonal skills (team player). People management skills either in matrix or direct line function. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Good presentation and public speaking skills. Good attention to detail, quality, time management and customer focus. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 1 week ago
12.0 - 15.0 years
15 - 19 Lacs
Hyderabad
Work from Office
The Data Strategy and Governance Lead will operationalize the Enterprise Data Council vision across specific domains (Research, Clinical Trials, Commercial, etc.). He/She will coordinate activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level impact deliverables and actions to build data foundations in specific domains. The Data Strategy and Governance Lead will partner with senior leadership and other Data Governance functional leads to align data initiatives with business goals. He/she will establish and enforce data governance policies and standards to provide high-quality data that is easy to reuse and connect, accelerating innovative AI solutions to better serve patients. Roles & Responsibilities: Responsible for data governance and data management for a given domain of expertise (Research, Development, Supply Chain, etc.). Manage a team of Data Governance Specialists and Data Stewards for a specific domain. Responsible for operationalizing the Enterprise data governance framework and aligning the broader stakeholder community with their data governance needs, including data quality, data access controls, compliance with privacy and security regulations, foundational master data management, data sharing, communication and change management. Works with Enterprise MDM and Reference Data to enforce standards and data reusability. Drives cross-functional alignment in his/her domain(s) of expertise to ensure adherence to Data Governance principles. Provides expert guidance on business process and system design to support data governance and data/information modelling objectives. Maintain documentation and act as an expert on data definitions, data standards, data flows, legacy data structures / hierarchies, common data models, data harmonization etc. for assigned domains.
Ensure compliance with data privacy, security, and regulatory policies for the assigned domains. Publish metrics to measure effectiveness and drive adoption of Data Governance policy and standards, which will be applied to mitigate identified risks across the data lifecycle (e.g., capture / production, aggregation / processing, reporting / consumption). Establish enterprise-level standards on the nomenclature, content, and structure of information (structured and unstructured data), metadata, glossaries, and taxonomies. Jointly with Technology teams, business functions, and enterprise teams (e.g., MDM, Enterprise Data Architecture, Enterprise Data Fabric, etc.), define the specifications shaping the development and implementation of data foundations. Functional Skills: Must-Have Skills: Technical skills with in-depth knowledge of Pharma processes, with preferred specialization in a domain (e.g., Research, Clinical, Commercial, Supply Chain, Finance, etc.). Aware of industry trends and priorities and able to apply them to governance and policies. In-depth knowledge and experience with data governance principles and technology; can design and implement Data Governance operating models to drive Amgen's transformation to be a data-driven organization. In-depth knowledge of data management, common data models, metadata management, data quality, reference & master data management, data stewardship, data protection, etc. Experience with the data products development life cycle, including the enablement of data dictionaries and business glossaries to increase data product reusability and data literacy. Good-to-Have Skills: Experience adopting industry standards in data products. Experience managing industry external data assets (e.g., Claims, EHR, etc.) Ability to successfully execute complex projects in a fast-paced environment and to manage multiple priorities effectively. Ability to manage projects or departmental budgets. Experience with modelling tools (e.g., Visio).
Basic programming skills, experience in data visualization and data modeling tools. Experience working with agile development methodologies such as Scaled Agile. Soft Skills: Ability to build business relationships and understand end-to-end data use and needs. Excellent interpersonal skills (team player). People management skills either in matrix or direct line function. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Good presentation and public speaking skills. Strong attention to detail, quality, time management and customer focus. Basic Qualifications: 12 to 15 years of Information Systems experience 4 years of managerial experience directly managing people and leadership experience leading teams, projects, or programs.
Posted 1 week ago
4.0 - 9.0 years
40 - 45 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric. Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture. Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency. Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance. Ensure data security, compliance, and role-based access control (RBAC) across data environments. Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets. Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring. Implement data virtualization techniques to provide seamless access to data across multiple storage systems. Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals. Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.
What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. Basic Qualifications: Master's degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor's degree and 5 to 8+ years of Computer Science, IT or related field experience OR Diploma and 10 to 12 years of Computer Science, IT or related field experience Must-Have Skills: Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning of big data processing. Strong understanding of AWS services Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures. Ability to quickly learn, adapt and apply new technologies Strong problem-solving and analytical skills Excellent communication and teamwork skills Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Preferred Qualifications: Deep expertise in the Biotech & Pharma industries. Experience in writing APIs to make data available to consumers. Experienced with SQL/NoSQL databases and vector databases for large language models. Experienced with data modeling and performance tuning for both OLAP and OLTP databases. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. AWS Certified Data Engineer preferred Databricks Certificate preferred Scaled Agile SAFe certification preferred Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
Posted 1 week ago
8.0 - 13.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. In this vital role you will provide technical leadership to enhance the culture of innovation, automation, and solving difficult scientific and business challenges. Technical leadership includes providing vision and direction to develop scalable, reliable solutions. Provide leadership to select right-sized and appropriate tools and architectures based on requirements, data source format, and current technologies. Develop, refactor, research and improve Weave cloud platform capabilities. Understand business drivers and technical needs so our cloud services seamlessly, automatically, and securely provide them with the best service. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Build strong partnerships with partner teams. Build data products and service processes which perform data transformation, metadata extraction, workload management and error processing management to ensure high-quality data. Provide clear documentation for delivered solutions and processes. Collaborate with business partners to understand user stories and ensure the technical solution/build can deliver to those needs. Work with multi-functional teams to design and document effective and efficient solutions. Develop organizational change strategies and assist in their implementation. Mentor junior data engineers on standard processes in the industry and in the Amgen data landscape. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of relevant experience Must-Have Skills: Superb communication and interpersonal skills, with the ability to work closely with multi-functional GTM, product, and engineering teams.
Minimum of 10+ years overall Software Engineer or Cloud Architect experience Minimum 3+ years in an architecture role using public cloud solutions such as AWS Experience with the AWS technology stack Good-to-Have Skills: Familiarity with big data technologies, AI platforms, and cloud-based data solutions. Ability to work effectively across matrixed organizations and lead collaboration between data and AI teams. Passion for technology and customer success, particularly in driving innovative AI and data solutions. Experience working with teams of data scientists, software engineers and business experts to drive insights Experience with AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway. Experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) Solid understanding of relevant data standards and industry trends Ability to understand new business requirements and prioritize them for delivery Experience working in the biopharma/life sciences industry Proficient in one of the coding languages (Python, Java, Scala) Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.). Experience with schema design & dimensional data modeling. Experience with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and Shell Script Hands-on experience using Databricks/Jupyter or a similar notebook environment. Experience working with GxP systems Experience working in an agile environment (i.e. user stories, iterative development, etc.) Experience working with test-driven development and software test automation Experience working in a Product environment Good overall understanding of business, manufacturing, and laboratory systems common in the pharmaceutical industry, as well as the integration of these systems through applicable standards. Soft Skills: Excellent analytical and problem-solving skills. Ability to work effectively with global, virtual teams High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 1 week ago
4.0 - 6.0 years
40 - 45 Lacs
Hyderabad
Work from Office
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation. Collaborate and communicate effectively with product teams. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT or related field. Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing. Hands-on experience with various Python/R packages for EDA, feature engineering and machine learning model training. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Strong understanding of data governance frameworks, tools, and standard methodologies. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Certified Data Scientist (preferred on Databricks or Cloud environments). Machine Learning Certification (preferred on Databricks or Cloud environments). SAFe for Teams certification (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills.
Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 1 week ago
4.0 - 6.0 years
40 - 45 Lacs
Hyderabad
Work from Office
The Sr Data Engineer for GenAI solutions across various Process Development functions (Drug Substance, Drug Product, Attribute Sciences & Combination Products) in Operations is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions, working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Develop solutions for handling unstructured data in AI pipelines. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges.
Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Collaborate and communicate effectively with product teams. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT or related field. Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Excellent problem-solving skills and the ability to work with large, complex datasets. Strong understanding of data governance frameworks, tools, and standard methodologies. Experience in implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models. Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow. Experience in processing and leveraging unstructured data for GenAI applications. Preferred Qualifications: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks.
Knowledge of vector databases, including implementation and optimization. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Machine Learning Certification (preferred on Databricks or Cloud environments). SAFe for Teams certification (preferred). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 1 week ago
4.0 - 6.0 years
4 - 8 Lacs
Hyderabad
Work from Office
The role leverages domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans for the Clinical Trial Data & Analytics (CTDA) Product Team. This role involves working closely with varied business stakeholders across R&D, Data Engineers, Data Analysts, and Testers to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence and to identify test cases and scenarios to ensure the quality and performance of IT systems. You will analyze business requirements and design information systems solutions. You will collaborate with multi-functional teams to understand business needs, identify system enhancements, and drive system implementation projects. Your solid experience in business analysis, system design, and project management will enable you to deliver innovative and effective technology products.
Roles & Responsibilities: Collaborate with System Architects and Product Owners to manage business analysis activities for CTDA systems, ensuring alignment with engineering and product goals Capture the voice of the customer to define business processes and product needs Collaborate with CTDA business stakeholders, Architects and Engineering teams to prioritize release scopes and refine the Product backlog Facilitate the breakdown of Epics into Features and sprint-sized User Stories and participate in backlog reviews with the development team Clearly express features in User Stories/requirements so all team members and stakeholders understand how they fit into the product backlog Ensure Acceptance Criteria and Definition of Done are well-defined Stay focused on software development to ensure it meets requirements, providing proactive feedback to stakeholders Develop and execute effective product demonstrations for internal and external stakeholders Help develop and maintain a product roadmap that clearly outlines the planned features and enhancements, timelines, and achievements Identify and manage risks associated with the systems, requirement validation, and user acceptance Develop & maintain documentation of configurations, processes, changes, communication plans and training plans for end users Ensure operational excellence, cybersecurity, and compliance. Collaborate with geographically dispersed teams, including those in the US and other international locations.
- Foster a culture of collaboration, innovation, and continuous improvement

Basic Qualifications and Experience:
- Master's degree with 4-6 years of experience in Computer Science/Information Systems with Agile software development methodologies, OR
- Bachelor's degree with 6-8 years of experience in Computer Science/Information Systems with Agile software development methodologies, OR
- Diploma with 10-12 years of experience in Computer Science/Information Systems with Agile software development methodologies

Functional Skills:

Must-Have Skills:
- Proven ability in translating business requirements into technical specifications and writing user requirement documents
- Experience with Agile software development methodologies (Scrum)
- Excellent communication skills and the ability to interface with senior leadership with confidence and clarity
- Strong knowledge of ETL processes
- Familiarity with regulatory requirements for clinical trials (e.g., 21 CFR Part 11, ICH)

Good-to-Have Skills:
- Experience in managing product features for PI planning and developing product roadmaps and user journeys
- Experience maintaining SaaS (software as a service) and COTS (commercial off-the-shelf) solutions
- Technical thought leadership
- Able to communicate technical or complex subject matter in business terms
- Experience with data analysis, data modeling, and data visualization solutions such as Tableau and Spotfire
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Experience with AWS services (e.g., EC2, S3), Salesforce, Jira, and API Gateway

Professional Certifications:
- SAFe for Teams certification (preferred)
- Certified Business Analysis Professional (preferred)

Soft Skills:
- Able to work under minimal supervision
- Skilled in providing oversight and mentoring team members
- Demonstrated ability in effectively delegating work
- Excellent analytical and gap/fit assessment skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Shift Information: This position operates on the second shift, from 2:00 PM to 10:00 PM IST. Candidates must be willing and able to work during these hours.
9.0 - 14.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Career Category: Information Systems

Job Description

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Specialist IS Analyst

What you will do

Let's do this. Let's change the world. In this vital role you will be part of the Enterprise Data Fabric (EDF) Platform team. You will leverage AI and other automation tools to innovate and provide solutions for the business. The role leverages domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans, for the Enterprise Data Fabric (EDF) Platform team. This role involves working closely with varied business stakeholders (business users, data engineers, data analysts, and testers) to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence and to identify test cases and scenarios that ensure the quality and performance of IT systems.
In this role you will analyze business requirements and help design solutions for the EDF platform. You will collaborate with multi-functional teams to understand business needs, identify system enhancements, and drive system implementation projects. Experience in business analysis, system design, and project management will enable this role to deliver innovative and effective technology products.

What we expect of you

Roles & Responsibilities:
- Collaborate with System Architects and Product Owners to manage business analysis activities for systems, ensuring alignment with engineering and product goals
- Capture the voice of the customer to define business processes and product needs
- Collaborate with business stakeholders, Architects, and Engineering teams to prioritize release scopes and refine the product backlog
- Facilitate the breakdown of Epics into Features and sprint-sized User Stories and participate in backlog reviews with the development team
- Clearly express features in User Stories/requirements so all team members and stakeholders understand how they fit into the product backlog
- Ensure Acceptance Criteria and Definition of Done are well-defined
- Stay focused on software development to ensure it meets requirements, providing proactive feedback to stakeholders
- Develop and execute effective product demonstrations for internal and external stakeholders
- Help develop and maintain a product roadmap that clearly outlines planned features, enhancements, timelines, and achievements
- Identify and manage risks associated with the systems, requirement validation, and user acceptance
- Develop and maintain documentation of configurations, processes, changes, communication plans, and training plans for end users
- Ensure operational excellence, cybersecurity, and compliance
- Collaborate with geographically dispersed teams, including those in the US and other international locations
- Foster a culture of collaboration, innovation, and continuous improvement
- Ability to work flexible hours that align with US time zones

Basic Qualifications:
- Master's degree with 9-14 years of experience in Computer Science, Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 10-14 years of experience in Computer Science, Business, Engineering, IT, or a related field, OR
- Diploma with 10-14 years of experience in Computer Science, Business, Engineering, IT, or a related field

Must-Have Skills:
- Proven ability in translating business requirements into technical specifications and writing user requirement documents
- Able to communicate technical or complex subject matter in business terms
- Experience with Agile software development methodologies (Scrum)
- Excellent communication skills and the ability to interface with senior leadership with confidence and clarity
- Strong knowledge of data engineering processes
- Experience in managing product features for PI planning and developing product roadmaps and user journeys
- Technical thought leadership

Good-to-Have Skills:
- Experience maintaining SaaS (software as a service) and COTS (commercial off-the-shelf) solutions
- Experience with AWS services (e.g., EC2, S3), Salesforce, Jira, and API Gateway
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Experience in understanding microservices architecture and API development
- Experience with data analysis, data modeling, and data visualization solutions such as Tableau and Spotfire

Professional Certifications:
- SAFe for Teams certification (preferred)
- Certified Business Analysis Professional (preferred)

Soft Skills:
- Excellent critical-thinking, analytical, and problem-solving skills
- Strong verbal and written communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Strong presentation and public speaking skills
- Ability to work effectively with global, virtual teams
- Ability to manage multiple priorities successfully
- High degree of initiative and self-motivation
- Ability to work under minimal supervision
- Skilled in providing oversight and mentoring team members
- Demonstrated ability in effectively delegating work
- Team-oriented, with a focus on achieving team goals

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.