
8521 PySpark Jobs - Page 30

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ideal candidate for this role, you will design and architect scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, building and optimizing data pipelines using PySpark and other distributed computing tools, and translating business requirements into scalable data models and integration workflows. You will be responsible for the high performance and availability of enterprise-grade data processing systems, and you will play a vital role in mentoring development teams and offering guidance on best practices and performance tuning.

Must-have skills for this position include architect-level experience with the Big Data ecosystem and enterprise data solutions; proficiency in Hadoop, PySpark, and distributed data processing frameworks; hands-on experience with SQL and data warehousing concepts; a deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools; and experience in performance optimization and large-scale data handling. Excellent problem-solving, design, and analytical skills are essential. While not mandatory, exposure to cloud platforms such as AWS, Azure, or GCP for data solutions, and knowledge of data governance, data security, and metadata management are beneficial.

Joining our team will give you the opportunity to work on cutting-edge Big Data technologies, gain leadership exposure, and be directly involved in architectural decisions. This is a stable, full-time position within a top-tier tech team, with a 5-day working schedule supporting work-life balance. (ref:hirist.tech)
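To make the pipeline-building and tuning work above concrete, here is a minimal PySpark sketch of an optimized aggregation job; the paths, table names, and columns are invented for illustration, not taken from the posting.

    # Minimal sketch of a tuned PySpark aggregation pipeline (illustrative names).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sales-pipeline").getOrCreate()

    sales = spark.read.parquet("hdfs:///data/raw/sales")          # large fact table
    products = spark.read.parquet("hdfs:///data/raw/dim_product") # small dimension

    # Broadcast the small dimension table to avoid shuffling the large fact table
    enriched = sales.join(F.broadcast(products), on="product_id", how="left")

    daily = (enriched
             .groupBy("sale_date", "category")
             .agg(F.sum("amount").alias("total_amount"),
                  F.countDistinct("order_id").alias("orders")))

    # Partition output by date so downstream readers can prune files
    (daily.write.mode("overwrite")
          .partitionBy("sale_date")
          .parquet("hdfs:///data/curated/daily_sales"))

Broadcasting the dimension and partitioning the output are two of the routine tuning levers an architect in this role would be expected to reason about.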

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer with 5+ years of experience, you will design and develop scalable, reusable, and efficient data pipelines using modern Data Engineering platforms such as Microsoft Fabric, PySpark, and Data Lakehouse architectures. Your role will involve integrating data from diverse sources, transforming it into actionable insights, and ensuring high standards of data governance and quality. You will play a key role in establishing and enforcing data governance policies, monitoring pipeline performance, and optimizing for efficiency.

Key Responsibilities:
- Design and build robust data pipelines using Microsoft Fabric components including Pipelines, Notebooks (PySpark), Dataflows, and Lakehouse architecture.
- Ingest and transform data from cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (e.g., Salesforce, Workday), and REST/OpenAPI-based APIs.
- Develop and maintain semantic models and define standardized KPIs for reporting and analytics in Power BI or equivalent BI tools.
- Implement and manage Delta Tables across bronze/silver/gold layers using the Lakehouse medallion architecture within OneLake or equivalent environments (see the sketch after this listing).
- Apply metadata-driven design principles to ensure pipeline parameterization, reusability, and scalability.
- Monitor, debug, and optimize pipeline performance; implement logging, alerting, and observability mechanisms.
- Establish and enforce data governance policies including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trail mechanisms.
- Perform data quality checks including null detection, duplicate handling, schema drift management, outlier identification, and Slowly Changing Dimension (SCD) type management.

Required Skills & Qualifications:
- 5+ years of hands-on experience in Data Engineering or related fields.
- Solid understanding of data lake/lakehouse architectures, preferably with Microsoft Fabric or equivalent tools (e.g., Databricks, Snowflake, Azure Synapse).
- Strong experience with PySpark, SQL, and working with dataflows and notebooks.
- Exposure to BI tools like Power BI, Tableau, or equivalent for data consumption layers.
- Experience with Delta Lake or similar transactional storage layers.
- Familiarity with data ingestion from SaaS applications, APIs, and enterprise databases.
- Understanding of data governance, lineage, and RBAC principles.
- Strong analytical, problem-solving, and communication skills.

Nice to Have:
- Prior experience with Microsoft Fabric and the OneLake platform.
- Knowledge of CI/CD practices in data engineering.
- Experience implementing monitoring/alerting tools for data pipelines.

Join us for the opportunity to work on cutting-edge data engineering solutions in a fast-paced, collaborative environment focused on innovation and learning. Gain exposure to end-to-end data product development and deployment cycles.
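As a hedged illustration of the bronze-to-silver hop in the medallion architecture mentioned above, the sketch below uses PySpark with Delta tables; the paths, column names, and deduplication key are assumptions, not Fabric specifics.

    # Bronze -> silver hop in a medallion Lakehouse (illustrative names).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # Fabric/Databricks notebooks provide `spark`

    bronze = spark.read.format("delta").load("Tables/bronze_customers")

    silver = (bronze
              .filter(F.col("customer_id").isNotNull())   # null-detection gate
              .dropDuplicates(["customer_id"])            # duplicate handling
              .withColumn("ingest_date", F.to_date("ingest_ts")))

    (silver.write.format("delta")
           .mode("overwrite")
           .option("overwriteSchema", "true")             # tolerate schema drift
           .save("Tables/silver_customers"))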

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Data Engineer, you will be responsible for designing and building efficient data pipelines using Azure Databricks (PySpark). You will implement business logic for data transformation and enrichment at scale, and manage and optimize Delta Lake storage solutions. Additionally, you will develop REST APIs using FastAPI to expose processed data, deploying them on Azure Functions for scalable, serverless data access (a sketch follows this listing).

Your role will also involve developing and managing Airflow DAGs to orchestrate ETL processes, ingesting and processing data from various internal and external sources on a scheduled basis. You will handle data storage and access using PostgreSQL and MongoDB, writing optimized SQL queries to support downstream applications and analytics.

Collaboration is key in this role, as you will work cross-functionally with teams to deliver reliable, high-performance data solutions. Following best practices in code quality, version control, and documentation is essential to the success of projects.

To excel in this position, you should have at least 5 years of hands-on experience as a Data Engineer and strong expertise in Azure Cloud services. Proficiency in Azure Databricks, PySpark, Delta Lake, Python, and FastAPI for API development is required, along with experience in Azure Functions for serverless API deployments, managing ETL pipelines with Apache Airflow, and hands-on work with PostgreSQL and MongoDB. Strong SQL skills and experience handling large datasets will be beneficial.
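Since the posting pairs FastAPI with serverless deployment, here is a minimal sketch of an endpoint exposing processed data; the table, query, and connection string are placeholders, and the Azure Functions ASGI wrapper is deliberately omitted.

    # Minimal FastAPI endpoint over a processed PostgreSQL table (illustrative).
    from fastapi import FastAPI, HTTPException
    import psycopg2  # PostgreSQL driver

    app = FastAPI()

    @app.get("/metrics/{customer_id}")
    def get_metrics(customer_id: int):
        conn = psycopg2.connect("dbname=analytics user=app")  # placeholder DSN
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT metric, value FROM customer_metrics WHERE customer_id = %s",
                    (customer_id,),
                )
                rows = cur.fetchall()
        finally:
            conn.close()
        if not rows:
            raise HTTPException(status_code=404, detail="customer not found")
        return {"customer_id": customer_id, "metrics": dict(rows)}

In practice the connection would come from a pool and credentials from app settings; the sketch only shows the request/response shape.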

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

The Snowflake Developer will play a crucial role in designing, developing, and implementing data solutions using Snowflake's cloud-based data platform. You will be responsible for writing efficient procedures with Spark or SQL to facilitate data processing, transformation, and analysis. Strong Python/PySpark and SQL skills are essential, along with some experience in data pipelines or other aspects of data engineering. Knowledge of the AWS platform is important, as is a willingness to upskill; the right attitude towards learning is key for this role.

You should have good expertise in SDLC/Agile and experience in SQL, complex queries, and optimization. Experience in the Spark ecosystem and familiarity with MongoDB data loads, Snowflake, and the AWS platform (EMR, Glue, S3) are desired. Hands-on experience writing advanced SQL queries and familiarity with a variety of databases are important skills to possess. You should also have experience handling end-to-end data testing for complex big data projects, including extensive experience writing and executing test cases, performing data validations, system testing, and performance checks.

Key Skills:
- Snowflake development
- Python
- PySpark
- AWS
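For context on the Spark-plus-Snowflake combination above, this is a hedged sketch of reading a Snowflake table into Spark with the Snowflake Spark connector (which must be on the cluster's classpath); the account URL, credentials, and table are placeholders.

    # Read a Snowflake table into Spark via the Snowflake connector (illustrative).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sf-read").getOrCreate()

    sf_options = {
        "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account
        "sfUser": "etl_user",
        "sfPassword": "********",                     # use a secret manager in practice
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "COMPUTE_WH",
    }

    orders = (spark.read.format("net.snowflake.spark.snowflake")
              .options(**sf_options)
              .option("dbtable", "ORDERS")
              .load())

    orders.createOrReplaceTempView("orders")
    spark.sql("SELECT status, COUNT(*) AS n FROM orders GROUP BY status").show()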

Posted 1 week ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description: We are looking for a highly skilled Senior Data Scientist with 3–9 years of experience specializing in Python, Large Language Models (LLMs), NLP, Machine Learning, and Generative AI. The ideal candidate will have a deep understanding of building intelligent systems using modern AI frameworks and deploying them into scalable, production-grade environments. You will work closely with cross-functional teams to build innovative AI solutions that deliver real business value.

Responsibilities:
- Design, develop, and deploy ML/NLP solutions using Python and state-of-the-art AI frameworks.
- Apply LLMs and Generative AI techniques to solve real-world problems (see the sketch after this listing).
- Build, train, fine-tune, and evaluate models for NLP and GenAI tasks.
- Collaborate with data engineers, MLOps, and product teams to operationalize models.
- Contribute to the development of scalable AI services and applications.
- Analyze large datasets to extract insights and support model development.
- Maintain clean, modular, and version-controlled code using Git.

Must-Have Skills:
- 3–10 years of hands-on experience with Python for data science and ML applications.
- Strong expertise in Machine Learning algorithms and model development.
- Proficiency in Natural Language Processing (NLP) and text analytics.
- Experience with Large Language Models (LLMs) and Generative AI frameworks (e.g., LangChain, Hugging Face Transformers).
- Familiarity with model deployment and real-world application integration.
- Experience with version control systems like Git.

Good to Have:
- Experience with PySpark for distributed data processing.
- Exposure to MLOps practices and model lifecycle management.
- Familiarity with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of vector databases (e.g., FAISS, Pinecone) and embeddings.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.

Benefits:
- Work with cutting-edge technologies in a collaborative and forward-thinking environment.
- Opportunities for continuous learning, skill development, and career growth.
- Exposure to high-impact projects in AI and data science.
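As a small, hedged example of the NLP tooling named above, the snippet below runs a Hugging Face Transformers pipeline; the checkpoint is a common public model, not one the posting specifies.

    # Text classification with a Hugging Face pipeline (illustrative checkpoint).
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english")

    print(classifier("The new data pipeline cut our processing time in half."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]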

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

You should have 6-8 years of experience with a deep understanding of the Spark framework, along with hands-on experience in Spark SQL and PySpark. Your expertise should include Python programming and familiarity with common Python libraries. Strong analytical skills are essential, especially in database management: writing complex queries, query optimization, debugging, user-defined functions, views, and indexes. Your problem-solving abilities will be crucial in designing, implementing, and maintaining efficient data models and pipelines. Experience with Big Data technologies is a must, while familiarity with any ETL tool would be advantageous.

As part of your responsibilities, you will work on projects to deliver, review, and design PySpark and Spark SQL-based data engineering analytics solutions. Your tasks will involve writing clean, efficient, reusable, testable, and scalable Python logic for analytical solutions, with emphasis on building solutions for data cleaning, data scraping, and exploratory data analysis that are compatible with any BI tool. Collaboration with Data Analysts/BI developers to provide clean, processed data is essential (a user-defined-function sketch follows this listing).

You will design data processing pipelines using ETL techniques, develop and deliver complex requirements to achieve business goals, and work with unstructured, structured, and semi-structured data and their respective databases. Effective coordination with internal engineering and development teams to understand requirements and develop solutions is critical, as is communication with stakeholders to grasp business logic and provide optimal data engineering solutions. Adherence to best coding practices and standards is expected throughout your work.
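Because the posting stresses user-defined functions and views for data cleaning, here is a brief PySpark sketch of both; the normalisation rule, path, and column names are invented.

    # Python UDF plus a temp view for downstream BI tools (illustrative names).
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.getOrCreate()

    @udf(returnType=StringType())
    def normalize_phone(raw):
        # keep digits only, e.g. "+91 (98765) 43210" -> "919876543210"
        return "".join(ch for ch in raw if ch.isdigit()) if raw else None

    contacts = spark.read.parquet("/data/raw/contacts")  # placeholder path
    clean = contacts.withColumn("phone_e164", normalize_phone("phone"))
    clean.createOrReplaceTempView("contacts_clean")      # BI tools query this view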

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

At EY, you will have the opportunity to shape a career that reflects your uniqueness, supported by a global network, an inclusive environment, and cutting-edge technology that empowers you to reach your full potential. Your individual voice and perspective are crucial in contributing to EY's continuous improvement. Join our team to create an exceptional journey for yourself and contribute to building a better working world for all.

In EY's GDS Tax Technology team, the focus is on developing, implementing, and integrating technological solutions that enhance client service and support engagement teams. As a member of the core Tax practice, you will gain in-depth tax technical knowledge along with exceptional database, data analytics, and programming skills. With ever-evolving regulations, tax departments are required to manage, organize, and analyze vast amounts of data. Meeting these complex regulatory demands often involves gathering data from various systems and departments within an organization, and handling these diverse data sources efficiently presents significant challenges and time constraints for companies. Collaborating closely with partners, clients, and tax technical experts, the GDS Tax Technology team at EY designs and implements technology solutions that add value, streamline processes, and equip clients with innovative tools for Tax support.

The GDS Tax Technology team engages with clients and professionals in areas such as Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. Providing solution architecture, application development, testing, and maintenance support, the team contributes proactively and responsively to the global Tax service line. EY is currently looking for a Data Engineer - Staff to join the Tax Technology practice in India.

Key Responsibilities:
- Proficiency in Azure Databricks is a must.
- Strong expertise in Python and PySpark programming.
- Sound knowledge of Azure SQL Database and Azure SQL Data Warehouse.
- Design, maintain, and optimize data layer components for new and existing systems, including databases, stored procedures, ETL packages, and SQL queries.
- Experience with Azure data platform offerings.
- Effective communication with team members and stakeholders.

Qualifications & Experience Required:
- 1.5 to 3 years of experience in the Azure Data Platform (Azure Databricks) with a strong grasp of Python and PySpark.
- Excellent verbal and written communication skills.
- Ability to function as an individual contributor.
- Familiarity with Azure Data Factory, SSIS, or other ETL tools.

Join EY in its mission to build a better working world, where diverse teams across 150+ countries leverage data and technology to provide trust through assurance and support clients in their growth and transformation. EY teams across various disciplines strive to address complex global challenges through innovative solutions and insightful questions.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Your role at Prudential is to design, build, and maintain data pipelines that ingest data from multiple sources into the cloud data platform. These pipelines must be constructed according to defined standards and documented comprehensively, and data governance standards must be adhered to and enforced to maintain data integrity and compliance. You will also implement data quality rules to ensure the accuracy and reliability of the data.

As part of your responsibilities, you will implement data security and protection controls around Databricks Unity Catalog (a sketch follows this listing). You will use Azure Data Factory, Azure Databricks, and other Azure services to build and optimize data pipelines. Proficiency in SQL, Python/PySpark, and other programming languages for data processing and transformation is crucial, and staying current with the latest Azure technologies and best practices is essential for this role.

You will provide technical guidance and support to team members and stakeholders, and maintain detailed documentation of data pipelines, processes, and data quality rules. Debugging, fine-tuning, and optimizing large-scale data processing jobs will be part of your routine tasks, as will generating reports and dashboards to monitor data pipeline performance and data quality metrics. Collaboration with data teams across Asia and Africa to understand data requirements and deliver solutions will also be required.
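As a hedged sketch of the Unity Catalog security controls mentioned above, the snippet below issues standard Unity Catalog GRANT/REVOKE statements from a Databricks notebook; the catalog, schema, table, and group names are illustrative.

    # Table-level access control in Databricks Unity Catalog (illustrative names).
    grants = [
        "GRANT USE CATALOG ON CATALOG prod TO `data_analysts`",
        "GRANT USE SCHEMA ON SCHEMA prod.finance TO `data_analysts`",
        "GRANT SELECT ON TABLE prod.finance.transactions TO `data_analysts`",
        "REVOKE MODIFY ON TABLE prod.finance.transactions FROM `data_analysts`",
    ]
    for stmt in grants:
        spark.sql(stmt)  # `spark` is provided by the Databricks runtime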

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist at Rockwell Automation, you will be part of an analytics team working alongside engineers, product managers, and partners to drive AI-powered features for the platform. Collaborating with process engineers and operations leads, you will develop predictive-maintenance models, enhance throughput and yield, and implement cost-saving strategies across manufacturing lines. Your role will involve managing end-to-end projects, including scoping data-collection architectures, prototyping machine-learning solutions, deploying models into the IIoT platform, and establishing real-time monitoring dashboards. Additionally, you will mentor junior analysts, engage in pilot projects with R&D, and help shape the roadmap for advanced-analytics capabilities.

If you enjoy tackling complex industrial challenges, translating diverse data sources into actionable business insights, and evolving into a trusted analytics partner for clients, Rockwell Automation offers an environment where you can advance both your career and your clients' success. Reporting to the Lead Sr Solution Architect, you will be based at our Electronics City office in Bengaluru, following a hybrid work model.

Your Responsibilities:
- Leading end-to-end data-science projects by defining hypotheses, designing experiments, building features, training models, and deploying them in production (a small predictive-maintenance sketch follows this listing).
- Collaborating with Engineering to integrate ML services into the microservices architecture and with Marketing on A/B testing for data-driven campaigns.
- Creating scalable ETL pipelines and designing data schemas to support analytics and modeling at scale.
- Developing monitoring dashboards and automated retraining workflows to ensure model accuracy.

Essential Qualifications:
- 6-10 years of experience in Python, SQL, Pandas, scikit-learn, and PySpark.
- Proficiency in supervised and unsupervised ML techniques, advanced statistics, computer vision, and generative-AI projects.
- Familiarity with Docker, Kubernetes, and cloud ML platforms, and the ability to communicate data insights effectively.

Preferred Qualifications:
- Familiarity with BI tools, MLOps frameworks, FastAPI, and Linux environments.

What We Offer:
- A comprehensive benefits package including mindfulness programs, volunteer time off, donation matching, employee assistance programs, wellbeing initiatives, and professional development resources.
- A commitment to fostering a diverse, inclusive, and authentic workplace.

At Rockwell Automation, we value diversity and encourage candidates who are interested in the role to apply even if their experience does not align perfectly with every qualification listed. You might be the ideal fit for this position or other opportunities within the organization.
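To ground the predictive-maintenance duties above, here is a minimal scikit-learn sketch; the dataset, feature names, and label are invented, and a production model would need proper time-based validation.

    # Toy predictive-maintenance classifier (illustrative data and columns).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    df = pd.read_csv("sensor_readings.csv")  # placeholder dataset
    X = df[["vibration", "temperature", "pressure", "runtime_hours"]]
    y = df["failure_within_7d"]              # 1 if the asset failed within 7 days

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))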

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As an AI/ML Computational Science Assoc Mgr at Accenture, you will play a crucial role in the Technology for Operations team, serving as a trusted advisor and partner to Accenture Operations. Your responsibilities will involve providing innovative and secure technologies to help clients develop an intelligent operating model, ultimately driving exceptional results. Collaborating closely with the sales, offering, and delivery teams, you will identify and develop cutting-edge solutions in areas such as Application Hosting Operations (AHO), Infrastructure Management (ISMT), and Intelligent Automation.

To excel in this role, you should possess a strong background in Artificial Intelligence (AI) and a deep understanding of its foundational principles, concepts, techniques, and tools. Proficiency in the Python programming language, Python software development, PySpark, Microsoft SQL Server, and Microsoft SQL Server Integration Services (SSIS) is essential. Your ability to work effectively in a team, coupled with excellent written and verbal communication skills, numerical aptitude, and a results-oriented mindset, will be critical for success.

In this position, you will analyze and solve moderately complex problems, often creating new solutions by leveraging existing methods and procedures. You will need to align your work with the strategic direction set by senior management, with primary upward interaction with direct supervisors or team leads. Your decision-making will impact your own team and, at times, other teams as well. Depending on the situation, you may manage medium-small sized teams or work efforts independently, either within a client or Accenture. Please note that this role may involve working in rotational shifts to meet business requirements.

If you are a seasoned professional with 10 to 14 years of experience, possessing the qualifications and skills mentioned above, and are eager to contribute to Accenture's mission of delivering technology-driven solutions with human ingenuity, we encourage you to explore this exciting opportunity.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

We are looking for a Databricks Developer with 4-6 years of experience in PySpark, cloud platforms, and SQL. You should have proven experience implementing data solutions on the Databricks platform, including setting up Databricks clusters, working in Databricks modules, and creating data pipelines for ingesting and transforming data from various sources. Your role will involve developing Spark-based ETL workflows and data pipelines within Databricks for data preparation and transformation.

As a Databricks Developer, you should have a Bachelor's or Master's degree and 4-6 years of professional experience, a solid understanding of Databricks fundamentals, and hands-on experience configuring, setting up, and managing Databricks clusters, workspaces, and notebooks for optimal performance. Your responsibilities also include implementing data pipelines, ensuring data quality, consistency, and reliability, and developing ETL processes as required. Knowledge of performance optimization, monitoring, automation, data governance, compliance, and security best practices is important for this role.

Proficiency in PySpark, Databricks Notebooks, SQL, and Python, Scala, or Java is required, along with strong problem-solving and troubleshooting skills. Excellent communication and collaboration skills are also essential, and Databricks certifications would be beneficial.

Mandatory skills: Databricks, PySpark, and SQL. The role requires 4 to 6 years of experience and a BE/B.Tech/MCA/M.Tech degree. If you possess the required skills and experience and are looking for a challenging opportunity in Noida, Indore, Pune, or Gurgaon, this Databricks Developer role could be the right fit for you.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

The ideal candidate for this position in Gurugram should have strong proficiency in Python coding, as the interview process will focus on coding skills. Experience in PySpark and ADF is preferred as a secondary skill area, with a minimum of 2 years of experience required. The main responsibilities of this role include reviewing and optimizing existing code, working on text parsers and scrapers (a small parser sketch follows this listing), and reviewing incoming pipelines. Experience with ADF pipelines is also a key requirement.

In addition to technical skills, the candidate should possess certain additional qualities for Position 1: the ability to provide guidance to the team on a regular basis, offer technical support, collaborate with other leads, and ensure faster deliverables. There is also an emphasis on exploring how AI can be integrated into work processes, including using or creating LLMs to enhance the Go-To-Market strategy.
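For a sense of the text-parsing work described, here is a small hedged sketch; the log format and field names are invented for illustration.

    # Regex-based line parser that yields structured records (illustrative format).
    import re

    LINE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
        r"(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)")

    def parse_lines(lines):
        """Yield structured records, silently skipping lines that do not match."""
        for line in lines:
            m = LINE.match(line)
            if m:
                yield m.groupdict()

    sample = ["2024-05-01 10:15:00 ERROR pipeline failed: timeout"]
    print(list(parse_lines(sample)))
    # [{'ts': '2024-05-01 10:15:00', 'level': 'ERROR', 'msg': 'pipeline failed: timeout'}]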

Posted 1 week ago

Apply

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at our company, you will play a crucial role in designing, developing, and optimizing data pipelines and workflows in a cloud-based environment, leveraging your expertise in PySpark, Snowflake, and AWS for data processing and analytics.

Your responsibilities will include designing and implementing scalable ETL pipelines using PySpark on AWS, developing and optimizing data workflows for Snowflake integration, and managing and configuring AWS services such as S3, Lambda, Glue, EMR, and Redshift. Collaboration with data analysts and business teams to understand requirements and deliver solutions will be essential, along with ensuring data security and compliance with best practices in AWS and Snowflake environments. Monitoring and troubleshooting data pipelines and workflows for performance and reliability, as well as writing efficient, reusable, and maintainable code for data processing and transformation, will also be part of your role.

To excel in this position, you should have strong experience with AWS services like S3, Lambda, Glue, and MSK; proficiency in PySpark for large-scale data processing; hands-on experience with Snowflake for data warehousing and analytics; and a solid understanding of SQL and database optimization techniques. Knowledge of data lake and data warehouse architectures, familiarity with CI/CD pipelines and version control systems like Git, and strong problem-solving and debugging skills are also required.

Experience with Terraform or CloudFormation for infrastructure as code, knowledge of Python for scripting and automation, familiarity with Apache Airflow for workflow orchestration (a DAG sketch follows this listing), and an understanding of data governance and security best practices will be beneficial. Certification in AWS or Snowflake is a plus.

You should hold a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience, including 5+ years in AWS cloud engineering and 2+ years with PySpark and Snowflake. Join our Technology team as a valuable member of the Digital Software Engineering job family, working full-time while continuously growing and expanding your expertise.
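Since Airflow orchestration is called out above, here is a hedged sketch of a small DAG in the Airflow 2.x style; the task bodies and schedule are placeholders.

    # Three-task ETL DAG, Airflow 2.x style (task bodies are placeholders).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():    # e.g. pull raw files from S3
        pass

    def transform():  # e.g. trigger the PySpark transformation
        pass

    def load():       # e.g. publish curated data to Snowflake
        pass

    with DAG(
        dag_id="daily_sales_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2 >> t3  # linear dependency: extract, then transform, then load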

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

The role of AI Engineer at our company involves developing essential software features and delivering them on time while meeting the company's performance and quality standards. As an AI Engineer, you will join our team on various data science initiatives, ranging from traditional analytics to cutting-edge AI applications, giving you the opportunity to gain experience in machine learning and artificial intelligence across different domains.

Your core responsibilities will include developing and maintaining data processing pipelines using Python; creating and deploying GenAI applications through APIs and frameworks such as Semantic Kernel and LangChain (a sketch follows this listing); conducting data analysis and generating insightful visualizations; collaborating with different teams to understand business requirements and implement AI solutions; contributing to the development and upkeep of language model evaluations and other AI applications; writing clean, documented, and tested code following best practices; and monitoring application performance and making necessary improvements.

To qualify for this role, you should hold a Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, along with at least 2 years of hands-on experience in applied data science and machine learning. Proficiency in Python programming, familiarity with LLMs and their applications, experience with cloud platforms such as AWS, Azure, or GCP, version control using Git, strong analytical and problem-solving skills, and knowledge of PyTorch and deep learning fundamentals are essential requirements.

In terms of technical skills, you should have knowledge of LLMs and Generative AI platforms; familiarity with popular LLM APIs and frameworks such as Semantic Kernel, LangChain, OpenAI, and Google Vertex AI; proficiency in Python, SQL, and R; expertise in libraries such as Hugging Face Transformers, Matplotlib, and Seaborn; ML/statistics skills with tools like scikit-learn, statsmodels, and PyTorch; data processing experience with Pandas, NumPy, and PySpark; and experience working with cloud platforms like AWS, GCP, or Azure.

This full-time permanent position is located in Bengaluru and is with the Merkle brand.
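As a hedged illustration of calling an LLM API as described above, the snippet below uses the OpenAI Python SDK (v1 style); the model name is an assumption and the API key is read from the environment.

    # Minimal chat-completion call with the OpenAI SDK (model name is illustrative).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model the team standardizes on
        messages=[
            {"role": "system", "content": "You are a concise data assistant."},
            {"role": "user", "content": "Summarize why schema drift breaks pipelines."},
        ],
    )
    print(response.choices[0].message.content)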

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Primary skills: PySpark / Hadoop / Scala. Notice period (NP): immediate to 60 days.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a part of the AI & Data Platform organization, the Enterprise Business Intelligence (EBI) team is central to NXP’s data analytic success, providing and maintaining data solutions, platforms and methods that enable self-service business creation of solutions that drive the business forward. Data Engineers within EBI have responsibilities across solution design, delivery, and support. In this role you will work with Product Owners, Architects, Data Scientists, and other stakeholders to design, build and maintain ETL/ELT pipelines, data pipelines and jobs, combining data from multiple source systems into one or multiple target systems. Solutions delivered must adhere to EBI and IT architectural principles pertaining to capacity planning, performance management, data security, data privacy, lifecycle management and regulatory compliance. Assisting the Operational Support team with analysis and investigation of issues is also expected, as needed.

Required Skills and Qualifications:
- Proven experience as a Data Engineer
- Hands-on experience in ETL design and development concepts (3+ years)
- Experience with AWS and Azure cloud platforms and their data service offerings
- Proficiency in SQL, PySpark, Python
- Experience with GitHub, GitLab, CI/CD
- Knowledge of advanced analytic concepts including AI/ML
- Strong problem-solving skills and ability to work in a fast-paced and collaborative environment
- Excellent oral and written communication skills

Preferred Skills & Qualifications:
- Experience with Agile / DevOps
- Proficiency in SQL (Databricks, Teradata)
- Experience with DBT

More information about NXP in India...

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a part of the AI & Data Platform organization, the Enterprise Business Intelligence (EBI) team is central to NXP’s data analytic success, providing and maintaining data solutions, platforms and methods that enable self-service business creation of solutions that drive the business forward. Data Engineers within EBI have responsibilities across solution design, delivery, and support. In this role you will work with Product Owners, Architects, Data Scientists, and other stakeholders to design, build and maintain ETL/ELT pipelines, data pipelines and jobs, combining data from multiple source systems into one or multiple target systems. Solutions delivered must adhere to EBI and IT architectural principles pertaining to capacity planning, performance management, data security, data privacy, lifecycle management and regulatory compliance. Assisting the Operational Support team with analysis and investigation of issues is also expected, as needed.

Required Skills and Qualifications:
- Proven experience as a Data Engineer
- Hands-on experience in ETL design and development concepts (5+ years)
- Experience with AWS and Azure cloud platforms and their data service offerings
- Proficiency in SQL, PySpark, Python
- Experience with GitHub, GitLab, CI/CD
- Knowledge of advanced analytic concepts including AI/ML
- Strong problem-solving skills and ability to work in a fast-paced and collaborative environment
- Excellent oral and written communication skills

Preferred Skills & Qualifications:
- Experience with Agile / DevOps
- Proficiency in SQL (Databricks, Teradata)
- Experience with DBT

More information about NXP in India...

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us: Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Summary: As a Data Engineer based out of our BMS Hyderabad site, you are part of the Data Platform team, supporting the larger Data Engineering community that delivers data and analytics capabilities for Data Platforms and the Data Engineering Community. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities:
- Design, build, and maintain data products, drive their evolution, and utilize the most suitable data architecture for our organization's data needs.
- Serve as the Subject Matter Expert on Data & Analytics Solutions; be accountable for delivering high-quality data products and analytics-ready data solutions.
- Develop and maintain ETL/ELT pipelines for ingesting data from various sources into our data warehouse.
- Develop and maintain data models to support our reporting and analysis needs.
- Optimize data storage and retrieval to ensure efficient performance and scalability.
- Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
- Ensure data quality and integrity through data validation and testing.
- Implement and maintain security protocols to protect sensitive data.
- Stay up to date with emerging trends and technologies in data engineering and analytics.
- Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt data and technology strategy.
- Evaluate data enhancements and initiatives, assessing capacity and prioritization along with onshore and vendor teams.
- Stay knowledgeable about evolving trends in data platforms and product-based implementation.
- Manage and provide guidance for the data engineers supporting projects, enhancements, and break/fix efforts.
- Bring an end-to-end ownership mindset to driving initiatives through completion; be comfortable working in a fast-paced environment with minimal oversight.
- Mentor and provide career guidance to other team members to unlock their full potential. Prior experience working in an Agile/product-based environment is expected.
- Provide strategic feedback to vendors on service delivery and balance workload with vendor teams.

Qualifications & Experience:
- Hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
- Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
- Ability to craft and architect data solutions and automation pipelines to productionize solutions.
- Hands-on experience developing and delivering data and ETL solutions with technologies like AWS data services (Glue, Redshift, Athena, Lake Formation, etc.). Cloudera Data Platform and Tableau experience is a plus.
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Strong programming skills in languages such as Python, PySpark, R, PyTorch, Pandas, Scala, etc.
- Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
- Experience with cloud-based data technologies such as AWS, Azure, or GCP (preferably strong in AWS).
- Strong analytical and problem-solving skills; excellent communication and collaboration skills.
- Functional knowledge or prior experience in the Life Sciences Research and Development domain is a plus.
- Experience and expertise in establishing agile, product-oriented teams that work effectively with teams in the US and other global BMS sites.
- Initiates challenging opportunities that build strong capabilities for self and team; demonstrates a focus on improving processes, structures, and knowledge within the team; leads in analyzing current states, delivers strong recommendations for navigating complexity in the environment, and executes to bring complex solutions to completion.
- AWS Data Engineering/Analytics certification is a plus.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers: With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol: BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: You are a strategic thinker passionate about driving solutions in financial analysis. You have found the right team. As a Quant Analytics Analyst within the Analytics Team, you will leverage your expertise in tools like SAS, SQL, Python, and Alteryx to deliver complex analytical projects. You will build statistical models, deploy them, and generate impactful insights with measurable P&L impact. You will collaborate with finance, marketing, technology, and business teams to present clear and concise results and recommendations.

Job Responsibilities:
- Perform detailed quantitative analysis and data deep dives to answer questions pertaining to key business objectives (a modeling sketch follows this listing).
- Create and maintain dashboards to track key business P&L metrics.
- Provide informative business financial information and coordinate business financial planning and budget management.
- Partner effectively with finance, marketing, technology, and business teams.
- Present results and recommendations clearly and concisely.
- Experience in retail finance or credit card finance within a bank is a big plus.

Required Qualifications, Capabilities, and Skills:
- Excellent knowledge of SQL and the ability to write complex SQL queries.
- Experience using data extraction and data analysis tools like SQL, PySpark SQL, Alteryx, Python, and SAS.
- Minimum 3 years’ experience working on analytical projects involving product analysis and statistical modeling.
- Good knowledge of statistical modeling techniques related to time series forecasting, regression (linear/logistic), ML models like XGBoost, and NLP.
- Experience building analytical dashboards using tools like Tableau, QuickSight, SAS Visual Analytics, Streamlit, etc.
- Excellent communication (verbal and written) skills.
- Proficiency in Microsoft Office (especially Excel).
- Basic understanding of cloud ops on AWS.

Preferred Qualifications, Capabilities, and Skills:
- Bachelor’s degree in Engineering required; CFA/FRM/MBA (Finance) an advantage but not mandatory.

ABOUT US: JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team: Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions, all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting-edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
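As a hedged sketch of the statistical-modeling work listed above, here is a toy XGBoost classifier for a card-risk style problem; the dataset and columns are invented.

    # Toy delinquency model with XGBoost (illustrative data and features).
    import pandas as pd
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("card_accounts.csv")  # placeholder extract
    X = df[["utilization", "tenure_months", "payment_ratio", "fico"]]
    y = df["delinquent_90d"]               # 1 if the account went 90+ days past due

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))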

Posted 1 week ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen: Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

Role Description: We are seeking an experienced MDM Engineer with 8–12 years of experience to lead development and operations of our Master Data Management (MDM) platforms, with hands-on data engineering experience. This role will involve owning the backend data engineering solution within the MDM team. This is a technical role that will require hands-on work. To succeed in this role, the candidate must have strong data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations.

Roles & Responsibilities:
- Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data.
- Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM logic (a survivorship sketch follows this listing).
- Implement match/merge algorithms and survivorship rules using Informatica MDM or Reltio APIs.
- Build and maintain Delta Lake tables with schema evolution and versioning for master data domains.
- Use AWS services like S3, Glue, Lambda, and Step Functions for orchestrating MDM workflows.
- Automate data quality checks using IDQ or custom PySpark validators with rule-based profiling.
- Integrate external enrichment sources (e.g., D&B, LexisNexis) via REST APIs and batch pipelines.
- Design and deploy CI/CD pipelines using GitHub Actions or Jenkins for Databricks notebooks and jobs.
- Monitor pipeline health using the Databricks Jobs API, CloudWatch, and custom logging frameworks.
- Implement fine-grained access control using Unity Catalog and attribute-based policies for MDM datasets.
- Use MLflow for tracking model-based entity resolution experiments if ML-based matching is applied.
- Collaborate with data stewards to expose curated MDM views via REST endpoints or Delta Sharing.

Basic Qualifications and Experience: 8 to 13 years of experience in Business, Engineering, IT or a related field.

Must-Have Skills:
- Advanced proficiency in PySpark for distributed data processing and transformation.
- Strong SQL skills for complex data modeling, cleansing, and aggregation logic.
- Hands-on experience with Databricks, including Delta Lake, notebooks, and job orchestration.
- Deep understanding of MDM concepts including match/merge, survivorship, and golden record creation.
- Experience with MDM platforms like Informatica MDM or Reltio, including REST API integration.
- Proficiency in AWS services such as S3, Glue, Lambda, Step Functions, and IAM.
- Familiarity with data quality frameworks and tools like Informatica IDQ or custom rule engines.
- Experience building CI/CD pipelines for data workflows using GitHub Actions, Jenkins, or similar.
- Knowledge of schema evolution, versioning, and metadata management in data lakes.
- Ability to implement lineage and observability using Unity Catalog or third-party tools.
- Comfort with Unix shell scripting or Python for orchestration and automation.
- Hands-on experience with RESTful APIs for ingesting external data sources and enrichment feeds.

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights.
- Exposure to Agile practices and tools (JIRA, Confluence).
- Prior experience in Pharma/Life Sciences.
- Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Any data analysis certification (SQL, Python, PySpark, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

GCF Level 05A
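To illustrate the window-function survivorship logic named above, here is a hedged PySpark sketch of a "best record wins" golden-record rule; the paths, columns, and ranking criteria are assumptions, not Informatica or Reltio specifics.

    # Survivorship via a window function: one golden record per match group.
    from pyspark.sql import SparkSession, functions as F, Window

    spark = SparkSession.builder.getOrCreate()

    matched = spark.read.format("delta").load("/mdm/matched_customers")  # placeholder

    # Rank duplicates in each match group: trusted source first, then most recent
    w = Window.partitionBy("match_group_id").orderBy(
        F.col("source_rank").asc(), F.col("updated_at").desc())

    golden = (matched
              .withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)   # the survivor becomes the golden record
              .drop("rn"))

    golden.write.format("delta").mode("overwrite").save("/mdm/golden_customers")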

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients: At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer - R&D - Multi-Omics

What You Will Do: Let’s do this. Let’s change the world. In this vital role you will be responsible for the development and maintenance of software in support of target/biomarker discovery at Amgen.

- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions.
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Optimize large datasets for query performance.
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Maintain documentation of processes, systems, and solutions.

What We Expect Of You: We are all different, yet we all use our unique contributions to serve patients. The role requires proficiency in scientific software development (e.g., Python, R, R Shiny, Plotly Dash), and some knowledge of CI/CD processes and cloud computing technologies (e.g., AWS, Google Cloud).

Basic Qualifications: Master’s or Bachelor’s degree and 5 to 9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field.

Preferred Qualifications: 5+ years of experience designing and supporting biopharma scientific research data analytics software platforms.

Must-Have Skills:
- Proficiency with SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks.
- Hands-on experience with big data technologies and platforms, such as Databricks (or equivalent) and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning on big data processing.
- Excellent problem-solving skills and the ability to work with large, complex datasets.

Good-to-Have Skills:
- Experience with git, CI/CD, and the software development lifecycle.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL, Oracle) or Databricks.
- Experience with cloud computing platforms and infrastructure (AWS preferred).
- Experience using and adopting an Agile framework.
- A passion for tackling complex challenges in drug discovery with technology and data.
- Basic understanding of data modeling, data warehousing, and data integration concepts.
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire).
- Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming.
- Experience writing and maintaining technical documentation in Confluence.

Professional Certifications: Databricks Certified Data Engineer Professional preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- High degree of initiative and self-motivation; demonstrated presentation skills.
- Ability to manage multiple priorities successfully.
- Team-oriented with a focus on achieving team goals.

What You Can Expect Of Us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will manage and oversee the development of robust data architectures, platforms, frameworks, and data product solutions, while mentoring and guiding a small team of data engineers and architects. You will be responsible for leading the development, implementation, and management of enterprise-level data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering, and build enterprise-scale data frameworks and expert data engineers. This role collaborates closely with counterparts in the US and EU. You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions you build align with business goals and are scalable, secure, and efficient.
Roles & Responsibilities:
Architect and implement scalable, high-performance, enterprise-scale modern data platforms and applications covering data analysis, data ingestion, storage, data transformation (data pipelines), and analytics
Evaluate new trends and features in the data platforms area and build rapid prototypes
Build data solution architectures and frameworks to accelerate data engineering processes
Build frameworks to improve reusability and reduce the development time and cost of data management and governance (a minimal sketch of this metadata-driven approach follows this list)
Integrate AI into data engineering practices to bring efficiency through automation
Build best practices in data platforms capability and ensure their adoption across the product teams
Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals
Lead and motivate a high-performing data platforms team to deliver exceptional results
Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices
Collaborate with counterparts in the US and EU, and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet requirements
Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives
Drive adoption of the data and analytics platforms and solutions by partnering with business stakeholders and functional IT teams on change management, training, communications, etc.
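As referenced in the responsibilities above, one common way such frameworks cut development time is a metadata-driven design, where pipeline behavior comes from configuration rather than per-source code. The sketch below illustrates the idea under stated assumptions; the source names, formats, paths, and table names are all hypothetical.

```python
# Illustrative metadata-driven ingestion sketch; configs are hypothetical.
from dataclasses import dataclass
from pyspark.sql import SparkSession


@dataclass
class SourceConfig:
    name: str
    fmt: str           # e.g. "csv", "parquet", "json"
    path: str
    target_table: str


def ingest(spark: SparkSession, cfg: SourceConfig) -> None:
    # One generic reader/writer serves every configured source, so adding
    # a source means adding a config entry rather than writing a new job.
    df = spark.read.format(cfg.fmt).option("header", "true").load(cfg.path)
    df.write.mode("overwrite").saveAsTable(cfg.target_table)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()
    sources = [
        SourceConfig("sales", "csv", "s3://example/raw/sales/", "bronze.sales"),
        SourceConfig("events", "json", "s3://example/raw/events/", "bronze.events"),
    ]
    for cfg in sources:
        ingest(spark, cfg)
```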
Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship.
Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities.
Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives, and demonstrate leadership by contributing to strategic discussions.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is a leader with these qualifications.
Basic Qualifications:
12 to 17 years of experience; Computer Science and Engineering preferred, other engineering fields will be considered
10+ years of experience in building data platforms and data engineering, working in COE development or product building
5+ years of hands-on experience working with big data platforms and solutions using AWS and Databricks
5+ years of experience leading enterprise-scale data engineering solution development
Experience building enterprise-scale data lake and data fabric solutions on cloud, leveraging modern approaches such as Data Mesh
Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions
Strong understanding of cloud architecture principles and cost optimization strategies
Hands-on experience using Databricks, PySpark, Python, and SQL
Proven ability to lead and develop high-performing data engineering teams
Strong problem-solving, analytical, and critical thinking skills to address complex data challenges
Preferred Qualifications:
Experience integrating AI with data platforms and engineering, and building AI-ready data platforms
Prior experience in data modeling, especially star-schema modeling concepts
Familiarity with ontologies, information modeling, and graph databases
Experience working with agile development methodologies such as Scaled Agile
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Education and Professional Certifications
SAFe for Teams certification (preferred)
Databricks certifications
AWS cloud certification
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear.
Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

12.0 - 17.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Description:
Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Principal Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks, with domain expertise in the R&D domain. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.
Roles & Responsibilities:
Architect and maintain robust, scalable data pipelines using Databricks, Spark, and Delta Lake, enabling efficient batch and real-time processing (a minimal Delta Lake sketch follows the must-have skills below)
Lead efforts to evaluate, adopt, and integrate emerging technologies and tools that enhance productivity, scalability, and data delivery capabilities
Drive performance optimization efforts, including Spark tuning, resource utilization, job scheduling, and query improvements
Identify and implement innovative solutions that streamline data ingestion, transformation, lineage tracking, and platform observability
Build frameworks for metadata-driven data engineering, enabling reusability and consistency across pipelines
Foster a culture of technical excellence, experimentation, and continuous improvement within the data engineering team
Collaborate with platform, architecture, analytics, and governance teams to align platform enhancements with enterprise data strategy
Define and uphold SLOs, monitoring standards, and data quality KPIs for production pipelines and infrastructure
Partner with cross-functional teams to translate business needs into scalable, governed data products
Mentor engineers across the team, promoting knowledge sharing and adoption of modern engineering patterns and tools
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures
Must-Have Skills:
Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning for big data processing
Strong understanding of AWS services
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
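As noted in the responsibilities above, Delta Lake pipelines often center on idempotent upserts into curated tables. The following is a minimal sketch, assuming a Databricks-style environment where the delta-spark package is available; the table paths and join key are hypothetical examples, not Amgen's pipelines.

```python
# Hedged Delta Lake upsert sketch; paths and keys are hypothetical examples.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-example").getOrCreate()

# New batch of records staged by an upstream job (hypothetical location)
updates = spark.read.parquet("s3://example/staging/orders/")

target = DeltaTable.forPath(spark, "s3://example/silver/orders/")

# Idempotent merge: update matched keys, insert new ones, so re-running
# the same batch does not duplicate rows in the curated table.
(target.alias("t")
       .merge(updates.alias("u"), "t.order_id = u.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```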
Good-to-Have Skills:
Deep expertise in the Biotech and Pharma industries
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Education and Professional Certifications
12 to 17 years of experience in Computer Science, IT, or a related field
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly, be organized, and be detail oriented
Strong presentation and public speaking skills
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

9.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Site Reliability Engineering Manager/Cloud Engineering Manager
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Let’s do this. Let’s change the world. We are looking for a Site Reliability Engineer/Cloud Engineer (SRE2) to work on the performance optimization, standardization, and automation of Amgen’s critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will drive operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.
Roles & Responsibilities:
Lead and motivate a high-performing test automation team to deliver exceptional results. Provide expert guidance and mentorship to the team, fostering a culture of innovation and best practices.
System Reliability, Performance Optimization & Cost Reduction: Ensure the reliability, scalability, and performance of Amgen’s infrastructure, platforms, and applications. Proactively identify and resolve performance bottlenecks and implement long-term fixes. Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability.
Automation & Infrastructure as Code (IaC): Drive the adoption of automation and Infrastructure as Code across the organization to streamline operations, minimize manual interventions, and enhance scalability. Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization.
Standardization of Processes & Tools: Establish standardized operational processes, tools, and frameworks across Amgen’s technology stack to ensure consistency, maintainability, and best-in-class reliability practices. Champion the use of industry standards to optimize performance and increase operational efficiency.
Monitoring, Incident Management & Continuous Improvement: Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response (a minimal monitoring sketch follows this list). Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences. Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring.
Collaboration & Cross-Functional Leadership: Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle. Act as a subject-matter expert for SRE principles and advocate for best practices on assigned projects.
Capacity Planning & Disaster Recovery: Execute capacity planning processes to support future growth, performance, and cost management. Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures.
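As referenced in the monitoring responsibility above, the sketch below shows one lightweight form such automation can take: polling a CloudWatch metric and flagging a breach. It assumes boto3 is installed and AWS credentials are configured; the instance ID and threshold are hypothetical examples, not Amgen's tooling.

```python
# Illustrative CloudWatch check; instance ID and threshold are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")


def recent_cpu_average(instance_id: str, minutes: int = 15) -> float:
    """Mean CPU utilization for one EC2 instance over the last window."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(minutes=minutes),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0


if __name__ == "__main__":
    cpu = recent_cpu_average("i-0123456789abcdef0")  # hypothetical instance
    if cpu > 85.0:
        print(f"ALERT: sustained CPU at {cpu:.1f}% - investigate or scale out")
```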
Must-Have Skills:
Experience with AWS/Azure cloud services
Good knowledge of visualization tools such as Power BI or Tableau
Knowledge of SQL, Python, PySpark, and Spark
Proficiency in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc.
Experience with containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability
Ability to learn new technologies quickly
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Good-to-Have Skills:
Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments
Familiarity with distributed systems, databases, and large-scale system architectures
Bachelor’s degree in Computer Science and Engineering preferred; other engineering fields considered
Databricks knowledge/exposure (upskilling expected if hired)
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
Basic Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field
9-11+ years of experience in IT infrastructure, with at least 7+ years in Site Reliability Engineering or related fields
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

12.0 - 17.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
Role Description:
Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Principal Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.
Roles & Responsibilities:
Architect and maintain robust, scalable data pipelines using Databricks, Spark, and Delta Lake, enabling efficient batch and real-time processing
Lead efforts to evaluate, adopt, and integrate emerging technologies and tools that enhance productivity, scalability, and data delivery capabilities
Drive performance optimization efforts, including Spark tuning, resource utilization, job scheduling, and query improvements
Identify and implement innovative solutions that streamline data ingestion, transformation, lineage tracking, and platform observability
Build frameworks for metadata-driven data engineering, enabling reusability and consistency across pipelines
Foster a culture of technical excellence, experimentation, and continuous improvement within the data engineering team
Collaborate with platform, architecture, analytics, and governance teams to align platform enhancements with enterprise data strategy
Define and uphold SLOs, monitoring standards, and data quality KPIs for production pipelines and infrastructure
Partner with cross-functional teams to translate business needs into scalable, governed data products
Mentor engineers across the team, promoting knowledge sharing and adoption of modern engineering patterns and tools
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures
Must-Have Skills:
Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning for big data processing
Strong understanding of AWS services
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
Good-to-Have Skills:
Deep expertise in the Biotech and Pharma industries
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Education and Professional Certifications
12 to 17 years of experience in Computer Science, IT, or a related field
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly, be organized, and be detail oriented
Strong presentation and public speaking skills
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies