
3345 Databricks Jobs - Page 43

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

6.0 - 11.0 years

8 - 18 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 9 to 11 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of big data platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise in data warehousing concepts and in relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers: CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice, including data quality, security, privacy and compliance

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta
- Others: Experience of using a job scheduler, e.g., Autosys; exposure to Business Intelligence tools, e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
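
For context on the day-to-day work a role like this describes (building Spark-based ETL pipelines that feed analytical data models), here is a minimal PySpark sketch. The bucket paths, column names, and quality rules are hypothetical illustrations, not part of the posting:

```python
# Minimal PySpark ETL sketch: read raw data, apply basic quality
# controls, and write an analytics-ready table. All paths and
# column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("regulatory_etl_example").getOrCreate()

# Extract: load raw transactions from a landing zone (e.g., S3 or HDFS).
raw = spark.read.parquet("s3://example-bucket/landing/transactions/")

# Transform: enforce simple data-quality rules and derive reporting fields.
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       .withColumn("booking_month", F.date_format("booking_date", "yyyy-MM"))
)

# Load: write a partitioned, analytics-ready table for downstream consumers.
(clean.write.mode("overwrite")
      .partitionBy("booking_month")
      .parquet("s3://example-bucket/curated/transactions/"))
```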

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Skills and Competencies
- Proficiency in Kubernetes and Amazon EKS (2+ years, required): essential for managing containerized applications and ensuring high availability and security in cloud-native environments.
- Strong expertise in AWS serverless technologies (required): including Lambda, API Gateway, EventBridge, and Step Functions, to build scalable and cost-efficient solutions.
- Hands-on experience with Terraform (2+ years, required): critical for managing Infrastructure as Code (IaC) across multiple environments, ensuring consistency and repeatability.
- CI/CD pipeline development using GitHub Actions (required): necessary for automating deployments and supporting agile development practices.
- Scripting skills in Python, Bash, or PowerShell (required): enables automation of operational tasks and enhances infrastructure management capabilities.
- Experience with Databricks and Apache Kafka (preferred): valuable for teams working with data pipelines, MLOps workflows, and event-driven architectures.

Education
- Bachelor's degree in Computer Science or equivalent experience

Responsibilities
- Design, automate, and manage scalable cloud infrastructure using Kubernetes, AWS, Terraform, and CI/CD pipelines.
- Design and manage cloud-native infrastructure using container orchestration platforms, ensuring high availability, scalability, and security across environments.
- Implement and maintain Infrastructure as Code (IaC) using tools like Terraform to provision and manage multi-environment cloud resources consistently and efficiently.
- Develop and optimize continuous integration and delivery (CI/CD) pipelines to automate application and infrastructure deployments, supporting agile development cycles.
- Monitor system performance and reliability by configuring observability tools for logging, alerting, and metrics collection, and proactively address operational issues.
- Collaborate with cross-functional teams to align infrastructure solutions with application requirements, ensuring seamless deployment and performance optimization.
- Document technical processes and architectural decisions through runbooks, diagrams, and knowledge-sharing resources to support operational continuity and team onboarding.

About the Team
Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody's enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers.
We contribute to Moody's by:
- Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation.
- Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness.
- Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications.

By joining our team, you will be part of exciting work in cloud-native DevOps, data engineering, and platform automation, supporting global data operations across 29 countries and contributing to Moody's mission of delivering integrated perspectives on risk and growth.

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
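
As an illustration of the event-driven serverless pattern this posting references (a Lambda function consuming EventBridge events), here is a minimal Python handler sketch; the event detail fields and bucket name are hypothetical examples, not Moody's actual pipeline:

```python
# Minimal AWS Lambda handler sketch for an EventBridge-triggered job.
# The event detail fields and bucket name are hypothetical examples.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # EventBridge delivers the custom payload under the "detail" key.
    detail = event.get("detail", {})
    dataset = detail.get("dataset", "unknown")

    # Record a small processing marker so downstream steps can observe it.
    s3.put_object(
        Bucket="example-pipeline-state",
        Key=f"markers/{dataset}.json",
        Body=json.dumps({"dataset": dataset, "status": "processed"}),
    )
    return {"statusCode": 200, "dataset": dataset}
```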

Posted 1 week ago

Apply

2.0 years

0 Lacs

Karnataka, India

On-site

Source: LinkedIn

Who You'll Work With
A data engineer will work in the Data and Artificial Intelligence organisation of Nike and will focus on building highly complex and performant data pipelines that will drive Nike's data-driven strategies for the future of sports.

Who We Are Looking For
In this role, we are looking for self-driven individuals who have deep technical knowledge in the big data domain. This role requires the individual to be an excellent problem solver who will design and implement complex data pipelines that solve business problems for Nike. The core competencies required for this role include:
- Bachelor's degree in computer science engineering
- 2+ years of hands-on experience in the data engineering field
- In-depth big data tech stack knowledge
- Expertise in PySpark and SQL
- Expertise in Databricks, Snowflake, Airflow
- Excellent written and verbal communication skills

What You'll Work On
As a data engineer, you will be a key pillar of the data engineering team. You will collaborate closely with other engineers to deliver key changes to the data pipelines that drive Nike's data strategy. On a day-to-day basis, you will focus on:
- Building, enhancing, and troubleshooting complex data pipelines
- Collaborating with product managers, engineers, and analysts to build, enhance, and troubleshoot data pipelines
- Collaborating with senior, lead, and principal engineers to define and implement quality standards across data pipelines
- Contributing to the design and architecture of data pipelines
- Implementing data quality and reliability measures across data pipelines
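
To make the Airflow expertise this posting asks for concrete, here is a minimal DAG sketch of the kind of scheduled pipeline such a role maintains; the DAG id, task names, and the work done inside each task are hypothetical:

```python
# Minimal Airflow DAG sketch: a daily pipeline with two dependent tasks.
# DAG id, task names, and the work inside each task are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for pulling raw data from a source system.
    print("extracting raw data")

def transform():
    # Placeholder for a Spark/Databricks transformation step.
    print("transforming data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```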

Posted 1 week ago

Apply

2.0 years

0 Lacs

Karnataka, India

On-site

Source: LinkedIn

Who You'll Work With
This role is part of Nike's Content Technology team within the Consumer Product and Innovation (CP&I) organization, working very closely with globally distributed Engineering and Product teams. This role will roll up to the Director of Software Engineering based out of Nike India Tech Centre.

Who We Are Looking For
We are looking for an experienced, technology-focused, hands-on Lead Engineer to join our team in Bengaluru, India. As a Senior Data Engineer, you will play a key role in ensuring that our data products are robust and capable of supporting our Data Engineering and Business Intelligence initiatives.
- A data engineer with 2+ years of experience in data engineering
- Proficient in SQL, Python, PySpark, and Apache Airflow (or similar workflow management tools)
- Hands-on experience with Databricks, Snowflake, and cloud platforms (AWS/GCP/Azure)
- Good understanding of Spark, Delta Lake, Medallion architecture, and ETL/ELT processes
- Solid data modeling and data profiling skills
- Familiarity with Agile methodologies (Scrum/Kanban)
- Awareness of DevOps practices in data engineering (automated testing, security administration, workflow orchestration)
- Exposure to Kafka or real-time data processing
- Strong communication and collaboration skills
- Preferred: familiarity with Tableau or similar BI tools; exposure to GenAI/ML pipelines
- Nice to have: Databricks certifications for data engineer, developer, or Apache Spark

What You'll Work On
- Build and maintain ETL/ELT pipelines and reusable data components.
- Collaborate with peers and stakeholders to gather data requirements.
- Participate in code reviews and contribute to quality improvements.
- Monitor and troubleshoot data pipelines for performance and reliability.
- Support CI/CD practices in data engineering workflows.
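
Since this posting calls out Delta Lake and the Medallion architecture, here is a minimal PySpark sketch of a bronze-to-silver promotion, assuming a Delta-enabled Spark session (as on Databricks); the table names and cleansing rules are hypothetical:

```python
# Minimal Delta Lake "medallion" sketch: promote raw bronze records to a
# cleansed silver table. Table names and rules are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_example").getOrCreate()

# Bronze: raw ingested events, stored as-is in a Delta table.
bronze = spark.read.format("delta").table("bronze.events")

# Silver: deduplicated, typed, and filtered view of the same data.
silver = (
    bronze.dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("event_type").isNotNull())
)

# Overwrite the silver table each run; an incremental MERGE is the usual
# production alternative once change volumes grow.
(silver.write.format("delta")
       .mode("overwrite")
       .saveAsTable("silver.events"))
```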

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Hands-on Mantas (Oracle FCCM) expertise throughout the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
- Ability to translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager and scenario development; thorough knowledge and hands-on experience in Mantas FSDM, DIS and Batch Scenario Manager
- Big Data: Experience of big data platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts and of relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers: CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta
- Others: Basics of a job scheduler like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Design, build, and maintain scalable data infrastructure and pipelines
- Oversee data collection, transformation, and storage strategies to ensure data integrity and efficiency
- Collaborate with cross-functional teams including data science, analytics, product management, and IT
- Ensure optimal data pipeline architecture and troubleshoot issues as they arise
- Establish processes for data quality, validation, monitoring, and reporting
- Stay updated with the latest technologies and implement best practices in data engineering
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering
- 5+ years of experience in SQL, Python, PySpark, and/or Scala programming languages
- 5+ years of hands-on experience with Databricks or similar big data processing platforms
- Solid understanding of data warehousing, ETL (Extract, Transform, Load) processes, and data modeling
- Experience in implementing data governance frameworks and security best practices
- Familiarity with big data technologies (e.g., Spark, Kafka)
- Proven excellent problem-solving skills and attention to detail

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
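
As a concrete example of the data quality, validation, and monitoring processes this role emphasizes, here is a minimal PySpark quality gate; the table name, columns, and 1% threshold are hypothetical illustrations:

```python
# Minimal data-quality gate sketch: compute basic checks on a table and
# fail loudly when thresholds are breached. Names and thresholds are
# hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_gate_example").getOrCreate()

df = spark.read.table("curated.claims")

total = df.count()
null_ids = df.filter(F.col("claim_id").isNull()).count()
dupes = total - df.dropDuplicates(["claim_id"]).count()

# Simple report that a scheduler or monitor can scrape from job logs.
print(f"rows={total} null_ids={null_ids} duplicates={dupes}")

# Gate: block downstream publishing if more than 1% of rows are bad.
if total > 0 and (null_ids + dupes) / total > 0.01:
    raise ValueError("data quality gate failed: too many bad rows")
```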

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
- The ability to be a team player
- The ability and skill to train other people in procedural and technical topics
- Strong communication and collaboration skills

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- Able to write complex SQL queries
- Experience in Azure Databricks

Preferred Technical and Professional Experience
- Excellent communication and stakeholder management skills
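
As a small illustration of the "complex SQL" this posting asks for, here is a window-function query as it might run through Spark SQL in Azure Databricks; the schema, table, and columns are hypothetical:

```python
# A window-function query of the kind "complex SQL" postings refer to,
# executed through Spark SQL (as in Azure Databricks). Names hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_example").getOrCreate()

latest_orders = spark.sql("""
    SELECT customer_id, order_id, order_ts, amount
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_ts DESC
               ) AS rn
        FROM sales.orders o
    )
    WHERE rn = 1   -- most recent order per customer
""")
latest_orders.show()
```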

Posted 1 week ago

Apply

3.0 - 8.0 years

9 - 16 Lacs

Pune

Work from Office

Source: Naukri

We are looking for a skilled Azure Data Engineer to design, develop, and optimize data pipelines using one of the following skill combinations:
1. SQL + ETL + Azure + Python + PySpark + Databricks
2. SQL + ADF + Azure
3. SQL + Python + PySpark

Required candidate profile
- Strong proficiency in SQL for data manipulation and querying
- Python and PySpark for data engineering tasks
- Experience with Databricks for big data processing and analytics
- Knowledge of data modeling, warehousing, and governance
- CI/CD pipelines for data deployment

Perks and benefits: Perks and benefits offered.
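
For flavor, here is a minimal parameterized Databricks notebook sketch combining SQL-style transformations with PySpark; it assumes the Databricks notebook environment (where `spark` and `dbutils` are provided), and the widget name, tables, and columns are hypothetical:

```python
# Minimal parameterized Databricks notebook sketch: read a run date from
# a widget and aggregate one day of data. Runs inside a Databricks
# notebook, where `spark` and `dbutils` exist; all names hypothetical.
from pyspark.sql import functions as F

dbutils.widgets.text("run_date", "2024-01-01")  # job parameter with default
run_date = dbutils.widgets.get("run_date")

daily = (
    spark.read.table("raw.sales")
         .filter(F.col("sale_date") == run_date)
         .groupBy("store_id")
         .agg(F.sum("amount").alias("daily_total"))
)

# Write the day's aggregate to a date-partitioned output path.
daily.write.mode("overwrite").parquet(f"/mnt/curated/daily_sales/{run_date}")
```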

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position: BI Developer and Data Analyst
Skills: Power BI, Databricks, SQL, Python, ETL, Redshift or Athena, AWS services (beyond QuickSight)
Experience: 4+ years

Responsibilities
- Design, develop, and maintain interactive and insightful dashboards using QuickSight.
- Conduct advanced data analysis to identify trends, patterns, and anomalies, providing meaningful interpretations and recommendations.
- Collaborate with stakeholders across different departments to understand their data needs and translate them into effective analytical solutions.
- Write and optimize SQL queries to extract, transform, and load data from various data sources.
- Utilize Python for data manipulation, automation of tasks, and statistical analysis.
- Ensure data accuracy, integrity, and consistency across all dashboards and analyses.
- Document dashboard specifications, data sources, and analytical methodologies.
- Stay up to date with the latest trends and best practices in data visualization and analytics.

Qualifications
- Bachelor's degree in a quantitative field such as Data Science, Statistics, Mathematics, Computer Science, or a related discipline.

Required Skills
- Data visualization best practices: proven experience in developing advanced dashboards and performing data analysis; ability to create clear, intuitive, and impactful visualizations (charts, graphs, tables, KPIs) that effectively communicate insights.
- Extensive experience with AWS QuickSight (or a similar BI tool): hands-on experience in building, publishing, and maintaining interactive dashboards and reports.
- QuickSight data sources: experience connecting QuickSight to various data sources, especially those common in AWS environments (e.g., S3, Redshift, Athena, RDS, Glue).
- QuickSight dataset creation and management: proficiency in creating, transforming, and optimizing datasets within QuickSight, including calculated fields, parameters, and filters.
- Performance optimization: knowledge of how to optimize QuickSight dashboards and data for speed and scalability.

Preferred Skills
- Experience with other data visualization tools.
- Familiarity with machine learning concepts.
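
To ground the Athena/AWS side of this stack, here is a minimal boto3 sketch that runs an Athena query and polls for completion; the region, database, query, and results bucket are hypothetical examples:

```python
# Minimal Athena query sketch with boto3: start a query and poll until it
# finishes. Database, query, and output bucket are hypothetical examples.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll for completion; production code would add a timeout and backoff.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} result rows")  # first row is the header
```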

Posted 1 week ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Urgently hiring for a Senior Azure Data Engineer
Job location: Bangalore
Minimum experience: 7 to 11 years
Keywords: Databricks, PySpark, Scala, SQL, live/streaming data, batch processing data
Share CV: Mohini.sharma@adecco.com or call 9740521948

Roles and Responsibilities
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills.

Key Characteristics
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies; appreciates the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE and CoP.

Mandatory skills
- Azure (master level)
- ELT
- Data modeling
- Data integration and ingestion
- Data manipulation and processing
- GitHub, GitHub Actions, Azure DevOps
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest

Optional skills
- Experience in project management, running a scrum team
- Experience working with BPC, Planning
- Exposure to working with an external technical ecosystem
- MkDocs documentation

Share CV: Mohini.sharma@adecco.com or call 9740521948
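
Since this posting highlights live/streaming data alongside batch processing, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic; it assumes the Spark-Kafka connector is available (bundled on Databricks), and the broker, topic, schema, and paths are hypothetical:

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and
# append parsed events to storage. Broker, topic, schema, and paths are
# hypothetical examples; requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("streaming_example").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         # Kafka values arrive as bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("checkpointLocation", "/tmp/checkpoints/events")
          .option("path", "/tmp/tables/events")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```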

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Position: Senior Azure Data Engineer (immediate joiners only)
Location: Bangalore
Mode of work: Work from Office
Experience: 7 years of relevant experience
Job type: Full time (on roll)

Job Description
Roles and Responsibilities
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills.

Key Characteristics
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies; appreciates the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE and CoP.

Mandatory skills
- Azure (master level)
- ELT
- Data modeling
- Data integration and ingestion
- Data manipulation and processing
- GitHub, GitHub Actions, Azure DevOps
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest

Optional skills
- Experience in project management, running a scrum team
- Experience working with BPC, Planning
- Exposure to working with an external technical ecosystem
- MkDocs documentation

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (fixed + variable pay)
2) Expected CTC
3) Number of years of experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location

Posted 1 week ago

Apply

3.0 - 5.0 years

4 - 6 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premises environments.
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
- Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:
Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies.
- Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
- Excellent leadership, communication, and stakeholder engagement skills.
- Well versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.
Good-to-Have Skills:
- Data engineering management experience in biotech/life sciences/pharma.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.

Education and Professional Certifications
- Doctorate degree with 3 to 5+ years of experience in Computer Science, IT or a related field, OR
- Master's degree with 6 to 8+ years of experience in Computer Science, IT or a related field, OR
- Bachelor's degree with 10 to 12+ years of experience in Computer Science, IT or a related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Posted 1 week ago

Apply

1.0 - 3.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills and provides administration support for the Master Data Management (MDM) and Data Quality platform, including solution architecture, inbound/outbound data integration (ETL), Data Quality (DQ), and maintenance/tuning of match rules.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Collaborate and communicate with MDM developers, data architects, product teams, business SMEs, and data scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Participate in sprint planning meetings and provide estimations on technical implementation
- As an SME, work with the team on MDM-related product installation, configuration, customization and optimization
- Own the understanding, documentation, maintenance, and creation of master-data-related data models (conceptual, logical, and physical) and database structures
- Review technical model specifications and participate in data quality testing
- Collaborate with the Data Quality & Governance Analyst and Data Governance Organization to monitor and preserve master data quality
- Create and maintain system-specific master data dictionaries for domains in scope
- Architect MDM solutions, including data modeling and data source integrations, from proof of concept through development and delivery
- Develop the architectural design for MDM domain development, base object integration to other systems, and general solutions related to Master Data Management
- Develop and deliver solutions individually or as part of a development team
- Approve code reviews and technical work
- Maintain compliance with change control, SDLC, and development standards
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT or related field experience

Preferred Qualifications:
- Expertise in architecting and designing Master Data Management (MDM) solutions.
- Practical experience with AWS Cloud, Databricks, Apache Spark, workflow orchestration, and optimizing big data processing performance.
- Familiarity with enterprise source systems and consumer systems for master and reference data, such as CRM, ERP, and Data Warehouse/Business Intelligence.
- At least 2 to 3 years of experience as an MDM developer using Informatica MDM or Reltio MDM, along with strong proficiency in SQL.
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Good understanding of data modeling, data warehousing, and data integration concepts.
- Experience with development using Python, React JS, and cloud data platforms.
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments).

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
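
To illustrate the match-rule work such MDM roles involve, here is a heavily simplified PySpark sketch that flags candidate duplicates via a deterministic match key; real MDM platforms (Informatica, Reltio) use far richer probabilistic and fuzzy rules, and all table and column names here are hypothetical:

```python
# Heavily simplified MDM-style matching sketch: normalize identifying
# fields and group records that share the resulting match key. Real MDM
# platforms use probabilistic/fuzzy rules; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mdm_match_example").getOrCreate()

customers = spark.read.table("raw.customers")

# Build a deterministic match key from normalized name + postcode.
keyed = customers.withColumn(
    "match_key",
    F.concat_ws(
        "|",
        F.regexp_replace(F.lower(F.col("full_name")), r"[^a-z]", ""),
        F.regexp_replace(F.col("postcode"), r"\s", ""),
    ),
)

# Records sharing a match key are candidate duplicates for stewardship review.
candidates = (
    keyed.groupBy("match_key")
         .agg(F.count("*").alias("n"), F.collect_list("customer_id").alias("ids"))
         .filter(F.col("n") > 1)
)
candidates.show(truncate=False)
```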

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will be a member of the Transformation Analytics team, supporting Amgen's Technology & Workforce Strategy. We are seeking a highly motivated leader who will combine financial and data analytics with critical thinking to enable enterprise-wide workforce transformation, connecting data across our people, finances and capabilities to enable business insights and decisions. In this role, you will lead and supervise an Analytics pod consisting of 1 to 3 junior resources. You will be responsible for delivering critical analytics and insights throughout the value journey, including developing resource, financial and capability baselines, developing taxonomies, analytics data packs, business cases, value confirmation and value realization. In addition, you will collaborate with our Technology teams to support ongoing automation and continuous improvement. Ultimately, you will lead a team that will enable development of a sustainable platform for ongoing transformation. The ideal candidate will leverage their analytical acumen to challenge the status quo, drive continuous improvement, and embed digital ways of working into our end-to-end processes.

- Team Leadership: Lead a small pod of 1 to 3 analysts to support the key deliverables below.
- Lead development of deliverables for workforce strategy: Lead the team to deliver financial and data analyses across baselining, case development, value confirmation, value extraction and value realization. Collaborate with cross-functional global teams to ensure accurate analytics and insights delivery, operating in a "follow-the-sun" model.
- Activity-Based Cost Analysis: Lead analyses of total workforce spend and allocation. Recommend strategic cost-saving initiatives and optimizations based on analysis findings.
- Vendor Analysis: Lead evaluation of vendor performance and impact on organizational efficiency. Develop strategies for vendor optimization and cost management.
- Opportunity Analysis: Lead identification and prioritization of areas of opportunity across the organization. Collaborate with cross-functional teams to implement organizational improvements.
- System Updates & Scenario Modeling: Lead teams in integrating data across multiple systems and functions to help optimize the data and financial models that support key processes across the value journey of our transformation. Help ensure central systems and scenario modeling tools are updated with accurate data and assumptions. Lead the development of new modeling tools to enhance predictive capabilities.
- External Market Analysis: Direct the analysis of external market trends and their impact on workforce strategy. Provide strategic recommendations based on market insights to drive organizational competitiveness.
Job Responsibilities:
- Supervise a small team focused on the development of comprehensive resource and financial analyses for the Tech and Workforce Strategy program
- Lead reporting of insights on the Tech & Workforce Strategy program
- Manage creation of complex cases, incorporating multiple data sources
- Manage data validation of the Value Confirmation analysis; review executive summary outputs
- Direct teams in identifying and reconciling deviations from the approved case
- Lead analysis efforts to optimize workforce and resource allocation, driving efficiency across the business
- Lead development of models for long-term planning and decision-making, ensuring strategic alignment
- Facilitate data integration across multiple teams to ensure insights are aligned with business priorities and performance objectives
- Provide guidance for the design and implementation of automated data workflows for complex datasets, collaborating with business and IT teams to expand automation and integrate with existing systems
- Collaborate with cross-functional global teams to ensure accurate insights delivery, operating in a "follow-the-sun" model
- Interact with various finance and resource planning groups across Amgen to understand impacts to budgets and long-range plans
- Influence cross-functional partners to demonstrate the single source of truth of transformation data being built in OPI&A (Organization Planning, Insights & Analytics)

Basic Qualifications:
- Doctorate degree and 2 years of data analytics/finance experience, OR
- Master's degree and 8 to 10 years of applicable experience, OR
- Bachelor's degree and 10 to 14 years of data science, finance, business, statistics, applied mathematics, business analytics, engineering, computer science or related field experience, OR
- Diploma and 14 to 18 years of data analytics, science & technology management or finance experience
- 4+ years of managerial experience directly managing people and/or leadership experience leading teams, projects, or programs, or directing the allocation of resources
- A leader who can connect the dots across a matrixed organization
- Proficiency in Microsoft Excel
- Passion for data exploration and visualization, and for building narratives that drive data-driven business transformation
- Intellectual curiosity, with the ability to learn new concepts and methods
- Energy for applying technical skills to solving complex business problems with elegant data analyses and digital products
- Experience working in highly collaborative cross-functional environments
- Proficiency in financial modeling, data analytics and business intelligence tools
- Understanding of financial data and systems

Preferred Qualifications:
- Master's degree in finance, data science, business, sciences, statistics, data mining, applied mathematics, business analytics, engineering, computer science or a related field, or Chartered Accountant, and 6 to 8 years of relevant experience in consulting and/or financial planning & analysis (MBA preferred)
- Mastery of complex financial modeling (Excel)
- 4+ years of managerial experience directly managing people and/or leadership experience leading teams, projects, or programs, or directing the allocation of resources
- Understanding of global Finance, HR & Procurement systems and data
- Understanding of HR/Procurement/Global Sourcing operations
- Experience in budgeting, forecasting, and strategic planning
- Understanding of the impacts of business decisions on financial statements (P&L, balance sheet, cash flow)
- Understanding of, or experience in, the biopharmaceutical industry
- Business transformation experience involving recent technology advancements
- Prior multinational corporate experience (capability center or other)
- Experience with Oracle Hyperion/EPM, SAP, Anaplan, Power BI / Tableau
- Familiarity with scripting languages like SQL or Python, and AWS services like S3 and Redshift
- Experience with data analysis, data modeling, and data visualization solutions such as Tableau, Alteryx, Databricks, Power BI
- Experience performing data analysis across one or more areas of the business to derive business logic for data integration
- Experience working with business partners to identify complex functionality and translate it into requirements
- Experience in preparing executive communications, including written and oral presentations
- Experience in financial planning, analysis, and reporting, ERP systems and financial software

Soft Skills:
- Effective communication and people skills
- Elevated level of integrity and ethical standards
- Problem-solving and critical thinking capabilities
- Ability to influence and lead change
- Adaptability to a dynamic and challenging environment

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will support an ambitious program to evolve how Amgen does forecasting, moving from batch processes (e.g., sales forecasting to COGS forecasting, clinical study forecasting) to a more continuous process. The hardworking professional we seek is curious by nature, organizationally and data savvy, with a strong record of finance transformation, partner management and accomplishments in Finance, Accounting, or Procurement. This role will help redesign existing processes to incorporate Artificial Intelligence and Machine Learning capabilities to significantly reduce the time and resources needed to build forecasts. As the Next Gen Forecasting Senior Associate at Amgen India, you will drive innovation and continuous improvement in Finance's planning, reporting and data processes, with a focus on maximizing current technologies and adopting new technologies where relevant. This individual will collaborate with cross-functional teams and support business objectives. This role reports directly to the Next Gen Forecasting Manager in Hyderabad, India.

Roles & Responsibilities:
Priorities can often change in a fast-paced technology environment like Amgen's, so this role includes, but is not limited to, the following:
- Support implementation of real-time/continuous forecasting capabilities
- Establish baseline analyses; define current and future state using traditional approaches and emerging digital technologies
- Identify which areas would benefit most from automation/AI/ML
- Identify additional process and governance changes needed to move from batch to continuous forecasting
- Closely partner with Business, Accounting, FP&A, Technology and other impacted functions to define and implement proposed changes
- Partner with the Amgen Technology function to support both existing and new finance platforms
- Partner with local and global teams on use cases for Artificial Intelligence (AI), Machine Learning (ML) and Robotic Process Automation (RPA)
- Collaborate with cross-functional teams and Centers of Excellence globally to drive operational efficiency
- Contribute to a learning environment and enhance learning methodologies of technical tools where applicable
- Serve as the local financial systems and financial data subject matter expert, supporting the local team with questions
- Support global finance teams and business partners with centrally delivered financial reporting via Tableau and other tools
- Support local adoption of Anaplan for operating expense planning and tracking

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of finance experience, OR
- Bachelor's degree and 3 to 5 years of finance experience, OR
- Diploma and 7 to 9 years of finance experience
- Consistent record of launching new finance capabilities
- Proficiency in data analytics and business intelligence tools
- Experience with finance reporting and planning system technologies
- Experience with technical support of financial platforms
- Knowledge of financial management and accounting principles
- Experience with ERP systems
- Resourceful individual who can connect the dots across a matrixed organization

Preferred Qualifications:
- Experience in the pharmaceutical and/or biotechnology industry
- Experience in financial planning, analysis, and reporting
- Experience with global finance operations
- Knowledge of advanced financial modeling techniques
- Business performance management
- Finance transformation experience involving recent technology advancements
- Prior multinational capability center experience
- Experience with Oracle Hyperion/EPM, S4/SAP, Anaplan, Tableau/Power BI, Databricks, Alteryx, data lakes, data structures

Soft Skills:
- Excellent project management abilities
- Strong communication and interpersonal skills
- High level of integrity and ethical standards
- Problem-solving and critical thinking capabilities
- Ability to influence and motivate change
- Adaptability to a dynamic and fast-paced environment
- Strong organizational and time management skills
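
As a toy illustration of the batch-to-continuous idea this posting describes (refreshing a forecast whenever a new actual arrives instead of on a fixed cycle), here is a minimal Python sketch; the rolling-average method and all figures are hypothetical, not Amgen's approach:

```python
# Toy sketch of a continuously updated forecast: refresh a simple
# rolling-average prediction each time a new actual lands. Purely
# illustrative; the method and numbers are hypothetical.
import pandas as pd

actuals = pd.Series(dtype=float)

def add_actual(period: str, value: float) -> float:
    """Record a new actual and return the refreshed next-period forecast."""
    actuals.loc[period] = value
    # Forecast = mean of the trailing three observed periods.
    return actuals.tail(3).mean()

for month, sales in [("2024-01", 100.0), ("2024-02", 110.0), ("2024-03", 96.0)]:
    forecast = add_actual(month, sales)
    print(f"after {month}: next-month forecast = {forecast:.1f}")
```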

Posted 1 week ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member who assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field, OR
- Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field, OR
- Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning of big data processing.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.

Preferred Qualifications:
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)
- Certified Data Scientist (preferably on Databricks or cloud environments)
- Machine Learning certification (preferably on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

The role is responsible for designing, developing, and maintaining software solutions for Research scientists. Additionally, it involves automating operations, monitoring system health, and responding to incidents to minimize downtime. You will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting for our Large Molecule Discovery Research team (Design, Make, Test and Analyze processes). The team also interfaces heavily with teams supporting our in vitro assay management systems and our compound inventory platforms. The ideal candidate possesses experience in the pharmaceutical or biotech industry, strong technical skills, and full stack software engineering experience (spanning SQL, back-end and front-end web technologies, and automated testing). Roles & Responsibilities: Work closely with the product team, the business team including scientists, and other collaborators Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software Conduct code reviews to ensure code quality and adherence to standard methodologies Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently Stay updated with the latest technology and security trends and advancements What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications. Basic Qualifications: Masters degree with 1 - 3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelors degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 7 - 9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications and Experience: 1+ years of experience in implementing and supporting biopharma scientific software platforms Functional Skills: Proficient in Java or Python Proficient in at least one JavaScript UI framework (e.g. ExtJS, React, or Angular) Proficient in SQL (e.g. Oracle, PostgreSQL, Databricks) Preferred Qualifications: Experience with event-based architecture and serverless AWS services such as EventBridge, SQS, Lambda or ECS.
Experience with Benchling. Hands-on experience with full stack software development. Strong understanding of software development methodologies, mainly Agile and Scrum. Working experience with DevOps practices and CI/CD pipelines. Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk). Experience with automated testing tools and frameworks. Experience with big data technologies (e.g., Spark, Databricks, Kafka). Experience with leveraging AI assistants (e.g. GitHub Copilot) to accelerate software development and improve code quality. Professional Certifications: AWS Certified Cloud Practitioner preferred. Soft Skills: Excellent problem solving, analytical, and troubleshooting skills. Strong communication and interpersonal skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to learn quickly and work independently. Team-oriented, with a focus on achieving team goals. Ability to manage multiple priorities successfully. Strong presentation and public speaking skills.
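As a hedged illustration of the event-based AWS pattern this posting names (EventBridge, SQS, Lambda), a minimal Python Lambda handler consuming SQS messages might look like the following; the message fields ("sample_id", "status") are hypothetical:

```python
# Sketch of an SQS-triggered AWS Lambda handler of the kind the
# event-based architecture above implies. The "Records"/"body"
# structure is the standard SQS event shape; the payload fields
# are illustrative assumptions.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Process each SQS record delivered in the Lambda event batch."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        sample_id = body.get("sample_id")
        status = body.get("status")
        logger.info("sample %s moved to status %s", sample_id, status)
        # A downstream update (e.g. writing to a registration table)
        # would go here.
    return {"processed": len(records)}
```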

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do We are seeking a Manager - Software Engineering to lead the design and development of software data and analytics applications to drive business decisions for Research. The ideal candidate possesses a deep understanding of software engineering principles, coupled with strong leadership and problem-solving skills. This position requires close collaboration with business analysts, scientists, and other engineers to create high-quality, scalable software solutions. You will continuously strive for innovation in the technologies and practices used for software engineering and develop a team of expert software engineers. You will collaborate with multi-functional teams, including platform, functional IT, and business collaborators, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient. Roles & Responsibilities: Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship. Partner closely with product team owners, the business team including scientists, and other collaborators to lead the software engineering solutioning, ensuring deliverables are completed on time and within scope to deliver real value and meet business objectives Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications Develop and implement unit tests, integration tests, and other testing strategies to ensure the quality of the software Work closely with the product team, the business team including scientists, and other collaborators What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Masters degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelors degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Excellent leadership and project management skills, with the ability to manage multiple priorities simultaneously Proficient in a general-purpose high-level language (e.g. NodeJS/Koa, Python or Java) Proficient in a JavaScript UI framework (e.g. React or ExtJS) Proficient in SQL (e.g. Oracle, PostgreSQL or Databricks) Preferred Qualifications: 3+ years of experience in implementing and supporting custom software development for drug discovery Strong understanding of cloud platforms (e.g. AWS) and containerization technologies (e.g., Docker, Kubernetes) Experience with automated testing tools and frameworks (e.g. Jest, Playwright, Cypress or Selenium) Experience with DevOps practices and CI/CD pipelines Experience with big data technologies (e.g., Spark, Databricks) Experience with API integration, serverless, and microservices architecture Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk) Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation) Experience with version control systems like Git Strong understanding of software development methodologies, mainly Agile and Scrum Strong problem solving and analytical skills; ability to learn quickly and work independently; excellent communication and interpersonal skills Professional Certifications: AWS Certified Cloud Practitioner preferred Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

Roles & Responsibilities: Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software Identify and resolve software bugs and performance issues Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time Maintain detailed documentation of software designs, code, and development processes Customize modules to meet specific business requirements Work on integrating with other systems and platforms to ensure seamless data flow and functionality Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently Possess strong rapid prototyping skills and quickly translate concepts into working code Contribute to both front-end and back-end development using cloud technology Develop innovative solutions using generative AI technologies Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations Identify and resolve technical challenges effectively Stay updated with the latest trends and advancements Work closely with the product team, the business team including scientists, and other stakeholders What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications. Basic Qualifications: Masters degree and 1 to 3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelors degree and 3 to 5 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma and 7 to 9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications: Experience in implementing and supporting biopharma scientific software platforms Functional Skills: Must-Have Skills: Proficient in a general-purpose high-level language (e.g. Python, Java, C#.NET) Proficient in a JavaScript UI framework (e.g. React, ExtJS) Proficient in SQL (e.g. Oracle, PostgreSQL, Databricks) Experience with event-based architecture (e.g. Mulesoft, AWS EventBridge, AWS Kinesis, Kafka) Good-to-Have Skills: Strong understanding of software development methodologies, mainly Agile and Scrum Hands-on experience with full stack software development Strong understanding of cloud platforms (e.g. AWS) and containerization technologies (e.g., Docker, Kubernetes) Working experience with DevOps practices and CI/CD pipelines Experience with big data technologies (e.g., Spark, Databricks) Experience with API integration, serverless, and microservices architecture (e.g. Mulesoft, AWS, Kafka) Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk) Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation) Experience with version control systems like Git Experience with automated testing tools and frameworks Experience with Benchling, Revvity, IDBS, or similar LIMS/ELN platforms Professional Certifications: AWS Certified Cloud Practitioner preferred Soft Skills: Excellent problem solving, analytical, and troubleshooting skills Strong communication and interpersonal skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to learn quickly and work independently Team-oriented, with a focus on achieving team goals Ability to manage multiple priorities successfully Strong presentation and public speaking skills

Posted 1 week ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Pune

Work from Office

Naukri logo

Role Purpose The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, ensuring they meet 100% quality assurance parameters. Do 1. Be instrumental in understanding the requirements and design of the product/software: Develop software solutions by studying information needs, systems flow, data usage, and work processes; investigate problem areas across the software development life cycle; facilitate root cause analysis of system issues and problem statements; identify ideas to improve system performance and availability; analyze client requirements and convert them to feasible designs; collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements; confer with project managers to obtain information on software capabilities. 2. Perform coding and ensure optimal software/module development: Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software; develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them; modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces; analyze information to recommend and plan the installation of new systems or modifications of existing ones; ensure that code is error-free with no bugs or test failures; prepare reports on programming project specifications, activities, and status; ensure all the codes are raised as per the norm defined for the project/program/account, with clear descriptions and replication patterns; compile timely, comprehensive, and accurate documentation and reports as requested; coordinate with the team on daily project status and progress and document it; provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders. 3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution: Capture all requirements and clarifications from the client for better quality work; take feedback regularly to ensure smooth and on-time delivery; participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members; consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements; document and demonstrate solutions through documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code; document necessary details and reports formally for proper understanding of the software from client proposal to implementation; ensure good quality of interaction with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.); respond to customer requests in a timely manner with no instances of complaints, internal or external. Deliver: 1. Continuous integration, deployment & monitoring of software - Measure: 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan. 2. Quality & CSAT - Measure: on-time delivery, software management, troubleshooting of queries, customer experience, completion of assigned certifications for skill upgradation. 3. MIS & reporting - Measure: 100% on-time MIS and report generation. Mandatory Skills: Databricks - Data Engineering.

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking an MDM Associate Analyst with 2-5 years of development experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong MDM (Master Data Management) experience across configuration (L3 configuration, asset creation, data modeling, etc.), ETL and data mappings (CAI, CDI), data mastering (match/merge and survivorship rules), and source and target integrations (RestAPI, batch integration, integration with Databricks tables, etc.). Roles & Responsibilities: Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance. Basic Qualifications and Experience: Masters degree with 1 - 3 years of experience in Business, Engineering, IT or related field OR Bachelors degree with 2 - 5 years of experience in Business, Engineering, IT or related field OR Diploma with 6 - 8 years of experience in Business, Engineering, IT or related field Functional Skills: Must-Have Skills: Strong experience with Informatica or Reltio MDM platforms in building configurations from scratch (L3 configuration, data modeling, asset creation, setting up API integrations, orchestration) Strong experience in building data mappings, data profiling, and creating and implementing business rules for data quality and data transformation Strong experience in implementing match and merge rules and survivorship of golden records Expertise in integrating master data records with downstream systems Very good understanding of DWH basics and good knowledge of data modeling Experience with IDQ, data modeling, and approval workflow/DCR. Advanced SQL expertise and data wrangling. Exposure to Python and PySpark for data transformation workflows. Knowledge of MDM, data governance, stewardship, and profiling practices. Good-to-Have Skills: Familiarity with Databricks and AWS architecture. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Basics of data engineering concepts. Professional Certifications: Any ETL certification (e.g. Informatica) Any Data Analysis certification (SQL, Python, Databricks) Any cloud certification (AWS or Azure) Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
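Match/merge and survivorship are normally configured inside Informatica or Reltio rather than hand-coded, but the underlying idea can be sketched in PySpark. In the hypothetical sketch below, the match key (normalized name plus postal code), the table names, and the recency-based survivorship rule are all simplified assumptions:

```python
# Simplified survivorship sketch: among records sharing a match key,
# keep the most recently updated one as the "golden" record.
# Column names, table names, and the match key are illustrative.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mdm_sketch").getOrCreate()

customers = spark.read.table("raw_customers")  # hypothetical source table

# Match rule: normalize name + postal code into a candidate match key.
matched = customers.withColumn(
    "match_key",
    F.concat_ws("|", F.lower(F.trim("full_name")), F.col("postal_code"))
)

# Survivorship rule: rank duplicates by last_updated, keep rank 1.
w = Window.partitionBy("match_key").orderBy(F.col("last_updated").desc())
golden = (
    matched.withColumn("rn", F.row_number().over(w))
           .filter("rn = 1")
           .drop("rn")
)

golden.write.mode("overwrite").saveAsTable("mdm_golden_customers")
```

In a real MDM platform, match rules are typically fuzzy and weighted, and survivorship is decided attribute by attribute rather than record by record.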

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do We are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines. Roles & Responsibilities: Collaborate with data scientists to develop, train, and evaluate machine learning models. Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring. Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment. Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency. Develop and implement monitoring systems to track model performance and identify issues. Conduct A/B testing and experimentation to optimize model performance. Work closely with data scientists, engineers, and product teams to deliver ML solutions. Guide and mentor junior engineers in the team. Stay updated with the latest trends and advancements. Basic Qualifications: Doctorate degree and 2 years of experience in Computer Science, Statistics, Data Science, or Machine Learning OR Masters degree and 8 to 10 years of experience in Computer Science, Statistics, Data Science, or Machine Learning OR Bachelors degree and 10 to 14 years of experience in Computer Science, Statistics, Data Science, or Machine Learning OR Diploma and 14 to 18 years of experience in Computer Science, Statistics, Data Science, or Machine Learning Preferred Qualifications: Must-Have Skills: Strong foundation in machine learning algorithms and techniques Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); experience with DevOps tools (e.g., Docker, Kubernetes, CI/CD) Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn) Outstanding analytical and problem-solving skills; ability to learn quickly; excellent communication and interpersonal skills Good-to-Have Skills: Experience with big data technologies (e.g., Spark) and performance tuning in query and data processing Experience with data engineering and pipeline development Experience in statistical techniques and hypothesis testing, including regression analysis, clustering, and classification Knowledge of NLP techniques for text analysis and sentiment analysis Experience in analyzing time-series data for forecasting and trend analysis Familiarity with AWS, Azure, or Google Cloud; familiarity with the Databricks platform for data analytics and MLOps Professional Certifications: Cloud Computing and Databricks certificates preferred Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
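Since the role centers on experiment tracking and model lifecycle tooling such as MLflow, a minimal hedged sketch of MLflow tracking follows; the dataset, parameters, and run name are illustrative assumptions, not anything specified by the posting:

```python
# Minimal MLflow tracking sketch: train a model, then log its
# parameters, a metric, and the model artifact for later comparison.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    params = {"n_estimators": 100, "max_depth": 6}
    model = RandomForestRegressor(**params).fit(X_train, y_train)

    mse = mean_squared_error(y_test, model.predict(X_test))

    # Everything logged here becomes queryable in the MLflow UI/API.
    mlflow.log_params(params)
    mlflow.log_metric("mse", mse)
    mlflow.sklearn.log_model(model, "model")
```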

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate. Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: 3-5 years of experience as an AI/ML engineer or in a similar role. Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Hands-on experience with model development and deployment processes. Proficiency in programming languages such as Python. Experience with data preprocessing, feature engineering, and model evaluation techniques. Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes). Familiarity with version control systems (e.g., GitHub). Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas. Good to have: knowledge of deep learning and MLOps tools (Kubeflow, MLflow, Nextflow).
Knowledge of text analytics, NLP, and Gen AI. Mandatory Skill Sets: ML Ops, AI/ML. Preferred Skill Sets: ML Ops, AI/ML. Years of Experience Required: 4-8. Education Qualification: B.Tech / M.Tech / MBA / MCA. Degrees/Field of Study Required: Bachelor of Technology, Master of Business Administration. Required Skills: Full Stack Development. Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}. Travel Requirements: Not Specified. Available for Work Visa Sponsorship? No. Government Clearance Required? No.
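As a flavor of the preprocessing, feature engineering, and model evaluation work the role describes, here is a small hedged scikit-learn sketch; the synthetic data and column names are illustrative assumptions:

```python
# Sketch of a preprocessing + evaluation workflow: scale a numeric
# feature, one-hot encode a categorical one, and cross-validate a
# classifier. The data frame below is synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": np.random.randint(18, 65, 200),
    "segment": np.random.choice(["a", "b", "c"], 200),
    "churned": np.random.randint(0, 2, 200),
})

pre = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
])

pipe = Pipeline([("pre", pre), ("clf", LogisticRegression(max_iter=1000))])

# Model evaluation via 5-fold cross-validated accuracy.
scores = cross_val_score(pipe, df[["age", "segment"]], df["churned"], cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```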

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies by experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
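
A few of these questions can be grounded with a short snippet. The hedged sketch below assumes it runs in a Databricks notebook where a `spark` session already exists and uses a hypothetical /tmp path; it illustrates lazy evaluation, Delta Lake writes, and Delta time travel:

```python
# Illustrative answers to two questions above: Spark transformations
# are lazy (nothing runs until an action), and Delta Lake provides
# ACID, schema-enforced tables with time travel. Assumes a Databricks
# notebook where `spark` is predefined; the path is hypothetical.
from pyspark.sql import functions as F

df = spark.range(1_000_000).withColumn("squared", F.col("id") ** 2)

# Lazy evaluation: `filter` only extends the query plan; no job runs yet.
big = df.filter(F.col("squared") > 10_000)

# The `count` action triggers execution of the whole plan at once,
# letting the optimizer push the filter down.
print(big.count())

# Delta Lake: writing as "delta" gives an ACID, schema-enforced table.
big.write.format("delta").mode("overwrite").save("/tmp/delta/squares")

# Time travel: read the table as of its first version.
v0 = spark.read.format("delta").option("versionAsOf", 0) \
          .load("/tmp/delta/squares")
```

The crux of the lazy evaluation question is that transformations like `filter` only build a plan; actions such as `count` are what actually launch a job.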

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!
