
1693 Data Engineering Jobs - Page 49

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

6 - 11 Lacs

Noida

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility:
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- B.E / MCA / B.Tech / M.Tech / MS graduation (minimum 16 years of formal education; correspondence courses are not relevant)
- 2+ years of experience with Azure database offerings like SQL DB and Postgres DB, constructing data pipelines using Azure Data Factory, and designing and developing analytics using Azure Databricks and Snowpark
- 3+ years of experience constructing large and complex SQL queries on terabyte-scale warehouse database systems
- 2+ years of experience with cloud-based DWs: Snowflake, Azure SQL DW
- 2+ years of experience in data engineering and working on large data warehouses, including design and development of ETL/ELT
- Good knowledge of Agile practices: Scrum, Kanban
- Knowledge of Kubernetes, Jenkins, CI/CD pipelines, SonarQube, Artifactory, Git, unit testing
- Main tech experience: Docker, Kubernetes and Kafka
- Database: Azure SQL databases
- Knowledge of Apache Kafka and data streaming
- Main tech experience: Terraform and Azure
- Ability to identify system changes and verify that technical system specifications meet the business requirements
- Solid problem-solving and analytical skills
- Proven good communication and presentation skills
- Proven good attitude; self-motivated

Preferred Qualifications:
- 2+ years of experience working with cloud-native monitoring and logging tools like Log Analytics
- 2+ years of experience with scheduling tools on cloud, using Apache Airflow, Logic Apps or any native/third-party scheduling tool
- Exposure to ATDD, Fortify, SonarQube
- Unix scripting, DW concepts, ETL frameworks: Scala/Spark, DataStage

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
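
For readers unfamiliar with Snowpark, here is a minimal sketch of the kind of Databricks/Snowpark-style analytics the listing names. The connection parameters, table and column names are invented placeholders, not anything from the posting:

```python
# A minimal, hypothetical Snowpark (Python) analytics sketch.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection parameters -- supply real credentials in practice.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<wh>", "database": "<db>", "schema": "<schema>",
}).create()

# Aggregate paid claim amounts per member (table and columns are invented).
paid = session.table("CLAIMS").filter(col("STATUS") == "PAID")
totals = paid.group_by("MEMBER_ID").agg(sum_("AMOUNT").alias("TOTAL_PAID"))
totals.write.save_as_table("MEMBER_PAID_TOTALS", mode="overwrite")
```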

Posted 4 weeks ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Support the full data engineering lifecycle, including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
- Utilize knowledge of various data management technologies to drive data engineering projects
- Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
- Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading data marts and reporting aggregates
- Eliminate unwarranted complexity and unneeded interdependencies
- Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
- Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
- Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
- Effectively create data transformations that address business requirements and other constraints
- Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
- Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics
- Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
- Leverage DevOps tools to enable code versioning and code deployment
- Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
- Leverage processes and diagnostic tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
- Continuously support technical debt reduction, process transformation, and overall optimization
- Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
- Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree (preferably in information technology, engineering, math, computer science, analytics or another related field)
- 3+ years of experience in Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, ADO
- 5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
- 5+ years of combined experience working with industry-standard relational, dimensional or non-relational data storage systems
- 5+ years of experience designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
- 5+ years of experience managing data assets using SQL, Python, Scala, VB.NET or other similar querying/coding languages
- 3+ years of experience working with healthcare data or data to support healthcare organizations

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
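
A minimal PySpark sketch of the ETL/ELT pattern the responsibilities above describe (acquire, cleanse, aggregate for a data mart); the storage paths and schema are hypothetical, not the employer's actual objects:

```python
# Minimal PySpark ELT sketch: lake -> cleanse -> dimensional aggregate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_elt").getOrCreate()

# Hypothetical ADLS paths and columns; adjust to the real source of record.
raw = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/claims/")
clean = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
       .filter(F.col("claim_amount") > 0)
)

# Daily reporting aggregate of the kind loaded into a data mart.
daily = clean.groupBy("service_date").agg(
    F.count("claim_id").alias("claim_count"),
    F.sum("claim_amount").alias("total_amount"),
)
daily.write.mode("overwrite").parquet(
    "abfss://curated@account.dfs.core.windows.net/claims_daily/"
)
```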

Posted 4 weeks ago

Apply

6.0 - 11.0 years

16 - 20 Lacs

Hyderabad

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

The Optum Technology Digital team is on a mission to disrupt the healthcare industry, transforming UHG into an industry-leading consumer brand. We deliver hyper-personalized digital solutions that empower direct-to-consumer, digital-first experiences, educating, guiding, and empowering consumers to access the right care at the right time. Our mission is to revolutionize healthcare for patients and providers by delivering cutting-edge, personalized and conversational digital solutions. We're consumer obsessed, ensuring consumers receive exceptional support throughout their healthcare journeys. As we drive this transformation, we're revolutionizing customer interactions with the healthcare system, leveraging AI, cloud computing, and other disruptive technologies to tackle complex challenges. Serving UnitedHealth Group's digital technology needs, the Consumer Engineering team impacts millions of lives through UnitedHealthcare & Optum.

We are seeking a dynamic individual who embodies modern engineering culture - someone with deep engineering expertise within a digital product model, a passion for innovation, and a relentless drive to enhance the consumer experience. Our ideal candidate thrives in an agile, fast-paced rapid-prototyping environment, embraces DevOps and continuous integration/continuous deployment (CI/CD) practices, and champions the Voice of the Customer. If you are driven by the pursuit of excellence, eager to innovate, and excited to make a tangible impact within a team that embraces modern technologies and consumer-centric strategies, while prioritizing robust cyber-security protocols, we invite you to explore this exciting opportunity with us. Join our team and be at the forefront of shaping the future of healthcare, where your unique skills will not only be recognized but celebrated.

Primary Responsibilities:
- Design and implement data models to analyse business, system, and security events for real-time insights and threat detection
- Conduct exploratory data analysis (EDA) to understand patterns and relationships across large data sets, and develop hypotheses for new model development
- Develop dashboards and reports to present actionable insights to business and security teams
- Build and automate near real-time analytics workflows on AWS, leveraging services like Kinesis, Glue, Redshift, and QuickSight
- Collaborate with AI/ML engineers to develop and validate data features for model inputs
- Interpret and communicate complex data trends to stakeholders and provide recommendations for data-driven decision-making
- Ensure data quality and governance standards, collaborating with data engineering teams to build quality data pipelines
- Develop data science algorithms and generate actionable insights as per platform needs, and work closely with cross-capability teams throughout the solution development lifecycle, from design to implementation and monitoring
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- B.Tech or Master's degree, or equivalent experience
- 12+ years of experience in data engineering roles in data warehousing
- 3+ years of experience as a Data Scientist with a focus on building models for analytics and insights in AWS environments
- Experience with AWS data and analytics services (e.g., Kinesis, Glue, Redshift, Athena, Timestream)
- Hands-on experience with statistical analysis, anomaly detection and predictive modelling
- Proficiency with SQL, Python, and data visualization tools like QuickSight, Tableau, or Power BI
- Proficiency in data wrangling, cleansing, and feature engineering

Preferred Qualifications:
- Experience in security data analytics, focusing on threat detection and prevention
- Knowledge of AWS security tools and understanding of cloud data security principles
- Familiarity with deploying data workflows using CI/CD pipelines in AWS environments
- Background in working with real-time data streaming architectures and handling high-volume event-based data
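
As a rough illustration of the anomaly-detection work this posting describes, here is a small scikit-learn sketch; the event file and feature columns are invented for the example:

```python
# Hypothetical anomaly-detection sketch over security event features.
import pandas as pd
from sklearn.ensemble import IsolationForest

events = pd.read_parquet("security_events.parquet")  # placeholder extract

# Feature columns are invented for illustration.
features = events[["bytes_out", "failed_logins", "distinct_ips"]]

model = IsolationForest(contamination=0.01, random_state=42)
events["anomaly"] = model.fit_predict(features)  # -1 marks outliers

print(events.loc[events["anomaly"] == -1].head())
```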

Posted 4 weeks ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Noida

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Coordinate with the team to support 24x7 operations
- Act as subject matter expert for day-to-day operations, process and ticket queue management
- Perform team management along with managing process and operational escalations
- Leverage the latest technologies and analyze large volumes of data to solve complex problems facing the health care industry
- Develop, test, and support new and preexisting programs related to data interfaces
- Support operations by identifying, researching and resolving performance and production issues
- Participate in War Room activities to monitor status and coordinate with multiple groups to address production performance concerns, mitigate client risks and communicate status
- Work with engineering teams to build tools/features necessary for production operations
- Build and improve standard operating procedures and troubleshooting documents
- Report on metrics to surface meaningful results and identify areas for efficiency gains
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science or Information Technology, or equivalent work experience
- 3+ years of experience with UNIX shell scripting
- 2+ years of experience with RDBMSs like Oracle and Postgres, writing queries in SQL and PL/SQL
- 2+ years of experience working with production operations processes and teams
- 2+ years of experience with server-side administration on OS flavors such as Red Hat or CentOS
- Experience in understanding performance metrics and developing them to measure progress against KPIs
- Ability to develop and manage multiple projects with minimal direction and supervision

Soft Skills:
- Highly organized with strong analytical skills and excellent attention to detail
- Excellent time management and problem-solving skills, and the capacity to lead diverse talent, work cross-functionally and build consensus on difficult issues
- Flexible to adjust to evolving business needs, with the ability to understand objectives and communicate with non-technical partners
- Solid organization skills, very detail-oriented, with careful attention to work processes
- Takes ownership of responsibilities and follows through on hand-offs to other groups
- Enjoys a fast-paced environment and the opportunity to learn new skills
- High-performing, motivated and goal-driven

Preferred Qualifications:
- Experience delegating tasks and providing timely feedback to the team to accomplish a task or solve a problem
- Experience in scripting languages like Perl, Bash/shell or Python
- Experience with Continuous Integration (CI) tools, Jenkins or Bamboo preferred
- Experience working in an agile environment
- US healthcare industry experience
- Experience working across teams and a proven track record of solution-focused problem solving
- Familiarity with cloud-based technologies
- Comfortable working in a rapidly changing environment where documentation for job execution isn't yet fully fleshed out
- Knowledgeable in building and/or leveraging operational reliability metrics to understand the health of the production support process
- An eye for improving technical and business processes, with proven experience creating standard operating procedures and other technical process documentation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
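
Much of this role is scripted monitoring and triage; the following Python sketch shows one plausible shape of such a check. The log path and line format are assumptions for illustration, not the team's actual setup:

```python
# Sketch of a production-support check: count recent ERROR lines in a log.
import re
from datetime import datetime, timedelta
from pathlib import Path

LOG = Path("/var/log/app/etl.log")           # hypothetical log location
PATTERN = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) ERROR (.*)$")
cutoff = datetime.now() - timedelta(hours=1)

errors = []
for line in LOG.read_text().splitlines():
    m = PATTERN.match(line)
    if m and datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S") >= cutoff:
        errors.append(m.group(2))

# In a real setup this count would feed a ticket queue or KPI dashboard.
print(f"{len(errors)} ERROR entries in the last hour")
```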

Posted 4 weeks ago

Apply

7.0 - 12.0 years

18 - 22 Lacs

Hyderabad

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a talented and hands-on Azure Engineer to join our team. The ideal candidate will have significant experience working on Azure, as well as a solid background in cloud data engineering, data pipelines, and analytics solutions. You will be responsible for designing, building, and managing scalable data architectures, enabling seamless data integration, and leveraging advanced analytics capabilities to drive business insights.

Primary Responsibilities:

Azure Platform Implementation:
- Develop, manage, and optimize data pipelines using AML workspace on Azure
- Design and implement end-to-end data processing workflows, leveraging Databricks notebooks and jobs for data transformation, modeling, and analysis
- Build and maintain scalable data models in Databricks using Apache Spark for big data processing
- Integrate Databricks with other Azure services, including Azure Data Lake, Azure Synapse, and Azure Blob Storage

Data Engineering & ETL Development:
- Design and implement robust ETL/ELT pipelines to ingest, transform, and load large volumes of data
- Optimize data processing jobs for performance, reliability, and scalability
- Use Apache Spark and other Databricks features to process structured, semi-structured, and unstructured data efficiently

Azure Cloud Architecture:
- Work with Azure cloud services to design and deploy cloud-based data solutions
- Architect and implement data lakes, data warehouses, and analytics solutions within the Azure ecosystem
- Ensure security, compliance, and governance best practices for cloud-based data solutions

Collaboration & Analytics:
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights
- Build advanced analytics models and solutions using Databricks, leveraging Python, SQL, and Spark-based technologies
- Provide guidance and technical expertise to other teams on best practices for working with Databricks and Azure

Performance Optimization & Monitoring:
- Monitor and optimize the performance of data pipelines and Databricks jobs
- Troubleshoot and resolve performance and reliability issues within the data engineering pipelines
- Ensure high availability, fault tolerance, and efficient resource utilization on Databricks

Continuous Improvement:
- Stay up to date with the latest features of Databricks, Azure, and related technologies
- Continuously improve data architectures, pipelines, and processes for better performance and scalability
- Propose and implement innovative solutions to meet evolving business needs

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 10+ years of hands-on experience with the Azure ecosystem
- Solid experience with cloud-based data engineering, particularly with Azure services (Azure Data Lake, Azure Synapse, Azure Blob Storage, etc.)
- Experience with Databricks notebooks and managing Databricks environments
- Hands-on experience with data storage technologies (data lake, data warehouse, blob storage)
- Solid knowledge of SQL and Python for data processing and transformation
- Familiarity with cloud infrastructure management on Azure and using Azure DevOps for CI/CD
- Solid understanding of data modeling, data warehousing, and data lake architectures
- Expertise in building and managing ETL/ELT pipelines using Apache Spark, Databricks, and other related technologies
- Proficiency in Apache Spark (PySpark, Scala, SQL)
- Proven solid problem-solving skills with a proactive approach to identifying and addressing issues
- Proven ability to communicate complex technical concepts to non-technical stakeholders
- Proven excellent collaboration skills to work effectively with cross-functional teams

Preferred Qualifications:
- Certifications in Azure (Azure Data Engineer, Azure Solutions Architect)
- Experience with advanced analytics techniques, including machine learning and AI, using Databricks
- Experience with other big data processing frameworks or platforms
- Experience with data governance and security best practices in cloud environments
- Knowledge of DevOps practices and CI/CD pipelines for cloud environments

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
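
To ground the Databricks responsibilities above, here is a minimal PySpark/Delta sketch of landing raw files into a partitioned Delta table; the paths and table names are placeholders, not the team's actual objects:

```python
# Minimal Databricks-style sketch: land JSON into a partitioned Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

# Hypothetical landing path and target table.
orders = (
    spark.read.format("json")
    .load("abfss://landing@account.dfs.core.windows.net/orders/")
    .withColumn("ingest_date", F.current_date())
)

(
    orders.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("curated.orders")
)
```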

Posted 4 weeks ago

Apply

7.0 - 12.0 years

27 - 32 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking a highly skilled and experienced Senior Manager, AI/ML to lead our dynamic team. The ideal candidate will have a robust development background in any programming stack, including but not limited to Java, .Net, Python, or Node.js. This role requires hands-on experience with AI/ML, preferably using cloud platforms such as AWS Bedrock, Google Vertex AI, or Azure AI. A solid understanding of Generative AI and the ability to manage multiple projects end-to-end is essential.

Primary Responsibilities:

Leadership and Strategy:
- Lead and mentor a team of AI/ML engineers and data scientists
- Develop and implement AI/ML strategies aligned with business goals
- Drive innovation and stay updated with the latest industry trends and technologies

Technical Expertise:
- Utilize solid development skills in programming languages such as Java, .Net, Python, or Node.js
- Oversee the design, development, and deployment of AI/ML models and solutions
- Ensure best practices in coding, testing, and deployment

Project Management:
- Manage multiple AI/ML projects simultaneously, ensuring timely delivery and quality
- Collaborate with cross-functional teams to define project requirements and deliverables
- Monitor project progress and adjust plans as necessary to meet objectives

Stakeholder Engagement:
- Communicate effectively with stakeholders to understand their needs and provide technical insights
- Present project updates, results, and recommendations to senior management

Innovation and Research:
- Conduct research to identify new opportunities for AI/ML applications
- Encourage a culture of continuous learning and improvement within the team

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 15+ years of experience in software development
- 3+ years of experience delivering solutions with a focus on AI/ML
- Proven experience in leading and managing technical teams
- Proven solid background in programming languages such as Java, .Net, Python, or Node.js
- Proven excellent problem-solving and analytical skills
- Proven solid communication and interpersonal skills
- Proficiency in frameworks such as scikit-learn, TensorFlow, and PyTorch
- Proven robust data engineering skills in EDA and in building and maintaining AI/ML data pipelines
- Proven ability to work in a fast-paced, dynamic environment

Preferred Qualifications:
- Degrees, diplomas or certifications in the field of AI/ML
- Experience with cloud platform AI stacks like AWS Bedrock, Azure AI, GCP Vertex AI

#Exetech
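
For reference, the scikit-learn framework named in the qualifications supports compact model pipelines like the following self-contained sketch (it uses a bundled demo dataset, not any employer data):

```python
# Self-contained scikit-learn pipeline on a bundled dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling feeds a regularized linear classifier; accuracy prints at the end.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```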

Posted 4 weeks ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Gurugram

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and maintain scalable data/code pipelines using Azure Databricks, Apache Spark, and Scala
- Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions
- Optimize and tune Spark applications for performance and scalability
- Implement data processing workflows, ETL processes, and data integration solutions
- Ensure data quality, integrity, and security throughout the data lifecycle
- Troubleshoot and resolve issues related to data processing and pipeline failures
- Stay updated with the latest industry trends and best practices in big data technologies
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 6+ years of proven experience with Azure Databricks, Apache Spark, and Scala
- 6+ years of experience with Microsoft Azure
- Experience with data warehousing solutions and ETL tools
- Solid understanding of distributed computing principles and big data processing
- Proficiency in writing complex SQL queries and working with relational databases
- Proven excellent problem-solving skills and attention to detail
- Solid communication and collaboration skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
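
A small PySpark sketch of the kind of performance tuning this role calls out, assuming a hypothetical large fact table joined to a small dimension:

```python
# Sketch of two common Spark tuning moves: broadcast join + partition control.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

facts = spark.read.parquet("/data/facts")      # hypothetical large table
dims = spark.read.parquet("/data/dim_small")   # hypothetical small dimension

# Broadcasting the small side avoids a shuffle-heavy sort-merge join.
joined = facts.join(F.broadcast(dims), on="dim_id", how="left")

# Repartitioning before the write avoids producing thousands of tiny files.
joined.repartition(64).write.mode("overwrite").parquet("/data/joined")
```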

Posted 4 weeks ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Gurugram

Work from Office

Source: Naukri

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

As a Senior Data Engineering Analyst, you will be instrumental in driving our data initiatives and enhancing our data infrastructure to support strategic decision-making and business operations. You will lead the design, development, and optimization of complex data pipelines and architectures, ensuring the efficient collection, storage, and processing of large volumes of data from diverse sources. Leveraging your advanced expertise in data modeling and database management, you will ensure that our data systems are scalable, reliable, and optimized for high performance.

A core aspect of your role will involve developing and maintaining robust ETL (Extract, Transform, Load) processes to facilitate seamless data integration and transformation, thereby supporting our analytics and reporting efforts. You will implement best practices in data warehousing and data lake management, organizing and structuring data to enable easy access and analysis for various stakeholders across the organization. Ensuring data quality and integrity will be paramount; you will establish and enforce rigorous data validation and cleansing procedures to maintain high standards of accuracy and consistency within our data repositories.

In collaboration with cross-functional teams, including data scientists, business analysts, and IT professionals, you will gather and understand their data requirements, delivering tailored technical solutions that align with business objectives. Your ability to communicate complex technical concepts to non-technical stakeholders will be essential in fostering collaboration and ensuring alignment across departments. Additionally, you will mentor and provide guidance to junior data engineers and analysts, promoting a culture of continuous learning and professional growth within the data engineering team.

You will take a proactive role in performance tuning and optimization of our data systems, identifying and resolving bottlenecks to enhance efficiency and reduce latency. Staying abreast of the latest advancements in data engineering technologies and methodologies, you will recommend and implement innovative solutions that drive our data capabilities forward. Your strategic input will be invaluable in planning and executing data migration and integration projects, ensuring seamless transitions between systems with minimal disruption to operations.

Maintaining comprehensive documentation of data processes, architectural designs, and technical specifications will be a key responsibility, supporting knowledge sharing and maintaining organizational standards. You will generate detailed reports on data quality, system performance, and the effectiveness of data engineering initiatives, providing valuable insights to inform strategic decisions. Additionally, you will oversee data governance protocols, ensuring compliance with relevant data protection regulations and industry standards, thereby safeguarding the integrity and security of our data assets.

Your leadership and expertise will contribute significantly to the enhancement of our data infrastructure, enabling the organization to leverage data-driven insights for sustained growth and competitive advantage. By fostering innovation, ensuring data excellence, and promoting best practices, you will play a critical role in advancing our data engineering capabilities and supporting the overall success of the business.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field
- Experience: 5+ years in data engineering, data analysis, or a similar role with a proven track record

Technical Skills:
- Advanced proficiency in SQL and experience with relational databases (Oracle, MySQL, SQL Server)
- Expertise in ETL processes and tools
- Solid understanding of data modeling, data warehousing, and data lake architectures
- Proficiency in programming languages such as Python or Java
- Familiarity with cloud platforms (the Azure platform) and their data services
- Knowledge of data governance principles and data protection regulations (GDPR, HIPAA, CCPA)

Soft Skills:
- Proven excellent analytical and problem-solving abilities
- Solid communication and collaboration skills
- Leadership experience and the ability to mentor junior team members
- Proven proactive mindset with a commitment to continuous learning and improvement

Preferred Qualifications:
- Relevant certifications
- Experience with version control systems (Git)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
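
As an illustration of the validation and cleansing procedures described above, here is a minimal pandas sketch; the file and column names are invented for the example:

```python
# Sketch of rule-based validation before data is admitted to the warehouse.
import pandas as pd

df = pd.read_csv("members.csv")  # hypothetical extract and columns

checks = {
    "null_member_id": int(df["member_id"].isna().sum()),
    "duplicate_member_id": int(df["member_id"].duplicated().sum()),
    "unparseable_dob": int(pd.to_datetime(df["dob"], errors="coerce").isna().sum()),
}

# Fail fast so bad batches never reach downstream reporting.
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"data quality checks failed: {failed}")
```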

Posted 4 weeks ago

Apply

4.0 - 7.0 years

8 - 15 Lacs

Hyderabad

Hybrid

Source: Naukri

We are seeking a highly motivated Senior Data Engineer or Data Engineer to join Envoy Global's tech team on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration and data warehousing; that includes ETL jobs, reports, dashboards and data pipelines. The person in this role will work closely with the Data Architect, the BI & Analytics team and Engineering teams to deliver data assets for data security, DW and analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
- Design, build, test and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform and integrate data
- Design and develop ETL processes for the data warehouse lifecycle (staging of data, ODS data integration, EDW and data marts) and data security (data archival, data obfuscation, etc.)
- Build complex SQL queries on large datasets and performance-tune as needed
- Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory
- Maintain ETL packages and supporting data objects for our growing BI infrastructure
- Carry out monitoring, tuning, and database performance analysis
- Facilitate integration of our application with other systems by developing data pipelines
- Prepare key documentation to support the technical design in technical specifications
- Collaborate and work alongside other technical professionals (BI report developers, data analysts, the Architect)
- Communicate clearly and effectively with stakeholders

To apply for this role, you should possess the following skills, experience and qualifications:
- Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing
- Data assets delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics
- ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines
- 3+ years of experience as an SSIS ETL developer, Data Engineer or a related role
- 2+ years of experience using Azure Data Factory
- Knowledgeable in data modelling and data warehouse concepts
- Experience working with the Azure stack
- Demonstrated ability to write SQL/T-SQL queries to retrieve/modify data
- Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations
- Ability to work in an Agile environment

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.
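
One common shape for the incremental ETL work described here is a watermark-driven extract. The following pyodbc sketch assumes hypothetical watermark and source tables; the connection string and names are placeholders, not Envoy Global's actual objects:

```python
# Watermark-style incremental extract sketch (hypothetical schema).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;"
    "Trusted_Connection=yes"
)
cur = conn.cursor()

# Read the last high-water mark, then pull only rows modified since.
cur.execute(
    "SELECT last_loaded_at FROM etl.watermark WHERE table_name = ?", "orders"
)
watermark = cur.fetchone()[0]

cur.execute(
    "SELECT order_id, amount, modified_at FROM dbo.orders WHERE modified_at > ?",
    watermark,
)
rows = cur.fetchall()
print(f"{len(rows)} changed rows since {watermark}")
```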

Posted 4 weeks ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Mysuru

Work from Office

Source: Naukri

As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Strong and proven background in Information Technology, with working knowledge of .NET Core, C#, REST API, LINQ, Entity Framework and XUnit
- Troubleshooting issues related to code performance
- Working knowledge of Angular 15 or later, TypeScript, the Jest framework, HTML 5 and CSS 3, and MS SQL databases, including troubleshooting issues related to DB performance
- Good understanding of CQRS, mediator and repository patterns
- Good understanding of CI/CD pipelines and SonarQube, plus messaging and reverse proxy

Preferred technical and professional experience:
- Good understanding of AuthN and AuthZ techniques (Windows, basic, JWT)
- Good understanding of Git and its processes, like pull requests, merge, pull and commit
- Methodology skills like Agile, TDD and UML

Posted 4 weeks ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Gurugram

Work from Office

Source: Naukri

Job Summary
As a Data Engineer at Synechron, you will play a pivotal role in harnessing data to drive business value. Your expertise will be essential in developing and maintaining data pipelines, ensuring data integrity, and facilitating analytics that inform strategic decisions. This role contributes significantly to our business objectives by optimizing data processing and enabling insightful reporting across the organization.

Software Requirements
Required:
- AWS Redshift (3+ years of experience)
- Spark (3+ years of experience)
- Python (3+ years of experience)
- Complex SQL (3+ years of experience)
- Shell scripting (2+ years of experience)
- Docker (2+ years of experience)
- Kubernetes (2+ years of experience)
- Bitbucket (2+ years of experience)
Preferred:
- DBT
- Dataiku
- Kubernetes cluster management

Overall Responsibilities
- Develop and optimize data pipelines using big data technologies, ensuring seamless data flow and accessibility.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Ensure high data quality and integrity in analytics and reporting processes.
- Implement data architecture and modeling best practices to support strategic objectives.
- Troubleshoot and resolve data-related issues, maintaining a service-first mentality to enhance customer satisfaction.

Technical Skills (By Category)
Programming Languages:
- Essential: Python, SQL
- Preferred: Shell scripting
Databases/Data Management:
- Essential: AWS Redshift, Hive, Presto
- Preferred: DBT
Cloud Technologies:
- Essential: AWS
- Preferred: Kubernetes, Docker
Frameworks and Libraries:
- Essential: Spark
- Preferred: Dataiku
Development Tools and Methodologies:
- Essential: Bitbucket, Airflow or Argo Workflows

Experience Requirements
- 6-7 years of experience in data engineering or related roles.
- Strong understanding of data & analytics concepts, with proven experience in big data technologies.
- Experience in the financial services industry preferred but not required.
- Alternative pathways: significant project experience in data architecture and analytics.

Day-to-Day Activities
- Design and implement scalable data pipelines.
- Participate in regular team meetings to align on project goals and deliverables.
- Collaborate with stakeholders to refine data processes and analytics.
- Make informed decisions on data management strategies and technologies.

Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- Certifications in AWS or relevant data engineering technologies preferred.
- Commitment to continuous professional development in data engineering and analytics.

Professional Competencies
- Strong critical thinking and problem-solving capabilities, with a focus on innovation.
- Effective communication skills and stakeholder management.
- Ability to work collaboratively in a team-oriented environment.
- Adaptability and a willingness to learn new technologies and methodologies.
- Excellent time and priority management to meet deadlines and project goals.
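
Since the listing names Airflow (or Argo Workflows) for orchestration, here is a minimal Airflow DAG sketch; the DAG id and the scripts it shells out to are placeholders:

```python
# Minimal Airflow DAG sketch chaining an extract step into a Redshift load.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_daily_load",       # hypothetical DAG and script names
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="python extract.py")
    load = BashOperator(task_id="load", bash_command="python load_redshift.py")
    extract >> load
```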

Posted 4 weeks ago

Apply

4.0 - 9.0 years

19 - 25 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Summary
Synechron is seeking a highly skilled Data Engineer to join our dynamic team. This role is essential in designing and implementing data solutions that drive business success and innovation. As a Data Engineer, you will collaborate with cross-functional teams to transform business requirements into scalable and efficient data architectures, contributing significantly to our business objectives through enhanced data management and insights.

Software Requirements
Required:
- Strong understanding of mobile, cloud, IoT, and blockchain technologies.
- Proficiency in the software development life cycle (SDLC) and Agile methodologies.
Preferred:
- Familiarity with specific tools and platforms related to cloud (e.g., AWS, Azure), IoT frameworks, and blockchain ecosystems.

Overall Responsibilities
- Collaborate with cross-functional teams to understand technology requirements and design data solutions that meet business needs.
- Develop technical specifications and detailed documentation for new features and enhancements.
- Stay updated with the latest technology trends and suggest integration into existing solutions.
- Conduct code reviews to ensure codebase quality and maintainability.
- Provide technical support and resolve issues for team members.
- Ensure software solutions are tested thoroughly and meet quality standards through collaboration with the testing team.

Technical Skills (By Category)
Programming Languages:
- Required: Proficiency in Java, Python.
- Preferred: Experience with Node.js.
Databases/Data Management:
- Required: Experience with SQL and NoSQL databases.
Cloud Technologies:
- Preferred: Familiarity with AWS, Azure, or Google Cloud Platform.
Frameworks and Libraries:
- Preferred: Experience with data processing frameworks like Apache Spark or Hadoop.
Development Tools and Methodologies:
- Required: Experience with Agile, Scrum, Git, JIRA, Confluence.

Experience Requirements
- 4+ years of experience in software development with a focus on data engineering.
- Experience working in Agile environments and participating in code reviews.
- Industry experience in technology, finance, or similar domains is preferred.
- Alternative experience pathways include equivalent roles in data-centric projects or startups.

Day-to-Day Activities
- Engage in daily stand-up meetings and project planning sessions.
- Collaborate with cross-functional teams to capture business requirements and design solutions.
- Write, test, and deploy data software solutions.
- Participate in code reviews, offering and receiving constructive feedback.
- Stay informed on current technology trends and advancements.
- Provide technical support and resolve issues within the team.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as AWS Certified Data Analytics or Google Professional Data Engineer are preferred.
- Commitment to continuous professional development and learning.

Professional Competencies
- Strong critical thinking and problem-solving abilities.
- Effective leadership and teamwork skills.
- Excellent communication and stakeholder management capabilities.
- Adaptability to new technologies and changing requirements.
- Innovative mindset with a focus on data-driven solutions.
- Strong time management and prioritization abilities.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

14 - 19 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Summary
Synechron is seeking a motivated and skilled Data Engineer specializing in Google Cloud Platform (GCP) to join our innovative team. This role is integral to designing, implementing, and managing scalable and secure cloud-based data solutions that drive our business objectives forward. The Data Engineer will play a key role in ensuring optimal performance and security of cloud solutions, working collaboratively with clients and internal stakeholders.

Software Requirements
Required Software Skills:
- Google Cloud Platform (GCP): Proficiency in core services and tools.
- AWS or Azure: Basic understanding and experience.
- Virtualization: Experience with virtualization technologies.
- Networking and Security: Strong skills in cloud networking and security protocols.
Preferred Software Skills:
- Familiarity with other cloud platforms such as AWS or Azure beyond basic use.
- Experience with data management tools and libraries.

Overall Responsibilities
- Design, implement, and manage cloud-based data solutions tailored to client needs.
- Ensure solutions are secure, scalable, and optimized for performance.
- Collaborate with clients and stakeholders to identify, troubleshoot, and resolve technical issues.
- Participate in project planning and management, contributing to timelines and resource allocation.
- Enhance industry knowledge and best practices within Synechron.

Technical Skills (By Category)
Programming Languages:
- Required: Proficiency in programming or scripting languages commonly used in cloud environments (e.g., Python, SQL).
Databases/Data Management:
- Essential: Experience with cloud-based data storage solutions.
Cloud Technologies:
- Essential: Google Cloud Platform expertise.
- Preferred: AWS and Azure familiarity.
Frameworks and Libraries:
- Preferred: Knowledge of data processing frameworks like Apache Beam or Kafka.
Development Tools and Methodologies:
- Required: Agile development experience.
Security Protocols:
- Essential: Understanding of cloud security best practices.

Experience Requirements
- 2-3 years of experience in a similar role or related field.
- Relevant experience in cloud computing, IT infrastructure, or related fields.
- Hands-on experience with GCP; familiarity with AWS or Azure is a plus.

Day-to-Day Activities
- Collaborate with team members and clients to develop cloud-based solutions.
- Conduct regular meetings and updates to track progress and address challenges.
- Deliver high-quality data solutions and ensure project milestones are met.
- Exercise decision-making authority in technical design and implementation.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Relevant certifications in cloud technologies are preferred.
- Commitment to continuous professional development and staying updated on industry trends.

Professional Competencies
- Strong critical thinking and problem-solving capabilities.
- Ability to work collaboratively within a team and lead where necessary.
- Excellent written and verbal communication skills for effective stakeholder management.
- Adaptability to evolving technologies and learning new tools.
- Innovative mindset with a focus on optimization and efficiency.
- Effective time and priority management skills.
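
Apache Beam is listed as a preferred framework; a minimal Beam pipeline on GCP might look like the sketch below, with bucket paths and record layout invented for illustration:

```python
# Minimal Apache Beam sketch: count events per user from GCS input.
import apache_beam as beam

# Bucket paths and the CSV layout (user id in the first field) are invented.
with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.csv")
        | "KeyByUser" >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/user_counts")
    )
```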

Posted 4 weeks ago

Apply

4.0 - 9.0 years

11 - 16 Lacs

Pune

Work from Office

Source: Naukri

Job Summary
Synechron is seeking a seasoned Senior Data Engineer with expertise in Scala and Spark to join our Data Engineering team. This role is critical in processing and transforming large datasets, contributing to Synechron's business objectives by harnessing the power of big data technologies. The Senior Data Engineer will leverage their extensive experience in data engineering to drive innovative solutions and ensure efficient data processing.

Software Requirements
Required Software Skills:
- Proficiency in Scala and Spark for data processing and development.
- Working knowledge of SQL and experience with relational databases.
Preferred Software Skills:
- Familiarity with other big data technologies and frameworks.

Overall Responsibilities
- Lead and execute data engineering projects using Scala and Spark, ensuring high-quality data solutions.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Develop and maintain scalable data pipelines and processing systems.
- Conduct code reviews to ensure the quality and maintainability of the codebase.
- Stay updated with the latest advancements in big data technologies and incorporate them into existing solutions.
- Troubleshoot and resolve technical issues, providing technical support to team members.

Technical Skills (By Category)
Programming Languages:
- Required: Scala, SQL
- Preferred: Knowledge of additional programming languages used in big data environments.
Databases/Data Management:
- Essential: Experience with SQL databases and data management principles.
Frameworks and Libraries:
- Essential: Apache Spark
Development Tools and Methodologies:
- Required: Familiarity with Agile methodologies.

Experience Requirements
- 7+ years of experience in big data, with significant exposure to Scala and Spark.
- Proven track record in data engineering and processing large-scale datasets.

Day-to-Day Activities
- Participate in daily stand-up meetings and project planning sessions.
- Collaborate with cross-functional teams to gather data requirements and design solutions.
- Develop, test, and deploy data processing applications using Scala and Spark.
- Conduct code reviews and provide feedback to other developers.
- Stay informed about the latest trends in big data technologies.
- Provide technical support and troubleshoot issues as they arise.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in data engineering or big data technologies are preferred.
- Commitment to continuous professional development and staying updated on industry trends.

Professional Competencies
- Strong critical thinking and problem-solving capabilities.
- Effective teamwork and leadership abilities.
- Excellent communication and stakeholder management skills.
- Adaptability to new technologies and learning orientation.
- Innovation mindset to drive creative solutions and improvements.
- Effective time and priority management skills.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Overall Responsibilities:
Data Pipeline Development: Design, develop, and maintain highly scalable, optimized ETL pipelines using PySpark on the Cloudera Data Platform (CDP), ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into formats that support analytical needs and business requirements.
Performance Optimization: Tune PySpark code and Cloudera components to optimize resource utilization and reduce the runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure accuracy and reliability throughout the pipeline.
Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Airflow within the Cloudera ecosystem.
Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on CDP and associated data processes.
Collaboration: Work closely with other data engineers, analysts, product managers, and stakeholders to understand data requirements and support data-driven initiatives.
Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Technical Skills:
PySpark: Advanced proficiency, including RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong Linux scripting skills.

Experience:
3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
Proven track record of implementing data engineering best practices.
Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
Design, develop, and maintain ETL pipelines using PySpark on CDP.
Implement and manage data ingestion from various sources.
Process, cleanse, and transform large datasets using PySpark.
Conduct performance tuning and optimization of ETL processes.
Implement data quality checks and validation routines.
Automate data workflows using orchestration tools.
Monitor pipeline performance and troubleshoot issues.
Collaborate with team members to understand data requirements.
Maintain documentation of data engineering processes and configurations.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
Strong analytical and problem-solving skills.
Excellent verbal and written communication abilities.
Ability to work independently and collaboratively in a team environment.
Attention to detail and commitment to data quality.
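For illustration, here is a minimal sketch of the kind of PySpark ETL step this role describes, written for a Hive-backed CDP cluster. The database, table, and column names (raw_db.orders, order_id, and so on) are assumptions, not part of the posting.

```python
# Minimal PySpark ETL sketch: ingest a raw Hive table, cleanse it,
# and write a partitioned curated table. Names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()   # read/write Hive tables on CDP
         .getOrCreate())

# Ingest: read a raw table populated by an upstream ingestion job
raw = spark.table("raw_db.orders")

# Transform: deduplicate, drop bad keys, normalise types, derive a partition column
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_id").isNotNull())
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .withColumn("dt", F.to_date("created_at")))

# Basic quality gate: fail fast if the load looks empty
if clean.count() == 0:
    raise ValueError("No rows survived cleansing; aborting load")

# Load: partition by date so downstream Hive/Impala queries can prune
(clean.write
      .mode("overwrite")
      .partitionBy("dt")
      .saveAsTable("curated_db.orders"))
```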

Posted 4 weeks ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Pune

Work from Office

Job Summary
Synechron is seeking a talented Data Engineer with a strong focus on Python to lead the development and implementation of projects using emerging technologies. This role is critical in driving innovation and improving business processes by leveraging cutting-edge solutions. As a Data Engineer, you will contribute to strategic business objectives through technology leadership and collaboration with cross-functional teams.

Software Requirements
Required: Python (advanced proficiency), SQL (intermediate proficiency), Git (version control), Docker (containerization).
Preferred: Apache Kafka (streaming platform), TensorFlow or PyTorch (machine learning libraries).

Overall Responsibilities
Lead the development and implementation of projects utilizing emerging technologies such as blockchain, IoT, and AI.
Mentor and guide team members to ensure the successful delivery of projects.
Identify and evaluate new technology solutions to enhance business processes.
Collaborate with cross-functional teams to ensure alignment with organizational strategies.
Stay up to date with the latest technological advancements and industry trends.

Technical Skills (by Category)
Programming Languages: Required: Python (advanced). Preferred: Java, Scala.
Databases/Data Management: Required: SQL, NoSQL databases such as MongoDB. Preferred: PostgreSQL.
Cloud Technologies: Required: AWS, Azure. Preferred: Google Cloud Platform.
Frameworks and Libraries: Required: Pandas, NumPy. Preferred: TensorFlow, PyTorch.
Development Tools and Methodologies: Required: Agile/Scrum methodologies. Preferred: DevOps practices.
Security Protocols: Required: Understanding of data security principles.

Experience Requirements
At least 5 years of experience in software development and leading technology projects.
Proven track record of delivering projects using emerging technologies.
Experience mentoring and guiding junior team members.
Experience working with cross-functional teams.
Preference for candidates with experience in the financial services industry.

Day-to-Day Activities
Manage the development and delivery of projects using emerging technologies.
Provide technical guidance and mentorship to junior team members.
Collaborate with cross-functional teams to ensure alignment with organizational strategies.
Evaluate and recommend new technology solutions to improve business processes.
Stay informed about the latest technological advancements and industry trends.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Relevant certifications in emerging technologies are preferred.
Continuous professional development in new technologies and methodologies.

Professional Competencies
Strong critical thinking and problem-solving capabilities.
Leadership and teamwork abilities to manage and inspire teams.
Exceptional communication skills for stakeholder management.
Adaptability and a strong learning orientation.
An innovation mindset that encourages creative solutions.
Effective time and priority management skills.
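As a flavour of the required Python/Pandas skills, here is a small, self-contained cleaning sketch. The file name and column names are hypothetical, not taken from the posting.

```python
# Illustrative pandas transformation: load raw transactions, drop bad rows,
# and add a derived reporting column. All names are assumptions.
import pandas as pd

def clean_transactions(path: str) -> pd.DataFrame:
    """Load raw transactions, remove nulls/duplicates, derive a month column."""
    df = pd.read_csv(path, parse_dates=["created_at"])
    df = df.dropna(subset=["transaction_id", "amount"])
    df = df.drop_duplicates(subset=["transaction_id"])
    df["amount"] = df["amount"].astype(float)
    df["month"] = df["created_at"].dt.to_period("M").astype(str)
    return df

if __name__ == "__main__":
    summary = clean_transactions("transactions.csv").groupby("month")["amount"].sum()
    print(summary)
```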

Posted 4 weeks ago

Apply

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Overview
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark on Databricks. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark tuning. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
Ensure data quality and integrity through data validation and cleansing processes.
Analyze existing SQL queries, functions, and stored procedures for performance improvements.
Develop database routines such as procedures, functions, and views.
Participate in data migration projects; understand technologies such as Delta Lake and lakehouse/warehouse architectures.
Debug and solve complex problems in data pipelines and processes.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Strong understanding of distributed data processing platforms such as Databricks and BigQuery.
Proficiency in Python, PySpark, and SQL.
Experience with performance optimization for large datasets.
Strong debugging and problem-solving skills.
Fundamental knowledge of cloud services, preferably Azure or GCP.
Excellent communication and teamwork skills.

Nice to Have
Experience in data migration projects.
Understanding of technologies such as Delta Lake and lakehouse/warehouse architectures.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: we are aware of recruitment scams in which fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
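To make the partitioning and Spark-optimization techniques in the overview concrete, here is a hedged sketch using Delta Lake on Databricks (OPTIMIZE/ZORDER is Databricks-specific SQL). The paths and column names are assumptions.

```python
# Sketch: write a date-partitioned Delta table, then compact and cluster it.
# Paths, columns, and the raw landing zone are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-optimise").getOrCreate()

events = spark.read.json("/mnt/raw/events/")  # hypothetical raw landing zone

# Partition by a low-cardinality date column so queries prune files
(events
 .withColumn("event_date", F.to_date("event_ts"))
 .repartition("event_date")                 # avoid many small files per partition
 .write
 .format("delta")
 .mode("overwrite")
 .partitionBy("event_date")
 .save("/mnt/curated/events"))

# Databricks-specific: compact small files and co-locate rows for selective queries
spark.sql("OPTIMIZE delta.`/mnt/curated/events` ZORDER BY (user_id)")
```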

Posted 4 weeks ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office

Overview
The Data Technology team at MSCI is responsible for meeting data requirements across business areas including Index, Analytics, and Sustainability. The team collates data from multiple sources, such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data arrives in structured or semi-structured formats. We normalize it, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools; some of the technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
Exposure to SQL databases such as Oracle, MySQL, or Microsoft SQL Server is a must.
Experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
Exposure to NoSQL databases such as Neo4j or a document database is also good to have.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: we are aware of recruitment scams in which fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
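As a brief sketch of the kind of quality checks described above, shown in PySpark for brevity even though the posting's stack is Java-centric: the schema, thresholds, and input path are assumptions.

```python
# Sketch: run simple quality checks on a vendor price feed and fail the
# load if any check trips. All names and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vendor-qc").getOrCreate()
prices = spark.read.parquet("/data/vendor/prices")  # hypothetical input

total = prices.count()
checks = {
    "null_identifier": prices.filter(F.col("isin").isNull()).count(),
    "negative_price": prices.filter(F.col("close_price") < 0).count(),
    "duplicate_keys": total - prices.dropDuplicates(["isin", "price_date"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Quality checks failed: {failed}")
```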

Posted 4 weeks ago

Apply

15.0 - 22.0 years

50 - 100 Lacs

Noida, Gurugram, Bengaluru

Hybrid

Job Title: Product Engineering Leader
Location: NCR / Bangalore / Noida / Gurugram

Position Summary
We are seeking a Product Engineering Leader with a proven track record of building and scaling B2B/B2E enterprise-grade products across multi-cloud environments (AWS, Azure, GCP). This role requires a visionary technologist with deep engineering expertise, capable of driving the full product development lifecycle, from ideation to delivery, in data-centric and workflow-driven domains.

Key Responsibilities
Lead Engineering Teams: Direct and mentor high-performing engineering teams building scalable, secure, and performant enterprise software products.
End-to-End Product Ownership: Drive product architecture, design, implementation, and delivery, ensuring rapid time-to-market with high-quality outcomes.
Customer-Centric Solutions: Collaborate with customers to understand business needs and translate them into robust technical solutions.
Cross-Functional Collaboration: Work closely with Product Managers, Product Owners, and business stakeholders to align technology initiatives with business objectives.
Technical Thought Leadership: Evangelize engineering best practices and product-focused thinking to drive innovation and alignment with non-functional requirements (NFRs) such as performance, reliability, scalability, usability, and cost-efficiency.
Cloud-Native Product Development: Build and manage data-driven applications across AWS, Azure, and Google Cloud platforms.
Data Engineering Expertise: Lead initiatives that handle large-scale datasets, driving architecture that supports complex data pipelines and analytics.
Domain Expertise (preferred): Experience in Life Sciences, Commercial/Pharma, or Incentive Compensation is highly desirable.

Behavioral Competencies
Product Mindset: Strong understanding of Agile methodologies and iterative development; ability to define incremental paths to the product vision.
Team & Task Management: Effective at planning and prioritizing work, tracking progress, and enabling teams to meet objectives.
Clear Communicator: Strong verbal and written communication skills, with the ability to explain complex technical topics to varied audiences.

Required Qualifications
Education: Bachelor's or Master's degree in Computer Science or a related field from a Tier 1 or Tier 2 institution.
Experience: 18+ years in IT, with 7+ years in product development and core engineering leadership roles, and demonstrated experience building scalable enterprise software products.

Technology Stack & Tools
Frontend: React.js
Backend: Python, PySpark
Data & Storage: Snowflake, PostgreSQL
Cloud: AWS, Azure, GCP
Containerization: Docker, EKS
Others: Exposure to AI/GenAI technologies is a strong plus.
Alternative Stacks: Strong experience in Java, JEE, or .NET is acceptable with the right product engineering background.

Why Join Us?
Be part of a high-impact team shaping enterprise software solutions that drive real business outcomes. Collaborate with innovative minds and industry leaders. Tackle complex, data-driven challenges in cutting-edge technology environments.

About Axtria
Axtria is a global provider of cloud software and data analytics to the Life Sciences industry. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients, and we lead passionately to improve their lives.
Since our founding in 2010, technology innovation has been our winning differentiation, and we continue to leapfrog the competition with platforms that deploy Artificial Intelligence and Machine Learning. Our cloud-based platforms - Axtria DataMax™, Axtria InsightsIQ™, Axtria SalesIQ™, and Axtria MarketingIQ™ - enable customers to efficiently manage data, leverage data science to deliver insights for sales and marketing planning, and manage end-to-end commercial operations. With customers in over 20 countries, Axtria is one of the biggest global commercial solutions providers in the Life Sciences industry. We continue to win industry recognition for growth and are featured in some of the most aspirational lists - INC 5000, Deloitte FAST 500, NJBiz FAST 50, SmartCEO Future 50, Red Herring 100, and several other growth and technology awards.

Axtria is looking for exceptional talent to join our rapidly growing global team. People are our biggest perk! Our transparent and collaborative culture offers a chance to work with some of the brightest minds in the industry. Axtria Institute, our in-house university, offers the best training in the industry and an opportunity to learn in a structured environment. A customized career progression plan ensures every associate is set up for success and able to do meaningful work in a fun environment. We want our legacy to be the leaders we produce for the industry. Will you be next?

Posted 4 weeks ago

Apply

4.0 - 8.0 years

10 - 18 Lacs

Bangalore Rural, Bengaluru

Hybrid

4+ years of data engineering experience.
Understanding of modern data platforms, including data lakes and data warehouses, with good knowledge of the underlying architecture, preferably in Snowflake.
Experience assembling large, complex datasets that meet functional and non-functional business requirements.
Working experience of scripting, data science, and analytics (SQL, Python, PowerShell, JavaScript).
Experience working with cloud-based systems: Azure and Snowflake data warehouses.
Working knowledge of CI/CD.
Working knowledge of building data integrity checks as part of application delivery.
Experience working with Kafka technologies preferred.
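To make the data-integrity-check requirement concrete, here is a hedged sketch using the official Snowflake Python connector. The account, credentials, table, and check logic are placeholders, not details from the posting.

```python
# Illustrative post-load integrity check against Snowflake.
# Connection parameters and table names are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT COUNT(*) AS total,
               COUNT_IF(customer_id IS NULL) AS null_keys
        FROM fact_orders
        WHERE load_date = CURRENT_DATE
    """)
    total, null_keys = cur.fetchone()

# Fail the delivery if today's load is empty or has broken keys
if total == 0 or null_keys > 0:
    raise RuntimeError(f"Integrity check failed: total={total}, null_keys={null_keys}")
```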

Posted 4 weeks ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Hyderabad

Work from Office

Data engineering experience.
Strong SQL and Python, with the ability to translate complexity into efficient code.
Azure Data Factory and/or Apache Airflow.
Experience working with different types of databases and data warehouse technologies.
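As a minimal example of the Airflow orchestration this role asks for, here is a short DAG sketch. The DAG id, schedule, and task callables are illustrative; the `schedule` argument assumes Airflow 2.4+, while older releases use `schedule_interval`.

```python
# Minimal Airflow DAG: a daily extract -> load sequence. All names are
# placeholders standing in for real pipeline tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull the daily increment from the source database")

def load():
    print("write the increment into the warehouse")

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```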

Posted 4 weeks ago

Apply

1.0 - 3.0 years

10 - 15 Lacs

Kolkata, Gurugram, Bengaluru

Hybrid

Salary: 10 to 16 LPA
Experience: 1 to 3 years
Location: Gurgaon / Bangalore / Kolkata (Hybrid)
Notice: Immediate to 30 days
Key Skills: GCP, Cloud, Pub/Sub, Data Engineer
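To illustrate the Pub/Sub skill listed above, here is a short consumer sketch using the google-cloud-pubsub client library. The project and subscription ids are placeholders.

```python
# Illustrative GCP Pub/Sub subscriber: pull messages for one minute,
# acknowledge each, then shut down. Project/subscription ids are assumed.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "events-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print(f"Received: {message.data!r}")
    message.ack()  # confirm processing so the message is not redelivered

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=60)  # listen for one minute
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # block until the shutdown completes
```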

Posted 4 weeks ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Gurugram, Bengaluru

Hybrid

Salary: 15 to 30 LPA
Experience: 3 to 8 years
Location: Gurgaon / Bangalore (Hybrid)
Notice: Immediate to 30 days
Key Skills: GCP, Cloud, Pub/Sub, Data Engineer

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements
A minimum of 5 years of total experience, with at least 3-4 years specifically in data engineering on a cloud platform.

Key Skills & Experience
Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch.
Strong expertise in SQL and Python; dbt and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
Exposure to NoSQL databases is a plus.
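As a taste of the AWS side of this stack, here is a hedged boto3 sketch that starts a Glue job and polls it to completion. The job name and region are assumptions.

```python
# Illustrative boto3 snippet: trigger an AWS Glue ETL job and wait for a
# terminal state. The job name and region are placeholders.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run_id = glue.start_job_run(JobName="nightly-etl")["JobRunId"]

# Poll until the run reaches a terminal state
while True:
    state = glue.get_job_run(JobName="nightly-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

if state != "SUCCEEDED":
    raise RuntimeError(f"Glue job ended in state {state}")
```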

Posted 4 weeks ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Mumbai

Work from Office

A Data Engineer identifies business problems and translates them into data services and engineering outcomes. You will deliver data solutions that empower better decision making and that scale to respond to broader business questions.

Key Responsibilities
As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain, and engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and about solutions that scale to respond to broader business questions. If you love solving problems with your skills, come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML.
Understand the business problem and translate it into data services and engineering outcomes.
Explore new technologies and learn new techniques to solve business problems creatively.
Think big and drive the strategy for better data quality for customers.
Collaborate with many teams, engineering and business, to build better data products.

Preferred Qualifications
1-2+ years of experience, with hands-on experience in at least one programming language (Python, Java, Scala).
Understanding of SQL is a must.
Big data: Hadoop, Hive, YARN, Sqoop.
MPP platforms: Spark, Pig, Presto.
Data pipeline and scheduler tools: Oozie, Airflow, NiFi.
Streaming engines: Kafka, Storm, Spark Streaming.
Experience with any relational database or data warehouse.
Experience with any ETL tool.
Hands-on experience in pipeline design, ETL, and application development.
Good communication skills.
Ability to work independently and strong analytical skills.
Dependable and a good team player.
Desire to learn and work with new technologies.
Automation in your blood.
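For a compact example of the streaming engines listed above, here is a Kafka-to-Spark Structured Streaming sketch. The broker, topic, and paths are placeholders, and the job assumes the spark-sql-kafka package is on the classpath.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and land
# the payloads as Parquet. Broker, topic, and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clicks")
          .load())

# Kafka values arrive as bytes; cast to string for downstream parsing
events = stream.select(F.col("value").cast("string").alias("payload"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/streams/clicks")
         .option("checkpointLocation", "/data/checkpoints/clicks")
         .outputMode("append")
         .start())

query.awaitTermination()
```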

Posted 4 weeks ago

Apply