Home
Jobs

1693 Data Engineering Jobs - Page 48

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office


New Opportunity: Full Stack Engineer
Location: Pune (Onsite)
Company: Apptware Solutions
Experience: 4+ years

We're looking for a skilled Full Stack Engineer to join our team. If you have experience building scalable applications and working with modern technologies, this role is for you.

Role & Responsibilities:
- Develop product features to help customers easily transform data.
- Design, implement, deploy, and support client-side and server-side architectures, including web applications, CLI, and SDKs.

Minimum Requirements:
- 4+ years of experience as a Full Stack Developer or in a similar role.
- Hands-on experience in a distributed engineering role with direct operational responsibility (on-call experience preferred).
- Proficiency in at least one back-end language (Node.js, TypeScript, Python, or Go).
- Front-end development experience with Angular or React, HTML, and CSS.
- Strong understanding of web applications, backend APIs, CI/CD pipelines, and testing frameworks.
- Familiarity with NoSQL databases (e.g., DynamoDB) and AWS services (Lambda, API Gateway, Cognito, etc.).
- Bachelor's degree in Computer Science, Engineering, Math, or equivalent experience.
- Strong written and verbal communication skills.

Preferred Skills:
- Experience with AWS Glue, Spark, or Athena.
- Strong understanding of SQL and data engineering best practices.
- Exposure to analytical EDWs (Snowflake, Databricks, BigQuery, Cloudera, Teradata).
- Experience in B2B applications, SaaS offerings, or startups is a plus.

(ref:hirist.tech)

Posted 4 weeks ago

Apply

5.0 - 8.0 years

8 - 14 Lacs

Bengaluru

Remote


Job Overview: We are looking for an experienced GCP Data Engineer with deep expertise in BigQuery, Dataflow, Dataproc, Pub/Sub, and GCS to build, manage, and optimize large-scale data pipelines. The ideal candidate should have a strong background in cloud data storage, real-time data streaming, and orchestration.

Key Responsibilities:

Data Storage & Management:
- Manage Google Cloud Storage (GCS) buckets, set up permissions, and optimize storage solutions for handling large datasets.
- Ensure data security, access control, and lifecycle management.

Data Processing & Analytics:
- Design and optimize BigQuery for data warehousing, querying large datasets, and performance tuning.
- Implement ETL/ELT pipelines for structured and unstructured data.
- Work with Dataproc (Apache Spark, Hadoop) for batch processing of large datasets.

Real-Time Data Streaming:
- Use Pub/Sub to build real-time, event-driven streaming pipelines.
- Implement Dataflow (Apache Beam) for real-time and batch data processing.

Workflow Orchestration & Automation:
- Use Cloud Composer (Apache Airflow) for scheduling and automating data workflows.
- Build monitoring solutions to ensure data pipeline health and performance.

Cloud Infrastructure & DevOps:
- Implement Terraform for provisioning and managing cloud infrastructure.
- Work with Google Kubernetes Engine (GKE) for container orchestration and managing distributed applications.

Advanced SQL & Data Engineering:
- Write efficient SQL queries for data transformation, aggregation, and analysis.
- Optimize query performance and cost efficiency in BigQuery.

Required Skills & Qualifications:
- 4-8 years of experience in GCP data engineering
- Strong expertise in BigQuery, Dataflow, Dataproc, Pub/Sub, and GCS
- Experience with SQL and Python or Java for data processing and transformation
- Proficiency in Airflow (Cloud Composer) for scheduling workflows
- Hands-on experience with Terraform for cloud infrastructure automation
- Familiarity with NoSQL databases like Bigtable for high-scale data handling
- Knowledge of GKE for containerized applications and distributed processing

Preferred Qualifications:
- Experience with CI/CD pipelines for data deployment
- Familiarity with Cloud Functions or Cloud Run for serverless execution
- Understanding of data governance, security, and compliance

Why Join Us?
- Work on cutting-edge GCP data projects in a cloud-first environment
- Competitive salary and career growth opportunities
- Collaborative and innovative work culture
- Exposure to big data, real-time streaming, and advanced analytics

Posted 4 weeks ago

Apply

5.0 - 6.0 years

18 - 25 Lacs

Gurugram, Sector-20

Work from Office


5-7 years of experience in the solution, design, and development of cloud-based data models, ETL pipelines, and infrastructure for reporting, analytics, and data science.
- Experience working with Spark, Hive, HDFS, MapReduce, and Apache Kafka/AWS Kinesis
- Experience with version control tools (Git, Subversion)
- Experience using automated build systems (CI/CD)
- Experience working in different programming languages (Java, Python, Scala)
- Experience working with both structured and unstructured data
- Strong proficiency with SQL and its variations among popular databases
- Ability to create a data model from scratch
- Experience with some of the modern relational databases
- Skilled at optimizing large, complicated SQL statements
- Knowledge of best practices when dealing with relational databases
- Capable of configuring popular database engines and orchestrating clusters as necessary
- Ability to plan resource requirements from high-level specifications
- Capable of troubleshooting common database issues
- Experience with data structures and algorithms
- Knowledge of different database technologies (relational, NoSQL, graph, document, key-value, time series, etc.), including building and managing scalable data models
- Knowledge of ML model deployment
- Knowledge of cloud-based platforms (AWS)
- Knowledge of TDD/BDD
- Strong desire to improve their skills in software development, frameworks, and technologies

Posted 4 weeks ago

Apply

3.0 - 4.0 years

15 - 22 Lacs

Bengaluru

Hybrid


Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & machine learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.

Requirements:
- Design, develop, and maintain robust and scalable data pipelines that ingest, transform, and load data from various sources into the data warehouse.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data quality checks and monitoring to ensure data accuracy and integrity.
- Optimize data pipelines for performance and efficiency.
- Troubleshoot and resolve data pipeline issues.
- Stay up-to-date with emerging technologies and trends in data engineering.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in data engineering or a similar role.
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data pipeline tools and frameworks.
- Experience with cloud-based data warehousing solutions (Snowflake).
- Experience with AWS Kinesis, SNS, and SQS.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.

Desired Skills & Experience:
- Data pipeline architecture
- Data warehousing
- ETL (Extract, Transform, Load)
- Data modeling
- SQL
- Python, Java, or Go
- Cloud computing
- Business intelligence

Our Culture:
- We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly.
- Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture.
- A strong, fun & positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse & authentic environment.

At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Work from Office


Responsibilities:
- Design, develop, and maintain scalable and efficient ETL/ELT pipelines using appropriate tools and technologies.
- Develop and optimize complex SQL queries for data extraction, transformation, and loading.
- Implement data quality checks and validation processes to ensure data integrity.
- Automate data pipelines and workflows for efficient data processing.
- Integrate data from diverse sources, including databases, APIs, and flat files.
- Manage and maintain data warehouses and data lakes.
- Implement data modeling and schema design.
- Ensure data security and compliance with relevant regulations.
- Provide data support for BI and reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.).
- Collaborate with BI developers to ensure data availability and accuracy.
- Optimize data queries and performance for reporting applications.
- Provide technical guidance and mentorship to junior data engineers.
- Lead code reviews and ensure adherence to coding standards and best practices.
- Contribute to the development of technical documentation and knowledge sharing.
- Design and implement data solutions on cloud platforms (AWS preferred).
- Utilize AWS data integration technologies such as Airflow and Glue.
- Manage and optimize cloud-based data infrastructure.
- Develop data processing applications using Python, Java, or Scala.
- Implement data transformations and algorithms using programming languages.
- Identify and resolve complex data-related issues.
- Proactively seek opportunities to improve data processes and technologies.
- Stay up-to-date with the latest data engineering trends and technologies.

Requirements:

Experience:
- 5 to 10 years of experience in business intelligence and data engineering.
- Proven experience in designing and implementing ETL/ELT processes.
- Expert-level proficiency in SQL (advanced/complex queries).
- Strong understanding of ETL concepts and experience with ETL/data integration tools (Informatica, ODI, Pentaho, etc.).
- Familiarity with one or more reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.).
- Knowledge of Python and cloud infrastructure (AWS preferred).
- Experience with AWS data integration technologies (Airflow, Glue).
- Programming experience in Java or Scala.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Proven ability to take initiative and be innovative.
- Ability to work independently and as part of a team.

Education:
- B.Tech / M.Tech / MCA (must-have).

Posted 4 weeks ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Kochi

Work from Office


Job Summary: We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.

Key Responsibilities:
- Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
- Create and maintain data models, semantic models, and data marts for analytical and reporting purposes.
- Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
- Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
- Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
- Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
- Work with business stakeholders to understand data requirements and translate them into technical specifications.
- Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
- Provide mentorship and technical guidance to junior data engineers.

Required Skills:
- Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
- Proficiency in Power BI, including building reports and dashboards and working with semantic models.
- Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
- Deep experience with SQL, stored procedures, and query optimization.
- Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
- Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.

Posted 4 weeks ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Bengaluru

Work from Office


Your Responsibilities:
- Designing and implementing scalable and reliable data pipelines on the Azure platform
- Developing and maintaining data integration solutions using Azure Data Factory, Azure Databricks, and other Azure services
- Ensuring data quality and integrity by implementing best practices in data collection, processing, and storage
- Collaborating with data scientists, data analysts, and other stakeholders to understand their data needs and deliver actionable insights
- Managing and optimizing Azure data storage solutions such as Azure SQL Database, Azure Data Lake, and Azure Cosmos DB
- Monitoring the performance of data pipelines and implementing strategies for continuous improvement
- Developing and maintaining ETL processes to support data warehousing and analytics
- Implementing best practices for data governance, security, and compliance
- Staying up-to-date with the latest industry trends and technologies to continuously improve data engineering practices and methodologies
- Living Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business

Your Background:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of experience in data engineering, with a focus on Azure data services
- Relevant certifications in Azure data services or cloud computing are an added advantage
- Proficiency in programming and scripting languages such as Python, SQL, or Scala
- Experience with Azure data services, including Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake
- Strong understanding of data modeling, ETL processes, and data warehousing concepts
- Experience with big data technologies such as Hadoop and Spark
- Knowledge of data governance, security, and compliance best practices
- Familiarity with monitoring and logging tools such as Azure Monitor and Log Analytics
- Strong problem-solving and troubleshooting skills
- Excellent communication and collaboration skills to work effectively with cross-functional teams
- Strong attention to detail and organizational skills
- Ability to articulate and present ideas to senior management
- Problem-solving mindset with the ability to work independently and as part of a team
- Eagerness to learn and enhance knowledge independently
- Strong networking skills and global orientation
- Ability to coach and mentor team members
- Effective collaboration with internal and external stakeholders
- Adaptability to manage and lead transformational projects
- Proficiency in both spoken and written English

Posted 4 weeks ago

Apply

1.0 - 5.0 years

27 - 32 Lacs

Karnataka

Work from Office


As a global leader in cybersecurity, CrowdStrike protects the people, processes, and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe, and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation, and a fanatical commitment to our customers, our community, and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role: The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data Lakehouse for exploration, insights, model development, the ML model development lifecycle, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. The data sets we process are composed of various facets, including telemetry data, associated metadata, IT asset information, contextual information about threat exposure, and many more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse.

We are seeking a strategic and technically savvy leader to head our Data and ML Platform team. As the head, you will be responsible for defining and building our ML Experimentation Platform from the ground up, while scaling our data and ML infrastructure to support various roles including Data Platform Engineers, Data Scientists, and Threat Analysts. Your key responsibilities will involve overseeing the design, implementation, and maintenance of scalable ML pipelines for data preparation, cataloging, feature engineering, model training, model serving, and in-field model performance monitoring. These efforts will directly influence critical business decisions. In this role, you'll foster a production-focused culture that effectively bridges the gap between model development and operational success. Furthermore, you'll be at the forefront of spearheading our ongoing generative AI investments. The ideal candidate for this position will combine strategic vision with hands-on technical expertise in machine learning and data infrastructure, driving innovation and excellence across our data and ML initiatives. We are building this team with ownership at Bengaluru, India; this leader will help us bootstrap the entire site, starting with this team.

What You'll Do:

Strategic Leadership:
- Define the vision, strategy, and roadmap for the organization's data and ML platform to align with critical business goals.
- Help design, build, and facilitate adoption of a modern Data+ML platform.
- Stay updated on emerging technologies and trends in data platforms, MLOps, and AI/ML.

Team Management:
- Build a team of Data and ML Platform engineers from a small footprint across multiple geographies.
- Foster a culture of innovation and strong customer commitment for both internal and external stakeholders.

Platform Development:
- Oversee the design and implementation of a platform containing data pipelines, feature stores, and model deployment frameworks.
- Develop and enhance MLOps practices to streamline model lifecycle management from development to production.

Data Governance:
- Institute best practices for data security, compliance, and quality to ensure safe and secure use of AI/ML models.

Stakeholder Engagement:
- Partner with product, engineering, and data science teams to understand requirements and translate them into platform capabilities.
- Communicate progress and impact to executive leadership and key stakeholders.

Operational Excellence:
- Establish SLI/SLO metrics for observability of the Data and ML Platform, along with alerting, to ensure a high level of reliability and performance.
- Drive continuous improvement through data-driven insights and operational metrics.

What You'll Need:
- 10+ years of experience in data engineering, ML platform development, or related fields, with at least 5 years in a leadership role.
- Familiarity with typical machine learning algorithms from an engineering perspective; familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
- Knowledge of ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience with modern MLOps platforms such as MLflow, Kubeflow, or SageMaker preferred.
- Experience with data platform products and frameworks like Apache Spark, Flink, or comparable tools in GCP, and with orchestration technologies (e.g., Kubernetes, Airflow).
- Experience with Apache Iceberg is a plus.
- Deep understanding of machine learning workflows, including model training, deployment, and monitoring.
- Familiarity with data visualization tools and techniques.
- Experience bootstrapping new teams and growing them to make a large impact.
- Experience operating as a site lead within a company is a bonus.
- Exceptional interpersonal and communication skills; ability to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Benefits Of Working At CrowdStrike:
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.

CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations, and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 4 weeks ago

Apply

8.0 - 10.0 years

12 - 17 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Role & responsibilities:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up-to-date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US, and create POCs and MVPs.
- Provide regular updates on task status and risks to the project manager.

Preferred candidate profile:
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Good to have:
- Experience in Google Cloud
- Healthcare industry experience
- Experience in Agile

Mandatory Skills: AWS Kinesis, Java, Kafka, Python, AWS Glue, AWS Lambda, AWS S3, Scala, Apache Spark Streaming, ANSI SQL

Posted 4 weeks ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office


We are looking for a skilled and experienced PySpark Tech Lead to join our dynamic engineering team. In this role, you will lead the development and execution of high-performance big data solutions using PySpark. You will work closely with data scientists, engineers, and architects to design and implement scalable data pipelines and analytics solutions. As a Tech Lead, you will mentor and guide a team of engineers, ensuring the adoption of best practices for building robust and efficient systems while driving innovation in the use of data technologies.

Key Responsibilities:
- Lead and Develop: Design and implement scalable, high-performance data pipelines and ETL processes using PySpark on distributed systems.
- Tech Leadership: Provide technical direction and leadership to a team of engineers, ensuring the delivery of high-quality solutions that meet both business and technical requirements.
- Architect Solutions: Develop and enforce best practices for architecture, design, and coding standards. Lead the design of complex data engineering workflows, ensuring they are optimized for performance and cost-effectiveness.
- Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements, translating them into scalable technical solutions.
- Optimization & Performance Tuning: Optimize large-scale data processing pipelines to improve efficiency and performance. Implement best practices for memory management, data partitioning, and parallelization in Spark.
- Code Review & Mentorship: Conduct code reviews to ensure high-quality, maintainable, and scalable code. Provide guidance and mentorship to junior and mid-level engineers.
- Innovation & Best Practices: Stay current on new data technologies and trends, bringing fresh ideas and solutions to the team. Implement continuous integration and deployment pipelines for data workflows.
- Problem Solving: Identify bottlenecks, troubleshoot, and resolve issues related to data quality, pipeline failures, and performance optimization.

Skills And Qualifications:
- Experience: 7+ years of hands-on experience in PySpark and large-scale data processing.
- Technical Expertise: Strong knowledge of PySpark, Spark SQL, and Apache Kafka. Experience with cloud platforms like AWS (EMR, S3), Google Cloud, or Azure. In-depth understanding of distributed computing, parallel processing, and data engineering principles.
- Data Engineering: Expertise in building ETL pipelines, data wrangling, and working with structured and unstructured data. Experience with databases (relational and NoSQL) such as SQL, MongoDB, or DynamoDB. Familiarity with data warehousing solutions and query optimization techniques.
- Leadership & Communication: Proven ability to lead a technical team, make key architectural decisions, and mentor junior engineers. Excellent communication skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.
- Problem Solving: Strong analytical skills with the ability to solve complex problems involving large datasets and distributed systems.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).

Posted 4 weeks ago

Apply

1.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer
Experience: 5-8 Years
Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable)
Time Zone: Aligned with UK time zone
Notice Period: Immediate joiners only

Role Overview: We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more.

Key Responsibilities:
- Design, build, and manage scalable and reliable data pipelines for real-time and batch processing.
- Implement robust data processing solutions using GCP services and open-source technologies.
- Create efficient data models and write high-performance analytics queries.
- Optimize pipelines for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineering teams to ensure smooth data integration and transformation.
- Maintain high data quality, enforce validation rules, and set up monitoring and alerting.
- Participate in code reviews and deployment activities, and provide production support.

Technical Skills Required:
- Cloud Platforms: GCP (Google Cloud Platform) - mandatory
- Key GCP Services: Dataproc, BigQuery, Dataflow
- Programming Languages: Python, Java, PySpark
- Data Engineering Concepts: Data ingestion, Change Data Capture (CDC), ETL/ELT pipeline design
- Strong understanding of distributed computing, data structures, and performance tuning

Required Qualifications & Attributes:
- 5-8 years of hands-on experience in data engineering roles
- Proficiency in building and optimizing distributed data pipelines
- Solid grasp of data governance and security best practices in cloud environments
- Strong analytical and problem-solving skills
- Effective verbal and written communication skills
- Proven ability to work independently and in cross-functional teams

Posted 4 weeks ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Pune

Work from Office


Data Integration Lead (Only US Citizens and Green Card Holders)

Company Overview: At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Key Responsibilities:
- Data Integration Strategy & Design: Lead the design and implementation of data integration strategies that align with business goals and data architecture.
- Project Management: Lead and manage data integration projects, ensuring the successful deployment of integration processes and tools.
- Technical Leadership: Provide technical leadership and mentoring to a team of data engineers and analysts. Troubleshoot and resolve complex data integration issues across various systems and platforms.
- Data Pipeline Development: Design, implement, and optimize robust data pipelines to integrate data from multiple internal and external sources (e.g., APIs, databases, cloud services).
- Data Quality & Governance: Ensure that data integrations maintain high standards of data quality and governance.
- Documentation & Reporting: Maintain detailed documentation of integration processes, workflows, and data sources. Provide regular progress reports to senior management on integration activities and performance.

Key Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, Engineering, or a related field.
- Experience: 12+ years of experience in data integration, data engineering, or related fields, with a strong background in leading data integration projects.

Posted 4 weeks ago


6.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office


AECOM is seeking a Graduate Environmental Data Specialist with 2+ years of experience to support our enterprise environmental data management system (EarthSoft EQuIS). The ideal candidate will have a strong understanding of environmental data and terminology, good communication skills, and the ability to collaborate with both technical and non-technical stakeholders. This position will offer a hybrid work arrangement to include both office and remote work schedules and will be based from our office located in Bengaluru, India. This role includes, but is not limited to, the following activities: Role and Responsibilities: The ideal candidate will be able to understand requests from environmental subject matter experts, be a good communicator able to share new functions and features with the users, and have a good understanding of environmental data and environmental data terminology. Works on issues of diverse scope where analysis of situation or data requires evaluation of a variety of factors, including an understanding of current business trends. Prepare and update environmental reports, with a sound understanding of environmental data, transforming and analyzing large and diversified environmental datasets. Ability to translate environmental problems through digital and data solutions. Commitment to data quality at all levels and scales. Experience in developing custom reports and user-requested queries and views on various platforms of the desired skill set. Responsive to client (user) requests. Excellent communication skills. Provide technical support to field sampling teams and act as a liaison between the project staff, analytical laboratory, data validator, and GIS analysts. Research state and federal regulations necessary to manage action levels or clean-up criteria.
Professional Qualification & Experience Desired: Bachelor's degree in environmental/civil/chemical engineering or science in a related discipline (or similar subject), desirable with a required focus on Environmental Data, and 2+ years of experience working in the environmental domain, preferably with relevant experience with environmental data. Skills Required: Ability to understand data management, using excellent computer skills to perform transformations in spreadsheets and databases. Expertise and experience with environmental data and database systems (MS SQL Server, MS Access). Expertise with relational databases such as EarthSoft's Environmental Quality Information System (EQuIS) / EIM / ESdat. Ability to continually analyze data at all stages for problems, logic, and consistency concerning field data collection and analytical reporting; expertise in EQuIS sub-tools (Collect, EDGE, ArcGIS) is highly desirable but not essential. Assist projects globally and deliver tasks with high quality and within deadlines. Managing data (geological, field data, chemical laboratory data) for technical report writing and interpretation as required by the team. Maintaining and updating various project dashboards using the web-based EQuIS Enterprise system, and preparing report-ready data tables, charts, and figures for internal review and external client reports. Use of visualization tools like Power BI to help management make effective decisions in the environmental domain is desirable but not essential. Programming and/or coding experience (e.g., Python, R) is a plus. Data engineering, AI/ML, and data science understanding is highly desirable but not essential; this can be from either academic or work experience. Intermediate to expert-level understanding of Office 365, Excel, Power Query, and Power Automate. Strong attention to detail with excellent analytical, judgment, and problem-solving capabilities.
Comfortable running meetings and presentations. Strong written and oral communication skills. Preferred Requirements: Master's degree in environmental/civil/chemical engineering or science in a related discipline (or similar subject), desirable with a required focus on Environmental Data. Minimum of 2-5 years of experience working in the environmental domain, preferably with relevant experience with environmental data. Additional Information: About AECOM: AECOM is proud to offer comprehensive benefits to meet the diverse needs of our employees. Depending on your employment status, AECOM benefits may include medical, dental, vision, life, AD&D, disability benefits, paid time off, leaves of absence, voluntary benefits, perks, flexible work options, well-being resources, employee assistance program, business travel insurance, service recognition awards, retirement savings plan, and employee stock purchase plan. AECOM is the global infrastructure leader, committed to delivering a better world. As a trusted professional services firm powered by deep technical abilities, we solve our clients' complex challenges in water, environment, energy, transportation and buildings. Our teams partner with public- and private-sector clients to create innovative, sustainable and resilient solutions throughout the project lifecycle, from advisory, planning, design and engineering to program and construction management. AECOM is a Fortune 500 firm that had revenue of $16.1 billion in fiscal year 2024. Learn more at aecom.com. What makes AECOM a great place to work? You will be part of a global team that champions your growth and career ambitions. Work on groundbreaking projects - both in your local community and on a global scale - that are transforming our industry and shaping the future. With cutting-edge technology and a network of experts, you'll have the resources to make a real impact.
Our award-winning training and development programs are designed to expand your technical expertise and leadership skills, helping you build the career you've always envisioned. Here, you'll find a welcoming workplace built on respect, collaboration and community - where you have the freedom to grow in a world of opportunity. As an Equal Opportunity Employer, we believe in your potential and are here to help you achieve it.

Posted 4 weeks ago


5.0 - 10.0 years

12 - 17 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. The ETL Developer is responsible for the design, development and maintenance of various ETL processes. This includes the design and development of processes for various types of data, potentially large datasets and disparate data sources that require transformation and cleansing to become a usable data set. This candidate should also be able to find creative solutions to complex and diverse business requirements. The developer should have a solid working knowledge of programming languages, data analysis, design, and ETL tool sets. The ideal candidate must possess a solid background in Data Engineering development technologies. The candidate must possess excellent written and verbal communication skills with the ability to collaborate effectively with business and technical experts in the team. Primary Responsibility: Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Graduate degree or equivalent experience. 6+ years of development, administration and migration experience in Azure Databricks and Snowflake. 6+ years of experience with data design/patterns - data warehousing, dimensional modeling and Lakehouse Medallion Architecture. 5+ years of experience working with Azure Data Factory. 5+ years of experience in setting up, maintaining and using Azure services such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure SQL Database, etc. 5+ years of experience working with Python and PySpark. 3+ years of experience with Kafka. Excellent communication skills to effectively convey technical concepts to both technical and non-technical stakeholders. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago


6.0 - 10.0 years

19 - 25 Lacs

Gurugram

Work from Office


Lead technology solution design and delivery. Create and maintain optimal data solutions architecture and AI models. Work with business partners to document complex company-wide acceptance test plans. Work concurrently on several projects, each with specific instructions that may differ. Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud 'big data' technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with stakeholders including the Executive, Product, Data and Design teams to assist with business-critical data insights, technical issues and support their data infrastructure needs. Keep our data separated and secure across national boundaries through multiple data centers and cloud regions. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Build processes supporting data transformation, data structures, metadata, dependency and workload management. Troubleshoot production support issues post release deployment and come up with solutions. Explain, socialize and vet designs with internal and external stakeholders. Undergraduate degree or equivalent experience.
Undergraduate Degree in Engineering or equivalent. Over 7 years of experience in Data Engineering and Advanced Analytics. Strong experience in building Generative AI-based solutions for data management (data pipelines, data standardization, data quality) and data analytics. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building and optimizing 'big data' data pipelines, architectures and data sets. Experience in cloud technologies and Snowflake. Experience in Kafka development. Experience in Python/Java programming. Experience in creating business data models. Experience in report development and dashboarding. Strong experience in driving Customer Experience. Experience in working with agile teams. Experience in Healthcare Clinical Domains. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. A successful history of manipulating, processing and extracting value from large, disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. Careers with Optum. Here's the idea. We built an entire organization around one giant objective: make the health system work better for everyone. So when it comes to how we use the world's large accumulation of health-related information, or guide health and lifestyle choices or manage pharmacy benefits for millions, our first goal is to leap beyond the status quo and uncover new ways to serve.
Optum, part of the UnitedHealth Group family of businesses, brings together some of the greatest minds and most advanced ideas on where health care has to go in order to reach its fullest potential. For you, that means working on high performance teams against sophisticated challenges that matter. Optum: incredible ideas in one incredible company and a singular opportunity to do your life's best work.(SM) Diversity creates a healthier atmosphere: UnitedHealth Group is an Equal Employment Opportunity/Affirmative Action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin, protected veteran status, disability status, sexual orientation, gender identity or expression, marital status, genetic information, or any other characteristic protected by law. UnitedHealth Group is a drug-free workplace. Candidates are required to pass a drug test before beginning employment.
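The "message queuing, stream processing" knowledge this posting asks for centers on one core idea: bucketing an unbounded event stream into windows. A minimal tumbling-window aggregation, with an invented event shape and window size, looks like this; real deployments would use Kafka consumers plus a stream engine:

```python
# Tumbling-window aggregation over timestamped events.
# Events are (ts_seconds, key, value) tuples; shapes are illustrative.

from collections import defaultdict

def tumbling_sum(events, window_secs=60):
    """Sum event values per (key, window_start) bucket."""
    buckets = defaultdict(float)
    for ts, key, value in events:
        window_start = (ts // window_secs) * window_secs  # floor to window
        buckets[(key, window_start)] += value
    return dict(buckets)

events = [(5, "claims", 2.0), (30, "claims", 3.0), (65, "claims", 1.0)]
print(tumbling_sum(events))  # {('claims', 0): 5.0, ('claims', 60): 1.0}
```

Production stream processors add what this sketch omits: late-arriving events, watermarks, and incremental state checkpointing.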

Posted 4 weeks ago


4.0 - 8.0 years

10 - 14 Lacs

Noida

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibility Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: B.E. / MCA / B.Tech / M.Tech / MS graduation (minimum 16 years of formal education; correspondence courses are not relevant). 2+ years of experience on Azure database offerings like SQL DB and Postgres DB, constructing data pipelines using Azure Data Factory, and design and development of analytics using Azure Databricks and Snowpark. 2+ years of experience on cloud-based DWs: Snowflake, Azure SQL DW. 2+ years of experience in data engineering and working on large data warehouses, including design and development of ETL / ELT. 3+ years of experience in constructing large and complex SQL queries on terabytes of warehouse database systems. Good knowledge of Agile practices - Scrum, Kanban. Knowledge of Kubernetes, Jenkins, CI / CD pipelines, SonarQube, Artifactory, Git, and unit testing. Main tech experience: Docker, Kubernetes and Kafka. Database: Azure SQL databases. Knowledge of Apache Kafka and data streaming. Main tech experience: Terraform and Azure. Ability to identify system changes and verify that technical system specifications meet the business requirements. Solid problem solving and analytical skills. Proven good communication and presentation skills. Proven good attitude and self-motivated. Preferred Qualifications: 2+ years of experience working with cloud-native monitoring and logging tools like Log Analytics. 2+ years of experience in scheduling tools on cloud, either using Apache Airflow, Logic Apps, or any native/third-party scheduling tool on cloud. Exposure to ATDD, Fortify, SonarQube. Unix scripting, DW concepts, ETL frameworks: Scala / Spark, DataStage. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago


5.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: As a Tech Lead, the candidate should be able to work as an individual contributor as well as a people manager. Be able to work on data pipelines and databases. Be able to work on data-intensive applications or systems. Be able to lead the team and have the soft skills for the same. Be able to review code and designs and mentor the team members. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Graduate degree or equivalent experience. Experience working on Databricks. Well versed with Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc. Proven ability to work on a big data technology stack on cloud and on-prem. Proven ability to communicate effectively with the team. Proven ability to lead and mentor the team. Proven ability to have soft skills for people management.

Posted 4 weeks ago


4.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. The Role in Brief: The Data Integration Analyst is responsible for implementing, maintaining, and supporting HL7 interfaces between customers, both external and internal, and Optum's integration platforms. The Engineer will work in a team, but will have individual assignments that he/she will work on independently. Engineers are expected to work under aggressive schedules, be self-sufficient, work within established standards, and be able to work on multiple assignments simultaneously. Candidates must be willing to work in a 24/7 environment and will be on-call as needed for critical issues.
Primary Responsibilities: Interface Design and Development: Interface Analysis: HL7 message investigation to determine gaps or remediate issues. Interface Design, Development, and Delivery: Interface planning, filtering, transformation, and routing. Interface Validation: Review, verification, and monitoring to ensure the delivered interface passes acceptance testing. Interface Go-Live and Transition to Support: Completing cutover events with teams / partners and executing turnover procedures for hand-off. Provider Enrollments: Provisioning and documentation of all integrations. Troubleshooting and Support: Issue Resolution: Troubleshoot issues raised by alarms, support, or project managers, from root cause identification to resolution. Support Requests: Handle tier 2 / 3 support requests and provide timely solutions to ensure client satisfaction. Enhancements / Maintenance: Ensuring stable and continuous data delivery. Collaboration and Communication: Stakeholder Interaction: Work closely with clients, project managers, product managers and other stakeholders to understand requirements and deliver solutions. Documentation: Contribute to technical documentation of specifications and processes. Communication: Effectively communicate complex concepts, both verbally and in writing, to team members and clients. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Basic Qualifications: Education: Bachelor's degree in Computer Science or any engineering field. Experience: 2+ years of experience working with HL7 data and integration engines or platforms. Technical Aptitude: Ability to learn new technologies. Skills: Proven solid analytical and problem-solving skills. Required Qualifications: Undergraduate degree or equivalent experience. HL7 Standards knowledge: HL7 v2, v3, CDA. Integration Tools knowledge: InterSystems IRIS, Infor Cloverleaf, NextGen Mirth Connect, or equivalent. Cloud Technology knowledge: Azure or AWS. Scripting and Structure: Proficiency in T-SQL and procedural scripting, XML, JSON. Preferred Qualifications: HL7 Standards knowledge: HL7 FHIR, US Core. Integration Tools knowledge: InterSystems Ensemble or IRIS, Caché. Scripting and Structure knowledge: ObjectScript, Perl, TCL, JavaScript. US Health Care knowledge: Health Information Systems - working knowledge. Clinical Data Analysis knowledge: Clinical Processes - understanding of clinical processes and vocabulary. Soft Skills: Analytical and Creative - highly analytical, curious, and creative. Organized - proven solid organization skills and attention to detail. Ownership - takes ownership of responsibilities. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
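The "HL7 message investigation" this analyst role describes usually starts with splitting a v2 message into segments and fields. The message below is fabricated, and real feeds also need encoding-character, escape, and repetition handling, but a minimal sketch of the field access looks like this:

```python
# Minimal HL7 v2 segment/field access. Pipe (|) separates fields and
# carriage return separates segments; the sample message is made up.

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_id: [fields]} (first occurrence wins)."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], fields)
    return segments

msg = ("MSH|^~\\&|LAB|HOSP|EHR|HOSP|202401011200||ADT^A01|123|P|2.5\r"
       "PID|1||MRN001||DOE^JANE")
seg = parse_hl7(msg)
print(seg["PID"][3])  # 'MRN001' - PID-3, patient identifier list
print(seg["MSH"][8])  # 'ADT^A01' - list index 8 is MSH-9 (message type),
                      # because the field separator itself counts as MSH-1
```

Engines like Mirth Connect or Cloverleaf wrap exactly this segment/field addressing (PID-3, MSH-9) in their transformation and routing steps.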

Posted 4 weeks ago


4.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Data Pipeline Management: Oversee the design, deployment, and maintenance of data pipelines to ensure they are optimized and highly available. Data Collection and Storage: Build and maintain systems for data collection, storage, and processing. ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes to convert raw data into usable formats. Collaboration: Work closely with data analysts, data scientists, and other stakeholders to gather technical requirements and ensure data quality. System Monitoring: Monitor existing metrics, analyze data, and identify opportunities for system and process improvements. Data Governance: Ensure data compliance and security needs are met in system construction. Mentorship: Oversee and mentor junior data engineers, ensuring proper execution of their duties. Reporting: Develop queries for ad hoc business projects and ongoing reporting. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Bachelor's degree in engineering or equivalent experience. Minimum 3-4 years of experience in SQL (joins, stored procedures, performance tuning), Azure, PySpark, Databricks and the Big Data ecosystem. Flexibility to work in different shift timings. Flexibility to work as a DevOps Engineer. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
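The SQL join and aggregation skills this posting lists can be shown end to end on an in-memory SQLite database. Table and column names here are invented for illustration; the same query shape carries over to Azure SQL or Databricks SQL:

```python
# SQL join + GROUP BY aggregation on an in-memory SQLite database.
# Schema and data are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE claims  (member_id INTEGER, amount REAL);
    INSERT INTO members VALUES (1, 'south'), (2, 'north');
    INSERT INTO claims  VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")
rows = conn.execute("""
    SELECT m.region, SUM(c.amount) AS total
    FROM members m
    JOIN claims c ON c.member_id = m.id
    GROUP BY m.region
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('south', 15.0), ('north', 7.5)]
```

Performance tuning of the same query on a warehouse typically means checking the join key's distribution and whether an index (or partition pruning) can replace a full scan.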

Posted 4 weeks ago


7.0 - 10.0 years

13 - 18 Lacs

Noida

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. The Principal Data Engineer is responsible for management of the full technical lifecycle of the program for given modules, with a technical focus, including the strategy, design, development, and end-of-life of new, existing or acquired applications. Primary Responsibilities: Lead the design, development, and maintenance of robust data pipelines and infrastructure for cloud cost management. Architect and implement scalable data models and dbt workflows to support advanced analytics and reporting. Conduct complex data analysis using SQL, Python (in Jupyter notebooks), and other tools to identify cost optimization opportunities, trends, and anomalies. Develop and automate advanced reporting and dashboards using tools like Power BI, incorporating predictive analytics and machine learning insights. Lead the development and implementation of AI and bot strategies to automate data analysis tasks and enhance team efficiency. Mentor and guide junior team members in data engineering best practices, cloud technologies, and analytical techniques. Collaborate with data scientists and engineers to build and deploy machine learning models for forecasting, anomaly detection, and other FinOps use cases. Champion data governance and ensure data quality across the cloud cost management platform. Drive continuous improvement within the team, identifying opportunities to enhance processes and tools. Partner with
stakeholders across the organization to understand their data needs and deliver actionable insights. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. 5+ years of experience in data engineering or data analysis, with a focus on cloud cost management. Deep expertise in SQL and Excel. Proven experience building data pipelines. Solid experience with reporting and dashboarding tools like Power BI. Proven excellent leadership, mentorship, communication, and problem-solving skills. Preferred Qualifications: Extensive experience with major cloud service providers (GCP, AWS, Azure) and cloud cost management tools. Hands-on expertise with a high-level programming language (e.g., Python). Extensive experience working with Jupyter Notebooks or similar tools. Experience leading the development and implementation of AI and bot strategies for data analysis automation. Solid experience with data modeling, dbt, and data warehousing concepts. Experience architecting and building robust data pipelines using Azure Data Factory and Databricks (or similar tools on other cloud platforms). At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Gurugram

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Lead AI or ML projects from conception to deployment, ensuring high-quality and scalable solutions
- Design and develop predictive and prescriptive models using advanced machine learning techniques
- Apply knowledge of statistics, programming, data modeling, and simulation to recognize patterns and identify opportunities
- Collaborate with engineers and other stakeholders to build statistical models and apply machine learning techniques for targeted solutions
- Conduct exploratory data analysis on unstructured, diverse datasets to drive innovative solutions
- Use analytics and statistical software such as SQL, R, Python, Hadoop, and others to perform analysis and interpret data
- Provide expertise in modeling and statistical approaches, including regression methods, decision trees, deep learning, NLP techniques, and uplift modeling
- Develop and evaluate predictive and prescriptive models and advanced algorithms to extract optimal value from data
- Communicate analysis and findings through interactive visualizations, documents, and presentations
- Mentor and guide junior data scientists, fostering a culture of continuous learning and improvement
- Stay updated with the latest advancements in AI/ML and data science, and apply them to improve existing solutions
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Master's or Bachelor's degree in Computer Science, Statistics, Engineering, Mathematics, or a related field
- Proven experience in AI/ML projects, with hands-on experience in coding and deploying AI/ML solutions
- Experience deploying and managing AI/ML applications, preferably on Azure or GCP
- Proficiency in statistical techniques, machine learning methods, and relevant technologies
- Proven solid programming skills in languages such as Python, R, and SQL, and experience with big data tools like Hadoop
- Proven excellent analytical and problem-solving skills, with the ability to work with large amounts of data
- Proven effective communication skills, with the ability to present complex findings in a clear and concise manner
- Proven self-motivation, curiosity, and innovation, with a strong ability to work independently and as part of a team

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago

Apply

4.0 - 7.0 years

9 - 14 Lacs

Noida

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
These are some of the basic responsibilities; a more detailed list will be shared with the Talent Executive during the hiring process.
- Data modeling, big data development, Extract, Transform and Load (ETL) development, storage engineering, data warehousing, and data provisioning
- Platform-as-a-Service and Cloud solutions with a focus on data stores and associated ecosystems
- Partner with stakeholders to understand requirements and develop business intelligence or database tools to fetch data, provide insights, and present recommendations
- Create specifications and transformation jobs to bring data into a proper structure, and conduct analysis to validate the accuracy and quality of the data
- Create segmentation, dashboards, data visualizations, decision aids, and business case analysis to support the organization
- Collaborate with stakeholders on ad hoc and standard reporting requests
- Identify appropriate data sources, metrics, and tools for providing required information according to clients' requests
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Graduate or postgraduate degree, preferably with a major in Computer Science
- Experience with a Cloud Platform tool like Azure
- Experience with visualization tools like Tableau, DOMO, or QlikView
- Experience in a similar role/domain (HR Analytics and/or any HR vertical), and/or experience in a BI/Data Analytics team with exposure to HR data and PeopleSoft tables
- Hands-on knowledge of Data Transformations/Data Quality
- Knowledge of database architecture, engineering, design, optimization, security, and administration
- Knowledge of various HR Analytics datasets and metrics, including but not restricted to Demographics, Hires & Turnover, DEI, Surveys, etc.
- Well versed in Data Engineering Insights and Data Analysis

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Noida

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for an Associate Manager Data Analyst in NICE Reporting. This role works on BI & Claims reporting and is responsible for PowerBI dashboard development, helping us drive automation using Python and Azure, and providing delivery and operational support in BI & Claims reporting. The selected individual will produce reports and develop dashboards in PowerBI for NICE/BI and Claims reporting, and will migrate existing manual reports to PowerBI as part of a modernization exercise. You will work closely with external and internal stakeholders, develop and build data processing solutions, solve problems, and provide ad hoc support to resolve issues.

Primary Responsibilities
- Develop end-to-end dashboards by gathering business requirements in an iterative/agile model
- Collaborate with cross-functional teams and project managers from design to implementation, monitoring, and maintenance
- Design business analysis and data migration for departmental use
- Maintain and update dashboards to ensure accuracy
- Regularly review data reports to identify and resolve errors
- Analyze and collect data for various business reports, and build data models in Azure and pipelines using Azure Data Factory
- Create insightful business reports and communicate results
- Coordinate with supervisors/clients and manage daily activities of business support and technical teams
- Build business intelligence tools, conduct analysis to identify patterns and trends, and ensure data quality using tools like Power BI, Alteryx, and Python
- Analyze large datasets to extract meaningful insights and trends using Python and data science techniques
- Partner with stakeholders to understand data requirements and develop tools/models such as segmentation, dashboards, data visualizations, decision aids, and business case analysis
- Analyze and document requirements, and evaluate and build effective solutions
- Query data sources for analyses, detailed data profiling, and reporting
- Research complex functional data/analytical issues
- Perform source system analysis and gap analysis between source and target systems
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or master's degree in engineering, with experience in Data Visualization, Data Engineering, and Data Science
- 8+ years of relevant industry experience (healthcare preferred)
- Experience developing Power BI dashboards and transitioning work
- Experience in requirements gathering and data warehousing systems analysis
- Proficient in designing and developing scalable data pipelines, optimizing data storage solutions, and ensuring data security for cloud applications and services using Azure technologies
- Proficient in SQL, Power BI, Python, and machine learning
- Solid understanding of AI and machine learning, and solid knowledge of Python programming, including libraries for data analysis (e.g., Pandas, NumPy) and machine learning (e.g., scikit-learn, TensorFlow, Keras)
- Solid verbal and written communication skills to effectively collaborate with team members and stakeholders
- Solid skills in process support, automation, and business insights
- Ability to organize data, recognize trends, and develop innovative approaches
- Analytical skills for data-driven report development, plus excellent critical thinking and attention to detail

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 4 weeks ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Hyderabad

Work from Office


Primary Responsibility
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Graduate degree or equivalent experience
- 8+ years of development, administration, and migration experience in Azure Databricks and Snowflake
- 8+ years of experience with data design/patterns: data warehousing, dimensional modeling, and Lakehouse Medallion Architecture
- 5+ years of experience working with Azure Data Factory
- 5+ years of experience in setting up, maintaining, and using Azure services such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure SQL Database, etc.
- 5+ years of experience working with Python and PySpark
- 3+ years of experience with Kafka
- Excellent communication skills to effectively convey technical concepts to both technical and non-technical stakeholders

Posted 4 weeks ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Noida

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Be accountable for the data engineering lifecycle, including research, proof of concepts, architecture, design, development, test, deployment, and maintenance
- Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability
- Layer instrumentation into the development process so that data pipelines can be monitored to detect internal problems before they result in user-visible outages or data quality issues
- Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues
- Embrace continuous learning of engineering practices to ensure adoption of industry best practices and technology, including DevOps, Cloud, and Agile thinking
- Drive tech debt reduction and tech transformation, including open source adoption, cloud adoption, and HCP assessment and adoption
- Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Undergraduate degree or equivalent experience
- Experience with data analytics tools like Tableau, Power BI, or similar
- Experience in optimizing data processing workflows for performance and cost-efficiency
- Proficient in design and documentation of data exchanges across various channels, including APIs, streams, and batch feeds
- Proficient in source-to-target mapping and gap analysis, applying data transformation rules based on an understanding of business rules and data structures
- Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR)
- Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks
- Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA
- Proven ability to develop and implement scripts to maintain and monitor performance tuning
- Proven ability to design scalable job scheduler solutions and advise on appropriate tools/technologies to use
- Proven ability to work across multiple domains to define and build data models
- Proven ability to understand all the connected technology services and their impacts
- Proven ability to assess designs and propose options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility

Posted 4 weeks ago

Apply