
984 Databricks Jobs - Page 29

JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 10.0 years

5 - 15 Lacs

Chennai

Work from Office


About the Role
We are seeking a highly skilled Senior Azure Data Solutions Architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions.

Key Responsibilities
• Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB).
• Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
• Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg (a minimal PySpark sketch follows this posting).
• Integrate data governance, quality, and security best practices into all architecture designs.
• Support analytics and machine learning initiatives through structured data pipelines and platforms.
• Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
• Drive CI/CD integration with Databricks using Azure DevOps and tools like dbt.
• Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
• Stay current with Azure platform advancements and recommend improvements.

Required Skills & Experience
• Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
• Expertise in data modeling and design (relational, dimensional, NoSQL).
• Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
• Proficiency in Python, SQL, Scala, and/or Java.
• Strong knowledge of data governance, security, and compliance frameworks.
• Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
• Familiarity with BI and analytics tools such as Power BI or Tableau.
• Excellent communication, collaboration, and stakeholder management skills.
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.

Preferred Qualifications
• Experience in regulated industries (finance, healthcare, etc.).
• Familiarity with data cataloging, metadata management, and machine learning integration.
• Experience guiding teams and presenting architectural strategies to senior leadership.

Why Join Us?
• Work on cutting-edge cloud data platforms in a collaborative, innovative environment.
• Lead strategic data initiatives that impact enterprise-wide decision-making.
• Competitive compensation and opportunities for professional growth.
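The ETL/ELT bullet above centers on Delta tables on Azure Databricks. As a hedged illustration only (not this employer's code), here is a minimal PySpark sketch of a bronze-layer ingest; the storage account, container, and table names are invented placeholders:

```python
# Minimal sketch of a bronze-layer ELT step, assuming a Databricks runtime
# with Delta Lake available. All paths and names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/")  # assumed container
)

bronze = (
    raw.withColumn("ingested_at", F.current_timestamp())
       .withColumn("ingest_date", F.current_date())
)

(
    bronze.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")  # daily partitions keep file counts bounded
    .saveAsTable("bronze.sales")  # assumed catalog table
)
```

Partitioning on a derived date rather than a raw timestamp keeps the partition count manageable, which is the usual Delta guidance.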

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office


Good-to-have skills: Cloud, SQL, data analysis
Location: Pune - Kharadi - WFO - 3 days/week

Job Description:
We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.

Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI (see the FastAPI sketch after this posting).
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.

Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, with familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.

Good-to-Have Skills:
- Experience with modern data solutions on Azure.
- Knowledge of the principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.

Educational Qualifications:
Bachelor's/Master's degree or equivalent with a focus on software engineering.

If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
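Since the role calls for RESTful services in Flask, Django, or FastAPI, here is a deliberately small FastAPI sketch (not the company's code; the route and model names are invented) showing the shape of such a service:

```python
# Hypothetical FastAPI service sketch; run with: uvicorn main:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")

class Order(BaseModel):
    order_id: int
    amount: float

_ORDERS: dict[int, Order] = {}  # in-memory store, for the sketch only

@app.post("/orders")
def create_order(order: Order) -> Order:
    _ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def read_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]
```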

Posted 1 month ago

Apply

5.0 - 7.0 years

25 - 30 Lacs

Bengaluru

Work from Office


Job Title: Senior Data Engineer / Technical Lead
Location: Bangalore
Employment Type: Full-time

Role Summary
We are seeking a highly skilled and motivated Senior Data Engineer/Technical Lead to take ownership of the end-to-end delivery of a key project involving data lake transitions, data warehouse maintenance, and enhancement initiatives. The ideal candidate will bring strong technical leadership, excellent communication skills, and hands-on expertise with modern data engineering tools and platforms. Experience in Databricks and JIRA is highly desirable. Knowledge of supply chain and finance domains is a plus, or a willingness to quickly ramp up in these areas is expected.

Key Responsibilities
Delivery Management
- Lead and manage data lake transition initiatives under the Gold framework.
- Oversee delivery of enhancements and defect fixes related to the enterprise data warehouse.
Technical Leadership
- Design and develop efficient, scalable data pipelines using Python, PySpark, and SQL.
- Ensure adherence to coding standards, performance benchmarks, and data quality goals.
- Conduct performance tuning and infrastructure optimization for data solutions.
- Provide code reviews, mentorship, and technical guidance to the engineering team.
Collaboration & Stakeholder Engagement
- Collaborate with business stakeholders (particularly the Laboratory Products team) to gather, interpret, and refine requirements.
- Communicate technical solutions and project progress clearly to both technical and non-technical audiences.
Tooling and Technology Use
- Leverage tools such as Databricks, Informatica, AWS Glue, Google DataProc, and Airflow for ETL and data integration.
- Use JIRA to manage project workflows, track defects, and report progress.
Documentation and Best Practices
- Create and review documentation including architecture, design, testing, and deployment artifacts.
- Define and promote reusable templates, checklists, and best practices for data engineering tasks.
Domain Adaptation
- Apply or gain knowledge in supply chain and finance domains to enhance project outcomes and align with business needs.

Skills and Qualifications
Technical Proficiency
- Strong hands-on experience in Python, PySpark, and SQL.
- Expertise with ETL tools such as Informatica, AWS Glue, Databricks, and Google Cloud DataProc.
- Deep understanding of data warehousing solutions (e.g., Snowflake, BigQuery, Delta Lake, Lakehouse architectures).
- Familiarity with performance tuning, cost optimization, and data modeling best practices.
Platform & Tools
- Proficient in working with cloud platforms like AWS, Azure, or Google Cloud.
- Experience in version control and configuration management practices.
- Working knowledge of JIRA and Agile methodologies.
Certifications (Preferred but not required)
- Certifications in cloud technologies, ETL platforms, or a relevant domain (e.g., AWS Data Engineer, Databricks Data Engineer, Supply Chain certification).

Expected Outcomes
- Timely and high-quality delivery of data engineering solutions.
- Reduction in production defects and improved pipeline performance.
- Increased team efficiency through reuse of components and automation.
- Positive stakeholder feedback and high team engagement.
- Consistent adherence to SLAs, security policies, and compliance guidelines.

Performance Metrics
- Adherence to project timelines and engineering standards.
- Reduction in post-release defects and production issues.
- Improvement in data pipeline efficiency and resource utilization.
- Resolution time for pipeline failures and data issues.
- Completion of required certifications and training.

Preferred Background
- Background or exposure to supply chain or finance domains.
- Willingness to work during morning US East hours.
- Ability to work independently and drive initiatives with minimal oversight.

Required Skills: Databricks, Data Warehousing, ETL, SQL

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office


New Opportunity: Full Stack Engineer
Location: Pune (Onsite)
Company: Apptware Solutions
Experience: 4+ years

We're looking for a skilled Full Stack Engineer to join our team. If you have experience in building scalable applications and working with modern technologies, this role is for you.

Role & Responsibilities
- Develop product features to help customers easily transform data.
- Design, implement, deploy, and support client-side and server-side architectures, including web applications, CLI, and SDKs.

Minimum Requirements
- 4+ years of experience as a Full Stack Developer or similar role.
- Hands-on experience in a distributed engineering role with direct operational responsibility (on-call experience preferred).
- Proficiency in at least one back-end language (Node.js, TypeScript, Python, or Go).
- Front-end development experience with Angular or React, HTML, CSS.
- Strong understanding of web applications, backend APIs, CI/CD pipelines, and testing frameworks.
- Familiarity with NoSQL databases (e.g., DynamoDB) and AWS services (Lambda, API Gateway, Cognito, etc.).
- Bachelor's degree in Computer Science, Engineering, Math, or equivalent experience.
- Strong written and verbal communication skills.

Preferred Skills
- Experience with AWS Glue, Spark, or Athena.
- Strong understanding of SQL and data engineering best practices.
- Exposure to analytical EDWs (Snowflake, Databricks, BigQuery, Cloudera, Teradata).
- Experience in B2B applications, SaaS offerings, or startups is a plus.

(ref: hirist.tech)

Posted 1 month ago

Apply

2.0 - 5.0 years

3 - 12 Lacs

Kolkata, Pune, Bengaluru

Work from Office


Company Name: Tech Mahindra
Experience: 2-5 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 Rounds
Notice Period: Immediate to 30 days

Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks.
- Collaborate with cross-functional teams to understand business requirements and design scalable solutions for big data processing using PySpark on Azure Data Lake Storage (see the PySpark sketch after this posting).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in real time.
- Ensure high availability of the system by implementing monitoring tools and performing regular maintenance tasks.

Job Requirements:
- 2-5 years of experience in designing and developing large-scale data systems on the Microsoft Azure platform.
- Strong understanding of Azure Data Factory (ADF), Azure Databricks, and Azure Data Lake Storage concepts.
- Proficiency in writing efficient Python code using PySpark for big data processing.
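As a rough, hypothetical illustration of the PySpark-on-ADLS work described above (the storage account, containers, and column names are placeholders, and the cluster is assumed to already have access configured), a Databricks job might aggregate curated parquet data like this:

```python
# Hypothetical PySpark aggregation over ADLS Gen2 data; paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls_daily_agg").getOrCreate()

orders = spark.read.parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

daily.write.mode("overwrite").parquet(
    "abfss://marts@examplelake.dfs.core.windows.net/daily_orders/"
)
```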

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 16 Lacs

Hyderabad, Bengaluru

Work from Office


Company Name: Tech Mahindra
Experience: 5-7 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 Rounds
Notice Period: Immediate to 30 days

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to integrate various data sources into a centralized data lake.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business objectives.
- Develop complex SQL queries to extract insights from large datasets stored in Azure Databricks or other relational databases.
- Troubleshoot issues related to ADF pipeline failures, data quality problems, and performance optimization.

Job Requirements:
- 5-7 years of experience in designing and developing large-scale data pipelines using ADF.
- Strong understanding of Azure Databricks, including its architecture, features, and best practices.
- Proficiency in writing complex SQL queries for querying large datasets stored in relational databases.
- Experience working with PySpark on AWS EMR clusters.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Kochi

Work from Office


Job Summary:
We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.

Key Responsibilities:
- Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
- Create and maintain data models, semantic models, and data marts for analytical and reporting purposes.
- Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
- Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
- Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
- Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
- Work with business stakeholders to understand data requirements and translate them into technical specifications.
- Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
- Provide mentorship and technical guidance to junior data engineers.

Required Skills:
- Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
- Proficiency in Power BI, including building reports, dashboards, and working with semantic models.
- Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
- Deep experience with SQL, stored procedures, and query optimization.
- Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
- Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 30 Lacs

Pallavaram

Work from Office


Data Engineering Lead
Company Name: Blackstraw.ai
Office Location: Chennai (Work from Office)
Job Type: Full-time
Experience: 10-15 Years
Candidates who can join immediately will be preferred.

Job Description:
As a lead data engineer you will oversee data architecture, ETL processes, and analytics pipelines, ensuring efficiency, scalability, and quality.

Key Responsibilities:
- Work with clients to understand their data, and build the data structures and pipelines based on that understanding.
- Work on the application end to end, collaborating with UI and other development teams.
- Work with various cloud providers such as Azure and AWS.
- Engineer data using the Hadoop/Spark ecosystem.
- Design, build, optimize, and support new and existing data pipelines.
- Orchestrate jobs using tools such as Oozie and Airflow (see the Airflow sketch after this posting).
- Develop programs for cleaning and processing data.
- Build data pipelines to migrate and load data into HDFS, either on-prem or in the cloud.
- Develop data ingestion, processing, and integration pipelines effectively.
- Create Hive data structures and metadata, and load data into data lakes / big data warehouse environments.
- Performance-tune data pipelines to minimize cost.
- Keep code version control and the git repository up to date.
- Explain the data pipeline to internal and external stakeholders.
- Build and maintain CI/CD for the data pipelines.
- Manage unit testing of all data pipelines.

Tech Stack:
- Minimum of 5+ years of working experience with Spark and Hadoop ecosystems.
- Minimum of 4+ years of working experience designing data streaming pipelines.
- Expert in Python, Scala, or Java.
- Experience in data ingestion and integration into a data lake using Hadoop ecosystem tools such as Sqoop, Spark, SQL, Hive, Airflow, etc.
- Experience optimizing (performance tuning) data pipelines.
- Minimum of 3+ years of experience with NoSQL and Spark Streaming.
- Knowledge of Kubernetes and Docker is a plus.
- Experience with cloud services, either Azure or AWS.
- Experience with on-prem distributions such as Cloudera, Hortonworks, or MapR.
- Basic understanding of CI/CD pipelines.
- Basic knowledge of the Linux environment and commands.

Preferred Qualifications:
- Bachelor's degree in computer science or a related field.
- Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, API, Hive, Oozie, Airflow, etc.
- Solid experience in all phases of the SDLC (plan, design, develop, test, release, maintain, and support), with 10+ years of experience.
- Hands-on experience using Azure's data engineering stack.
- Implemented projects using programming languages such as Scala or Python.
- Working experience with complex SQL data-merging techniques, such as windowing functions.
- Hands-on experience with on-prem distribution tools such as Cloudera, Hortonworks, or MapR.
- Excellent communication, presentation, and problem-solving skills.

Key Traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate with onshore and offshore teams.
- A proactive problem solver who takes on challenges as they come.
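For the orchestration duties above, a minimal Airflow 2.x DAG might look like the following hedged sketch; the DAG id, schedule, and shell commands are illustrative placeholders, not this employer's jobs:

```python
# Hypothetical Airflow DAG: run a Spark ingest, then load Hive partitions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # nightly at 02:00
    catchup=False,
) as dag:
    spark_ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit /opt/jobs/ingest.py",  # assumed job path
    )
    load_hive_partitions = BashOperator(
        task_id="load_hive_partitions",
        bash_command="beeline -f /opt/jobs/load_partitions.hql",  # assumed script
    )
    spark_ingest >> load_hive_partitions
```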

Posted 1 month ago

Apply

12.0 - 18.0 years

50 - 80 Lacs

Hyderabad

Work from Office


Executive Director - Data Management

Company Overview
Accordion is a global private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic work, rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and make your mark on our company.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) practice in India delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. D&A team members deliver data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more. Working at Accordion in India means joining 800+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment to transform how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Join us and experience a better way to work!

Location: Hyderabad, Telangana

Role Overview:
Accordion is looking for an experienced Enterprise Data Architect to lead the strategy, design, and implementation of data architectures across all its data management projects. The architect will be part of the technology team and will possess in-depth knowledge of distinct types of data architectures and frameworks, including distributed large-scale implementations. They will collaborate closely with the client partnership team to design and recommend robust, scalable data architectures to clients, and work with engineering teams to implement them in on-premises or cloud-based environments. They will be a data evangelist, conducting knowledge-sharing sessions across the company on various data management topics to spread awareness of data architecture principles and improve the overall capabilities of the team. The Enterprise Data Architect will also conduct design review sessions to validate and verify implementations, emphasize and implement best practices, and maintain exhaustive documentation in line with the design philosophy. They will have excellent communication skills and industry-standard certification in data architecture.

What You Will Do:
- Partner with clients to understand their business and create comprehensive requirements to enable development of optimal data architecture.
- Translate business requirements into logical and physical designs of databases, data warehouses, and data streams.
- Analyze, plan, and define a data architecture framework, including security, reference data, metadata, and master data.
- Create elaborate data management processes and procedures and consult with senior management to share the knowledge.
- Collaborate with client and internal project teams to devise and implement data strategies, build models, and assess stakeholder needs and goals.
- Develop application programming interfaces (APIs) to extract and store data in the most optimal manner.
- Align business requirements with the technical architecture and collaborate with the technical teams for implementation and tracking purposes.
- Research and track the latest developments in the field to maintain expertise on current best practices and techniques within the industry.

Ideally, You Have:
- An undergraduate degree (B.E./B.Tech.), preferably from a tier-1/tier-2 college.
- 12+ years of experience in a related field.
- Experience designing logical and physical data architectures in various RDBMS (SQL Server, Oracle, MySQL, etc.), non-RDBMS (MongoDB, Cassandra, etc.), and data warehouse (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.) environments.
- Deep knowledge of, and implementation experience with, modern data warehouse principles (Kimball and Inmon models, or Data Vault), including their application based on data quality requirements.
- In-depth knowledge of at least one cloud-based infrastructure (AWS, Azure, Google Cloud) for solution design, development, and delivery (mandatory).
- Proven ability to take initiative, be innovative, and drive work through to completion.
- An analytical mind with a strong problem-solving attitude.
- Excellent communication skills, both written and verbal.
- Any Enterprise Data Architect certification is an added advantage.

Why Explore a Career at Accordion:
- High-growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor's consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings, and celebrations.
- Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Chennai

Hybrid


Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources or vendors to provide holistic insights from the data.
• Build and manage data lake and data warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform exploratory data analysis (EDA) to troubleshoot data-related issues and assist in their resolution (see the EDA sketch after this posting).
• Experience in client interaction, both oral and written, is expected.
• Experience mentoring juniors and providing required guidance to the team.

Required Technical Skills:
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in data warehousing, ETL, data modeling, building ETL pipelines, and data architecture.
• Proficiency in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services like AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and big data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the pharma domain / life sciences commercial data operations.

Qualifications:
• Bachelor's or master's degree in Engineering, MCA, or equivalent.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on pharma syndicated data such as IQVIA, Veeva, and Symphony; claims, CRM, sales, open data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.

Location:
• Chennai, India
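A small, hypothetical EDA pass of the kind used to troubleshoot data issues might look like this; the file and column names are placeholders, not client data:

```python
# Quick data profiling to locate nulls and duplicated keys in an extract.
import pandas as pd

df = pd.read_csv("sales_extract.csv")  # assumed local extract

print(df.shape)                                       # rows x columns
print(df.dtypes)                                      # column types
print(df.isna().sum().sort_values(ascending=False))   # nulls per column

# Duplicated business keys are a common root cause of downstream mismatches.
dupes = df[df.duplicated(subset=["order_id"], keep=False)]
print(f"{len(dupes)} rows share a duplicated order_id")
```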

Posted 1 month ago

Apply

1.0 - 5.0 years

7 - 11 Lacs

Bengaluru

Work from Office


Role: Data Scientist
Location: Bangalore
Timings: Full Time (as per company timings)
Notice Period: Immediate joiners only
Experience: 5 Years

We are looking for a highly motivated and skilled Data Scientist to join our growing team. The ideal candidate should possess a robust background in data science, machine learning, and statistical analysis, with a passion for uncovering insights from complex datasets. This role demands hands-on experience in Python and various ML libraries, strong business acumen, and effective communication skills for translating data insights into strategic decisions.

Key Responsibilities
- Develop, implement, and optimize machine learning models for predictive analytics and business decision-making (see the scikit-learn sketch after this posting).
- Work with both structured and unstructured data to extract valuable insights and patterns.
- Leverage Python and standard ML libraries (NumPy, Pandas, SciPy, Scikit-Learn, TensorFlow, PyTorch, Keras, Matplotlib) for data modeling and analysis.
- Design and build data pipelines for streamlined data processing and integration.
- Conduct exploratory data analysis (EDA) to identify trends, anomalies, and business opportunities.
- Partner with cross-functional teams to embed data-driven strategies into core business operations.
- Create compelling data stories through visualization techniques to convey findings to non-technical stakeholders.
- Stay abreast of the latest ML/AI innovations and industry best practices.

Required Skills & Qualifications
- 5 years of proven experience as a Data Scientist working in machine learning.
- Proficient in Python and key data science libraries.
- Experience with ML frameworks such as TensorFlow, Keras, or PyTorch.
- Strong understanding of SQL and relational databases.
- Solid grounding in statistical analysis, hypothesis testing, and feature engineering.
- Familiarity with data visualization tools like Matplotlib, Seaborn, or Plotly.
- Demonstrated ability to work with large datasets and solve complex analytical problems.
- Excellent communication and data storytelling skills.
- Knowledge of Marketing Mix Modeling is a plus.

Preferred Skills
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Exposure to big data technologies such as Hadoop, Spark, or Databricks.
- Familiarity with NLP, computer vision, or deep learning.
- Understanding of A/B testing and experimental design methodologies.
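To make the listed skills concrete, here is a small, self-contained scikit-learn example using a toy dataset bundled with the library (not employer data): split, fit a baseline model, and score on a holdout set.

```python
# Baseline classification workflow: stratified split, random forest, AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"holdout AUC: {roc_auc_score(y_test, probs):.3f}")
```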

Posted 1 month ago

Apply

1.0 - 5.0 years

10 - 14 Lacs

Pune

Work from Office


Technical Project Manager

Company Overview:
At Codvo, software and people transformations go hand in hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Responsibilities:
- Lead and manage end-to-end data and analytics projects, ensuring timely delivery and alignment with business objectives.
- Collaborate with cross-functional teams, including data scientists, analysts, engineers, and business stakeholders, to define project scope, goals, and deliverables.
- Develop detailed project plans, including timelines, milestones, resource allocation, and risk management strategies.
- Monitor project progress, identify potential issues, and implement corrective actions to ensure project success.
- Facilitate effective communication and collaboration among team members and stakeholders.
- Ensure data quality, integrity, and security throughout the project lifecycle.
- Stay updated with the latest trends and technologies in data and analytics to drive continuous improvement and innovation.
- Provide regular project updates and reports to senior management and stakeholders.

The role requires effective leadership, interpersonal, and communication skills; the ability to stay calm and deliver under pressure; strategic thinking, with cost control and management experience a plus; strong knowledge of change, risk, and resource management; a thorough understanding of project and program management techniques and methods from initiation to closure; working knowledge of program/project management tools like JIRA, Azure DevOps Boards, Basecamp, and MS Project; excellent communication skills and clarity of thought; and excellent problem-solving ability, with escalation-handling experience.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field; a Master's degree is a plus.
- Proven experience as a Technical Project Manager, preferably in data and analytics projects.
- Strong understanding of data management, analytics, and visualization tools and technologies.
- Excellent project management skills, including the ability to manage multiple projects simultaneously.
- Proficiency in project management software (e.g., JIRA, MS Project, ADO).
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work effectively in a fast-paced, dynamic environment.

Preferred Skills:
- Experience with big data technologies (e.g., Hadoop, Spark, Azure, Databricks).
- Knowledge of machine learning and artificial intelligence.
- Certification in project management (e.g., PMP, PRINCE2).

Work Location: Remote / Pune
Work Timings: 2.30 pm - 11.30 pm

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- As a Tech Lead, work as an individual contributor as well as a people manager
- Work on data pipelines and databases
- Work on data-intensive applications or systems
- Lead the team, with the soft skills needed to do so
- Review code and designs, and mentor team members
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Graduate degree or equivalent experience
- Experience working on Databricks
- Well versed in Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc.
- Proven ability to work on a big data technology stack in the cloud and on-prem
- Proven ability to communicate effectively with the team
- Proven ability to lead and mentor the team
- Proven people-management soft skills

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Lead the migration of ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake (a hedged PySpark-to-Snowflake sketch follows this posting)
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
- Implement DevOps practices and CI/CD pipelines using GitActions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 6+ years of experience as a Cloud Data Engineer
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
- Solid experience in ETL development using on-premises databases and ETL technologies
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies
- Proficiency in DevOps and CI/CD practices using GitActions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills
- Proven solid analytical skills and attention to detail
- Proven ability to adapt to new technologies and learn quickly

Preferred Qualifications
- Certification in Azure or Databricks
- Experience with data modeling and database design
- Experience with development in Snowflake for data engineering and analytics workloads
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
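One plausible shape for a migrated ETL step is a PySpark transform on Databricks followed by a write through the Spark-Snowflake connector. This is a sketch under assumptions: the "snowflake" format alias is available on the cluster (Databricks runtimes bundle the connector), and every connection value shown is a placeholder that would normally come from a secret scope.

```python
# Hypothetical migrated ETL step: aggregate a Delta source, load to Snowflake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dw_migration_step").getOrCreate()

# Assumed Delta source on a mounted lake path; placeholder only.
claims = spark.read.format("delta").load("/mnt/silver/claims")

monthly = claims.groupBy(
    F.trunc("service_date", "month").alias("service_month")
).agg(F.sum("paid_amount").alias("paid_total"))

# Placeholder connection options; read these from a secret scope in practice.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

(
    monthly.write.format("snowflake")
    .options(**sf_options)
    .option("dbtable", "MONTHLY_CLAIMS")
    .mode("overwrite")
    .save()
)
```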

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 18 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop Reporting, Analytics, and Data Science solutions.

Primary Responsibilities
- Manage and lead the migration of the on-premises SQL Server Enterprise Data Warehouse to Azure Cloud and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake
- Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Implement and oversee DevOps practices and CI/CD pipelines using GitActions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Provide technical leadership and mentorship to the engineering team
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies and project management tools
- Solid experience in developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Proficiency in DevOps and CI/CD practices using GitActions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills
- Solid analytical skills and attention to detail
- Proven track record of successful project delivery in a cloud environment

Preferred Qualifications
- Certification in Azure or Snowflake
- Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.)
- Experience with data modeling and database design
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Data Pipeline Management: Oversee the design, deployment, and maintenance of data pipelines to ensure they are optimized and highly available
- Data Collection and Storage: Build and maintain systems for data collection, storage, and processing
- ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes to convert raw data into usable formats
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to gather technical requirements and ensure data quality
- System Monitoring: Monitor existing metrics, analyze data, and identify opportunities for system and process improvements
- Data Governance: Ensure data compliance and security needs are met in system construction
- Mentorship: Oversee and mentor junior data engineers, ensuring proper execution of their duties
- Reporting: Develop queries for ad hoc business projects and ongoing reporting
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree in engineering or equivalent experience
- Minimum 3-4 years of experience in SQL (joins, stored procedures, performance tuning), Azure, PySpark, Databricks, and the big data ecosystem
- Flexibility to work in different shift timings
- Flexibility to work as a DevOps Engineer

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 15 Lacs

Gurugram

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Design and develop client-side and server-side architecture
- Build the front end of applications through appealing visual design
- Develop and manage well-functioning databases and applications
- Write effective APIs and integrate third-party services
- Ensure cross-platform optimization for mobile devices
- Collaborate with graphic designers to implement web design features
- Troubleshoot, debug, and upgrade software
- Write technical documentation and maintain code quality
- Stay updated with emerging technologies and industry trends
- Work with data scientists and analysts to improve software
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4+ years of proven experience as a Full Stack Engineer or similar role
- Work experience in front-end technologies
- Experience with back-end languages (Python, PySpark; Java is good to have)
- Experience with cloud services (AWS, Azure, Google Cloud)
- Knowledge of DevOps practices and CI/CD pipelines
- Familiarity with containerization technologies (Docker, Kubernetes)
- Familiarity with databases (MySQL, MongoDB, Databricks, Hive)
- Proven excellent problem-solving skills and attention to detail
- Demonstrated ability to work independently and as part of a team

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities

Technical Leadership
- Technical Guidance: Provide technical direction and guidance to the development team, ensuring that best practices are followed in coding standards, architecture, and design patterns
- Architecture Design: Design and oversee the architecture of software solutions to ensure they are scalable, reliable, and performant
- Technology Stack: Make informed decisions on the technology stack (.Net for backend services, React for frontend development) to ensure it aligns with project requirements
- Code Reviews: Conduct regular code reviews to maintain code quality and provide constructive feedback to team members
- Hands-on Development: Engage in hands-on coding and development tasks, particularly in complex or critical areas of the project

Project Management
- Task Planning: Break down project requirements into manageable tasks and assign them to team members while tracking progress
- Milestone Tracking: Monitor project milestones and deliverables to ensure timely completion of projects

Data Pipeline & ETL Management
- Data Pipeline Design: Design robust data pipelines that can handle large volumes of data efficiently using appropriate technologies (e.g., Apache Kafka; a consumer sketch follows this posting)
- ETL Processes: Develop efficient ETL processes to extract, transform, and load data from various sources into the analytics platform

Product Development
- Feature Development: Lead the development of new features from concept through implementation while ensuring they meet user requirements
- Integration Testing: Ensure thorough testing (unit tests, integration tests) is conducted for all features before deployment

Collaboration
- Cross-functional Collaboration: Collaborate closely with product managers, UX/UI designers, QA engineers, and other stakeholders to deliver high-quality products
- Stakeholder Communication: Communicate effectively with stakeholders regarding project status updates, technical challenges, and proposed solutions

Quality Assurance
- Performance Optimization: Identify performance bottlenecks within applications or data pipelines and implement optimizations
- Bug Resolution: Triage bugs reported by users or QA teams promptly and ensure timely resolution

Innovation & Continuous Improvement
- Stay Updated with Trends: Keep abreast of emerging technologies in .Net, React, and data pipeline/ETL tools (like Apache Kafka or Azure Data Factory) that could benefit the product
- Process Improvement: Continuously seek ways to improve engineering processes for increased efficiency and productivity within the team

Mentorship & Team Development
- Mentorship: Mentor junior developers by providing guidance on their technical growth as well as career development opportunities
- Team Building Activities: Foster a positive team environment through regular stand-ups and brainstorming sessions/workshops focused on problem-solving techniques for our tech stack (.Net/React/data pipelines)

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

These responsibilities ensure that the Lead Software Engineer contributes technically while guiding the team to deliver advanced data analytics products built on .Net backend services, React frontend interfaces, and robustly engineered pipelines supporting efficient ETL for end users.

Required Qualifications
- Bachelor's Degree: A Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Professional Experience: 8+ years of experience in software development with significant time spent on both backend (.Net) and frontend (React) technologies
- Leadership Experience: Proven experience in a technical leadership role where you have led projects or teams
- Technical Expertise:
  - Extensive experience with the .Net framework (C#) for backend development
  - Proficiency with React for frontend development
  - Solid knowledge and hands-on experience with data pipeline technologies (e.g., Apache Kafka)
  - Solid understanding of ETL processes and tools such as Databricks, ADF, and Scala/Spark

Technical Skills
- Architectural Knowledge: Experience designing scalable and high-performance architectures
- Cloud Services: Experience with cloud platforms such as Azure, AWS, or Google Cloud Platform
- Software Development Lifecycle: Comprehensive understanding of the software development lifecycle (SDLC), including Agile methodologies
- Database Management: Proficiency with SQL and NoSQL databases (e.g., SQL Server, MongoDB)
- Leadership Abilities: Proven solid leadership skills with the ability to inspire and motivate teams
- Communication Skills: Proven superior verbal and written communication skills for effective collaboration with cross-functional teams and stakeholders
- Problem-Solving Abilities: Proven solid analytical and problem-solving skills

Preferred Qualification
- Advanced Degree (Optional): A Master's degree in a relevant field
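The Kafka bullet above describes streaming ingest feeding ETL. Although this team's stack is .Net/React, a minimal consumer is sketched here in Python (kafka-python client) to keep all examples on this page in one language; the topic, broker, and field names are invented placeholders:

```python
# Hypothetical streaming-ingest consumer: read JSON events, keep needed fields.
import json

from kafka import KafkaConsumer  # kafka-python package (assumed installed)

consumer = KafkaConsumer(
    "claims-events",                     # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="analytics-etl",
)

for message in consumer:
    event = message.value
    # Transform step: keep only the fields the analytics layer needs.
    row = {"claim_id": event["claim_id"], "amount": float(event["amount"])}
    # A real pipeline would batch these rows and load them downstream.
    print(row)
```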

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Gurugram

Work from Office


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Senior Cloud Data Engineer to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for migrating our on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform built on Azure Cloud data tools, Delta Lake and Snowflake.

Primary Responsibilities
Lead the migration of ETLs from the on-premises SQL Server based data warehouse to Azure Cloud, Databricks and Snowflake
Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark); see the sketch after this posting
Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
Implement DevOps practices and CI/CD pipelines using GitHub Actions
Collaborate with cross-functional teams to ensure seamless integration and data flow
Optimize and troubleshoot data pipelines and workflows
Ensure data security and compliance with industry standards
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
6+ years of experience as a Cloud Data Engineer
Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
Solid experience in ETL development using on-premises databases and ETL technologies
Experience with Python or other scripting languages for data processing
Experience with Agile methodologies
Proficiency in DevOps and CI/CD practices using GitHub Actions
Proven excellent problem-solving skills and ability to work independently
Solid communication and collaboration skills
Solid analytical skills and attention to detail
Ability to adapt to new technologies and learn quickly

Preferred Qualifications
Certification in Azure or Databricks
Experience with data modeling and database design
Experience with development in Snowflake for data engineering and analytics workloads
Knowledge of data governance and data quality best practices
Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
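For illustration, a minimal PySpark sketch of the migration pattern this posting describes: read one table from the on-premises SQL Server over JDBC and land it as a Delta Lake table on ADLS Gen2, from which downstream jobs could load Snowflake. Every host, table, path, and credential below is a hypothetical placeholder, and the snippet assumes a Databricks-style runtime with the SQL Server JDBC driver and Delta Lake available.

```python
# Minimal sketch: copy one on-prem SQL Server table into a Delta table on
# ADLS Gen2. All names and connection details are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver-to-delta").getOrCreate()

# Read the source table over JDBC (SHIR/network access assumed to be in place)
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=edw")
    .option("dbtable", "dbo.claims")
    .option("user", "svc_migration")
    .option("password", "<fetched-from-key-vault>")
    .load()
)

# Land the data as Delta in the lake's bronze zone
(
    src.write.format("delta")
    .mode("overwrite")
    .save("abfss://bronze@datalakeacct.dfs.core.windows.net/edw/claims")
)
```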

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Test Planning & Automation Lead - Cloud Data Modernization

Position Overview: We are seeking a highly skilled and experienced Test Planning & Automation Lead to join our team for a Cloud Data Modernization project. This role involves leading the data validation testing efforts for the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a target cloud tech stack comprising Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage, etc.) and Snowflake. The primary goal is to ensure data consistency between the on-premises and cloud environments.

Primary Responsibilities
Lead Data Validation Testing: Oversee and manage the data validation testing process to ensure data consistency between the on-premises SQL Server and the target cloud environment
Tool Identification and Automation: Identify and implement appropriate tools to automate the testing process, reducing reliance on manual methods such as Excel or manual file comparisons (see the sketch after this posting)
Testing Plan Development: Define and develop a comprehensive testing plan that addresses validations for all data within the data warehouse
Collaboration: Work closely with data engineers, cloud architects, and other stakeholders to ensure seamless integration and validation of data
Quality Assurance: Establish and maintain quality assurance standards and best practices for data validation and testing
Reporting: Generate detailed reports on testing outcomes, data inconsistencies, and corrective actions
Continuous Improvement: Continuously evaluate and improve testing processes and tools to enhance efficiency and effectiveness
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Bachelor's degree or above
Leadership Experience: 6+ years as a testing lead in Data Warehousing or Cloud Data Migration projects
Automation Tools: Experience with data validation through custom-built Python frameworks and testing automation tools
Testing Methodologies: Proficiency in defining and implementing testing methodologies and frameworks for data validation
Technical Expertise: Solid knowledge of Python, SQL Server, Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
Analytical Skills: Proven excellent analytical and problem-solving skills to identify and resolve data inconsistencies
Communication: Proven solid communication skills to collaborate effectively with cross-functional teams
Project Management: Demonstrated ability to manage multiple tasks and projects simultaneously, ensuring timely delivery of testing outcomes

Preferred Qualifications
Experience in leading data validation testing efforts in cloud migration projects
Familiarity with Agile methodologies and project management tools

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
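For illustration, a minimal PySpark sketch of the kind of automated validation a custom-built framework might perform: a cheap row-count check first, then a full row-level reconciliation. The paths and table names are hypothetical placeholders; a real framework would also compare schemas, checksums, and key aggregates.

```python
# Minimal reconciliation sketch: compare a source extract against its migrated
# target, row for row. Both sides must share the same schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dw-validation").getOrCreate()

source = spark.read.format("delta").load("/mnt/legacy_extract/claims")
target = spark.read.format("delta").load("/mnt/lake/bronze/claims")

# 1. Cheap check first: row counts
assert source.count() == target.count(), "row count mismatch"

# 2. Full-fidelity check: rows present on one side but not the other
only_in_source = source.exceptAll(target)
only_in_target = target.exceptAll(source)

mismatches = only_in_source.count() + only_in_target.count()
print(f"mismatched rows: {mismatches}")
```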

Posted 1 month ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
Utilize knowledge of various data management technologies to drive data engineering projects
Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading data marts and reporting aggregates (see the sketch after this posting)
Eliminate unwarranted complexity and unneeded interdependencies
Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
Effectively create data transformations that address business requirements and other constraints
Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics
Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
Leverage DevOps tools to enable code versioning and code deployment
Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
Leverage processes and diagnostic tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
Continuously support technical debt reduction, process transformation, and overall optimization
Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Bachelor's Degree (preferably in information technology, engineering, math, computer science, analytics, or another related field)
3+ years of experience in Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, ADO
5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
5+ years of combined experience working with industry-standard relational, dimensional or non-relational data storage systems
5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET or other similar querying/coding languages
3+ years of experience working with healthcare data or data to support healthcare organizations

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
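For illustration, a minimal PySpark sketch of an ELT rollup that loads a reporting aggregate of the kind this posting mentions. The source path and column names are hypothetical placeholders.

```python
# Minimal sketch: build a monthly reporting aggregate (a small data-mart-style
# rollup) from a relational extract. All names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("mart-rollup").getOrCreate()

encounters = spark.read.parquet("/mnt/staging/encounters")

monthly = (
    encounters
    .withColumn("month", F.date_trunc("month", "encounter_date"))
    .groupBy("month", "facility_id")
    .agg(
        F.count("*").alias("encounter_count"),
        F.sum("billed_amount").alias("billed_total"),
    )
)

monthly.write.mode("overwrite").parquet("/mnt/marts/monthly_encounters")
```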

Posted 1 month ago

Apply

2.0 - 6.0 years

10 - 15 Lacs

Gurugram

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Leverage AI/ML technologies to transform and improve current Optum systems; this involves working with large-scale computing frameworks and data analysis systems
Produce innovative solutions driven by exploratory data analysis of unstructured, diverse datasets; apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to prototype development and product improvement
Collaborate closely with business and product teams to grasp requirements and translate vague concepts into tangible AI solutions; bring solid problem-solving and implementation skills to deliver these solutions and work with engineering teams to build scalable, flexible product pipelines
Adapt to an agile, fast-paced environment, and maintain a passion for exploring and integrating cutting-edge AI technologies to support future projects
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Bachelor's/Master's degree in Computer Science, Math/Statistics, Electrical Engineering, Industrial Engineering, Bioinformatics, or another technical field that provides mathematical training
5+ years of Python development
3+ years of experience with AI/ML technologies and working with predictive analytics, image processing, Generative AI, or related technologies
3+ years of experience with AI cloud services like Azure ML, Databricks, Azure OpenAI, or similar services in GCP/AWS
1+ years of experience with LLMs (Large Language Models) in the cloud (Azure, GCP, AWS) and RAG (Retrieval-Augmented Generation); see the sketch after this posting
Experience in building enterprise-grade AI-driven solutions
Experience in designing and implementing effective prompts for LLMs to ensure optimal performance and accuracy
Experience in data ingestion, transformation, and management processes
Familiarity with frameworks/libraries such as TensorFlow, PyMuPDF, Tesseract
Demonstrated skills in data analytics and the ability to work with large datasets to extract meaningful insights

Preferred Qualification
Proven ability to find AI solutions for undefined problems

Soft Skills
Ability to be a fast learner and self-driven
Ability to work independently on complex AI projects
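For illustration, a self-contained toy sketch of the RAG pattern named in the qualifications: retrieve the most relevant passage for a question, then assemble a grounded prompt for an LLM. A production system would use a hosted embedding model and a vector index (for example via Azure OpenAI); the bag-of-words retrieval and sample passages below are stand-ins, not any particular implementation.

```python
# Toy RAG sketch: bag-of-words retrieval plus grounded prompt assembly.
from collections import Counter
import math

DOCS = [
    "Members can refill pharmacy benefits online or through the mobile app.",
    "Prior authorization is required for certain imaging procedures.",
    "Claims are typically processed within 30 days of submission.",
]

def bow(text):
    # Crude stand-in for an embedding: term-frequency vector
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, k=1):
    q = bow(question)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

question = "How long does claim processing take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```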

Posted 1 month ago

Apply

7.0 - 12.0 years

18 - 22 Lacs

Hyderabad

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a talented and hands-on Azure Engineer to join our team. The ideal candidate will have significant experience working on Azure, as well as a solid background in cloud data engineering, data pipelines, and analytics solutions. You will be responsible for designing, building, and managing scalable data architectures, enabling seamless data integration, and leveraging advanced analytics capabilities to drive business insights.

Primary Responsibilities
Azure Platform Implementation:
Develop, manage, and optimize data pipelines using the AML workspace on Azure
Design and implement end-to-end data processing workflows, leveraging Databricks notebooks and jobs for data transformation, modeling, and analysis
Build and maintain scalable data models in Databricks using Apache Spark for big data processing
Integrate Databricks with other Azure services, including Azure Data Lake, Azure Synapse, and Azure Blob Storage

Data Engineering & ETL Development:
Design and implement robust ETL/ELT pipelines to ingest, transform, and load large volumes of data (see the sketch after this posting)
Optimize data processing jobs for performance, reliability, and scalability
Use Apache Spark and other Databricks features to process structured, semi-structured, and unstructured data efficiently

Azure Cloud Architecture:
Work with Azure cloud services to design and deploy cloud-based data solutions
Architect and implement data lakes, data warehouses, and analytics solutions within the Azure ecosystem
Ensure security, compliance, and governance best practices for cloud-based data solutions

Collaboration & Analytics:
Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights
Build advanced analytics models and solutions using Databricks, leveraging Python, SQL, and Spark-based technologies
Provide guidance and technical expertise to other teams on best practices for working with Databricks and Azure

Performance Optimization & Monitoring:
Monitor and optimize the performance of data pipelines and Databricks jobs
Troubleshoot and resolve performance and reliability issues within the data engineering pipelines
Ensure high availability, fault tolerance, and efficient resource utilization on Databricks

Continuous Improvement:
Stay up to date with the latest features of Databricks, Azure, and related technologies
Continuously improve data architectures, pipelines, and processes for better performance and scalability
Propose and implement innovative solutions to meet evolving business needs

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
10+ years of hands-on experience with the Azure ecosystem
Solid experience with cloud-based data engineering, particularly with Azure services (Azure Data Lake, Azure Synapse, Azure Blob Storage, etc.)
Experience with Databricks notebooks and managing Databricks environments
Hands-on experience with data storage technologies (Data Lake, Data Warehouse, Blob Storage)
Solid knowledge of SQL and Python for data processing and transformation
Familiarity with cloud infrastructure management on Azure and using Azure DevOps for CI/CD
Solid understanding of data modeling, data warehousing, and data lake architectures
Expertise in building and managing ETL/ELT pipelines using Apache Spark, Databricks, and other related technologies
Proficiency in Apache Spark (PySpark, Scala, SQL)
Proven solid problem-solving skills with a proactive approach to identifying and addressing issues
Proven ability to communicate complex technical concepts to non-technical stakeholders
Proven excellent collaboration skills to work effectively with cross-functional teams

Preferred Qualifications
Certifications in Azure (Azure Data Engineer, Azure Solutions Architect)
Experience with advanced analytics techniques, including machine learning and AI, using Databricks
Experience with other big data processing frameworks or platforms
Experience with data governance and security best practices in cloud environments
Knowledge of DevOps practices and CI/CD pipelines for cloud environments

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
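For illustration, a minimal Databricks-style PySpark sketch showing a few of the performance habits this posting alludes to: prune columns early, filter before joining, and partition the Delta output. All paths and column names are hypothetical placeholders.

```python
# Minimal sketch: a batch transform with basic performance hygiene.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver-build").getOrCreate()

# Select only the columns the job needs (column pruning)
claims = (
    spark.read.format("delta").load("/mnt/bronze/claims")
    .select("claim_id", "member_id", "amount", "service_date")
)
members = (
    spark.read.format("delta").load("/mnt/bronze/members")
    .select("member_id", "plan")
)

# Filter before the join so less data is shuffled
recent = claims.filter(F.col("service_date") >= "2024-01-01")

silver = recent.join(members, "member_id").withColumn("year", F.year("service_date"))

# Partition the output for downstream pruning
(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("year")
    .save("/mnt/silver/claims_enriched")
)
```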

Posted 1 month ago

Apply

5.0 - 9.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Work as an individual contributor
Work on data pipelines and databases
Work on data-intensive applications or systems
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Experience working on Databricks
Well versed in Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc. (see the sketch after this posting)
Demonstrated ability to work on a big data technology stack in the cloud and on-prem
Demonstrated ability to communicate effectively with the team

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
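For illustration, since this posting pairs Spark with Airflow: a minimal Airflow sketch that schedules a daily Spark job submission. It assumes Airflow 2.4+ with the Apache Spark provider package installed; the DAG id, script path, and connection id are hypothetical placeholders.

```python
# Minimal sketch: schedule a daily spark-submit from Airflow.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_claims_load",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    load = SparkSubmitOperator(
        task_id="spark_load",
        application="/opt/jobs/load_claims.py",  # hypothetical job script
        conn_id="spark_default",
    )
```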

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 11 Lacs

Mysuru

Work from Office

Naukri logo

As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
Proficient Software Development with Microsoft Technologies: Demonstrate expertise in software development using Microsoft technologies, ensuring high-quality code and efficient application performance
Collaborative Problem-Solving and Stakeholder Engagement: Collaborate effectively with stakeholders to understand product requirements and challenges, proactively addressing issues through analytical problem-solving and strategic software solutions
Agile Learning and Technology Integration: Stay updated with the latest Microsoft technologies, eagerly embracing continuous learning and integrating newfound knowledge to enhance software development processes and product features

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
SQL (see the sketch after this posting)
ADF
Azure Databricks

Preferred technical and professional experience
PostgreSQL, MSSQL
Eureka, Hystrix, Zuul/API Gateway
In-memory storage
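For illustration, a minimal pyodbc sketch of querying SQL Server from Python, touching the SQL/MSSQL skills this posting lists. The server, database, table, and credentials are hypothetical placeholders.

```python
# Minimal sketch: run a parameterized query against SQL Server via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost;DATABASE=appdb;UID=app_user;PWD=<secret>"
)
cursor = conn.cursor()

# pyodbc uses ? placeholders for query parameters
cursor.execute(
    "SELECT order_id, total FROM dbo.orders WHERE created_at >= ?",
    "2024-01-01",
)
for order_id, total in cursor.fetchall():
    print(order_id, total)

conn.close()
```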

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
