
171 ELT Jobs - Page 3

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office


Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems.
Responsibilities:
- Experience in data architecture and engineering
- Proven expertise with the Snowflake data platform
- Strong understanding of ETL/ELT processes and data integration
- Experience with data modeling and data warehousing concepts
- Familiarity with performance tuning and optimization techniques
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Cloud & Data Architecture: AWS, Snowflake
- ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
- Big Data & Analytics: Athena, Presto, Hadoop
- Database & Storage: SQL, SnowSQL
- Security & Compliance: IAM, KMS, Data Masking
Preferred technical and professional experience:
- Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
- Data Transformation: dbt (Data Build Tool) for ELT pipeline management
- Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)

Posted 2 weeks ago

Apply

3.0 - 8.0 years

16 - 18 Lacs

Hyderabad

Work from Office


We are hiring a Data Management Specialist (Level 2) for a US-based IT company located in Hyderabad.
Job Title: Data Management Specialist Level 2
Location: Hyderabad
Experience: 3+ Years
CTC: 16 LPA - 18 LPA
Working shift: Day shift
We are seeking a Level 2 Data Management Specialist to join our data team and support the development, maintenance, and optimization of data pipelines and cloud-based data platforms. The ideal candidate will have hands-on experience with Snowflake, along with a solid foundation in SQL, data integration, and cloud data technologies. As a mid-level contributor, this position will collaborate closely with senior data engineers and business analysts to deliver reliable, high-quality data solutions for reporting, analytics, and operational needs. You will help develop scalable data workflows, resolve data quality issues, and ensure compliance with data governance practices.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Snowflake and SQL-based transformation logic
- Assist in developing and optimizing data models to support reporting and business intelligence efforts
- Write efficient SQL queries for data extraction, transformation, and analysis
- Collaborate with cross-functional teams to gather data requirements and implement dependable data solutions
- Support data quality checks and validation procedures to ensure data integrity and consistency
- Contribute to data integration tasks across various sources, including relational databases and cloud storage
- Document technical workflows, data definitions, and transformation logic for reference and compliance
- Monitor the performance of data processes and help troubleshoot workflow issues
Required Skills & Qualifications:
- 2-4 years of experience in data engineering or data management roles
- Proficiency in Snowflake for data development or analytics
- Strong SQL skills and a solid grasp of relational database concepts
- Familiarity with ETL/ELT tools such as Informatica, Talend, or dbt
- Basic understanding of cloud platforms like AWS, Azure, or GCP
- Knowledge of data modeling techniques (e.g., star and snowflake schemas)
- Excellent attention to detail, strong analytical thinking, and problem-solving skills
- Effective team player with the ability to clearly communicate technical concepts
Preferred Skills:
- Exposure to data governance or data quality frameworks
- Experience working in the banking or financial services industry
- Basic scripting skills in Python or Shell
- Familiarity with Agile/Scrum methodologies
- Experience using Git or other version control tools
For further assistance, contact/WhatsApp: 9354909521 / 9354909512, or write to pankhuri@gist.org.in
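
To make the first responsibility concrete, here is a minimal sketch of a SQL-based transformation step on Snowflake driven from Python with the snowflake-connector-python package. All table, schema, and warehouse names are hypothetical placeholders, not details from the posting:

```python
# Minimal sketch of a Snowflake ELT transformation step driven from Python.
# Assumes snowflake-connector-python is installed; all object names are
# hypothetical placeholders, not taken from the job posting.
import os
import snowflake.connector

TRANSFORM_SQL = """
CREATE OR REPLACE TABLE analytics.orders_clean AS
SELECT
    order_id,
    customer_id,
    TRY_TO_DATE(order_date_raw) AS order_date,   -- validate dates on load
    amount::NUMBER(12, 2)       AS amount
FROM raw.raw_orders
WHERE order_id IS NOT NULL;                      -- basic data-quality gate
"""

def run_transformation() -> None:
    """Connect, run one ELT transformation, and report the row count."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",   # hypothetical warehouse name
        database="ANALYTICS_DB",    # hypothetical database name
    )
    try:
        cur = conn.cursor()
        cur.execute(TRANSFORM_SQL)
        cur.execute("SELECT COUNT(*) FROM analytics.orders_clean")
        print("rows loaded:", cur.fetchone()[0])
    finally:
        conn.close()

if __name__ == "__main__":
    run_transformation()
```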

Posted 2 weeks ago

Apply

5.0 - 8.0 years

20 - 25 Lacs

Chennai

Remote


Execute and support R&D activities leveraging metadata from multiple databases, ETL/ELT products, reporting tools, etc. Develop, test, and deploy Python- and SQL-based solutions to automate and optimize operational processes. Data analysis and reporting are core parts of the role.
Required Candidate Profile: Provide hands-on programming support for AI-driven initiatives. Mastery of Python programming and advanced SQL proficiency, plus command of analytical methodologies, statistical concepts, and data visualization techniques.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Noida, Hyderabad

Work from Office


Primary Responsibilities:
- Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
- Utilize knowledge of various data management technologies to drive data engineering projects
- Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
- Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading DataMarts and reporting aggregates
- Eliminate unwarranted complexity and unneeded interdependencies
- Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
- Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
- Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
- Effectively create data transformations that address business requirements and other constraints
- Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
- Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics
- Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation, and data movement
- Leverage DevOps tools to enable code versioning and code deployment
- Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
- Leverage processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues
- Continuously support technical debt reduction, process transformation, and overall optimization
- Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
- Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- 5+ years of experience in creating Source-to-Target Mappings and ETL design for integration of new/modified data streams into the data warehouse/data marts
- 2+ years of experience with Cerner Millennium / HealtheIntent and experience using Cerner CCL
- 2+ years of experience working with Health Catalyst product offerings, including data warehousing solutions, knowledgebase, and analytics solutions
- Epic certifications in one or more of the following modules: Caboodle, EpicCare, Grand Central, Healthy Planet, HIM, Prelude, Resolute, Tapestry, or Reporting Workbench
- Experience in Unix, PowerShell, or other batch scripting languages
- Depth of experience and a proven track record creating and maintaining sophisticated data frameworks for healthcare organizations
- Experience supporting data pipelines that power analytical content within common reporting and business intelligence platforms (e.g., Power BI, Qlik, Tableau, MicroStrategy)
- Experience supporting analytical capabilities inclusive of reporting, dashboards, extracts, BI tools, analytical web applications, and other similar products
- Experience contributing to cross-functional efforts with proven success in creating healthcare insights
- Experience and credibility interacting with analytics and technology leadership teams
- Exposure to Azure, AWS, or Google Cloud ecosystems
- Exposure to Amazon Redshift, Amazon S3, Hadoop HDFS, Azure Blob, or similar big data storage and management components
- Desire to continuously learn and seek new options and approaches to business challenges
- A willingness to leverage best practices, share knowledge, and improve the collective work of the team
- Ability to effectively communicate concepts verbally and in writing
- Willingness to support limited travel up to 10%

Posted 2 weeks ago

Apply

1.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer
Experience: 5-8 Years
Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable)
Time Zone: Aligned with UK Time Zone
Notice Period: Immediate Joiners Only
Role Overview: We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more.
Key Responsibilities:
- Design, build, and manage scalable and reliable data pipelines for real-time and batch processing.
- Implement robust data processing solutions using GCP services and open-source technologies.
- Create efficient data models and write high-performance analytics queries.
- Optimize pipelines for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineering teams to ensure smooth data integration and transformation.
- Maintain high data quality, enforce validation rules, and set up monitoring and alerting.
- Participate in code reviews and deployment activities, and provide production support.
Technical Skills Required:
- Cloud Platforms: GCP (Google Cloud Platform), mandatory
- Key GCP Services: Dataproc, BigQuery, Dataflow
- Programming Languages: Python, Java, PySpark
- Data Engineering Concepts: Data Ingestion, Change Data Capture (CDC), ETL/ELT pipeline design
- Strong understanding of distributed computing, data structures, and performance tuning
Required Qualifications & Attributes:
- 5-8 years of hands-on experience in data engineering roles
- Proficiency in building and optimizing distributed data pipelines
- Solid grasp of data governance and security best practices in cloud environments
- Strong analytical and problem-solving skills
- Effective verbal and written communication skills
- Proven ability to work independently and in cross-functional teams
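
As a hedged illustration of the streaming-pipeline work this posting describes, below is a minimal Apache Beam sketch (the SDK behind Dataflow) that reads from Pub/Sub and writes to BigQuery. The project, topic, and table names are assumptions for illustration only:

```python
# Minimal Apache Beam streaming sketch: Pub/Sub -> parse -> BigQuery.
# Runs on the Dataflow runner with apache-beam[gcp] installed; the topic,
# project, and table names are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

def parse_event(message: bytes) -> dict:
    """Decode one JSON Pub/Sub message into a BigQuery-ready row."""
    event = json.loads(message.decode("utf-8"))
    return {"event_id": event["event_id"], "amount": float(event["amount"])}

def run() -> None:
    options = PipelineOptions()
    options.view_as(StandardOptions).streaming = True  # unbounded source
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")  # hypothetical
            | "Parse" >> beam.Map(parse_event)
            | "WriteBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",              # hypothetical
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```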

Posted 2 weeks ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
- Utilize knowledge of various data management technologies to drive data engineering projects
- Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
- Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading DataMarts and reporting aggregates
- Eliminate unwarranted complexity and unneeded interdependencies
- Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
- Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
- Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
- Effectively create data transformations that address business requirements and other constraints
- Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
- Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics
- Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation, and data movement
- Leverage DevOps tools to enable code versioning and code deployment
- Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
- Leverage processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues
- Continuously support technical debt reduction, process transformation, and overall optimization
- Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
- Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Bachelor's Degree (preferably in information technology, engineering, math, computer science, analytics, or another related field)
- 3+ years of experience in Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, and ADO
- 5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
- 5+ years of combined experience working with industry-standard relational, dimensional, or non-relational data storage systems
- 5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
- 5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET, or another similar querying/coding language
- 3+ years of experience working with healthcare data or data to support healthcare organizations
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 22 Lacs

Hyderabad

Work from Office


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
We are looking for a talented and hands-on Azure Engineer to join our team. The ideal candidate will have significant experience working on Azure, as well as a solid background in cloud data engineering, data pipelines, and analytics solutions. You will be responsible for designing, building, and managing scalable data architectures, enabling seamless data integration, and leveraging advanced analytics capabilities to drive business insights.
Primary Responsibilities:
Azure Platform Implementation:
- Develop, manage, and optimize data pipelines using the AML workspace on Azure
- Design and implement end-to-end data processing workflows, leveraging Databricks notebooks and jobs for data transformation, modeling, and analysis
- Build and maintain scalable data models in Databricks using Apache Spark for big data processing
- Integrate Databricks with other Azure services, including Azure Data Lake, Azure Synapse, and Azure Blob Storage
Data Engineering & ETL Development:
- Design and implement robust ETL/ELT pipelines to ingest, transform, and load large volumes of data
- Optimize data processing jobs for performance, reliability, and scalability
- Use Apache Spark and other Databricks features to process structured, semi-structured, and unstructured data efficiently
Azure Cloud Architecture:
- Work with Azure cloud services to design and deploy cloud-based data solutions
- Architect and implement data lakes, data warehouses, and analytics solutions within the Azure ecosystem
- Ensure security, compliance, and governance best practices for cloud-based data solutions
Collaboration & Analytics:
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights
- Build advanced analytics models and solutions using Databricks, leveraging Python, SQL, and Spark-based technologies
- Provide guidance and technical expertise to other teams on best practices for working with Databricks and Azure
Performance Optimization & Monitoring:
- Monitor and optimize the performance of data pipelines and Databricks jobs
- Troubleshoot and resolve performance and reliability issues within the data engineering pipelines
- Ensure high availability, fault tolerance, and efficient resource utilization on Databricks
Continuous Improvement:
- Stay up to date with the latest features of Databricks, Azure, and related technologies
- Continuously improve data architectures, pipelines, and processes for better performance and scalability
- Propose and implement innovative solutions to meet evolving business needs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- 10+ years of hands-on experience with the Azure ecosystem
- Solid experience with cloud-based data engineering, particularly with Azure services (Azure Data Lake, Azure Synapse, Azure Blob Storage, etc.)
- Experience with Databricks notebooks and managing Databricks environments
- Hands-on experience with data storage technologies (Data Lake, Data Warehouse, Blob Storage)
- Solid knowledge of SQL and Python for data processing and transformation
- Familiarity with cloud infrastructure management on Azure and using Azure DevOps for CI/CD
- Solid understanding of data modeling, data warehousing, and data lake architectures
- Expertise in building and managing ETL/ELT pipelines using Apache Spark, Databricks, and other related technologies
- Proficiency in Apache Spark (PySpark, Scala, SQL)
- Proven solid problem-solving skills with a proactive approach to identifying and addressing issues
- Proven ability to communicate complex technical concepts to non-technical stakeholders
- Proven excellent collaboration skills to work effectively with cross-functional teams
Preferred Qualifications:
- Certifications in Azure (Azure Data Engineer, Azure Solutions Architect)
- Experience with advanced analytics techniques, including machine learning and AI, using Databricks
- Experience with other big data processing frameworks or platforms
- Experience with data governance and security best practices in cloud environments
- Knowledge of DevOps practices and CI/CD pipelines for cloud environments
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

5.0 - 6.0 years

8 - 13 Lacs

Hyderabad

Work from Office


About the Role:
- We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team.
- As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform.
- You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.
Key Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.
Performance Optimization & Troubleshooting:
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.
Qualifications (Essential):
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
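
To illustrate the Delta Lake versioning and time-travel features named above, here is a minimal PySpark sketch. It assumes a Databricks or delta-spark-configured runtime; the table path and column names are hypothetical:

```python
# Minimal PySpark sketch of Delta Lake versioning and time travel.
# Assumes a Databricks (or delta-spark-configured) runtime; the path and
# column names are hypothetical placeholders, not from the job posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()
path = "/tmp/delta/orders"  # hypothetical Delta table location

# Version 0: initial write (one ACID-transactional commit).
spark.createDataFrame([(1, 100.0)], ["order_id", "amount"]) \
     .write.format("delta").mode("overwrite").save(path)

# Version 1: append more rows in a second commit.
spark.createDataFrame([(2, 250.0)], ["order_id", "amount"]) \
     .write.format("delta").mode("append").save(path)

# Time travel: read the table as of the first commit.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print("rows at version 0:", v0.count())                             # -> 1
print("rows now:", spark.read.format("delta").load(path).count())   # -> 2
```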

Posted 3 weeks ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Navi Mumbai

Work from Office


Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems.
Responsibilities:
- Experience in data architecture and engineering
- Proven expertise with the Snowflake data platform
- Strong understanding of ETL/ELT processes and data integration
- Experience with data modeling and data warehousing concepts
- Familiarity with performance tuning and optimization techniques
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Cloud & Data Architecture: AWS, Snowflake
- ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
- Big Data & Analytics: Athena, Presto, Hadoop
- Database & Storage: SQL, SnowSQL
- Security & Compliance: IAM, KMS, Data Masking
Preferred technical and professional experience:
- Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
- Data Transformation: dbt (Data Build Tool) for ELT pipeline management
- Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate,
We are seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments.
Key Responsibilities:
- Configure logging and metrics collection.
- Set up alerts and dashboards using Grafana, Prometheus, etc.
- Optimize system visibility for performance and security.
Required Skills & Qualifications:
- Familiarity with the ELK stack, Datadog, New Relic, or cloud-native monitoring tools.
- Strong troubleshooting and root cause analysis skills.
- Knowledge of distributed systems.
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager, Integra Technologies
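
As a small, hedged sketch of the metrics-collection work described above: the snippet below exposes a custom application metric with the official Prometheus Python client, which a Prometheus server could scrape and Grafana could chart. The metric name and port are illustrative assumptions:

```python
# Minimal sketch: expose an application metric for Prometheus to scrape.
# Requires the prometheus-client package; the metric name and port are
# illustrative assumptions, not from the posting.
import random
import time

from prometheus_client import Gauge, start_http_server

# A gauge Grafana could chart once Prometheus scrapes this endpoint.
QUEUE_DEPTH = Gauge("pipeline_queue_depth", "Items waiting in the pipeline")

if __name__ == "__main__":
    start_http_server(8000)                      # serves :8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))   # stand-in for a real reading
        time.sleep(5)
```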

Posted 3 weeks ago

Apply

7.0 - 11.0 years

15 - 20 Lacs

Mumbai

Work from Office


This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.
Data Architecture: Develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
Business Intelligence: Design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
Data Engineering: Work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
Data Governance: Establish and enforce data governance policies and standards.
Primary Skills and Experience:
- 15+ years of relevant experience in data warehousing, BI, and data governance.
- Proven track record of delivering successful data solutions on the Microsoft stack.
- Experience working with diverse teams and stakeholders.
Required Technical Skills:
- Strong proficiency in data warehousing concepts and methodologies.
- Expertise in Microsoft Power BI.
- Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Knowledge of SQL and scripting languages (Python, PowerShell).
- Strong understanding of data modeling and ETL/ELT processes.
Secondary (Soft) Skills:
- Excellent communication and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Ability to work independently and as part of a team.
- Strong attention to detail and organizational skills.

Posted 3 weeks ago

Apply

10.0 - 20.0 years

10 - 20 Lacs

Bengaluru

Work from Office


Job Title: Senior Data Engineer
Location: India, preferably Bengaluru
Experience Level: 10+ years
Employment Type: Full-Time
Job Summary: Our organization is seeking a highly experienced and technically proficient Senior Data Engineer with over 10 years of experience in designing, building, and optimizing data pipelines and applications in big data environments. The ideal candidate must have strong hands-on experience in workflow orchestration, data processing, and streaming platforms, and possess full-stack development capabilities.
Key Responsibilities:
1. Design, build, and maintain scalable and reliable data pipelines using Apache Airflow.
2. Develop and optimize big data workflows using Apache Spark, Hive, and Apache Flink.
3. Lead the implementation and integration of Apache Kafka for real-time and batch data processing.
4. Apply strong Java full-stack development skills to build and support data-driven applications.
5. Utilize Python to develop scripts and utilities and to support data workflows and integrations.
6. Work closely with data scientists, analysts, and platform engineers to support a high-volume, high-velocity data environment.
7. Drive performance tuning, monitoring, and troubleshooting across the data stack.
8. Ensure data integrity, security, and governance across all processing layers.
9. Mentor junior engineers and contribute to technical decision-making processes.
Required Skills and Experience:
- Minimum 10 years of experience in data engineering or related fields.
- Proven experience with Apache Airflow for orchestration.
- Deep expertise in Apache Spark, Hive, and Apache Flink.
- Mandatory experience as a Full Stack Java Developer.
- Proficiency in Python programming for data engineering tasks.
- Demonstrated experience in Apache Kafka development and implementation.
- Prior hands-on experience in a Big Data ecosystem involving distributed systems and large-scale data processing.
- Strong understanding of data modeling, ETL/ELT design, and streaming architectures.
- Excellent problem-solving, communication, and collaboration skills.
Preferred Qualifications:
- Experience working in cloud-based environments (e.g., AWS, Azure, GCP).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to CI/CD pipelines and DevOps practices in data projects.
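
For a concrete picture of the Airflow orchestration named in the first responsibility, here is a minimal DAG sketch, assuming Airflow 2.x. The dag_id, task names, and callables are hypothetical placeholders:

```python
# Minimal Airflow 2.x DAG sketch: a linear extract -> transform -> load chain.
# The dag_id, task names, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull a batch from the source system")   # stand-in for real I/O

def transform() -> None:
    print("clean and reshape the batch")

def load() -> None:
    print("write the batch to the warehouse")

with DAG(
    dag_id="daily_orders_pipeline",    # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",        # one scheduled run per day
    catchup=False,                     # do not backfill missed runs
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # extract, then transform, then load
```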

Posted 3 weeks ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Bengaluru

Work from Office


We are looking for a Senior Data Engineer who will design, build, and maintain scalable data pipelines and ingestion frameworks. The ideal candidate must have experience with DBT, orchestration tools like Airflow or Prefect, and cloud platforms such as AWS. Responsibilities include developing ELT pipelines, optimizing queries, implementing CI/CD, and integrating with AWS services. Strong SQL, Python, and data modeling skills are essential. The role also involves working with real-time and batch processing, ensuring high performance and data integrity.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Pune

Work from Office


Job Summary: We are seeking an energetic Senior Data Engineer with hands-on expertise in Google Cloud Platform to build, maintain, and migrate data pipelines that power analytics and AI workloads. You will leverage GCP services (BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Storage) while collaborating with data modelers, analysts, and product teams to deliver highly reliable, well-governed datasets. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.
Key Responsibilities:
- Design, develop, and optimize batch and streaming pipelines on GCP using Dataflow / Apache Beam, BigQuery, Cloud Composer (Airflow), and Pub/Sub.
- Maintain and enhance existing data workflows: monitoring performance, refactoring code, and automating tests to ensure data quality and reliability.
- Migrate data assets and ETL/ELT workloads from Azure (Data Factory, Databricks, Synapse, Fabric) to corresponding GCP services, ensuring functional parity and cost efficiency.
- Partner with data modelers to implement partitioning, clustering, and materialized-view strategies in BigQuery to meet SLAs for analytics and reporting (see the sketch after this listing).
- Conduct root-cause analysis for pipeline failures, implement guardrails for data quality, and document lineage.
Must-Have Skills:
- 4-6 years of data-engineering experience, including 2+ years building pipelines on GCP (BigQuery, Dataflow, Pub/Sub, Cloud Composer).
- Proficiency in SQL and one programming language (Python, Java, or Scala).
- Solid understanding of ETL/ELT patterns, data-warehouse modeling (star, snowflake, data vault), and performance-tuning techniques.
- Experience implementing data-quality checks, observability, and cost-optimization practices in cloud environments.
Nice-to-Have Skills:
- Practical exposure to Azure data services: Data Factory, Databricks, Synapse Analytics, or Microsoft Fabric.
Preferred Certifications:
- Google Professional Data Engineer or Associate Cloud Engineer
- Microsoft Certified: Azure Data Engineer Associate (nice to have)
Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
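
Here is the sketch referenced above: creating a day-partitioned, clustered BigQuery table from Python with the google-cloud-bigquery client. The project, dataset, and field names are assumptions for illustration:

```python
# Minimal sketch: create a day-partitioned, clustered BigQuery table.
# Requires google-cloud-bigquery; the project, dataset, and field names
# are hypothetical placeholders, not details from the posting.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")       # hypothetical project

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("my-project.analytics.events", schema=schema)
# Partition by day so date-filtered queries scan only matching partitions...
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_date")
# ...and cluster within each partition to co-locate rows for each customer.
table.clustering_fields = ["customer_id"]

created = client.create_table(table, exists_ok=True)
print("created:", created.full_table_id)
```

The design intent: partitioning prunes whole days of data at query time, while clustering sorts rows within each partition so filters on customer_id read fewer blocks, which is how BigQuery SLAs and costs are typically managed.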

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate,
We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. This role is a great fit for architects who bridge technical vision with business needs.
Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP
- Lead cloud migration and transformation projects
- Define cloud governance, cost control, and security strategies
- Collaborate with DevOps and engineering teams on implementation
Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments
- Experience with containers, serverless, and microservices
- Proficiency in Terraform, CloudFormation, or equivalent
- Bonus: Cloud certification (AWS/Azure/GCP Architect)
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager, Integra Technologies

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Delhi / NCR, Bengaluru

Work from Office


We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting.
The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience working with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must.
Additionally, experience with visualization tools such as Google Looker Studio, LookML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.

Posted 3 weeks ago

Apply

5.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office


About the Role:
- We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team.
- As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform.
- You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.
Key Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.
Performance Optimization & Troubleshooting:
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.
Qualifications (Essential):
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

25 - 35 Lacs

Mysuru

Work from Office


Key Responsibilities
Team Leadership & Strategy:
- Lead and mentor a team of BI analysts, AI specialists, and data engineers, fostering a data-driven culture within iSOCRATES.
- Drive the BI strategy, ensuring alignment with business goals and key performance indicators (KPIs).
- Collaborate with senior leadership to identify opportunities for advanced analytics, data-driven decision-making, predictive modeling, and AI-driven automation.
- Own the end-to-end delivery of BI and AI initiatives, ensuring scalability, data integrity, efficiency, and innovation.
Data Management, AI Integration & Insights:
- Oversee data governance, collection, storage, and integration processes to ensure data accuracy, consistency, and security.
- Work closely with DevOps, IT, Data Engineering, and AI/ML teams to manage data pipelines, cloud infrastructure, ETL pipelines, API/non-API integrations, and model deployment frameworks.
- Ensure the successful implementation of AI-enhanced reporting and visualization solutions for both internal stakeholders and clients.
- Leverage machine learning models for anomaly detection, forecasting, segmentation, and personalization across business functions.
- Champion data quality initiatives and continuously improve data readiness for AI/BI use cases.
Technical Leadership & Innovation:
- Design and implement scalable and automated BI/AI solutions using modern toolsets and cloud technologies.
- Leverage expertise in SQL, Python, data warehousing, machine learning, ETL, and cloud (AWS/Azure/GCP) to enhance BI/AI infrastructure.
- Develop and maintain interactive dashboards and AI-driven visual analytics using Power BI, Tableau, Sisense, Superset, or Excel.
- Evaluate emerging AI/BI technologies and frameworks to continuously improve system performance, automation, and user insights.
Stakeholder Collaboration & Communication:
- Engage with internal and external stakeholders to gather BI and AI requirements and define project priorities.
- Translate business goals into technical solutions, data models, AI use cases, and BI solutions that align with strategic goals and deliver actionable insights.
- Provide thought leadership on data trends and best practices, advocating for data-driven decision-making.
- Present insights and recommendations to senior executives on industry trends, best practices, and the evolving role of AI in business intelligence.
Required Qualifications:
- 10+ years of experience in Business Intelligence, Data Analytics, or Data Science.
- 3+ years of experience in team leadership and project management in BI- and/or AI-focused environments.
- Strong technical proficiency in SQL, Python, ETL processes, and data warehousing.
- Practical experience with BI platforms (e.g., Power BI, Tableau, Sisense) and ML/AI frameworks (e.g., scikit-learn, TensorFlow, AWS SageMaker).
- Proficiency in cloud platforms (AWS) and understanding of data modeling.
- Experience handling large-scale data and implementing automation in reporting.
- Strong problem-solving skills, business acumen, and ability to work in a fast-paced environment.
- Excellent communication, stakeholder management, and presentation skills.
Preferred Qualifications:
- Master's Degree in Business Administration, Computer Science, or Data Analytics.
- Experience in AdTech, MarTech, or Digital Marketing Analytics.
- Exposure to AI/ML-driven BI solutions.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

18 - 30 Lacs

Bengaluru

Remote


We're seeking a Data Engineer to help build and maintain scalable data pipelines for Intelligine, our AI content platform. Requires 1–2 years' experience, strong Python/SQL skills, and a passion for clean, efficient code and problem-solving.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Job Title: Big Data Engineer
Key Responsibilities:
- Demonstrate deep hands-on expertise in Databricks, Python, and SQL.
- Design and implement Big Data solutions using technologies such as Apache Spark, the Hadoop ecosystem, Apache Kafka, and NoSQL databases (e.g., MongoDB, Cassandra).
- Work as a Big Data Engineer with a strong focus on query and performance tuning, troubleshooting and debugging Spark and other big data solutions, and writing complex SQL queries.
- Implement and work with data architecture patterns, including Data Lakehouse, Delta Lake, and streaming architectures (Lambda/Kappa).
- Build and deploy data engineering pipelines using CI/CD automation best practices.
- Apply theoretical best practices to technical data engineering methods and solutions.
- Work with Data Warehousing/ETL/ELT, relational databases, and Massively Parallel Processing (MPP) technologies.
Additional Knowledge & Exposure:
- Understanding of the Data Lifecycle and Data Mesh concepts.
- Familiarity with eXtollo service offerings and onboarding processes.
- Experience with Azure Cloud and Azure Data Factory (ADF) pipelines.
- Exposure to data analytics and complex event processing (CEP).
- (Preferred) Knowledge of the Aftersales domain and its data sources.
Required Skills:
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Collaborative mindset with the ability to work in a fast-paced, agile environment.
Qualifications:
- Bachelor of Engineering in Computer Science, Information Technology, or a related field.
- Preferred Certifications: Azure, Databricks.
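
As a hedged illustration of the Kafka work this posting mentions, below is a minimal producer/consumer round trip using the kafka-python client. The broker address and topic name are illustrative assumptions:

```python
# Minimal kafka-python sketch: publish JSON events and read them back.
# Requires the kafka-python package; the broker address and topic name
# are illustrative assumptions, not from the posting.
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "orders"            # hypothetical topic
BROKER = "localhost:9092"   # hypothetical broker

# Producer: serialize dicts to JSON bytes before sending.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 1, "amount": 250.0})
producer.flush()            # block until the broker acknowledges

# Consumer: read from the beginning of the topic and decode each event.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating once the topic goes idle
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.value)         # -> {'order_id': 1, 'amount': 250.0}
```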

Posted 3 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hubli, Mangaluru, Mysuru

Work from Office


About The Team: You will be joining the newly formed AI, Data & Analytics team, working primarily as a Data Engineer leading various projects within the new Data Platform team. The new team is focused on driving increased value from the data InvestCloud captures to enable a smarter financial future for our clients, with a particular focus on "enhanced intelligence". Ensuring we have fit-for-purpose modern capabilities is a key goal for the team.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable data pipelines to support diverse analytics and machine learning needs.
- Manage data architectures for reliability, scalability, and performance.
- Support data integration solutions from our data partners, including ETL/ELT processes, ensuring seamless data flow across platforms.
- Collaborate with Data Scientists, Analysts, and Product Teams to define and support data requirements.
- Manage and maintain data platforms such as Oracle, Snowflake, and/or Databricks, ensuring high availability and performance while optimizing for cost.
- Ensure data security and compliance with company policies and relevant regulations.
- Monitor and troubleshoot data systems to identify and resolve performance issues.
- Develop and maintain datasets and data pipelines to support machine learning model training and deployment.
- Analyze large datasets to identify patterns, trends, and insights that can inform business decisions.
- Work with third-party providers of data and data platform products to evaluate and implement solutions achieving InvestCloud's business objectives.
Required Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Minimum of 4 years of professional experience in data engineering or a related role.
- Proficiency in database technologies, including Oracle and PostgreSQL.
- Hands-on experience with Snowflake and/or Databricks, with a solid understanding of their ecosystems.
- Experience with programming languages such as Python or SQL.
- Familiarity with ETL/ELT tools and data integration frameworks.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with containerization and CI/CD tools (e.g., Docker, Git).
- Strong problem-solving skills and the ability to handle complex datasets.
- Good communication skills to collaborate with global technical and non-technical stakeholders.
- Knowledge of data preprocessing, feature engineering, and model evaluation metrics.
- Excellent proficiency in English.
- Ability to work in a fast-paced environment across multiple projects simultaneously.
- Ability to collaborate effectively as a team player, fostering a culture of open communication and mutual respect.
Preferred Skills:
- Knowledge of data warehousing and data lake architectures.
- Familiarity with governance frameworks for data management and security.
- Familiarity with Machine Learning frameworks (TensorFlow, PyTorch, Scikit-learn) and LLM frameworks (e.g., LangChain).

Posted 3 weeks ago

Apply

0.0 - 2.0 years

4 - 7 Lacs

Navi Mumbai

Work from Office


Title: Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based Data Engineer to join our Information Technology team. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you use your previous expertise and can develop and grow your career even further, then this is the opportunity for you.
Overview: Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Responsibilities:
- Utilize skills in development areas including data warehousing, business intelligence, and databases (Snowflake, ANSI SQL, SQL Server, T-SQL);
- Support programming/software development using Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) tools (dbt, Azure Data Factory, SSIS);
- Design, develop, enhance, and support business intelligence systems primarily using Microsoft Power BI;
- Collect, analyze, and document user requirements;
- Participate in the software validation process through development, review, and/or execution of test plans/cases/scripts;
- Create software applications by following the software development lifecycle process, which includes requirements gathering, design, development, testing, release, and maintenance;
- Communicate with team members regarding projects, development, tools, and procedures; and
- Provide end-user support including setup, installation, and maintenance for applications.
Qualifications:
- Bachelor's Degree in Computer Science, Data Science, or a related field;
- 5+ years of experience in Data Engineering;
- Knowledge of developing dimensional data models and awareness of the advantages and limitations of star schema and snowflake schema designs;
- Solid ETL development and reporting knowledge based on an intricate understanding of business processes and measures;
- Knowledge of the Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations is preferred;
- Knowledge of Python is preferred;
- Knowledge of REST APIs;
- Basic knowledge of SQL Server databases is required;
- Knowledge of C# and Azure development is a bonus; and
- Excellent analytical, written, and oral communication skills.
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Medpace Perks:
- Flexible work environment
- Competitive compensation and benefits package
- Competitive PTO packages
- Structured career paths with opportunities for professional growth
- Company-sponsored employee appreciation events
- Employee health and wellness initiatives
Awards:
- Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023, and 2024
- Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next: A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
EO/AA Employer M/F/Disability/Vets

Posted 3 weeks ago

Apply

0.0 - 1.0 years

3 - 6 Lacs

Navi Mumbai

Work from Office

Naukri logo

Title: Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based Data Engineer to join our Information Technology team. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you use your previous expertise and can develop and grow your career even further, then this is the opportunity for you.
Overview: Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Responsibilities:
- Utilize skills in development areas including data warehousing, business intelligence, and databases (Snowflake, ANSI SQL, SQL Server, T-SQL);
- Support programming/software development using Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) tools (dbt, Azure Data Factory, SSIS);
- Design, develop, enhance, and support business intelligence systems primarily using Microsoft Power BI;
- Collect, analyze, and document user requirements;
- Participate in the software validation process through development, review, and/or execution of test plans/cases/scripts;
- Create software applications by following the software development lifecycle process, which includes requirements gathering, design, development, testing, release, and maintenance;
- Communicate with team members regarding projects, development, tools, and procedures; and
- Provide end-user support including setup, installation, and maintenance for applications.
Qualifications:
- Bachelor's Degree in Computer Science, Data Science, or a related field;
- 3+ years of experience in Data Engineering;
- Knowledge of developing dimensional data models and awareness of the advantages and limitations of star schema and snowflake schema designs;
- Solid ETL development and reporting knowledge based on an intricate understanding of business processes and measures;
- Knowledge of the Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations is preferred;
- Knowledge of Python is preferred;
- Knowledge of REST APIs;
- Basic knowledge of SQL Server databases is required;
- Knowledge of C# and Azure development is a bonus; and
- Excellent analytical, written, and oral communication skills.
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Medpace Perks:
- Flexible work environment
- Competitive compensation and benefits package
- Competitive PTO packages
- Structured career paths with opportunities for professional growth
- Company-sponsored employee appreciation events
- Employee health and wellness initiatives
Awards:
- Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023, and 2024
- Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next: A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
EO/AA Employer M/F/Disability/Vets

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate,
We are looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.
Key Responsibilities:
- Develop ETL workflows using cloud data services.
- Manage data storage, lakes, and warehouses.
- Ensure data quality and pipeline reliability.
Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse.
- Proficiency in SQL, Python, or Spark.
- Familiarity with data lake architecture and batch/streaming processing.
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager, Integra Technologies

Posted 3 weeks ago

Apply

5 - 10 years

0 Lacs

Mysore, Bengaluru, Kochi

Hybrid


Open & Direct Walk-in Drive | Hexaware Technologies: Snowflake & Python Data Engineer/Architect in Bangalore, Karnataka, on 12th April (Saturday) 2025 - Snowflake / Python / SQL & PySpark
Dear Candidate,
I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Bangalore, Karnataka, on 12th April (Saturday) 2025, and we believe your skills in Snowflake, Snowpark, Python, SQL, and PySpark align perfectly with what we are seeking.
Details of the Walk-in Drive:
Date: 12th April (Saturday) 2025
Experience: 4 years to 12 years
Time: 9.00 AM to 5 PM
Venue: Hotel Grand Mercure Bangalore, 12th Main Rd, 3rd Block, Koramangala, Bengaluru, Karnataka 560034
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Work Location: Open (Hyderabad / Bangalore / Pune / Mumbai / Noida / Dehradun / Chennai / Coimbatore)
Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following:
- Snowflake
- Python
- Fivetran
- Snowpark & Snowpipe
- SQL
- PySpark/Spark
- DWH
Roles and Responsibilities: As a part of our dynamic team, you will bring:
- 4-15 years of total IT experience with any ETL/Snowflake cloud tool
- Minimum 3 years of experience in Snowflake
- Minimum 3 years of experience querying and processing data using Python
- Strong SQL, with experience using analytical functions, materialized views, and stored procedures
- Experience with the data loading features of Snowflake, such as Stages, Streams, Tasks, and Snowpipe
- Working knowledge of processing semi-structured data
What to Bring: Updated resume; photo ID; passport-size photo
How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.
Note: Candidates with less than 4 years of total experience will not be screen-selected to attend the interview.

Posted 2 months ago

Apply