
1885 Data Engineering Jobs - Page 27

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

2.0 - 6.0 years

12 - 22 Lacs

Bengaluru

Work from Office


Lifesight is a fast-growing SaaS company focused on helping businesses leverage data and AI to improve customer acquisition and retention. We have a team of 130 serving 300+ customers across 5 offices in the US, Singapore, India, Australia, and the UK. Our mission is to make it easy for non-technical marketers to leverage advanced, AI-powered data activation and marketing measurement tools to improve their performance and achieve their KPIs. Our product is being adopted rapidly worldwide, and we need the best people on board to accelerate our growth. With petabytes of data and more than 400 TB processed daily to power Lifesight's attribution and measurement platforms, building scalable, highly available, fault-tolerant big data platforms is critical to our success. From your first day at Lifesight, you'll make a valuable - and valued - contribution. We offer you the opportunity to delight customers around the world while gaining meaningful experience across a variety of disciplines.

About The Role: Lifesight is growing rapidly and seeking a strong Data Engineer to be a key member of the Data and Business Intelligence organization, with a focus on deep data engineering projects. You will join as one of the first data engineers on the data platform team in our Bengaluru office, and you will have the opportunity to help define our technical strategy and data engineering team culture in India. You will design and build data platforms and services while managing our data infrastructure in cloud environments, fueling strategic business decisions across Lifesight products. A successful candidate will be a self-starter who drives excellence, is ready to jump into a variety of big data technologies and frameworks, and is able to coordinate and collaborate with other engineers, as well as mentor other engineers on the team.

What You'll Be Doing:
- Build quality data solutions and refine existing diverse datasets into simplified models that encourage self-service.
- Build data pipelines that optimize for data quality and are resilient to poor-quality data sources.
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters.
- Maintain and support existing platforms and evolve them to newer technology stacks and architectures.

We're excited if you have:
- 3+ years of professional experience as a data or software engineer.
- Proficiency in Python and PySpark.
- Deep understanding of Apache Spark and Spark tuning, creating RDDs, and building DataFrames; ability to create Java/Scala Spark jobs for data transformation and aggregation.
- Experience with big data technologies like HDFS, Hive, Kafka, Spark, Airflow, Presto, etc.
- Experience working with file formats like Parquet, Avro, etc., for large volumes of data.
- Experience with one or more NoSQL databases.
- Experience with AWS and GCP.

If you are interested, please submit your details here: https://forms.gle/rgzn7cdVcd2HnQQE7
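For illustration only (not part of the posting): a minimal PySpark sketch of the kind of DataFrame transformation and Parquet-based aggregation this role describes. The bucket path, schema, and column names are hypothetical.

```python
# Minimal, illustrative PySpark batch job. The input path, columns, and
# output location are hypothetical placeholders, not from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main():
    spark = (
        SparkSession.builder
        .appName("event-aggregation-example")
        .getOrCreate()
    )

    # Read raw events (Parquet is one of the formats mentioned in the posting).
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Drop malformed rows, then aggregate daily event counts per customer.
    daily_counts = (
        events
        .where(F.col("customer_id").isNotNull())
        .groupBy("customer_id", F.to_date("event_ts").alias("event_date"))
        .agg(F.count("*").alias("event_count"))
    )

    # Write the aggregate back out, partitioned by date for downstream queries.
    daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_event_counts/"
    )

    spark.stop()

if __name__ == "__main__":
    main()
```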

Posted 1 week ago

Apply

10.0 - 20.0 years

50 - 90 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Hybrid


Putting together large, intricate data sets to satisfy both functional and non-functional business needs. Determining, creating, and implementing internal process improvements, such as redesigning infrastructure for increased scalability.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 20 Lacs

Noida, Gurugram, Mumbai (All Areas)

Work from Office


3+ years of experience in data engineering or backend development, with a focus on highly scalable data systems. Experience at a B2B SaaS/AI company, ideally in a high-growth or startup environment, designing and scaling cloud-based data platforms (AWS, GCP, Azure).

Posted 1 week ago

Apply

5.0 - 9.0 years

20 - 30 Lacs

Noida

Remote


- Experience: Experience in Data Engineering.
- Programming & Scripting: Strong programming skills in Python and Linux Bash for automation and data workflows. Hands-on experience in Snowflake.
- AWS Cloud Services: In-depth knowledge of AWS EC2, S3, RDS, and EMR to deploy and manage data solutions.
- Framework Proficiency: Hands-on experience with Luigi for orchestrating complex data workflows.
- Data Processing & Storage: Expertise in Hadoop ecosystem tools and managing SQL databases for data storage and query optimization.
- Security Practices: Understanding of data security practices, data governance, and compliance for secure data processing.
- Automation & CI/CD: Familiarity with CI/CD tools to support automation of deployment and testing.
- Big Data Technologies: Knowledge of big data processing tools like Spark, Hive, or related AWS services.
- Should have good communication skills.

Regards, Rajan. You can also WhatsApp your CV to 9270558628.
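For illustration only (not part of the posting): a minimal Luigi workflow sketch showing how dependent tasks can be chained, as the role mentions. Task names, file paths, and the sample data are hypothetical.

```python
# Minimal, illustrative Luigi workflow: one task extracts data, a second
# summarizes it. Paths and the stand-in extract are hypothetical.
import datetime
import luigi

class ExtractOrders(luigi.Task):
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"data/raw/orders_{self.date}.csv")

    def run(self):
        # Stand-in for a real extract from a database or API.
        with self.output().open("w") as f:
            f.write("order_id,amount\n1,100\n2,250\n")

class SummarizeOrders(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return ExtractOrders(date=self.date)

    def output(self):
        return luigi.LocalTarget(f"data/curated/order_summary_{self.date}.txt")

    def run(self):
        total = 0.0
        with self.input().open("r") as f:
            next(f)  # skip the CSV header
            for line in f:
                total += float(line.strip().split(",")[1])
        with self.output().open("w") as f:
            f.write(f"total_amount={total}\n")

if __name__ == "__main__":
    luigi.build([SummarizeOrders(date=datetime.date.today())], local_scheduler=True)
```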

Posted 1 week ago

Apply

5.0 - 9.0 years

10 - 17 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Job Summary:
Skill: Data Engineering (GCP + BigQuery + Cube)
Experience: 5-8 Years
Location: Hyderabad, Chennai, Bangalore, Noida, Gurgaon, Mumbai, Pune, Kochi (Bangalore highly preferred)
Notice Period: 15 days, 30 days, or currently serving notice period.
Candidates must have experience in Cube Cloud or Cube. Senior level, able to work independently on given tasks. Required cloud certification: GCP Professional Cloud Engineer.

Posted 1 week ago

Apply

6.0 - 9.0 years

25 - 32 Lacs

Bangalore/Bengaluru

Work from Office


Full-time role with a top German MNC in Bangalore. Experience with Scala is a must.

Job Overview: Work on the development, monitoring, and maintenance of data pipelines across clusters.

Primary responsibilities:
- Develop, monitor, and maintain data pipelines for various plants.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Work with stakeholders, including data officers and stewards, to assist with data-related technical issues and support their data infrastructure needs.
- Work on incidents highlighted by the data officers: incident diagnosis, routing, evaluation, and resolution; analyze the root cause of incidents; create incident closure reports.

Qualifications:
- Bachelor's degree in Computer Science, Electronics & Communication Engineering, a related technical field, or equivalent practical experience.
- 6-8 years of experience in Spark and Scala software development.
- Experience in large-scale software development.
- Excellent software engineering skills (i.e., data structures, algorithms, software design).
- Excellent problem-solving, investigative, and troubleshooting skills.
- Experience in Kafka is mandatory.

Additional skills:
- Self-starter and empowered professional with strong execution and project management capabilities.
- Ability to collaborate effectively, with well-developed interpersonal relationships at all levels in the organization and with outside contacts.
- Outstanding written and verbal communication skills.
- High collaboration and perseverance to drive performance and change.

Key competencies:
- Distributed computing systems.
- Experience with CI/CD tools such as Jenkins or GitHub Actions.
- Experience with Python programming.
- Working knowledge of Docker and Kubernetes.
- Experience in developing data pipelines using Spark and Scala.
- Experience in debugging pipeline issues.
- Experience in writing Python and shell scripts.
- In-depth knowledge of SQL and other database solutions.
- Strong understanding of Apache Hadoop-based analytics.
- Hands-on experience with IntelliJ, GitHub/Bitbucket, and HUE.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Bengaluru

Hybrid


Position: PySpark Data Engineer
Location: Bangalore / Hyderabad
Experience: 5 to 9 Yrs
Job Type: On Role

Job Description - PySpark Data Engineer:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted skill set:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years of experience
4) Notice Period
5) Offer in hand
6) Reason for change
7) Present location
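For illustration only (not part of the posting): a minimal sketch of combining FastAPI and PySpark in the way the responsibilities describe - an API endpoint that exposes the result of a Spark aggregation. The endpoint path, dataset location, and columns are hypothetical.

```python
# Minimal, illustrative FastAPI service backed by a PySpark aggregation.
# Paths, columns, and the endpoint are hypothetical placeholders.
from fastapi import FastAPI
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

app = FastAPI(title="example-data-api")
spark = SparkSession.builder.appName("api-backed-aggregation").getOrCreate()

@app.get("/metrics/daily-revenue")
def daily_revenue(limit: int = 30):
    # Read a curated Parquet dataset and aggregate revenue per day.
    orders = spark.read.parquet("/data/curated/orders/")
    rows = (
        orders.groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("revenue"))
        .orderBy(F.col("order_date").desc())
        .limit(limit)
        .collect()
    )
    return [{"date": str(r["order_date"]), "revenue": float(r["revenue"])} for r in rows]

# Run with, e.g.: uvicorn example_data_api:app --reload  (module name is hypothetical)
```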

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote


Job Title: Offshore Data Engineer
Base Location: Bangalore
Work Mode: Remote
Experience: 5+ Years

Job Description: We are looking for a skilled Offshore Data Engineer with strong experience in Python, SQL, and Apache Beam. Familiarity with Java is a plus. The ideal candidate should be self-driven, collaborative, and able to work in a fast-paced environment.

Key Responsibilities:
- Design and implement reusable, scalable ETL frameworks using Apache Beam and GCP Dataflow.
- Develop robust data ingestion and transformation pipelines using Python and SQL.
- Integrate Kafka for real-time data streams alongside batch workloads.
- Optimize pipeline performance and manage costs within GCP services.
- Work closely with data analysts, data architects, and product teams to gather and understand data requirements.
- Manage and monitor BigQuery datasets, tables, and partitioning strategies.
- Implement error handling, resiliency, and observability mechanisms across pipeline components.
- Collaborate with DevOps teams to enable automated delivery (CI/CD) for data pipeline components.

Required Skills:
- 5+ years of hands-on experience in Data Engineering or Software Engineering.
- Proficiency in Python and SQL.
- Good understanding of Java (for reading or modifying codebases).
- Experience building ETL pipelines with Apache Beam and Google Cloud Dataflow.
- Hands-on experience with Apache Kafka for stream processing.
- Solid understanding of BigQuery and data modeling on GCP.
- Experience with GCP services (Cloud Storage, Pub/Sub, Cloud Composer, etc.).

Good to Have:
- Experience building reusable ETL libraries or framework components.
- Knowledge of data governance, data quality checks, and pipeline observability.
- Familiarity with Apache Airflow or Cloud Composer for orchestration.
- Exposure to CI/CD practices in a cloud-native environment (Docker, Terraform, etc.).

Tech stack: Python, SQL, Java, GCP (BigQuery, Pub/Sub, Cloud Storage, Cloud Composer, Dataflow), Apache Beam, Apache Kafka, Apache Airflow, CI/CD (Docker, Terraform)
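For illustration only (not part of the posting): a minimal Apache Beam batch pipeline sketch of the kind this role describes; it runs locally with the default runner and can be pointed at Dataflow via pipeline options. The bucket paths and CSV layout are hypothetical.

```python
# Minimal, illustrative Apache Beam pipeline: read a CSV, sum amounts per
# user, write results. Paths and the "user_id,amount" layout are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_csv(line):
    user_id, amount = line.split(",")
    return user_id, float(amount)

def run():
    # Pass --runner=DataflowRunner plus project/region options to run on Dataflow.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/orders.csv")
            | "Parse" >> beam.Map(parse_csv)
            | "SumPerUser" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda uid, total: f"{uid},{total}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/totals")
        )

if __name__ == "__main__":
    run()
```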

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


Qualifications and Experience:
- BS/MS or BCA/MCA or bachelor's/master's degree in Math, Statistics, Computer Science, Engineering, or another technical field.
- Experience required: 5+ years.
- Experience in data engineering with Python/R.
- Experience in SQL, MS SQL Server, or other relational databases.
- Azure Cloud Service / AWS / Google Cloud Service experience (optional).
- API-automated data extraction pipelines.
- Experience in developing and maintaining integrated visualization reports in Power BI (optional).
- Experience with software deployment project lifecycle phases: requirements gathering, planning, testing, delivery, enhancements, support.
- Experience in project management (preferred).
- Exceptional communication skills and fluency in English (professional level).

Key Skills / Attributes:
- Exceptional analytical and problem-solving skills, strong attention to detail, organization skills, and work ethic.
- Self-motivated and team-oriented, with the ability to work successfully both independently and within a team.
- Ability to balance and address new challenges as they arise and an eagerness to take ownership of tasks.
- Drive to succeed and grow a career in Project/Program Management.

Principal Duties & Key Responsibilities:
- This role requires working independently or in a team to solve data problems with unstructured data.
- Collaborate with other team members and other disciplines to deliver project requirements.
- Work independently to complete allocated activities to meet timeframe and quality objectives, meeting or exceeding client expectations.
- Develop effective materials for clients, making sure that their messages are clearly conveyed through the appropriate channel, using language that is suitable for the intended audience and readers and would induce the desired response.
- Actively contribute to Arcadis Global Communities of Practice relevant to Project Management, Data Visualization, and Power Platform, through knowledge shares and case study presentations.
- Actively contribute to the Digital Advisory community of practice, through development of integrated solutions that embed GEC capabilities into the core advisory business.

Data Engineering, Management, and Visualisation:
- Experience in manipulating, transforming, and analysing data sets that are raw, large, and complex.
- Demonstrated ability to plan, gather, analyse, and document user and business information.
- Incorporates, integrates, and interfaces technical knowledge with business/systems requirements.
- Understanding of all aspects of an implementation project including, but not limited to, planning, analysis and design, configuration, development, conversions, system testing, cutover, and production support.
- Produce written deliverables for requirement specifications and support documentation: process mapping, meeting minutes, glossaries, data dictionary, technical design, system testing, and implementation activities.
- Collect and organize data, data warehouse reports, spreadsheets, and databases for analytical reporting.
- Strong on database concepts, data modelling, stored procedures, complex query writing, and performance optimization of SQL queries.
- Experience in creating automated data extraction pipelines from various sources such as APIs and databases, in various formats.
- A problem-solving, solution-driven mindset with the ability to innovate within the constraints of project time/cost/quality.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Pune

Work from Office


What You'll Do: The Global Analytics & Insights (GAI) team is looking for a Senior Data Engineer to lead the build of the data infrastructure for Avalara's core data assets, empowering us with accurate data to make data-backed decisions. As a Senior Data Engineer, you will help architect, implement, and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will immerse yourself in our financial, marketing, and sales data to become an expert in Avalara's domain. You will have deep SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data-related.

What Your Responsibilities Will Be:
- Architect repeatable, reusable solutions to keep our technology stack DRY.
- Conduct technical and architecture reviews with engineers, ensuring all contributions meet quality expectations.
- Develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools.
- Implement and maintain scalable data orchestration and transformation, ensuring data accuracy and consistency.
- Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions.
- Build scalable, complex dbt models.
- Demonstrate ownership of complex projects and calculations of core financial metrics and processes.
- Work with Data Engineering teams to define and maintain scalable data pipelines.
- Promote automation and optimization of reporting processes to improve efficiency.
- You will report to a Senior Manager.

What You'll Need to Be Successful:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in the data engineering field, with advanced SQL knowledge.
- 4+ years of working with Git, with demonstrated experience collaborating with other engineers across repositories.
- 4+ years of working with Snowflake.
- 3+ years working with dbt (dbt Core).
- 3+ years working with Infrastructure as Code (Terraform).
- 3+ years working with CI/CD, with demonstrated ability to build and operate pipelines.
- AWS certified; Terraform certified.
- Experience working with complex Salesforce data.
- Snowflake and dbt certified.
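For illustration only, and as an assumption rather than Avalara's stated setup: one common way to orchestrate dbt runs from Airflow is a DAG that shells out to the dbt CLI. The project path, model selector, and schedule below are hypothetical.

```python
# Minimal, illustrative Airflow DAG (Airflow 2.4+ `schedule` syntax) that runs
# dbt models and then dbt tests. Paths and selectors are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_core_financial_models",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics_project && dbt run --select finance",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics_project && dbt test --select finance",
    )

    dbt_run >> dbt_test  # build the models first, then validate them with dbt tests
```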

Posted 2 weeks ago

Apply

4.0 - 6.0 years

15 - 20 Lacs

Noida, Vijayawada, Chennai

Hybrid


Key Responsibilities:
- Understand all of our data definitions and nuances (e.g., attribution window); maintain and update the glossary; advise various internal stakeholders.
- Consult the data visualization team and analysts on what data to use for new reports and analyses.
- Recommend development or enhancement of datasets for reporting and analysis questions.
- Develop and implement data governance policies and procedures to ensure data quality, availability, and integrity.
- Work closely with data stewards, data owners, and other stakeholders to define and enforce data governance standards.
- Monitor and evaluate data governance processes to ensure compliance with industry best practices.
- Design, develop, and maintain scalable and efficient data pipelines and ETL processes.
- Integrate data from various sources into centralized data platforms (e.g., data lakes, data warehouses).
- Optimize data workflows for performance, reliability, and scalability.
- Collaborate with data scientists, analysts, and business teams to support data needs.
- Ensure data quality, consistency, lineage, and metadata management.
- Maintain data catalogs and support data stewardship initiatives.
- Conduct audits and assessments to identify and mitigate data risks.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 4-6 years of experience in data governance, data management, or a related field.
- Proven experience in data engineering and data governance roles.
- Proficiency in SQL, Python, and ETL tools.
- Experience with cloud data platforms.
- Strong understanding of data modeling, data warehousing, and data architecture.
- Strong understanding of data governance frameworks, principles, and best practices.
- Experience with data quality management, metadata management, and data lineage practices.
- Familiarity with data governance tools and technologies (e.g., Collibra, Informatica, Alation).
- Excellent analytical, problem-solving, and decision-making skills.
- Strong communication and interpersonal skills, with the ability to work effectively with cross-functional teams.
- Ability to manage multiple projects and priorities in a fast-paced environment.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


- Strong in programming languages like Python and Java.
- Must have hands-on experience with one cloud (GCP preferred).
- Must have: experience working with Docker.
- Must have: environment management (e.g., venv, pip, Poetry).
- Must have: experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Must have: data engineering and feature engineering techniques.
- Proficient in either Apache Spark, Apache Beam, or Apache Flink.
- Must have: advanced SQL knowledge.
- Must be aware of streaming concepts like windowing, late arrival, triggers, etc.
- Should have hands-on experience with distributed computing.
- Should have working experience in data architecture design.
- Should be aware of storage and compute options and when to choose what.
- Should have a good understanding of cluster optimisation / pipeline optimisation strategies.
- Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources).
- Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
- Should have working experience with CI/CD pipelines, deployment methodologies, and infrastructure as code (e.g., Terraform).
- Good to have: hands-on experience with Kubernetes.
- Good to have: vector databases like Qdrant.

Experience working with GCP tools:
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector database
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
- Schedule: Cloud Composer, Airflow
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
- CI/CD: Bitbucket + Jenkins / GitLab; infrastructure as code: Terraform

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
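For illustration only (not part of the posting): a minimal PySpark Structured Streaming sketch of the windowing, watermark (late-arrival handling), and trigger concepts this listing mentions. The Kafka topic, brokers, and schema are hypothetical, and the Kafka source requires the spark-sql-kafka package on the classpath.

```python
# Minimal, illustrative Structured Streaming job: tumbling windows with a
# watermark for late data and a processing-time trigger. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("windowed-click-counts").getOrCreate()

clicks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clicks")
    .load()
    .select(
        F.col("key").cast("string").alias("user_id"),
        F.col("timestamp").alias("event_time"),
    )
)

# Tumbling 10-minute windows; events arriving more than 15 minutes late are dropped.
windowed = (
    clicks.withWatermark("event_time", "15 minutes")
    .groupBy(F.window("event_time", "10 minutes"), "user_id")
    .count()
)

query = (
    windowed.writeStream.outputMode("update")
    .format("console")
    .trigger(processingTime="1 minute")  # micro-batch trigger interval
    .start()
)
query.awaitTermination()
```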

Posted 2 weeks ago

Apply

4.0 - 6.0 years

8 - 12 Lacs

Chennai

Work from Office


Experience: 4+ years
Programming side:
- Strong programming skill in Python.
- Strong skill in SQL and any one database/data warehouse.
- Knowledge of Spark coding and architecture.
- Sound knowledge of ETL flows; good knowledge of Databricks.
- Experience with any cloud (AWS/Azure/GCP).
- Knowledge of any one scheduling tool (good to have).
- Knowledge of any one real-time data processing framework (Kafka/Kinesis/Spark Streaming) (good to have).

Posted 2 weeks ago

Apply

12.0 - 22.0 years

35 - 75 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office


Professional & Technical Skills:
- Must-have skills: Proficiency in Customer Data Platform & Integration, Google Cloud Data Services.
- Strong understanding of data integration and data management principles.
- Experience in architecting and implementing scalable and secure data solutions.
- Knowledge of cloud-based data services and technologies.
- Hands-on experience with data modeling and database design.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid


Hi candidates, we have an opportunity with a leading brand for a Lead Data Engineer. Interested candidates can mail their CVs to Abhishek.saxena@mounttalent.com.
Location: Delhi NCR, Hyderabad, Bangalore, Pune, Chennai; this is a hybrid work opportunity.

Job Description:
- 8+ years of experience in data engineering, with 3+ years of hands-on Databricks experience (must).
- Strong expertise in the Microsoft Azure cloud platform and services, particularly Azure Databricks, Azure Data Factory, Azure Fabric, Azure SQL Database, and Power BI.
- 3+ years of hands-on experience with event-driven streaming data engineering, preferably with Apache Flink and/or Confluent (both skills preferred).
- Extensive experience working with large data sets, with hands-on technology skills to design and build robust data architecture.
- Extensive experience in data modeling and database design (good to have).
- Strong programming skills in SQL, Python, and PySpark.
- Extensive experience with the Medallion architecture (3-layer, specific to Databricks) and Delta Lake (or lakehouse).
- Agile development environment experience, applying DevOps along with data quality and governance principles.
- Good leadership skills to guide and mentor the work of less experienced personnel.
- Ability to contribute to continual improvement by suggesting improvements to architecture or new technologies, mentoring junior employees, and being ready to shoulder ad-hoc work.

Good to have:
- Experience with Change Data Capture (CDC) frameworks; Debezium experience is a bonus.
- Familiarity with containerized environments for event-driven streaming.
- Experience with Unity Catalog and dbt, and data governance knowledge.
- Experience with Snowflake.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

25 - 30 Lacs

Gurugram

Work from Office


Job Title: Data Engineer
Location: Gurugram (WFO)
Experience: 4-6 years
Department: Engineering / Data & Analytics

About Aramya: At Aramya, we're redefining fashion for India's underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we've already achieved ₹40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of ₹100 Cr, we're scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we're on a mission to make high-quality ethnic wear accessible to every woman. We've built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand; it's a movement to celebrate every woman's unique journey.

Role Overview: We're looking for a results-driven Data Engineer who will play a key role in building and scaling our data infrastructure. This individual will own our end-to-end data pipelines, backend services for analytics, and infrastructure automation, powering real-time decision-making across our business. This is a high-impact role for someone passionate about data architecture, cloud engineering, and creating a foundation for scalable insights in a fast-paced D2C environment.

Key Responsibilities:
- Design, build, and manage scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, or Spark.
- Own and optimize data lakes and data warehouses on AWS Redshift (or Snowflake/BigQuery).
- Develop robust and scalable backend APIs using Python (FastAPI/Django/Flask) or Node.js.
- Integrate third-party data sources (APIs, SFTP, flat files) and ensure data validation and consistency.
- Ensure high availability, observability, and fault tolerance of data systems via logging, monitoring, and alerting.
- Collaborate with analysts, product managers, and business stakeholders to gather requirements and define data contracts.
- Implement Infrastructure-as-Code using tools like Terraform or AWS CDK to automate data workflows and provisioning.

Must-Have Skills:
- Proficiency in SQL and data modeling for both OLTP and OLAP systems.
- Strong Python skills, with demonstrated experience in both backend and data engineering use cases.
- Hands-on experience with Databricks, Apache Spark, and AWS Redshift.
- Experience with Airflow, dbt, or other workflow orchestration tools.
- Working knowledge of REST APIs, backend architectures, and microservices.
- Familiarity with Docker, Git, and CI/CD pipelines.
- Experience working on AWS cloud (S3, Lambda, ECS/Fargate, CloudWatch, etc.).

Nice-to-Have Skills:
- Experience with streaming platforms like Kafka, Flink, or Kinesis.
- Exposure to Snowflake, BigQuery, or Delta Lake.
- Understanding of data governance and PII handling best practices.
- Experience with GraphQL, gRPC, or event-driven architectures.
- Familiarity with data observability tools like Monte Carlo, Great Expectations, or Datafold.
- Prior experience in D2C, e-commerce, or high-growth startup environments.

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related technical discipline.
- 4-6 years of experience in data engineering roles with strong backend and cloud integration exposure.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

30 - 40 Lacs

Chennai

Work from Office

Naukri logo

Role & responsibilities:
- Data Engineer with experience working on data migration projects.
- Experience with the Azure data stack, including Data Lake Storage, Synapse Analytics, ADF, Azure Databricks, and Azure ML.
- Solid knowledge of Python, PySpark, and other Python packages.
- Familiarity with ML workflows and collaboration with data science teams.
- Strong understanding of data governance, security, and compliance in financial domains.
- Experience with CI/CD tools and version control systems (e.g., Azure DevOps, Git).
- Experience modularizing and migrating ML logic.

Note: We encourage interested candidates to submit their updated CVs to mohan.kumar@changepond.com

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Naukri logo

We Are Hiring: Senior .NET Backend Developer with Azure Data Engineering Experience
Job Location: Hyderabad, India
Work Mode: Onsite Only
Experience: Minimum 6+ Years
Qualification: B.Tech, B.E, MCA, M.Tech

Role Overview: We are seeking an experienced .NET Backend Developer with strong Azure Data Engineering skills to join our growing team in Hyderabad. You will work closely with cross-functional teams to build scalable backend systems, modern APIs, and data pipelines using cutting-edge tools like Azure Databricks and MS Fabric.

Technical Skills (Must-Have):
- Strong hands-on experience in C#, SQL Server, and OOP concepts.
- Proficiency with .NET Core, ASP.NET Core, Web API, Entity Framework (v6 or above).
- Strong understanding of microservices architecture.
- Experience with Azure cloud technologies, including Data Engineering, Azure Databricks, MS Fabric, Azure SQL, Blob Storage, etc.
- Experience with Snowflake or similar cloud data platforms.
- Experience working with NoSQL databases.
- Skilled in database performance tuning and design patterns.
- Working knowledge of Agile methodologies.
- Ability to write reusable libraries and modular, maintainable code.
- Excellent verbal and written communication skills (especially with US counterparts).
- Strong troubleshooting and debugging skills.

Nice-to-Have Skills:
- Experience with Angular, MongoDB, NPM.
- Familiarity with Azure DevOps CI/CD pipelines for build and release configuration.
- Self-starter attitude with strong analytical and problem-solving abilities.
- Willingness to work extra hours when needed to meet tight deadlines.

Why Join Us:
- Work with a passionate, high-performing team.
- Opportunity to grow your technical and leadership skills in a dynamic environment.
- Be part of global digital transformation initiatives with top-tier clients.
- Exposure to real-world enterprise data systems.
- Opportunity to work on cutting-edge Azure and cloud technologies.
- Performance-based growth and internal mobility opportunities.

Tags: #DotNetDeveloper #BackendDeveloper #AzureDataEngineering #Databricks #MSFabric #Snowflake #Microservices #CSharpJobs #HyderabadJobs #FullTimeJob #HiringNow #EntityFramework #ASPNetCore #CloudEngineering #SQLJobs #DevOps #DotNetCore #BackendJobs #SuzvaCareers #DataPlatformDeveloper #SoftwareJobsIndia

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Noida, Mumbai, Hyderabad

Work from Office

Naukri logo

MAQ LLC, d.b.a. MAQ Software, has multiple openings at Redmond, WA for: Software Data Operations Engineer (MS+0). Will support data management projects, including architecting, programming, testing, and modifying software to meet customer specifications. Deploy, configure, implement, and test reports; analyze databases for errors and fix them. Automate user test scenarios; configure, debug, and fix errors in cloud-based infrastructure and dashboards to meet customer needs. Must be able to travel temporarily to client sites and/or relocate throughout the United States.
Requirements: Master's degree or foreign equivalent in Computer Science, Computer Applications, Computer Information Systems, Information Technology, or a related field.
Benefits: Standard employee benefits. The position qualifies for the Employee Referral program.
Send resume to 2027 152nd Avenue NE, Redmond, WA 98052, Attn: H.R. Manager.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Pune, Gurugram

Work from Office

Naukri logo

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

Business Technology: ZS's Technology group focuses on scalable strategies, assets, and accelerators that deliver enterprise-wide transformation to our clients via cutting-edge technology. We leverage digital and technology solutions to optimize business processes, enhance decision-making, and drive innovation. Our services include, but are not limited to, Digital and Technology advisory, Product and Platform development, and Data, Analytics and AI implementation.

What you'll do:
- Undertake complete ownership in accomplishing activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements.
- Apply appropriate development methodologies (e.g., agile, waterfall) and best practices (e.g., mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments.
- Collaborate with other team members to leverage expertise and ensure seamless transitions.
- Exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management.
- Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, technical architecture (if needed), test cases, and operations management.
- Bring transparency in driving assigned tasks to completion and report accurate status.
- Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams.
- Assist senior team members and delivery leads in project management responsibilities.

What you'll bring:
- Big Data Technologies: Proficiency in working with big data technologies, particularly in the context of Azure Databricks, which may include Apache Spark for distributed data processing.
- Azure Databricks: In-depth knowledge of Azure Databricks for data engineering tasks, including data transformations, ETL processes, and job scheduling.
- SQL and Query Optimization: Strong SQL skills for data manipulation and retrieval, along with the ability to optimize queries for performance in Snowflake.
- ETL (Extract, Transform, Load): Expertise in designing and implementing ETL processes to move and transform data between systems, utilizing tools and frameworks available in Azure Databricks.
- Data Integration: Experience with integrating diverse data sources into a cohesive and usable format, ensuring data quality and integrity.
- Python/PySpark: Knowledge of programming languages like Python and PySpark for scripting and extending the functionality of Azure Databricks notebooks.
- Version Control: Familiarity with version control systems, such as Git, for managing code and configurations in a collaborative environment.
- Monitoring and Optimization: Ability to monitor data pipelines, identify bottlenecks, and optimize performance for Azure Data Factory.
- Security and Compliance: Understanding of security best practices and compliance considerations when working with sensitive data in Azure and Snowflake environments.
- Snowflake Data Warehouse: Experience in designing, implementing, and optimizing data warehouses using Snowflake, including schema design, performance tuning, and query optimization.
- Healthcare Domain Knowledge: Familiarity with US health plan terminologies and datasets is essential.
- Programming/Scripting Languages: Proficiency in Python, SQL, and PySpark is required.
- Cloud Platforms: Experience with AWS or Azure, specifically in building data pipelines, is needed.
- Cloud-Based Data Platforms: Working knowledge of Snowflake and Databricks is preferred.
- Data Pipeline Orchestration: Experience with Azure Data Factory and AWS Glue for orchestrating data pipelines is necessary.
- Relational Databases: Competency with relational databases such as PostgreSQL and MySQL is required, while experience with NoSQL databases is a plus.
- BI Tools: Knowledge of BI tools such as Tableau and Power BI is expected.
- Version Control: Proficiency with Git, including branching, merging, and pull requests, is required.
- CI/CD for Data Pipelines: Experience in implementing continuous integration and delivery for data workflows using tools like Azure DevOps is essential.

Additional Skills:
- Experience with front-end technologies such as SQL, JavaScript, HTML, CSS, and Angular is advantageous.
- Familiarity with web development frameworks like Flask, Django, and FastAPI is beneficial.
- Basic knowledge of AWS CI/CD practices is a plus.
- Strong verbal and written communication skills, with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Willingness to travel to other global offices as needed to work with client or other internal project teams.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Job Overview:
- Foster effective collaboration with diverse teams across various functions and regions.
- Collect and analyze large datasets to identify patterns, trends, and insights that will inform business strategies and decision-making processes.
- Develop and maintain reports, dashboards, and other tools to monitor and track supply chain performance.
- Assist in the development and implementation of supply chain strategies that align with business objectives.
- Identify and suggest solutions to potential supply chain risks and challenges.
- Build data models and perform data mining to discover new opportunities and areas of improvement.
- Conduct data quality checks to ensure accuracy, completeness, and consistency of data sets.

What your background should look like:
- 5+ years of hands-on experience in Data Engineering within the supply chain domain.
- Proficiency in Azure data engineering technologies, including but not limited to ETL processes, Azure Data Warehouse (DW), Azure Databricks, and MS SQL.
- Strong expertise in developing and maintaining scalable data pipelines, data models, and integrations to support analytics and decision-making.
- Experience in optimizing data workflows for performance, scalability, and reliability.

ABOUT TE CONNECTIVITY: TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal, and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology, and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram, and X (formerly Twitter).

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 12 Lacs

Hyderabad

Work from Office

Naukri logo

ABOUT THE ROLE
Role Description: We are seeking an experienced MDM Senior Data Engineer with 6-9 years of experience and expertise in backend engineering to work closely with the business on development and operations of our Master Data Management (MDM) platforms, with hands-on experience in Informatica or Reltio and data engineering experience. This role will also involve guiding junior data engineers/analysts and quality experts to deliver high-performance, scalable, and governed MDM solutions that align with enterprise data strategy. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates having only MDM experience are not eligible for this role. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Develop MDM backend solutions and implement ETL and data engineering pipelines using Databricks, AWS, Python/PySpark, SQL, etc.
- Lead the implementation and optimization of MDM solutions using Informatica or Reltio platforms.
- Perform data profiling and identify the DQ rules needed.
- Define and drive enterprise-wide MDM architecture, including IDQ, data stewardship, and metadata workflows.
- Manage cloud-based infrastructure using AWS and Databricks to ensure scalability and performance.
- Ensure data integrity, lineage, and traceability across MDM pipelines and solutions.
- Provide mentorship and technical leadership to junior team members and ensure project delivery timelines.
- Help the custom UI team integrate with backend data using APIs or other integration methods for a better data stewardship user experience.

Basic Qualifications and Experience:
- Master's degree with 4-6 years of experience in Business, Engineering, IT, or a related field; OR
- Bachelor's degree with 6-9 years of experience in Business, Engineering, IT, or a related field; OR
- Diploma with 10-12 years of experience in Business, Engineering, IT, or a related field.

Functional Skills - Must-Have:
- Strong understanding and hands-on experience with Databricks and AWS cloud services.
- Proficiency in Python, PySpark, SQL, and Unix for data processing and orchestration.
- Deep knowledge of MDM tools (Informatica, Reltio) and data quality frameworks (IDQ).
- Knowledge of customer master data (HCP, HCO, etc.).
- Experience with data modeling, governance, and DCR lifecycle management.
- Able to implement end-to-end integrations, including API-based, batch, and flat-file-based integrations.
- Strong experience with external data enrichments like D&B.
- Strong experience with match/merge and survivorship rules implementations.
- Very good understanding of reference data and its integration with MDM.
- Hands-on experience with custom workflows or building data pipelines/orchestrations.

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights.
- Exposure to or knowledge of data science and GenAI capabilities.
- Exposure to Agile practices and tools (JIRA, Confluence).
- Prior experience in Pharma/Life Sciences.
- Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio).
- Databricks certification (Data Engineer or Architect).
- Any cloud certification (AWS or Azure).

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Chennai, Bengaluru

Hybrid


Job Description: We are seeking a highly experienced and skilled Senior Data Engineer to join our dynamic team. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge in various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization. Roles and Responsibilities: Design, develop, and implement advanced machine learning models to solve complex business problems. Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities. Utilize Tableau and other visualization tools to create insightful and actionable dashboards for stakeholders. Manage and optimize large datasets using Snowflake and Teradata databases. Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions. Stay updated with the latest advancements in data science, machine learning, and AI technologies. Mentor and guide junior data scientists, fostering a culture of continuous learning and development. Communicate complex analytical concepts and results to non-technical stakeholders effectively. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Surat

Work from Office


Job Description: We are looking for a skilled Data Engineer with strong hands-on experience in Clickhouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize Clickhouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and Clickhouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.
- Strong experience in Clickhouse - data modeling, query optimization, performance tuning.
- Expertise in SQL - including complex joins, window functions, and optimization.
- Proficient in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL - schema design, indexing, and performance.
- Solid knowledge of Kubernetes - managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Noida

Work from Office


Job Description: We are looking for a skilled Data Engineer with strong hands-on experience in Clickhouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize Clickhouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and Clickhouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.
- Strong experience in Clickhouse - data modeling, query optimization, performance tuning.
- Expertise in SQL - including complex joins, window functions, and optimization.
- Proficient in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL - schema design, indexing, and performance.
- Solid knowledge of Kubernetes - managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies