
214 Data Engineer Jobs - Page 3

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 - 13.0 years

22 - 25 Lacs

Hyderabad

Work from Office


Role & responsibilities:
• Design, build and maintain complex ELT jobs that deliver business value (a sketch follows below)
• Translate high-level business requirements into technical specs
• Ingest data from disparate sources into the data lake and data warehouse
• Cleanse and enrich data and apply adequate data quality controls
• Develop re-usable tools to help streamline the delivery of new projects
• Collaborate closely with other developers and provide mentorship
• Evaluate and recommend tools, technologies, processes and reference architectures
• Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements

Basic Qualifications:
• Bachelor's degree in computer science, engineering or a related field
• Data: 5+ years of experience with data analytics and warehousing
• SQL: Deep knowledge of SQL and query optimization
• ELT: Good understanding of ELT methodologies and tools
• Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues
• Communication: Excellent communication, problem-solving, organizational and analytical skills
• Able to work independently and to provide leadership to small teams of developers

Preferred Qualifications:
• Master's degree in computer science, engineering or a related field
• Cloud: Experience working in a cloud environment (e.g. AWS)
• Python: Hands-on experience developing with Python
• Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka
• Workflow: Good knowledge of orchestration and scheduling tools (e.g. Apache Airflow)
• Reporting: Experience with data reporting (e.g. MicroStrategy, Tableau, Looker) and data cataloging tools (e.g. Alation)
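A minimal sketch of the kind of ELT job described above, assuming a Spark environment; the paths, table names (raw orders landing in a lake, an analytics.orders target) and the 5% rejection threshold are illustrative, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Ingest: load raw data from a (hypothetical) landing path in the data lake.
raw = spark.read.json("s3://data-lake/landing/orders/")

# Cleanse and enrich: normalize types, drop rows missing the business key.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Data quality control: fail fast if too many rows were rejected.
total, kept = raw.count(), clean.count()
if total > 0 and (total - kept) / total > 0.05:
    raise ValueError(f"DQ check failed: {total - kept} of {total} rows rejected")

# Load into the warehouse-facing table.
clean.write.mode("overwrite").saveAsTable("analytics.orders")
```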

Posted 3 weeks ago

Apply

4.0 - 7.0 years

8 - 15 Lacs

Hyderabad

Hybrid


We are seeking a highly motivated Senior Data Engineer or Data Engineer to join Envoy Global's tech team on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration and data warehousing. That includes ETL jobs, reports, dashboards and data pipelines. The person in this role will work closely with the Data Architect, BI & Analytics team and Engineering teams to deliver data assets for Data Security, DW and Analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
• Design, build, test and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform and integrate data
• Design and develop ETL processes for the Data Warehouse lifecycle (staging of data, ODS data integration, EDW and data marts) and Data Security (data archival, data obfuscation, etc.)
• Build complex SQL queries on large datasets and performance-tune as needed
• Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory
• Maintain ETL packages and supporting data objects for our growing BI infrastructure
• Carry out monitoring, tuning, and database performance analysis
• Facilitate integration of our application with other systems by developing data pipelines
• Prepare key documentation to support the technical design in technical specifications
• Collaborate and work alongside other technical professionals (BI report developers, data analysts, architects)
• Communicate clearly and effectively with stakeholders

To apply for this role, you should possess the following skills, experience and qualifications:
• Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing
• Data asset delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics
• ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines
• 3+ years of experience as an SSIS ETL developer, Data Engineer or a related role
• 2+ years of experience using Azure Data Factory
• Knowledgeable in data modelling and data warehouse concepts
• Experience working with the Azure stack
• Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data
• Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations
• Ability to work in an Agile environment

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.

Posted 3 weeks ago

Apply

1.0 - 5.0 years

7 - 15 Lacs

Pune

Work from Office


Hi All,

Please find below the mandatory (essential) skills for the Data Engineer role (Grade I & J), based on the job description:

Mandatory Skills:
• Data Infrastructure & Engineering: Designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products; experience with data modeling, pipeline idempotency, and operational observability (an idempotent-write sketch follows below)
• Programming Languages: Proficiency in one or more object-oriented programming languages such as Python, Scala, Java or C#
• Database Technologies: Strong experience with SQL and NoSQL databases, query structures and design best practices, and scalability, readability, and reliability in database design
• Distributed Systems: Experience implementing large-scale distributed systems in collaboration with senior team members
• Software Engineering Best Practices: Technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance
• Security & Compliance: Deploying secure and well-tested software and data assets; meeting privacy and compliance requirements
• Site Reliability Engineering: Service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments
• Communication & Collaboration: Strong verbal and written communication skills; ability to work in cross-disciplinary teams
• Mindset & Education: Continuous learning and improvement mindset; BS degree in Computer Science or related field (or equivalent experience)

Thanks & Regards,
Sushma Patil
HR Coordinator
sushma.patil@in.experis.com
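A minimal sketch of what "pipeline idempotency" can mean in practice, assuming Spark with dynamic partition overwrite and a pre-created table partitioned by event_date; all names and the run date are illustrative.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("idempotent_load").getOrCreate()

# Only partitions present in the incoming data are replaced on re-runs,
# so running the job twice for the same date does not duplicate rows.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

run_date = "2024-01-15"  # hypothetical run parameter, e.g. from the scheduler

df = (
    spark.read.parquet("s3://lake/raw/events/")
         .filter(F.col("event_date") == run_date)
)

# Assumes dw.events already exists, partitioned by event_date.
df.write.mode("overwrite").insertInto("dw.events")
```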

Posted 3 weeks ago

Apply

6.0 - 10.0 years

16 - 31 Lacs

Bhubaneswar, Pune, Bengaluru

Work from Office


About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Data Engineer
Experience: 6 to 10 years

Key Responsibilities:
• Design, develop, and maintain large-scale batch and real-time data processing systems using PySpark and Scala
• Build and manage streaming pipelines using Apache Kafka (sketched below)
• Work with structured and semi-structured data sources including MongoDB, flat files, APIs, and relational databases
• Optimize and scale data pipelines to handle large volumes of data efficiently
• Implement data quality, data governance, and monitoring frameworks
• Collaborate with data scientists, analysts, and other engineers to support various data initiatives
• Develop and maintain robust, reusable, and well-documented data engineering solutions
• Troubleshoot production issues, identify root causes, and implement fixes
• Stay up to date with emerging technologies in the big data and streaming space

Technical Skills:
• 6 to 10 years of experience in Data Engineering or a similar role
• Strong hands-on experience with Apache Spark (PySpark) and Scala
• Proficiency in designing and managing Kafka streaming architectures
• Experience with MongoDB, including indexing, aggregation, and schema design
• Solid understanding of distributed computing, ETL/ELT processes, and data warehousing concepts
• Experience with cloud platforms (AWS, Azure, or GCP) is a strong plus
• Strong programming and scripting skills (Python, Scala, or Java)
• Familiarity with workflow management tools like Airflow, Luigi, or similar is a plus
• Excellent problem-solving skills and ability to work independently or within a team
• Strong communication skills and the ability to collaborate effectively across teams

Notice period: 30/45/60/90 days
Location: Bhubaneswar, Bengaluru, Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
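A minimal sketch of the Kafka streaming pipeline this role describes, using Spark Structured Streaming (which requires the spark-sql-kafka package on the classpath); broker address, topic, schema and sink paths are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Assumed message schema for the hypothetical "orders" topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
         # Kafka delivers bytes; parse the JSON payload into columns.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Checkpointing gives exactly-once file output across restarts.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://lake/streams/orders/")
          .option("checkpointLocation", "s3://lake/checkpoints/orders/")
          .start()
)
query.awaitTermination()
```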

Posted 3 weeks ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office


Role & responsibilities:
• 3 to 4+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud and Data Governance domains
• Take ownership of the technical aspects of implementing data pipeline & migration requirements, ensuring that the platform is being used to its fullest potential through designing and building applications around business stakeholder needs
• Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions
• Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured and real-time data, and provide best practices for pipeline operations
• Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers; implement Data Governance best practices
• Create and maintain clear documentation on data models/schemas as well as transformation/validation rules
• Implement tools that help data consumers to extract, analyze, and visualize data faster through data pipelines
• Implement data security, privacy, and compliance protocols to ensure safe data handling in line with regulatory requirements
• Optimize data workflows and queries to ensure low latency, high throughput, and cost efficiency
• Lead the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs
• Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated
• Migrate current data applications & pipelines to the Cloud, leveraging new technologies in future

Preferred candidate profile:
• Graduate with an Engineering degree (CS/Electronics/IT) / MCA / MCS or equivalent, with substantial data engineering experience
• 3+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred
• Experience with configuration management and version control tools (e.g. Git), and experience working within a CI/CD framework, is a plus
• 3+ years of recent hands-on SQL programming experience in a Big Data environment is required
• Working knowledge of PostgreSQL, RDBMS, NoSQL and columnar databases
• Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark and Airflow experience is a must (an orchestration sketch follows this list)
• Knowledge of API and microservice integration with applications
• Experience with containerization (e.g. Docker) and orchestration (e.g. Kubernetes)
• Experience building data solutions for Power BI and Web visualization applications
• Experience with Cloud is a plus
• Experience in managing multiple projects and stakeholders with excellent communication and interpersonal skills
• Ability to develop and organize high-quality documentation
• Superior analytical skills and a strong sense of ownership in your work
• Collaborate with data scientists on several projects; contribute to development and support of analytics including AI/ML
• Ability to thrive in a fast-paced environment and to manage multiple, competing priorities simultaneously
• Prior Energy & Utilities industry experience is a big plus

Experience (min-max in years): 3+ years of core/relevant experience
Location: Mumbai (Onsite)
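A minimal sketch of the Airflow orchestration layer the posting calls a must-have, assuming Apache Airflow 2.4+ (for the `schedule` argument); the DAG id, task names and the callables they wrap are illustrative placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    # Placeholder for the actual ingestion logic (e.g. Kafka/Spark job trigger).
    print("ingest raw data")


def transform():
    # Placeholder for the actual transformation and validation logic.
    print("transform and validate")


with DAG(
    dag_id="batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after ingest succeeds
```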

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Chennai

Work from Office


Development:
• Design, build, and maintain robust, scalable, and high-performance data pipelines to ingest, process, and store large volumes of structured and unstructured data
• Utilize Apache Spark within Databricks to process big data efficiently, leveraging distributed computing to process large datasets in parallel
• Integrate data from a variety of internal and external sources, including databases, APIs, cloud storage, and real-time streaming data

Data Integration & Storage:
• Implement and maintain data lakes and warehouses, using technologies like Databricks, Azure Synapse, Redshift and BigQuery to store and retrieve data
• Design and implement data models, schemas, and architecture for efficient querying and storage

Data Transformation & Optimization:
• Leverage Databricks and Apache Spark to perform data transformations at scale, ensuring data is cleaned, transformed, and optimized for analytics
• Write and optimize Spark SQL, PySpark, and Scala code to process large datasets in real-time and batch jobs (sketched below)
• Work on ETL processes to extract, transform, and load data from various sources into cloud-based data environments

Big Data Tools & Technologies:
• Utilize cloud-based big data platforms (e.g., AWS, Azure, Google Cloud) in conjunction with Databricks for distributed data processing and storage
• Implement and maintain data pipelines using Apache Kafka, Apache Flink, and other data streaming technologies for real-time data processing

Collaboration & Stakeholder Engagement:
• Work with data scientists, data analysts, and business stakeholders to define data requirements and deliver solutions that align with business objectives
• Collaborate with cloud engineers, data architects, and other teams to ensure smooth integration and data flow between systems

Monitoring & Automation:
• Build and implement monitoring solutions for data pipelines, ensuring consistent performance, identifying issues, and optimizing workflows
• Automate data ingestion, transformation, and validation processes to reduce manual intervention and increase efficiency
• Document data pipeline processes, architectures, and data models to ensure clarity and maintainability
• Adhere to best practices in data engineering, software development, version control, and code review

Required Skills & Qualifications:

Education:
• Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent experience)

Technical Skills:
• Apache Spark: Strong hands-on experience with Spark, specifically within Databricks (PySpark, Scala, Spark SQL)
• Experience working with cloud-based platforms such as AWS, Azure, or Google Cloud, particularly in the context of big data processing and storage
• Proficiency in SQL and experience with cloud data warehouses (e.g., Redshift, BigQuery, Snowflake)
• Strong programming skills in Python, Scala, or Java

Big Data & Cloud Technologies:
• Experience with distributed computing concepts and scalable data processing architectures
• Familiarity with data lake architectures and frameworks (e.g., AWS S3, Azure Data Lake)

Data Engineering Concepts:
• Strong understanding of ETL processes, data modeling, and database design
• Experience with batch and real-time data processing techniques
• Familiarity with data quality, data governance, and privacy regulations

Problem Solving & Analytical Skills:
• Strong troubleshooting skills for resolving issues in data pipelines and performance optimization
• Ability to work with large, complex datasets, and perform data wrangling and cleaning
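A minimal sketch of the kind of Spark transformation and optimization work described above, assuming Databricks or any Spark 3.x runtime; the bronze/silver table names are illustrative, and broadcasting the small dimension is one common way to avoid a shuffle on the large side of a join.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark_optimize").getOrCreate()

facts = spark.read.table("bronze.transactions")  # large fact table
dims = spark.read.table("bronze.merchants")      # small dimension table

# Broadcast hint: ship the small table to every executor instead of
# shuffling the large fact table across the cluster.
enriched = facts.join(F.broadcast(dims), "merchant_id")

daily = (
    enriched.groupBy("merchant_name", F.to_date("txn_ts").alias("txn_date"))
            .agg(F.sum("amount").alias("total_amount"))
)

daily.write.mode("overwrite").saveAsTable("silver.daily_merchant_totals")
```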

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Ahmedabad

Work from Office


Senior Data Engineer - Job Description

GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we also provide actionable insights to enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities for our customers to maximize their ROIs.

Responsibilities:
• Develop and maintain data pipelines
• Ensure data quality and accuracy
• Design, develop and maintain large, complex sets of data that meet non-functional and functional business requirements
• Build the infrastructure required for optimal extraction, transformation and loading of data from various data sources using cloud technologies
• Build analytical tools that utilize the data pipelines

Skills:
• Solid experience with SQL & NoSQL
• Strong data modeling skills for data lakes, data warehouses and data marts, including dimensional modeling and star schemas
• Proficient with Azure Data Factory data integration technology
• Knowledge of Hadoop or similar Big Data technology
• Knowledge of Apache Kafka, Spark, Hive or equivalent
• Knowledge of Azure or AWS analytics technologies

Qualifications:
• BS in Computer Science, Applied Mathematics or related fields (MS preferred)
• At least 8 years of experience working with OLAP systems
• Microsoft Azure or AWS Data Engineer certification a plus

Posted 3 weeks ago

Apply

5.0 - 9.0 years

15 - 30 Lacs

Hyderabad

Hybrid


Hi! Greetings of the day!!

We have openings with one of our product-based client companies.

Location: Hyderabad
Notice Period: Immediate to 30 days only
Work Mode: Hybrid

Key Purpose Statement - Core Mission:
The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage their deep expertise in Azure Synapse, Databricks, cloud platforms, and Python programming to deliver high-quality data solutions.

Responsibilities:

Data Infrastructure and Pipeline Development:
• Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse
• Optimize data pipelines for performance, scalability, and cost-efficiency
• Implement best practices for data governance, quality, and security

Cloud Platform Management:
• Design and manage cloud-based data infrastructure on platforms such as Azure
• Utilize cloud-native tools and services to enhance data processing and storage capabilities
• Understand and design CI/CD pipelines for data engineering projects

Programming:
• Develop and maintain high-quality, reusable code in Databricks and Synapse environments for data processing and automation
• Collaborate with data scientists and analysts to design solutions into data workflows
• Conduct code reviews and mentor junior engineers in Python, PySpark & SQL best practices

If interested, please share your resume to aparna.ch@v3staffing.in

Posted 3 weeks ago

Apply

9.0 - 13.0 years

25 - 35 Lacs

Hyderabad

Hybrid


Senior Data Engineer

• You are familiar with AWS and Azure Cloud.
• You have extensive knowledge of Snowflake; SnowPro Core certification is a must-have.
• You have used DBT in at least one project to deploy models in production.
• You have configured and deployed Airflow and integrated various operators in Airflow (especially DBT & Snowflake).
• You can design and build release pipelines and understand the Azure DevOps ecosystem.
• You have an excellent understanding of Python (especially PySpark) and are able to write metadata-driven programs (sketched below).
• You are familiar with Data Vault (Raw, Business) and with concepts like Point In Time and Semantic Layer.
• You are resilient in ambiguous situations and can clearly articulate the problem in a business-friendly way.
• You believe in documenting processes, managing the artifacts and evolving them over time.
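A minimal sketch of a "metadata-driven" loader of the kind this posting asks for: source, target and key names come from a config structure rather than being hard-coded. It assumes a Snowflake target reached through snowflake-connector-python; every table name and credential here is an illustrative placeholder.

```python
import snowflake.connector

# Pipeline behavior is driven entirely by this metadata, not by bespoke code.
PIPELINE_METADATA = [
    {"source": "raw.orders",    "target": "vault.raw_orders",    "key": "order_id"},
    {"source": "raw.customers", "target": "vault.raw_customers", "key": "customer_id"},
]

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...", warehouse="etl_wh"
)

for meta in PIPELINE_METADATA:
    # One generic, re-runnable statement, parameterized by metadata:
    # only rows whose key is not yet in the target are inserted.
    sql = (
        f"INSERT INTO {meta['target']} "
        f"SELECT s.*, CURRENT_TIMESTAMP() AS load_ts FROM {meta['source']} s "
        f"WHERE NOT EXISTS (SELECT 1 FROM {meta['target']} t "
        f"WHERE t.{meta['key']} = s.{meta['key']})"
    )
    conn.cursor().execute(sql)

conn.close()
```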

Posted 3 weeks ago

Apply

8.0 - 13.0 years

16 - 27 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office


Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Designation: Lead Data Engineer
Location: Hyderabad, Indore, Ahmedabad
Experience: 8 years

Role & responsibilities - What You Will Do:
• Analyze business requirements
• Analyze the data model and do GAP analysis with business requirements and Power BI; design and model the Power BI schema
• Transform data in Power BI/SQL/ETL tools
• Create DAX formulas, reports, and dashboards; able to write DAX formulas
• Experience writing SQL queries and stored procedures
• Design effective Power BI solutions based on business requirements
• Manage a team of Power BI developers and guide their work
• Integrate data from various sources into Power BI for analysis
• Optimize performance of reports and dashboards for smooth usage
• Collaborate with stakeholders to align Power BI projects with goals
• Knowledge of Data Warehousing (must); Data Engineering is a plus

What we need:
• B.Tech in computer science or equivalent
• Minimum 5+ years of relevant experience

Posted 3 weeks ago

Apply

4.0 - 7.0 years

5 - 14 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office


We are looking for an experienced Data Engineer to design, develop, and maintain our data pipelines, primarily focused on ingesting data into our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake and practical experience with AWS services, particularly using S3 as a landing zone and entry point to the Snowflake environment. You will be responsible for building efficient, reliable, and scalable data pipelines that are critical to our data-driven decision-making processes.

Role & responsibilities:
1. Design, develop, implement, and maintain scalable and robust data pipelines to ingest data from various sources into the Snowflake data platform.
2. Utilize AWS S3 as a primary landing zone for data, ensuring efficient data transfer and integration with Snowflake (see the sketch below).
3. Develop and manage ETL/ELT processes, focusing on data transformation, cleansing, and loading within the Snowflake and AWS ecosystem.
4. Write complex SQL queries and stored procedures in Snowflake for data manipulation, transformation, and performance optimization.
5. Monitor, troubleshoot, and optimize data pipelines for performance, reliability, and scalability.
6. Collaborate with data architects, data analysts, data scientists, and business stakeholders to understand data requirements and deliver effective solutions.
7. Ensure data quality, integrity, and governance across all data pipelines and within the Snowflake platform.
8. Implement data security best practices in AWS and Snowflake.
9. Develop and maintain comprehensive documentation for data pipelines, processes, and architectures.
10. Stay up to date with emerging technologies and best practices in data engineering, particularly related to Snowflake and AWS.
11. Participate in Agile/Scrum development processes, including sprint planning, daily stand-ups, and retrospectives.

Preferred candidate profile:
1. Strong, hands-on proficiency with Snowflake:
   • In-depth knowledge of Snowflake architecture and features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning)
   • Experience designing and implementing Snowflake data models (schemas, tables, views)
   • Expertise in writing and optimizing complex SQL queries in Snowflake
   • Experience with data loading and unloading techniques in Snowflake
2. Solid experience with AWS Cloud services:
   • Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake
   • Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL, if applicable)
3. Strong experience designing and building ETL/ELT data pipelines. Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java); Python is highly preferred.
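A minimal sketch of the S3-to-Snowflake ingestion pattern described above, assuming snowflake-connector-python and an external stage already pointing at the S3 landing zone; account, stage and table names are illustrative assumptions.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="loader", password="...",
    warehouse="load_wh", database="analytics", schema="staging",
)
cur = conn.cursor()

# COPY INTO pulls new files from the stage (the S3 landing zone) into the
# raw table; Snowflake tracks already-loaded files, so re-runs are safe.
cur.execute("""
    COPY INTO staging.raw_orders
    FROM @s3_landing_stage/orders/
    FILE_FORMAT = (TYPE = 'JSON')
    ON_ERROR = 'SKIP_FILE'
""")

conn.close()
```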

Posted 3 weeks ago

Apply

7.0 - 11.0 years

20 - 35 Lacs

Gandhinagar, Ahmedabad

Hybrid


Job Title: Senior Data Engineer
Experience: 8 to 10 years
Location: Ahmedabad & Gandhinagar
Employment Type: Full-time

Our client is a leading provider of advanced solutions for capital markets, specializing in cutting-edge trading infrastructure and software. With a global presence and a strong focus on innovation, the company empowers professional traders, brokers, and financial institutions to execute high-speed, high-performance trading strategies across multiple asset classes. Their technology is known for its reliability, low latency, and scalability, making it a preferred choice for firms seeking a competitive edge in dynamic financial environments.

Role & responsibilities:
• Design, develop, and maintain scalable and reliable data pipelines using DBT and Airflow
• Work extensively with Snowflake to optimize data storage, transformation, and access
• Develop and maintain efficient ETL/ELT processes in Python to support analytical and operational workloads
• Ensure high standards of data quality, consistency, and security across systems
• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions
• Monitor and troubleshoot data pipelines, resolving issues proactively
• Optimize performance of existing data workflows and recommend improvements
• Document data engineering processes and solutions effectively

Preferred candidate profile:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• 8-10 years of experience in data engineering or related roles
• Strong knowledge of SQL and data warehousing principles
• Familiarity with version control (e.g., Git) and CI/CD practices
• Excellent problem-solving skills and attention to detail
• Strong communication and collaboration abilities

Preferred skills:
• Experience with cloud platforms like AWS, GCP, or Azure
• Exposure to data governance and security best practices
• Knowledge of modern data architecture and real-time processing frameworks

Competitive benefits offered by our client:
• Relocation support: an additional relocation allowance to assist with moving expenses
• Comprehensive health benefits: medical, dental, and vision coverage
• Flexible work schedule: hybrid model with an expectation of just 2 days on-site per week
• Generous paid time off (PTO): 21 days per year, with the ability to roll over 1 day into the following year; additionally, 1 day per year for volunteering, 2 training days per year for uninterrupted professional development, and 1 extra PTO day during milestone years
• Paid holidays & early dismissals: a robust paid holiday schedule with early dismissal on select days, plus generous parental leave for all genders, including adoptive parents
• Tech resources: a rent-to-own program offering a company-provided Mac/PC laptop and/or mobile phone of choice, along with a tech accessories budget for monitors, headphones, keyboards, and other office equipment
• Health & wellness subsidies: contributions toward gym memberships and health/wellness initiatives to support your well-being
• Milestone anniversary bonuses: special bonuses to celebrate key career milestones
• Inclusive & collaborative culture: a forward-thinking, culture-based organisation that values diversity and inclusion and fosters collaborative teams

Posted 3 weeks ago

Apply

8.0 - 10.0 years

15 - 20 Lacs

Pune

Work from Office


Education:
• Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field

Experience (8-10 years):
• 8+ years of experience in data engineering or a related field
• Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD and scripting for data processing
• Experience working with multiple file formats such as Parquet, Delta, and Iceberg
• Knowledge of Kafka or similar streaming technologies for real-time data ingestion
• Experience with data governance and data security in Azure
• Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure
• Deep understanding of Azure Data Services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.)
• Familiarity with data lakes, data warehouses, and modern data architectures
• Experience with CI/CD pipelines, version control (Git), Jenkins and agile methodologies
• Understanding of cloud infrastructure and architecture principles (especially within Azure)

Technical Skills:
• Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting of Spark jobs
• Solid knowledge of Azure Databricks for scalable, distributed data processing
• Strong coding skills in Python and Scala for data processing
• Experience working with SQL, especially on large datasets
• Knowledge of data formats like Iceberg, Parquet, ORC, and Delta Lake

Leadership Skills:
• Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices
• Excellent communication skills, capable of interacting with both technical and non-technical stakeholders
• Strong problem-solving, analytical, and troubleshooting abilities

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Gurugram

Hybrid


Hi,

Wishes from GSN!!! Pleasure connecting with you!!!

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully meeting our clients' needs for the last 20 years.

At present, GSN is hiring PySpark Developers for one of our leading MNC clients. Please find the details below:

~~~~ LOOKING FOR IMMEDIATE JOINERS ~~~~

Work Location: Gurugram
Job Role: PySpark Developer
Experience: 5-10 years
CTC Range: 20-28 LPA
Work Type: Hybrid only

JD:
• Must be strong in advanced SQL (e.g., joins and aggregations; see the sketch below)
• Should have good experience in PySpark (at least 4 years)
• Good to have knowledge of AWS services
• Experience across the data lifecycle
• Design & develop ETL pipelines using PySpark on the AWS framework

If interested, kindly APPLY for an IMMEDIATE response.

Thanks & Regards,
Sathya K
GSN Consulting
Mob: 8939666794
Mail ID: sathya@gsnhr.net; Web: https://g.co/kgs/UAsF9W
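A minimal sketch of the "advanced SQL" skills the JD names (joins and aggregations), expressed in PySpark: a join plus a window-function aggregation. Table and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("advanced_sql").getOrCreate()

orders = spark.read.table("sales.orders")
customers = spark.read.table("sales.customers")

# Rank each customer's orders newest-first.
w = Window.partitionBy("customer_id").orderBy(F.col("order_ts").desc())

latest_by_region = (
    orders.join(customers, "customer_id")           # join on the shared key
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)                 # most recent order per customer
          .groupBy("region")                        # then aggregate by region
          .agg(F.sum("amount").alias("latest_order_amount"))
)

latest_by_region.show()
```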

Posted 3 weeks ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office


Role & responsibilities / Job Description:

Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have AWS Databricks; good to have PySpark, Snowflake, Talend.

Requirements - candidates must be experienced working in projects involving the areas below; other ideal qualifications include:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage and IAM concepts
• Experience working with S3 Data Lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action
• Optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit

Skills:
• Hands-on experience with Databricks, Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience in shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Only immediate joiners / candidates serving notice period. Interested candidates can apply.

Regards,
HR Manager

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Role: Celonis Data Engineer
Skills: Celonis, Celonis EMS, Data Engineer, SQL, PQL, ETL, OCPM
Notice Period: 30-45 days

Role & responsibilities:
• Hands-on experience with Celonis EMS (Execution Management System)
• Strong SQL skills for data extraction, transformation, and modeling
• Proficiency in PQL (Process Query Language) for custom process analytics
• Experience integrating Celonis with SAP, Oracle, Salesforce, or other ERP/CRM systems
• Knowledge of ETL, data pipelines, and APIs (REST/SOAP)

Process Mining & Analytical Skills:
• Understanding of business process modeling and process optimization techniques
• At least one OCPM project experience
• Ability to analyze event logs and identify bottlenecks, inefficiencies, and automation opportunities
• 6-10 years of experience in the IT industry with Data Architecture / Business Process, of which 3-4 years in process mining, data analytics, or business intelligence
• Celonis certification (e.g., Celonis Data Engineer, Business Analyst, or Solution Consultant) is a plus
• OCPM experience is a plus

Posted 3 weeks ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Bengaluru

Hybrid


Job Description: Java/Big Data/SQL/Architect
• Good understanding of Java/J2EE-based scalable application development
• Good understanding of Data Engineering, with hands-on experience in data transfer and data pipeline development
• Exposure to building enterprise products/tools that improve developer productivity
• Passionate about Gen AI / impact creation, with hands-on experience

Posted 3 weeks ago

Apply

13.0 - 15.0 years

37 - 40 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office


Role & responsibilities

REQUIREMENTS:
• Total experience: 13+ years
• Proficient in architecting, designing, and implementing data platforms and data applications
• Strong experience in AWS Glue and Azure Data Factory (a Glue job sketch follows this listing)
• Hands-on experience with Databricks
• Experience working with Big Data applications and distributed processing systems
• Working experience building and maintaining ETL/ELT pipelines using modern data engineering tools and frameworks
• Lead the architecture and implementation of data lakes, data warehouses, and real-time streaming solutions
• Collaborate with stakeholders to understand business requirements and translate them into technical solutions
• Participate and contribute to RFPs, workshops, PoCs, and technical solutioning discussions
• Ensure scalability, reliability, and performance of data platforms
• Strong communication skills and the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:
• Writing and reviewing great quality code
• Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements
• Mapping decisions with requirements and translating the same to developers
• Identifying different solutions and narrowing down the best option that meets the client's requirements
• Defining guidelines and benchmarks for NFR considerations during project implementation
• Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers
• Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed
• Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it
• Understanding and relating technology integration scenarios and applying these learnings in projects
• Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken
• Carrying out POCs to make sure that the suggested design/technologies meet the requirements
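A minimal sketch of an AWS Glue job of the kind referenced above, using the standard Glue PySpark job boilerplate; the catalog database, table and S3 bucket names are illustrative assumptions.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup: resolve arguments, build contexts, init the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark, write back to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)
df = dyf.toDF().dropDuplicates(["event_id"])
df.write.mode("overwrite").parquet("s3://curated-bucket/events/")

job.commit()
```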

Posted 3 weeks ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Job Title: Microsoft ETL Developer - Microsoft SSIS / Informatica (4 positions)

Onsite Locations: Dubai (UAE), Doha (Qatar), Riyadh (Saudi Arabia)
Onsite Monthly Salary: 10k AED - 15k AED
Offshore Locations: Pune / Hyderabad / Chennai / Bangalore / Mumbai
Offshore Annual Salary: 12 LPA - 20 LPA
Note: You need to travel onsite (UAE) on a needful basis.
Project Duration: 2 years initially
Desired Experience Level: 5-10 years
Qualification: B.Tech / M.Tech / MCA / M.Sc or equivalent

Experience Needed:
• Overall: 5 or more years of total IT experience
• Solid 3+ years of experience as an ETL Developer with Microsoft SSIS / Informatica

Job Responsibilities:
• Design and develop ETL data flows
• Design Microsoft ETL packages
• Able to code T-SQL
• Able to create orchestrations
• Able to design batch jobs / orchestration runs
• Familiarity with data models
• Able to develop MDM (Master Data Management) and design SCD-1/2/3 as per client requirements (an SCD Type 2 sketch follows this listing)

Experience:
• Experience as an ETL Developer with Microsoft SSIS
• Exposure and experience with Azure services, including Azure Data Factory
• Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView
• Collecting/gathering data from multiple source systems
• Loading the data using ETL
• Creating automated data pipelines
• Configuring Azure resources and services

Skills:
• Microsoft SSIS
• Informatica
• Azure Data Factory
• Spark
• SQL

Nice to have:
• Any onsite experience is an added advantage, but not mandatory
• Microsoft certifications are an added advantage

Business Vertical:
• Banking / Investment Banking
• Capital Markets
• Securities / Stock Market Trading
• Bonds / Forex Trading
• Credit Risk
• Payment Cards Industry (VISA / MasterCard / Amex)

Job Code: ETL_DEVP_0525
No. of positions: 4
Email: spectrumconsulting1977@gmail.com

If you are interested, please email your CV as an attachment with job reference code [ ETL_DEVP_0525 ] as the subject.
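A minimal sketch of the SCD Type 2 pattern this role asks for, driving T-SQL from Python via pyodbc; the dimension/staging table layout (row_hash change detection, valid_from/valid_to/is_current tracking) and the connection string are illustrative assumptions.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dw-host;DATABASE=dw;Trusted_Connection=yes"
)

SCD2_SQL = """
-- Step 1: close out current rows whose attributes changed in staging.
UPDATE d
SET d.valid_to = SYSDATETIME(), d.is_current = 0
FROM dim_customer d
JOIN stg_customer s ON s.customer_id = d.customer_id
WHERE d.is_current = 1 AND s.row_hash <> d.row_hash;

-- Step 2: insert a fresh current row for new and changed customers
-- (changed customers no longer have an is_current = 1 row after step 1).
INSERT INTO dim_customer (customer_id, name, row_hash, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.row_hash, SYSDATETIME(), NULL, 1
FROM stg_customer s
LEFT JOIN dim_customer d
    ON d.customer_id = s.customer_id AND d.is_current = 1
WHERE d.customer_id IS NULL;
"""

cur = conn.cursor()
cur.execute(SCD2_SQL)
conn.commit()
conn.close()
```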

Posted 3 weeks ago

Apply

5 - 8 years

22 - 30 Lacs

Pune, Chennai

Work from Office


Experience:
• Minimum of 5 years of experience in data engineering, with a strong focus on data pipeline development
• At least 2 years of experience leading teams or projects in the healthcare, life sciences, or related domains
• Proficiency in Python, with experience in data manipulation libraries
• Hands-on experience with AWS Glue, AWS Lambda, S3, Redshift, and other relevant AWS data services
• Familiarity with data integration tools, ETL (Extract, Transform, Load) frameworks, and data warehousing solutions
• Proven experience working in an onsite-offshore model, managing distributed teams, and coordinating development across multiple time zones

Posted 1 month ago

Apply

6 - 11 years

18 - 33 Lacs

Pune, Bengaluru

Work from Office


Urgent hiring for AWS Data Engineer

Experience: 6-18 years
Location: Pune / Bangalore
No. of positions: 9
Notice Period: immediate joiners

Role & responsibilities:
• Requires 5 to 10 years of experience in data engineering on the AWS platform
• Proficiency in Spark/PySpark/Python/SQL is essential
• Familiarity with AWS data stores including S3, RDS, DynamoDB, and AWS Data Lake, having utilized these technologies in previous projects
• Knowledge of AWS services like Redshift, Kinesis Streaming, Glue, Iceberg, Lambda, Athena, S3, EC2, SQS, and SNS
• Understanding of monitoring and observability toolsets like CloudWatch and Tivoli Netcool
• Basic understanding of AWS networking components: VPC, SG, Subnets, Load Balancers
• Collaboration with cross-functional teams to gather technical requirements and deliver high-quality ETL solutions
• Strong AWS development experience for data ETL, pipeline, integration, and automation work
• Deep understanding of the Data & Analytics solution development lifecycle
• Proficient in CI/CD and Jenkins; capable of writing testing scripts and automating processes
• Experience with IaC (Terraform or CloudFormation); basic knowledge of containers
• Familiarity with Bitbucket/Git and experience working in an agile/scrum team
• Experience in the Private Bank/Wealth Management domain

Posted 1 month ago

Apply

5 - 10 years

10 - 20 Lacs

Bengaluru

Work from Office


Job Title: Senior Data Engineer
Location: Bengaluru, India
Experience: 5-10 years
Notice Period: Immediate

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines for efficient data processing
• Build and optimize data storage solutions, ensuring high performance and reliability
• Implement ETL processes to extract, transform, and load data from various sources
• Work closely with data analysts and scientists to support their data needs
• Optimize database structures and ensure data integrity
• Develop and manage cloud-based data architectures (AWS, Azure, or Google Cloud)
• Ensure compliance with data governance and security standards
• Monitor and troubleshoot data workflows to maintain system efficiency

Required Skills & Qualifications:
• Strong proficiency in SQL, Python, and R for data processing
• Experience with big data technologies like Hadoop, Spark, and Kafka
• Hands-on expertise in ETL tools and data warehousing solutions
• Deep understanding of database management systems (MySQL, PostgreSQL, MongoDB, etc.)
• Familiarity with cloud platforms such as AWS, Azure, or Google Cloud
• Strong problem-solving and communication skills to collaborate with cross-functional teams

Posted 1 month ago

Apply

5 - 10 years

20 - 30 Lacs

Hyderabad

Hybrid


Experience: 5 to 10 years
Location: Hyderabad
Notice Period: Immediate to 30 days

Skills Required:
• 5+ years of experience as a Data Engineer or in a similar role working with large data sets and ELT/ETL processes
• 7+ years of industry experience in software development
• Knowledge and practical use of a wide variety of RDBMS technologies such as MySQL, Postgres, SQL Server or Oracle
• Use of cloud-based data warehouse technologies including Snowflake and AWS Redshift
• Strong SQL experience with an emphasis on analytic queries and performance
• Experience with various NoSQL technologies such as MongoDB or Elasticsearch
• Familiarity with either native database or external change-data-capture technologies
• Practical use of various data formats such as CSV, XML, JSON, and Parquet
• Use of data flow and transformation tools such as Apache NiFi or Talend
• Implementation of ELT processes in languages such as Java, Python or NodeJS
• Use of large, shared data stores such as Amazon S3 or Hadoop File System
• Thorough and practical use of various data warehouse schemas (Snowflake, Star)

If interested, please share your updated resume to arampally@jaggaer.com with the details below:
• Total years of experience
• Years of experience as a Data Engineer
• Years of experience in MySQL
• Years of experience in Snowflake / AWS Redshift
• Current CTC
• Expected CTC
• Notice period

Posted 1 month ago

Apply

6 - 10 years

11 - 21 Lacs

Bengaluru

Hybrid


RESPONSIBILITIES:
• Choosing the right technologies for our use cases; deploying and operating them
• Setting up data stores for structured, semi-structured and unstructured data
• Securing data at rest via encryption
• Implementing tooling to securely access multiple data sources
• Implementing solutions to run real-time analytics
• Using container technologies

Required Experience & Skills:
• Experience in one of the following: Elasticsearch, Cassandra, Hadoop, MongoDB
• Experience in Spark and Presto/Trino
• Experience with microservice-based architectures
• Experience with Kubernetes
• Experience with Unix/Linux environments is a plus
• Experience with Agile/Scrum development methodologies is a plus
• Cloud knowledge is a big plus (AWS/GCP, Kubernetes/Docker)
• Be nice, respectful, and able to work in a team
• Willingness to learn

Posted 1 month ago

Apply

6 - 10 years

10 - 20 Lacs

Hyderabad

Work from Office


We're looking for a Data Engineer to join our team. We need someone who's great at building data pipelines and understands how data works. You'll be using tools like DBT and Snowflake a lot. The most important thing for us is that you've worked with all sorts of data sources, not just files: different cloud systems, other companies' databases, and various online tools.

What you'll do:
• Build and manage how data flows into our system, using DBT for transformation and Snowflake for storage (sketched below)
• Design how our data is organized so it's easy to use for reports and analysis
• Fix any data problems that come up
• Connect to and get data from many different places, such as:
  - Cloud apps (e.g., Salesforce, marketing tools)
  - Various databases (SQL Server, Oracle, etc.)
  - Streaming data
  - Different file types (CSV, JSON, etc.)
  - Other business systems
• Help us improve our data setup

What you need:
• Experience as a Data Engineer
• Strong skills with DBT (Data Build Tool)
• Solid experience with Snowflake
• Must have experience working with many different types of data sources, especially cloud systems and other company databases, not just files
• Good at data modeling (organizing data)
• Comfortable with SQL
• Good at solving problems
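A minimal sketch of driving the DBT-on-Snowflake workflow described above from Python, assuming dbt-core 1.5+ (which exposes the programmatic dbtRunner) and an existing dbt project with a configured Snowflake profile; the model selector is an illustrative placeholder.

```python
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Equivalent to `dbt run --select staging_orders` on the command line;
# dbt compiles the model's SQL and executes it inside Snowflake.
result = runner.invoke(["run", "--select", "staging_orders"])

if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")
```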

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
