4.0 - 8.0 years
8 - 11 Lacs
Bengaluru
Hybrid
Key Responsibilities: Design and implement SnapLogic-based integrations for enterprise applications. Develop and optimize data pipelines for real-time and batch processing using Kafka, KSQL, and JSON/XML. Integrate data across platforms including Salesforce, Marketo, SAP, Snowflake, Oracle, and SQL Server. Collaborate with cross-functional teams to gather requirements and deliver integration solutions. Implement CI/CD pipelines for seamless code migration and deployment. Provide production support, troubleshoot issues, and ensure SLAs are met. Monitor system performance using tools like Splunk and DataDog. Mentor junior team members and conduct training sessions on SnapLogic be...
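The posting above pairs Kafka with JSON payloads for real-time pipelines. As a rough illustration of that pattern (not the employer's actual implementation), a minimal Python consumer using the kafka-python library might look like the following; the topic name, broker address, and record fields are assumptions.

```python
# Minimal sketch, assuming a local broker and a hypothetical "orders" topic
# carrying JSON payloads; not tied to any specific SnapLogic pipeline.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # hypothetical topic name
    bootstrap_servers="localhost:9092",    # assumed broker address
    group_id="integration-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A downstream integration (SnapLogic, a custom job, etc.) would route
    # this record to targets such as Salesforce or Snowflake.
    print(event.get("order_id"), event.get("status"))
```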
Posted 2 months ago
8.0 - 11.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Mentor and guide engineering teams in all aspects of the SDLC. Lead technical design sessions and translate ideas into robust technical architecture. Develop and maintain scalable, high-performance data pipelines and streaming systems. Use Python, SQL, Scala, NodeJS, and work with relational databases, NoSQL stores, and Kafka. Refactor and optimize legacy codebases for improved performance and scalability. Collaborate with product and engineering teams to deliver innovative solutions in an agile environment. Support technical support teams with ad-hoc data queries and operational troubleshooting. Ensure data pipelines and APIs meet enterprise-grade performance and usability s...
Posted 2 months ago
3.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable data pipelines and streaming systems. Work with Python, SQL, Scala, NodeJS, and various data stores (relational, NoSQL, Kafka). Analyze healthcare and marketing datasets using SQL. Refactor and optimize existing codebases for performance improvements. Collaborate closely within an agile team to deliver creative technical solutions. Support ad-hoc data requests and assist technical support teams as needed. Ensure high performance of data ingestion, transformation, and API consumption layers. Follow best practices for coding, testing, CI/CD, and cloud deployments.
Posted 2 months ago
5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Please find the JD below: Essential Skills: Experience: 6 to 8 years. 1. Technical Expertise: Proficiency in AWS services such as Amazon S3, Redshift, EMR, Glue, Lambda, and Kinesis. Strong skills in SQL and experience with scripting languages like Python or Java. 2. Data Engineering Experience: Hands-on experience in building and maintaining data pipelines, data modeling, and working with big data technologies. 3. Problem-Solving Skills: Ability to analyze complex data issues and develop effective solutions to optimize data processing and storage. 4. Communication and Collaboration: Strong interpersonal skills to work effectively with cross-functional teams and communicate technical concepts t...
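Since the role lists Lambda and Kinesis among the required AWS services, here is a hedged sketch of how a Lambda handler commonly decodes Kinesis records and lands them in S3; the bucket name and record fields are hypothetical and not taken from the posting.

```python
# Hedged sketch: an AWS Lambda handler that decodes records from a Kinesis
# stream and writes them to S3. Bucket name and payload fields are assumptions.
import base64
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        # Kinesis delivers the payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)
        # Illustrative only: land each record as a JSON object in S3.
        s3.put_object(
            Bucket="example-raw-data-bucket",   # hypothetical bucket
            Key=f"events/{record['kinesis']['sequenceNumber']}.json",
            Body=json.dumps(item).encode("utf-8"),
        )
    return {"processed": len(records)}
```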
Posted 2 months ago
5.0 - 10.0 years
10 - 12 Lacs
Pune
Work from Office
We are seeking a skilled Data Engineer with hands-on experience in building, optimizing, and maintaining large-scale data pipelines and cloud-based data solutions. The ideal candidate will have strong expertise in AWS services, DBT, and Power BI, with a deep understanding of data modeling, transformation, and analytics for manufacturing data systems. Roles and Responsibilities: Design, develop, and maintain ETL/ELT pipelines for data ingestion, transformation, and integration. Work extensively with AWS Glue, Athena, and S3 to manage and optimize data workflows. Develop and maintain SQL transformation code in DBT (Data Build Tool). Ensure data quality, reliability, and performance optimi...
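To ground the Athena-on-S3 part of this stack, a hedged boto3 sketch follows; the database, table, and results location are illustrative assumptions, not details from the posting.

```python
# Hedged sketch: running an Athena query over S3 data with boto3 and polling
# until it finishes. Database, table, and output location are assumptions.
import time
import boto3

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="SELECT plant_id, COUNT(*) AS readings FROM sensor_data GROUP BY plant_id",
    QueryExecutionContext={"Database": "manufacturing_raw"},            # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = run["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
print(query_id, state)
```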
Posted 2 months ago
6.0 - 10.0 years
20 - 25 Lacs
Pune, Bengaluru
Hybrid
Candidates with less than 6 years of experience, please do not apply for this role. Data Engineer. Location: Bangalore & Pune. Interviews: L1 virtual; L2 face to face. Experience: 6+ years. Job mode: C2H. Immediate joiners only. Must have experience in PySpark, SQL, ETL, data pipelines, AWS, and Python. JD: Candidate should have 6+ years of experience in Data Engineering. Must have experience in PySpark, SQL, ETL, data pipelines, AWS, and Python. Nice to have: Life science domain experience (Pharma), Dremio, Redshift, NiFi ingestion, BI tools (Tableau/Power BI). Designing, creating, testing, and maintaining the complete data management & processing systems. Working closely with the stakeholders & solution architect. Ensuring arch...
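As a rough sketch of the PySpark-on-AWS pipeline work this posting asks for (the paths, column names, and filter condition below are assumptions, not details from the role):

```python
# Minimal PySpark batch ETL sketch under assumed inputs: read Parquet from S3,
# keep completed records, aggregate, and write to a curated zone.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

raw = spark.read.parquet("s3://example-raw-zone/trials/")        # hypothetical path
curated = (
    raw.filter(F.col("status") == "COMPLETED")                    # assumed column/value
       .groupBy("study_id")
       .agg(F.count("*").alias("record_count"))
)
curated.write.mode("overwrite").parquet("s3://example-curated-zone/trials_summary/")
spark.stop()
```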
Posted 2 months ago
4.0 - 7.0 years
15 - 27 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 15 to 25 LPA. Experience: 4 to 7 years. Location: Gurgaon/Pune/Bengaluru. Notice: Immediate to 30 days. Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units. As a Dat...
Posted 2 months ago
5.0 - 9.0 years
7 - 12 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
KPI Partners is seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. As a Sr Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines and architectures. You will work closely with data scientists, analysts, and other stakeholders to ensure that our data solutions meet business requirements and drive strategic decision-making. Key Responsibilities: - Design, build, and maintain efficient and reliable data pipelines for processing large volumes of data. - Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications. - Ensure data quality and integrity by...
Posted 2 months ago
6.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Hybrid
Urgent Hiring: AWS Data Engineer, Senior Data Engineers & Lead Data Engineers. Apply now: send your resume to heena.ruchwani@gspann.com. Location: Bangalore (6+ years experience). Company: GSPANN Technologies, Inc. GSPANN Technologies is seeking talented professionals with 4+ years of experience to join our team in Bangalore. We are looking for immediate joiners who are passionate about data engineering and eager to take on exciting challenges. Key Skills & Experience: 6+ years of hands-on experience with AWS Data Services (Glue, Redshift, S3, Lambda, EMR, Athena, etc.); strong expertise in Big Data technologies (Spark, Hadoop, Kafka); proficiency in SQL, Python, and Scala; hands-on experience wit...
Posted 2 months ago
7.0 - 12.0 years
7 - 12 Lacs
Hyderabad
Hybrid
Team Manager / Technical Documentation / Individual Contributor. Location: Hyderabad (Nanakramguda). Work Mode: Hybrid (2 days in office). Shift Timings: 2 PM IST to 11 PM IST (Day Shift). Experience Required: 5 to 8 years. About the Role: We are looking for an experienced Technical Documentation Specialist (Individual Contributor) to join our Operations team in Hyderabad. The role involves creating and managing technical and business documentation related to data pipelines, data flow processes, and business requirements documents (BRDs). The ideal candidate should have a strong background in technical writing, a clear understanding of data pipelines and ETL processes, and the ability to convert complex data insi...
Posted 2 months ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, and projects. Develop, implement, and deploy ETL solutions. Preprocess and analyze large datasets to identify patterns, trends, and insights. Evaluate, validate, and optimize data models to ensure efficiency and generalizability. Monitor and maintain the performance of data pipelines and data models in production environments, identifying opportunities for improvement and updating as needed. Document development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.
Posted 2 months ago
4.0 - 10.0 years
11 - 16 Lacs
Pune
Work from Office
Job Description: Senior Consultant Delivery (Data Engineer). Architect and implement scalable, secure, and cost-efficient data pipelines leveraging GCP data products (BigQuery, Dataflow, Composer, Dataproc, Cloud Storage, etc.). Translate business requirements into technical designs for data marts, data models, and analytics solutions. Oversee development standards, coding guidelines, and CI/CD practices to ensure code quality and maintainability. Drive adoption of data governance, lineage, and security policies across the engineering team. Collaborate closely with Product Owners, Data Architects, and stakeholders to align deliverables with business goals. Stay updated with the latest GCP adva...
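For context on the BigQuery piece of this GCP stack, a hedged sketch using the google-cloud-bigquery client is shown below; the project, dataset, and table names are assumptions rather than details from the posting.

```python
# Hedged sketch: querying BigQuery with the google-cloud-bigquery client.
# Project, dataset, and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")   # assumed project id

sql = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `example-project.sales_mart.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

# Run the query and iterate the result rows.
for row in client.query(sql).result():
    print(row.order_date, row.daily_revenue)
```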
Posted 2 months ago
4.0 - 9.0 years
12 - 20 Lacs
Mumbai
Work from Office
Role & responsibilities: Must have experience in Data Lake, Databricks, and PySpark. Preferred candidate profile
Posted 2 months ago
6.0 - 10.0 years
30 - 40 Lacs
Hyderabad, Gurugram, Bengaluru
Hybrid
Job Title: Data & AI-backed QA Engineer. We are seeking a highly skilled Automation, Data & AI-backed QA Engineer with proven expertise in automation testing, data pipeline validation, and AI-driven quality assurance practices. The ideal candidate will bring deep knowledge of test automation frameworks, strong data testing expertise, and innovative approaches leveraging AI/ML to enhance quality, coverage, and defect prediction. Key Responsibilities: Design, develop, and maintain automation test frameworks for web, API, and data-centric applications. Perform data pipeline testing (ETL/ELT validation, schema verification, data integrity, reconciliation, and transformations across multiple sou...
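As an illustration of the reconciliation checks mentioned under data pipeline testing, a hedged pytest sketch follows; it uses in-memory SQLite databases as stand-ins for the real source and target systems, and the table and column names are assumptions.

```python
# Hedged sketch: pytest-style reconciliation between a source and a target
# table (row counts and amount totals). SQLite stands in for real systems.
import sqlite3
import pytest

@pytest.fixture
def connections():
    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
    yield src, tgt
    src.close()
    tgt.close()

def test_row_counts_match(connections):
    src, tgt = connections
    src_count = src.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    tgt_count = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert src_count == tgt_count

def test_amount_totals_reconcile(connections):
    src, tgt = connections
    src_total = src.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    tgt_total = tgt.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    assert abs(src_total - tgt_total) < 0.01
```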
Posted 2 months ago
3.0 - 7.0 years
5 - 14 Lacs
Bengaluru
Hybrid
Software Engineer (3–6 years): Scala programming, Apache Spark, SQL, backend development, functional programming, performance tuning. Experience in Tableau/Zeppelin is preferred. C2H via TE Infotech (Nutanix), convertible to permanent, Bengaluru. Apply: ssankala@toppersedge.com
Posted 2 months ago
5.0 - 8.0 years
15 - 25 Lacs
Pune, Bengaluru
Hybrid
Overview of 66degrees: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values guide us not only in achieving our goals as a company but also in how we support our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professio...
Posted 2 months ago
5.0 - 7.0 years
25 - 33 Lacs
Bengaluru
Work from Office
Serko is a cutting-edge tech platform in global business travel & expense technology. When you join Serko, you become part of a team of passionate travellers and technologists bringing people together, using the world's leading business travel marketplace. We are proud to be an equal opportunity employer; we embrace the richness of diversity, showing up authentically to create a positive impact. There's an exciting road ahead of us, where travel needs real, impactful change. With offices in New Zealand, Australia, North America, and China, we are thrilled to be expanding our global footprint, landing our new hub in Bengaluru, India. With a rapid growth plan in place for India, we're hiring p...
Posted 2 months ago
5.0 - 7.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Serko is a cutting-edge tech platform in global business travel & expense technology. When you join Serko, you become part of a team of passionate travellers and technologists bringing people together, using the world's leading business travel marketplace. We are proud to be an equal opportunity employer; we embrace the richness of diversity, showing up authentically to create a positive impact. There's an exciting road ahead of us, where travel needs real, impactful change. With offices in New Zealand, Australia, North America, and China, we are thrilled to be expanding our global footprint, landing our new hub in Bengaluru, India. With a rapid growth plan in place for India, we're hiring p...
Posted 2 months ago
14.0 - 18.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Role & responsibilities: Architecture/Design Decisions & Technology Evaluations - Industry Trends, Research, POC, Tools - Test Plan Reviews - Production Support Transition Reviews - Contribute to our architecture community by attending and participating in Architecture & Design Roundtables, Architecture Review Boards, and Solution Architecture Guild meetings. Minimum Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 8+ years of experience in data architecture and engineering, with a strong focus on financial services. Proven experience with lakehouse platforms and cloud-native data ecosystems. Strong understanding of wealth management ...
Posted 2 months ago
8.0 - 13.0 years
35 - 45 Lacs
Bengaluru
Hybrid
Purpose: As a Senior Data Engineer at LogixHealth, you will work with a globally distributed team of engineers to design and build cutting-edge solutions that directly improve the healthcare industry. You'll contribute to our fast-paced, collaborative environment and bring your expertise to continue delivering innovative technology solutions, while mentoring others. Duties and Responsibilities: 1. Lead and contribute to the creation of a self-service data platform for reporting and analytics. 2. Design and build data solutions using Databricks, SQL, Python, Spark, and Delta Lake in the Azure ecosystem (Blob Storage, Data Factory, Event Hubs). 3. Ensure best practices for ETL/ELT processes (data...
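To illustrate the Databricks/Delta Lake side of this role, here is a hedged upsert (MERGE) sketch; it assumes a Delta-enabled Spark session such as a Databricks cluster provides, and the storage path, columns, and join key are invented for the example.

```python
# Hedged sketch: upsert (MERGE) new records into a Delta Lake table,
# assuming a Delta-enabled Spark session. Path and columns are assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("example-delta-upsert").getOrCreate()

# Incoming batch of changed records (made-up data).
updates = spark.createDataFrame(
    [(1, "processed"), (2, "billed")],
    ["claim_id", "status"],
)

target = DeltaTable.forPath(
    spark, "abfss://lake@exampleaccount.dfs.core.windows.net/claims"  # hypothetical path
)

(
    target.alias("t")
    .merge(updates.alias("u"), "t.claim_id = u.claim_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```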
Posted 2 months ago
5.0 - 9.0 years
20 - 30 Lacs
Pune
Hybrid
- Design, develop & maintain data pipelines using GCP services: Dataflow, BigQuery, and Pub/Sub - Provision infrastructure on GCP using IaC with Terraform - Implement & manage data warehouse solutions - Monitor and resolve issues in data workflows Required candidate profile: - Expertise in GCP, Apache Beam, Dataflow, & BigQuery - Proficient in Python, SQL, and PySpark - Experience with Cloud Composer for orchestration - Solid understanding of DWH, ETL pipelines, and real-time data streaming
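Since this role (and the identical postings for Bengaluru and Hyderabad below) centers on Apache Beam pipelines deployed to Dataflow, a hedged local sketch follows; it runs on Beam's default DirectRunner with made-up element values, and submitting it to Dataflow would additionally require pipeline options not shown here.

```python
# Hedged sketch: a minimal batch Apache Beam pipeline, runnable locally with
# the DirectRunner. Element values and the output prefix are assumptions.
import apache_beam as beam

events = [
    {"user": "a", "amount": 10},
    {"user": "b", "amount": 25},
    {"user": "a", "amount": 5},
]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateEvents" >> beam.Create(events)
        | "KeyByUser" >> beam.Map(lambda e: (e["user"], e["amount"]))
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("per_user_totals")   # assumed output prefix
    )
```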
Posted 2 months ago
5.0 - 9.0 years
20 - 30 Lacs
Bengaluru
Hybrid
- Design, develop & maintain data pipelines using GCP services: Dataflow, BigQuery, and Pub/Sub - Provision infrastructure on GCP using IaC with Terraform - Implement & manage data warehouse solutions - Monitor and resolve issues in data workflows Required candidate profile: - Expertise in GCP, Apache Beam, Dataflow, & BigQuery - Proficient in Python, SQL, and PySpark - Experience with Cloud Composer for orchestration - Solid understanding of DWH, ETL pipelines, and real-time data streaming
Posted 2 months ago
5.0 - 9.0 years
20 - 30 Lacs
Hyderabad
Hybrid
- Design, develop & maintain data pipelines using GCP services: Dataflow, BigQuery, and Pub/Sub - Provision infrastructure on GCP using IaC with Terraform - Implement & manage data warehouse solutions - Monitor and resolve issues in data workflows Required candidate profile: - Expertise in GCP, Apache Beam, Dataflow, & BigQuery - Proficient in Python, SQL, and PySpark - Experience with Cloud Composer for orchestration - Solid understanding of DWH, ETL pipelines, and real-time data streaming
Posted 2 months ago
8.0 - 13.0 years
35 - 50 Lacs
Chennai
Work from Office
Skill: PySpark. Experience: 6 to 13 years. Location: Bhubaneswar. Job description: Develop and maintain scalable data pipelines using Python and PySpark. Collaborate with data engineers and data scientists to understand and fulfill data processing needs. Optimize and troubleshoot existing PySpark applications for performance improvements. Write clean, efficient, and well-documented code following best practices. Participate in design and code reviews. Develop and implement ETL processes to extract, transform, and load data. Ensure data integrity and quality throughout the data lifecycle. Stay current with the latest industry trends and technologies in big data and cloud computing. Qualificatio...
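Because this posting highlights optimizing and troubleshooting existing PySpark applications, here is a hedged tuning-oriented sketch; the input path, columns, and output locations are assumptions.

```python
# Hedged sketch: inspecting and tuning an existing PySpark job by caching a
# reused DataFrame and reviewing the physical plan. Paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-tuning").getOrCreate()

df = spark.read.parquet("/data/events")                       # hypothetical input path

# Cache a DataFrame that feeds several downstream aggregations.
filtered = df.filter(F.col("event_type") == "click").cache()  # assumed column/value

daily = filtered.groupBy("event_date").count()
by_user = filtered.groupBy("user_id").count()

# Review the physical plan to spot full scans or unnecessary shuffles.
daily.explain()

daily.write.mode("overwrite").parquet("/data/agg/daily_clicks")
by_user.write.mode("overwrite").parquet("/data/agg/clicks_by_user")
spark.stop()
```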
Posted 2 months ago
7.0 - 12.0 years
35 - 50 Lacs
Chennai
Work from Office
Skill: PySpark. Experience: 6 to 13 years. Location: Bhubaneswar. Job description: Develop and maintain scalable data pipelines using Python and PySpark. Collaborate with data engineers and data scientists to understand and fulfill data processing needs. Optimize and troubleshoot existing PySpark applications for performance improvements. Write clean, efficient, and well-documented code following best practices. Participate in design and code reviews. Develop and implement ETL processes to extract, transform, and load data. Ensure data integrity and quality throughout the data lifecycle. Stay current with the latest industry trends and technologies in big data and cloud computing. Qualificatio...
Posted 2 months ago