271 Data Engineer Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

30 - 37 Lacs

Chennai

Remote

Role & responsibilities:
- Expert in Azure Data Factory
- Proven experience in data modelling for manufacturing data sources
- Proficient in SQL design
- 5+ years of experience in data engineering roles
- Proven experience in Power BI: dashboarding, DAX calculations, star schema development, and semantic model building
- Manufacturing domain knowledge
- Experience with GE PPA as a data source is desirable
- API development knowledge
- Python skills

Location: nearshore or offshore, with 3 to 5 hours of overlap with the CST time zone

Posted 2 weeks ago

Apply

10.0 - 20.0 years

30 - 40 Lacs

Hyderabad

Hybrid

Job Title: Technical Product Manager, Enterprise Applications

Summary:
We are looking for an experienced Technical Product Manager (TPM) to lead the development of enterprise-grade software products. This role bridges deep technical knowledge with strong product management expertise. You will work closely with engineering, architecture, and cross-functional teams to define, prioritize, and deliver high-impact features and platform capabilities. The ideal candidate can engage engineers on system design, translate complex technical requirements into actionable plans, and communicate product value effectively to internal and external stakeholders.

Key Responsibilities:

Product Strategy & Technical Planning
- Own the product roadmap and delivery of enterprise application features with a strong technical foundation.
- Partner with engineering and architecture teams to translate product goals into scalable, performant, and secure solutions.
- Evaluate technical feasibility and actively participate in design and architecture discussions.

Requirements Management & Feature Definition
- Gather and translate complex functional and technical requirements into clear user stories and acceptance criteria.
- Own the product backlog and ensure technical integrity in prioritization and trade-off decisions.
- Define success metrics and track feature impact on platform performance, adoption, and stability.

Stakeholder Communication & Alignment
- Act as the point of contact between engineering, data, product design, and customer-facing teams.
- Drive alignment and clarity around scope, priorities, and deliverables across cross-functional teams.
- Communicate technical roadmaps and rationale effectively to both technical and business stakeholders.

Execution & Delivery Oversight
- Lead sprint planning, backlog grooming, and release coordination with agile teams.
- Proactively identify delivery risks, technical dependencies, and blockers, and work to resolve them.
- Monitor and optimize delivery velocity, system health, and platform scalability with a hands-on approach.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 6-10 years of experience in product management, with at least 3 years as a Technical Product Manager.
- Experience delivering enterprise-grade software platforms, APIs, or data-intensive applications.
- Strong technical acumen: able to engage in architecture, data modeling, system design, and API discussions.
- Hands-on experience with modern cloud platforms (AWS, GCP, or Azure), microservices, DevOps practices, and CI/CD pipelines.
- Proven ability to write detailed technical product specs, define clear roadmaps, and manage stakeholder expectations.

Preferred Qualifications:
- Background in software engineering or systems architecture.
- Experience working on AI/ML platforms, developer tools, or infrastructure products.
- Familiarity with observability, scalability, or performance optimization for enterprise systems.
- Proficiency with tools like Jira, Confluence, Swagger, Postman, and GitHub.
- Excellent communicator who can simplify the complex and align diverse teams toward a common goal.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Vadodara

Work from Office

We are seeking an experienced Senior Data Engineer with a minimum of 5 years of hands-on experience to join our dynamic data team. The ideal candidate will have strong expertise in Microsoft Fabric, demonstrate readiness to adopt cutting-edge tools like SAP Data Sphere, and possess foundational AI knowledge to guide our data engineering initiatives.

Key Roles and Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes using Microsoft Fabric tools such as Azure Data Factory (ADF) and Power BI.
- Work on large-scale data processing and analytics using PySpark (see the sketch after this listing).
- Evaluate and implement new data engineering tools like SAP Data Sphere through training or self-learning.
- Support business intelligence, analytics, and AI/ML initiatives by building robust data architectures.
- Apply AI techniques to automate workflows and collaborate with data scientists on machine learning projects.
- Mentor junior data engineers and lead data-related projects across departments.
- Coordinate with business teams, vendors, and technology partners for smooth project delivery.
- Create dashboards and reports using tools like Power BI or Tableau, ensuring data accuracy and accessibility.
- Support self-service analytics across business units and maintain consistency in all visualizations.

Experience & Technical Skills:
- 5+ years of professional experience in data engineering with expertise in Microsoft Fabric components.
- Strong proficiency in PySpark for large-scale data processing and distributed computing (mandatory).
- Extensive experience with Azure Data Factory (ADF) for orchestrating complex data workflows (mandatory).
- Proficiency in SQL and Python for data processing and pipeline development.
- Strong understanding of cloud data platforms, preferably the Azure ecosystem.
- Experience in data modelling, data warehousing, and modern data architecture patterns.

Interested candidates can share their updated profiles at itcv@alembic.co.in.
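For illustration only, a minimal PySpark sketch of the kind of large-scale transformation work described above. The table paths, column names, and use of Delta format are hypothetical, and a Delta-enabled Spark environment is assumed:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: aggregate raw sales events into a daily summary table.
spark = SparkSession.builder.appName("daily-sales-summary").getOrCreate()

# Read raw events from a lake table (path is illustrative).
events = spark.read.format("delta").load("/lake/raw/sales_events")

daily_summary = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "product_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Write the aggregate back as a partitioned Delta table.
(daily_summary.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/lake/curated/daily_sales_summary"))
```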

Posted 2 weeks ago

Apply

6.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office

Job Description:

Job Role: Data Engineer
Years of Experience: 6+ years
Job Location: Pune
Work Model: Hybrid

Job Summary:
We are seeking a highly skilled Data Engineer with strong expertise in DBT, Java, Apache Airflow, and DAG (Directed Acyclic Graph) design to join our data platform team. You will be responsible for building robust data pipelines, designing and managing workflow DAGs, and ensuring scalable data transformations to support analytics and business intelligence.

Key Responsibilities:
- Design, implement, and optimize ETL/ELT pipelines using DBT for data modeling and transformation.
- Develop backend components and data processing logic using Java.
- Build and maintain DAGs in Apache Airflow for orchestration and automation of data workflows (see the DAG sketch after this listing).
- Ensure the reliability, scalability, and efficiency of data pipelines for ingestion, transformation, and storage.
- Work with cross-functional teams to understand data needs and deliver high-quality solutions.
- Troubleshoot and resolve data pipeline issues in production environments.
- Apply data quality and governance best practices, including validation, logging, and monitoring.
- Collaborate on CI/CD deployment pipelines for data infrastructure.

Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong experience with DBT for modular, testable, and version-controlled data transformation.
- Proficiency in Java, especially for building custom data connectors or processing frameworks.
- Deep understanding of Apache Airflow and the ability to design and manage complex DAGs.
- Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies.
- Exposure to cloud environments like AWS, GCP, or Azure.
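For illustration only, a minimal sketch of an Airflow DAG of the kind this role describes, orchestrating an ingestion step followed by dbt runs. It assumes a recent Airflow 2.x installation with the dbt CLI available; the task commands, DAG id, and project path are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical example: a daily DAG that runs dbt transformations after ingestion.
with DAG(
    dag_id="daily_dbt_transform",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="python ingest.py",    # placeholder ingestion step
    )
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt",   # assumes dbt CLI installed
    )
    test = BashOperator(
        task_id="run_dbt_tests",
        bash_command="dbt test --project-dir /opt/dbt",
    )

    # DAG edges: ingest -> transform -> test
    ingest >> transform >> test
```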

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Dear Candidate,

GyanSys is looking for an Azure Databricks/Data Engineer for our overseas customers' consulting projects based in the Americas/Europe/APAC region. Please apply for the job role, or share your CV directly with kiran.devaraj@gyansys.com / call 8867163603 to discuss the fitment in detail.

Designation: Sr/Lead/Principal Consultant (based on experience)
Experience: 5+ years relevant
Location: Bangalore, ITPL
Notice Period: Immediate or 30 days max

Job Description:
We are seeking a Data Engineer with 5-10 years of experience in Databricks, Python, and APIs. The primary responsibility of this role is to migrate on-premises big data Spark and Impala/Hive scripts to the Databricks environment (a brief sketch follows this listing). The ideal candidate will have a strong background in data migration projects and be proficient in transforming ETL pipelines to Databricks. The role requires excellent problem-solving skills and the ability to work independently on complex data migration tasks. Experience with big data technologies and cloud platforms (Azure) is essential. Join our team to lead the migration efforts and optimize our data infrastructure on Databricks.

- Excellent problem-solving skills and a passion for data accessibility.
- Effective communication and collaboration skills.
- Experience with Agile methodologies.

Kindly apply only if your profile fits the above prerequisites. Also, please share this job post with your acquaintances.
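For illustration only, a minimal sketch of the kind of script migration this role describes: an on-premises Hive/Impala aggregation re-expressed as PySpark that runs on Databricks. The table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is provided as `spark`; created here for portability.
spark = SparkSession.builder.appName("hive-to-databricks").getOrCreate()

# Original on-prem Hive/Impala query (illustrative):
#   SELECT region, SUM(revenue) FROM sales WHERE year = 2024 GROUP BY region;
# Equivalent DataFrame logic against a migrated table:
result = (
    spark.table("sales")                    # hypothetical managed table
    .where(F.col("year") == 2024)
    .groupBy("region")
    .agg(F.sum("revenue").alias("total_revenue"))
)

result.show()
```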

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad/Secunderabad

Hybrid

Job Objective:
We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, and SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you!

Required Qualifications:
- Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience.
- Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI, etc.).

Preferred Qualifications & Skills:

Must-Have Skills:
- Expertise in relational and multi-dimensional database architectures.
- Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint.
- Strong experience in Power BI, MDX, SSAS, SSIS, SSRS, Tabular, and DAX queries.
- Deep understanding of SQL Server Tabular Model and multidimensional database design.
- Excellent SQL-based data analysis skills.
- Strong hands-on experience with Azure Data Factory, Databricks, and PySpark/Python.

Nice-to-Have Skills:
- Exposure to AWS or GCP.
- Experience with lakehouse architecture, real-time streaming (Kafka/Event Hubs), and infrastructure as code (Terraform/ARM).
- Familiarity with Cognos, Qlik, Tableau, MDM, DQ, and data migration.
- MSBI, Power BI, or Azure certifications.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Mumbai Suburban, Thane, Navi Mumbai

Work from Office

We have an urgent opening for an Application Specialist role with one of our clients. You will support different versions of Capital Markets applications functionally and technically, and analyse, validate, and manage business requirements. Must be strong in SQL, C#, and JavaScript.

Required candidate profile:
- Experience: 3+ years
- Location: Powai
- Good in SQL, Java, C#, JavaScript, HTML, CSS
- Mumbai candidates preferred; 5-day week (Sun-Thurs) with 2 days off (Fri & Sat)

Share CV: snehal@peshr.com | Call: 9137306440

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data pipelines and data solutions on the Azure platform. Your primary focus will be on developing efficient data models, ETL processes, and data integration workflows to support the organization's data needs. You will collaborate with data architects, data scientists, and other stakeholders to understand requirements and translate them into technical solutions. Additionally, you will optimize data storage and retrieval for performance and cost efficiency.

In this role, you will also be involved in troubleshooting data issues, monitoring data pipelines, and ensuring data quality and integrity. You will stay current with Azure data services and best practices to continuously improve the data infrastructure. The ideal candidate for this position will have a strong background in data engineering, experience working with Azure data services such as Azure Data Factory, Azure Databricks, and Azure SQL Database, and proficiency in SQL, Python, or other programming languages used in data processing.

If you are a data professional with a passion for working with large datasets, building scalable data solutions, and driving data-driven decision-making, this role offers an exciting opportunity to contribute to the organization's data strategy and growth.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

Wipro Limited is a leading technology services and consulting company dedicated to creating innovative solutions that cater to the complex digital transformation needs of clients. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing future-ready, sustainable businesses. Our global presence spans over 65 countries with a workforce of more than 230,000 employees and business partners, committed to supporting our customers, colleagues, and communities in adapting to an ever-evolving world. For more information, please visit our website at www.wipro.com.

As a Data Engineer with a minimum of 7 years of experience, including at least 2 years of project delivery experience on the Dataiku platform, you will be responsible for configuring and optimizing Dataiku's architecture. This includes managing data connections, security settings, and workflow optimization to ensure seamless operations. Your expertise in Dataiku recipes, Designer nodes, API nodes, and Automation nodes will be instrumental in deploying custom workflows and scripts using Python.

Collaboration is key in this role, as you will work closely with data analysts, business stakeholders, and clients to gather requirements and translate them into effective solutions within the Dataiku environment. Your ability to independently navigate a fast-paced environment and apply strong analytical and problem-solving skills will be crucial in meeting project timelines and objectives. Additionally, familiarity with agile development methodologies and experience with Azure DevOps for CR/production deployment implementation are highly desirable.

Join us in reinventing the digital landscape at Wipro, where we encourage constant evolution and empower individuals to shape their professional growth. We welcome applications from individuals with disabilities to contribute to our diverse and inclusive workforce.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office

Experience: 5 to 10 years
Skills: Azure Databricks, Azure Data Factory, Python, Spark
Location: Pune, Chennai

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office

Skills: Python, AWS Glue, data lake, SQL, an orchestration tool (e.g., Airflow), and a data ingestion framework.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Hyderabad

Work from Office

5+ years of experience as a Data Engineer with a strong track record of designing and implementing complex data solutions. Expert in SQL for data manipulation, analysis, and optimization. Strong programming skills in Python for data engineering tasks.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Lead Data Engineer - What You Will Do:

As a PR3 Lead Data Engineer, you will be instrumental in driving our data strategy, ensuring data quality, and leading the technical execution of a small, impactful team. Your responsibilities will include:

Team Leadership:
- Establish the strategic vision for the evolution of our data products and technology solutions, then provide technical leadership and guidance for a small team of Data Engineers in executing the roadmap.
- Champion and enforce best practices for data quality, governance, and architecture within your team's work.
- Embody a product mindset over the team's data.
- Oversee the team's use of Agile methodologies (e.g., Scrum, Kanban), ensuring smooth and predictable delivery, with an overt focus on continuous improvement.

Data Expertise & Domain Knowledge:
- Actively seek out, propose, and implement cutting-edge approaches to data transfer, transformation, analytics, and data warehousing to drive innovation.
- Design and implement scalable, robust, and high-quality ETL processes to support growing business demand for information, delivering data as a reliable service that directly influences decision making.
- Develop a profound understanding and "feel" for the business meaning, lineage, and context of each data field within our domain.

Communication & Stakeholder Partnership:
- Collaborate with other engineering teams and business partners, proactively managing dependencies and holding them accountable for their contributions to ensure successful project delivery.
- Actively engage with data consumers to achieve a deep understanding of their specific data usage, pain points, and current gaps, then plan initiatives to implement improvements collaboratively.
- Clearly articulate project goals, technical strategies, progress, challenges, and business value to both technical and non-technical audiences. Produce clear, concise, and comprehensive documentation.

Your Qualifications:
At Vista, we value the experience and potential that individual team members add to our culture. Please don't hesitate to apply even if you don't meet the exact qualifications; we look forward to learning more about you!
- Bachelor's or Master's degree in computer science, data engineering, or a related field.
- 10+ years of professional experience, with at least 6 years of hands-on data engineering, specifically in e-commerce or direct-to-consumer, and 4 years of team leadership.
- Demonstrated experience in leading a team of data engineers, providing technical guidance, and coordinating project execution.
- Stakeholder management experience and excellent communication skills.
- Strong knowledge of SQL and data warehousing concepts is a must.
- Strong knowledge of data modeling concepts and hands-on experience designing complex multi-dimensional data models.
- Strong hands-on experience in designing and managing scalable ETL pipelines in cloud environments with large-volume datasets (both structured and unstructured data).
- Proficiency with cloud services in AWS (preferred), including S3, EMR, RDS, Step Functions, Fargate, Glue, etc.
- Critical hands-on experience with cloud-based data platforms (Snowflake strongly preferred).
- Data visualization experience with reporting and data tools (preferably Looker with LookML skills).
- Coding mastery in at least one modern programming language: Python (strongly preferred), Java, Golang, PySpark, etc.
- Strong knowledge of production standards such as versioning, CI/CD, data quality, documentation, automation, etc.
- Problem-solving and multi-tasking ability in a fast-paced, globally distributed environment.

Nice To Have:
- Experience with API development on enterprise platforms, with GraphQL APIs being a clear plus.
- Hands-on experience designing DBT data pipelines.
- Knowledge of finance, accounting, supply chain, logistics, operations, or procurement data is a plus.
- Experience managing work in Jira and writing documentation in Confluence.
- Proficiency in AWS account management, including IAM, infrastructure, and monitoring for health, security, and cost optimization.
- Experience with Gen AI/ML tools for enhancing data pipelines or automating analysis.

Why You'll Love Working Here:
There is a lot to love about working at Vista. We are an award-winning Remote-First company. We're an inclusive community. We're growing (which means you can too). And to help orient us all in the same direction, we have our Vista Behaviors, which exemplify the behavioral attributes that make us a culturally strong and high-performing team.

Our Team: Enterprise Business Solutions
Vista's Enterprise Business Solutions (EBS) domain is working to make our company one of the most data-driven organizations to support Finance, Supply Chain, and HR functions. The cross-functional team includes product owners, analysts, technologists, data engineers, and more, all focused on providing Vista with cutting-edge tools and data we can use to deliver jaw-dropping customer value. EBS team members are empowered to learn new skills, communicate openly, and be active problem-solvers.

Join our EBS domain as a Lead Data Engineer! This lead level within the organization will be responsible for the work of a small team of data engineers, focusing not only on implementations but also on operations and support. The Lead Data Engineer will implement best practices, data standards, and reporting tools, and will oversee and manage the work of other data engineers while also contributing individually. This role has a lot of opportunity to impact general ETL development and implementation of new solutions. We will look to the Lead Data Engineer to modernize data technology solutions in EBS, including the opportunity to work on modern warehousing, finance, and HR datasets and integration technologies. This role requires an in-depth understanding of cloud data integration tools and cloud data warehousing, with a strong and pronounced ability to lead and execute initiatives to tangible results.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

About Mindstix Software Labs:
Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specialized in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers, including Fortune 500 enterprises and Silicon Valley startups. Our work impacts a diverse set of industries: eCommerce, luxury retail, ISV and SaaS, consumer tech, and hospitality. A fast-moving open culture powered by curiosity and craftsmanship. A team committed to bold thinking and innovation at the very intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities:
Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that add to the bottom line. You appreciate technical work by hand and feel a sense of ownership. You require a keen eye for detail, work experience as a data analyst, and in-depth knowledge of widely used databases and technologies for data analysis. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Being motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Being a natural problem-solver and intellectually curious across a breadth of industries and topics.
- Being acquainted with different aspects of data management like data strategy, architecture, governance, data quality, integrity, and data integration.
- Being extremely well-versed in designing incremental and full data load techniques.

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain with DWH development.
- Must have experience with end-to-end data warehouse implementation on Azure or GCP.
- Must have SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, data modelling, data-driven pipelines, virtual warehousing, and MPP.
- Expertise in Databricks: Structured Streaming, lakehouse architecture, DLT, data modeling, VACUUM, Time Travel, security, monitoring, dashboards, DBSQL, and unit testing.
- Expertise in Snowflake: monitoring, RBACs, virtual warehousing, query performance tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to work with multicultural international customers.
- Experience in the Retail/Supply Chain/CPG/eComm/Health industry is a plus.

Who Fits Best?
- You are a data enthusiast and problem solver.
- You are a self-motivated and fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics, and have a keen eye for detail.
- You thrive in a customer-centric environment with the ability to actively listen, empathize, and collaborate with globally distributed teams.
- You are a team player who desires to mentor and inspire others to do their best.
- You love expressing ideas and articulating well with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location:
This position is primarily based at our Pune (India) headquarters, requiring all potential hires to work from this location. A modern workplace is deeply collaborative by nature, while also demanding a touch of flexibility. We embrace deep collaboration at our offices with reasonable flexi-timing and hybrid options for our seasoned team members.

Equal Opportunity Employer.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Software Engineer at Mastercard, you will play a crucial role in the data science and artificial intelligence initiatives that drive our digital transformation forward. Your expertise will guide complex projects from conception to execution, supporting our aggressive growth plans and contributing to the evolution of our data science and AI strategy.

Your responsibilities will include providing technical vision and leadership, engaging in prioritization discussions with product and business stakeholders, estimating and managing delivery tasks across the entire development lifecycle, automating software operations, and facilitating code and design decisions within your team. You will be responsible for reporting status, managing risks, driving service integration for enhanced customer experience, conducting demos and acceptance discussions, and ensuring a deep understanding of the technical architecture and dependency systems.

In addition, you will be expected to explore new tools and technologies, drive the adoption of technology standards and frameworks, mentor and guide team members, identify process improvements, and promote knowledge sharing within the Guild/Program to enhance productivity and reuse of best practices.

To excel in this role, you should hold a Bachelor's degree in computer science, software engineering, or a related field, with at least 8 years of experience in software engineering combined with exposure to data science and machine learning. Proficiency in programming languages such as Python, Java, or Scala, along with frameworks like Pandas and Spring Boot, is essential. Prior experience with MLOps, familiarity with neural networks and LLMs, and understanding of operating system internals are highly valued skills. Your ability to debug, troubleshoot, implement standard branching and CI/CD practices, collaborate effectively, and communicate with stakeholders will be crucial. Experience with cloud platforms like AWS, Azure, and Databricks is a plus.

If you are an experienced software engineer with a passion for data science and AI, and possess the skills and mindset to drive innovation and excellence in a fast-paced environment, we welcome you to join our dynamic team at Mastercard.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

8 - 16 Lacs

Bengaluru

Remote

Key role responsibilities:
- Develop and implement data pipelines and systems to connect and process data for analytics and business intelligence (BI) platforms.
- Document systems and source-to-target mappings to ensure transparency and a clear understanding of data flow.
- Re-engineer manual data flows to enable scalability, automation, and efficiency for repeatable use.
- Adhere to and contribute to best practice guidelines, continuously striving for optimization and improvement.
- Write clean, secure, and well-tested code, ensuring reliability, maintainability, and compliance with development standards.
- Monitor and operate the services and pipelines you build, proactively identifying and resolving production issues.
- Assess and prioritize feature requests based on business needs, technical feasibility, and impact.
- Identify opportunities to optimize existing data flows, promoting efficiency and reducing redundancy.
- Collaborate closely with team members and stakeholders to align efforts and achieve shared objectives.
- Implement data quality checks and validation processes to ensure accuracy and resolve data inconsistencies (see the validation sketch after this listing).

Requirements and Skills:
- Strong background in software engineering, with proficiency in Python development (3+ years of experience).
- Excellent problem-solving, communication, and organizational skills.
- Ability to work independently and collaboratively within a team environment.
- Understanding of industry-recognized data modelling patterns and standards, and their practical application.
- Familiarity with data security and privacy principles, ensuring compliance with governance and regulatory requirements.
- Proficiency in SQL, with experience in PostgreSQL database management.
- Experience in API implementation and integration, with an understanding of REST principles and best practices.
- Knowledge of validation libraries like Marshmallow or Pydantic.
- Expertise in Pandas, Polars, or similar libraries for data manipulation and analysis.
- Proficiency in workflow orchestration tools like Apache Airflow and Dagster, ensuring efficient data pipeline scheduling and execution.
- Experience working with Apache Iceberg, enabling optimized data management and storage within large-scale analytics environments.
- Understanding of data lake architectures, leveraging scalable storage solutions for structured and unstructured data.
- Familiarity with data warehouse solutions, ensuring efficient data processing, query performance, and analytics workflows.
- Knowledge of operating systems (Linux) and modern development practices, including infrastructure deployment (DevOps).
- Proficiency in code versioning tools such as Git/GitHub, and experience with CI/CD pipelines (e.g., CircleCI).
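For illustration only, a minimal sketch of record validation with Pydantic, one of the libraries named above. It assumes Pydantic v2; the record schema and field names are hypothetical:

```python
from pydantic import BaseModel, ValidationError, field_validator

# Hypothetical example: validate incoming pipeline records before loading them.
class SaleRecord(BaseModel):
    order_id: str
    product_id: str
    amount: float

    @field_validator("amount")
    @classmethod
    def amount_must_be_positive(cls, value: float) -> float:
        if value <= 0:
            raise ValueError("amount must be positive")
        return value

def validate_batch(rows: list[dict]) -> tuple[list[SaleRecord], list[str]]:
    """Split a batch into valid records and human-readable error messages."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        try:
            valid.append(SaleRecord(**row))
        except ValidationError as exc:
            errors.append(f"row {i}: {exc.errors()}")
    return valid, errors

if __name__ == "__main__":
    good, bad = validate_batch([
        {"order_id": "o1", "product_id": "p1", "amount": 10.5},
        {"order_id": "o2", "product_id": "p2", "amount": -3.0},  # fails validation
    ])
    print(len(good), "valid;", len(bad), "rejected")
```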

Posted 3 weeks ago

Apply

6.0 - 11.0 years

18 - 32 Lacs

Hyderabad

Hybrid

Job Title: Senior Data Engineer (Python, PySpark, AWS)
Experience Required: 6 to 12 years
Location: Hyderabad
Job Type: Full Time / Permanent

Job Description:
We are looking for a passionate and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate should have a strong background in data engineering on AWS, with hands-on expertise in Python, PySpark, and AWS services to build and maintain scalable data pipelines and ETL workflows.

Mandatory Skills:
- Data engineering
- Python
- PySpark
- AWS services (S3, Glue, Lambda, Redshift, RDS, EC2, Data Pipeline)

Key Responsibilities:
- Design and implement robust, scalable data pipelines using PySpark, AWS Glue, and AWS Data Pipeline (see the Glue sketch after this listing).
- Develop and maintain efficient ETL workflows to handle large-scale data processing.
- Automate data workflows and job orchestration using AWS Data Pipeline.
- Ensure smooth data integration across services like S3, Redshift, and RDS.
- Optimize data processing for performance and cost efficiency on the cloud.
- Work with various file formats like CSV, Parquet, and Avro.

Technical Requirements:
- 8+ years of experience in data engineering, particularly in cloud-based environments.
- Proficient in Python and PySpark for data transformation and manipulation.
- Strong experience with AWS Glue for ETL development, Data Catalog, and Crawlers.
- Solid knowledge of SQL for querying structured and semi-structured data.
- Familiar with data lake architectures, Amazon EMR, and Kinesis.
- Experience with Docker, Git, and CI/CD pipelines for deployment and versioning.

Interested candidates can also share their CV at akanksha.s@esolglobal.com.
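For illustration only, a minimal AWS Glue job sketch in PySpark along the lines described above. The S3 paths and columns are hypothetical, and the script assumes it runs inside a Glue job where the awsglue libraries are provided:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Hypothetical example: a Glue job converting raw CSV in S3 to partitioned Parquet.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV from S3 (bucket and paths are illustrative).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Light transformation: cast the amount column, then partition output by date.
cleaned = raw.withColumn("amount", raw["amount"].cast("double"))

(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/"))

job.commit()
```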

Posted 3 weeks ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Experience in using SQL, PL/SQL, or T-SQL with RDBMSs like Teradata, MS SQL Server, or Oracle in production environments. Experience with Python, ADF, Azure, and Databricks. Experience working with Microsoft Azure, AWS, or other leading cloud platforms.

Required candidate profile:
- Hands-on experience with Hadoop, Spark, Hive, or similar frameworks
- Data integration and ETL
- Data modelling
- Database management
- Data warehousing
- Big data frameworks
- CI/CD

Perks and benefits: To be disclosed post-interview.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

17 - 27 Lacs

Hyderabad

Work from Office

Job Title: Data Quality Engineer

Mandatory Skills: Data engineering, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS

Job Summary:
We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing, data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR, Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies.

Role and Responsibilities:
- Work with business stakeholders, business systems analysts, and developers to ensure quality delivery of software.
- Interact with key business functions to confirm data quality policies and governed attributes.
- Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
- Design and manage AWS testing environments for data workflows during development and deployment of data products.
- Assist the team in test estimation and test planning.
- Design and develop reports and dashboards.
- Analyze and evaluate data sources, data volume, and business rules.
- Proficiency with SQL; familiarity with Python, Scala, Athena, EMR, Redshift, and AWS.
- NoSQL and unstructured data experience.
- Extensive experience in programming tools like MapReduce and HiveQL.
- Experience in data science platforms like SageMaker, Machine Learning Studio, or H2O.
- Well versed in data flow and test strategy for cloud and on-premises ETL testing.
- Interpret and analyze data from various source systems to support data integration and data reporting needs.
- Experience in testing database applications to validate source-to-destination data movement and transformation (see the sketch after this listing).
- Work with team leads to prioritize business and information needs.
- Develop complex SQL scripts (primarily advanced SQL) for cloud and on-premises ETL.
- Develop and summarize data quality analysis and dashboards.
- Knowledge of data modeling and data warehousing concepts with an emphasis on cloud and on-premises ETL.
- Execute testing of data analytics and data integration on time and within budget.
- Troubleshoot and determine the best resolution for data issues and anomalies.
- Experience in functional, regression, system, integration, and end-to-end testing.
- Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting is a must.
- Extensive experience testing cloud and on-premises ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS and cloud technologies.
- Extensive experience using Athena, EMR, and Redshift.
- Experience in large-scale application development testing of cloud and on-premises data warehouses, data lakes, and data science platforms.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and API implementation.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation/Kafka, big data, plus automation experience using Cypress.
- AWS/Cloud: Jenkins/GitLab/EC2 machines, S3, and building Jenkins and CI/CD pipelines; Sauce Labs.

Preferred Skills:
- API/REST API: REST APIs and microservices using JSON, SoapUI.
- Extensive experience in the DevOps/DataOps space.
- Strong experience working with DevOps and build pipelines.
- Strong experience with AWS data services including Redshift, Glue, Kinesis, Kafka (MSK), EMR/Spark, SageMaker, etc.
- Experience with technologies like Kubeflow, EKS, and Docker.
- Extensive experience with NoSQL and unstructured data stores like MongoDB, Cassandra, Redis, and ZooKeeper.
- Extensive experience in MapReduce using tools like Hadoop, Hive, Pig, Kafka, S4, and MapR.
- Experience using Jenkins and GitLab.
- Experience using both Waterfall and Agile methodologies.
- Experience in testing storage tools like S3 and HDFS.
- Experience with one or more industry-standard defect or test case management tools.
- Great communication skills (regularly interacts with cross-functional team members).
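For illustration only, a minimal sketch of source-to-destination validation of the kind this role describes, comparing row counts and a column checksum between two databases. SQLite in-memory databases stand in for real source and target systems; the table name and columns are hypothetical:

```python
import sqlite3  # stand-in for any DB-API-compatible source/target connection

# Hypothetical example: a simple source-to-target row count and checksum
# comparison, as used in data migration testing.
def validate_migration(src_conn, tgt_conn, table: str, amount_col: str) -> dict:
    """Compare row counts and a column checksum between source and target."""
    checks = {}
    for label, conn in (("source", src_conn), ("target", tgt_conn)):
        cur = conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
        )
        count, total = cur.fetchone()
        checks[label] = {"rows": count, "sum": total}
    checks["match"] = checks["source"] == checks["target"]
    return checks

if __name__ == "__main__":
    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 5.5)])
    print(validate_migration(src, tgt, "orders", "amount"))
    # Expected: matching counts and sums, so "match" is True.
```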

Posted 3 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a junior Azure Data Engineer at dotSolved, you will be responsible for designing, implementing, and managing scalable data solutions on Azure. Your primary focus will be on building and maintaining data pipelines, integrating data from various sources, and ensuring data quality and security. Proficiency in Azure services such as Data Factory, Databricks, and Synapse Analytics is essential as you optimize data workflows for analytics and reporting purposes. Collaboration with stakeholders is a key aspect of this role to ensure alignment with business goals and performance standards.

Your responsibilities will include:
- Designing, developing, and maintaining data pipelines and workflows using Azure services.
- Implementing data integration, transformation, and storage solutions to support analytics and reporting.
- Ensuring data quality, security, and compliance with organizational and regulatory standards.
- Optimizing data solutions for performance, scalability, and cost efficiency.
- Collaborating with cross-functional teams to gather requirements and deliver data-driven insights.

This position is based in Chennai and Bangalore, offering you the opportunity to work in a dynamic and innovative environment where you can contribute to the digital transformation journey of enterprises across various industries.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Looking for a Data Engineer with 4+ years of experience.
Skills: Azure functionalities, AWS Lambda, serverless, Python, APIs, Snowflake
Work from office: Bangalore (Yeshvanthpur), India

Posted 3 weeks ago

Apply

10.0 - 17.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting is a must.
- Extensive experience testing cloud and on-premises ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS and cloud technologies.
- Extensive experience using Athena, EMR, and Redshift.
- Experience in large-scale application development testing of cloud and on-premises data warehouses, data lakes, and data science platforms.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and API implementation.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation/Kafka, big data, plus automation experience using Cypress.
- AWS/Cloud: Jenkins/GitLab/EC2 machines, S3, and building Jenkins and CI/CD pipelines; Sauce Labs.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Hybrid

We are seeking a Lead Snowflake Engineer. The ideal candidate will bring deep technical expertise in Snowflake, hands-on experience with DBT (Data Build Tool), and a collaborative mindset for working across data, analytics, and business teams.

Posted 3 weeks ago

Apply