5.0 - 10.0 years
20 - 35 Lacs
chennai, bengaluru, delhi / ncr
Hybrid
Perform ETL operations using Dataiku and other ETL tools; expertise in ETL development and tools like Informatica. Strong coding skills in Python and SQL. Experience in Data Engineering, Data Integration, and Data Analysis.
Posted Date not available
5.0 years
10 - 12 Lacs
pune
Work from Office
Role & responsibilities:
- Design, develop, and maintain scalable data pipelines and solutions on Azure.
- Work on Databricks engineering and architecture, ensuring data quality and integrity.
- Perform data modeling, system analysis, and contribute to database administration tasks.
- Collaborate with cross-functional teams to understand business requirements and implement data solutions accordingly.
- Participate in project and resource planning and ensure timely delivery of data solutions.

Must-Have Skills:
- Banking domain experience is mandatory.
- Proven experience in Azure, Databricks, SQL, and data modeling.
- Strong expertise in financial crime detection or compliance data.
- Knowledge of database administration practices.
- Excellent analytical and system analysis skills.
- Strong verbal and written communication skills; must be a collaborative team player.
Posted Date not available
7.0 - 12.0 years
30 - 40 Lacs
pune
Work from Office
Greetings from Peoplefy Infosolutions!
We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 7+ years of experience as a Sr. Data Engineer.

Primary Skills:
- Strong experience in Data Engineering
- Python
- AI/ML in Data Engineering
- BigQuery / Snowflake
- AWS / GCP

Interested candidates for the above position, kindly share your CVs at Priyanka.sar@peoplefy.com with the below details:
Experience :
CTC :
Expected CTC :
Notice Period :
Location :
Posted Date not available
3.0 - 8.0 years
25 - 27 Lacs
bengaluru
Remote
Role & responsibilities

Responsibilities:
- Lead and participate in the development of high-quality software solutions for client projects, using modern programming languages and frameworks.
- Contribute to system architecture and technical design decisions, ensuring that solutions are scalable, secure, and meet client requirements.
- Work closely with clients to understand their technical needs and business objectives, offering expert advice on software solutions and best practices.
- Provide guidance and mentorship to junior developers, assisting with code reviews, troubleshooting, and fostering a culture of technical excellence.
- Work with project managers, business analysts, and other engineers to ensure that technical milestones are achieved and client expectations are met.
- Ensure the quality of software through testing, code optimization, and identifying potential issues before deployment.
- Stay up to date with industry trends, new technologies, and best practices to continuously improve development processes and software quality.
- Other duties as assigned and directed.

EXPERTISE AND QUALIFICATIONS

Required Technical Skills:
- Data engineering experience should include Azure technologies and familiarity with modern data platform technologies such as Azure Data Factory, Azure Databricks, Azure Synapse, and Fabric.
- Understanding of Agile engineering practices.
- Deep familiarity and experience in the following areas: data warehouse and lakehouse methodologies, including medallion architecture; data ETL/ELT processes; data profiling and anomaly detection; data modeling (Dimensional/Kimball); SQL.
- Strong background in relational database platforms.
- DevOps / continuous integration & continuous delivery.

Work Location: India - Remote
Shift Timings: 2:00 pm IST to 11:00 pm IST

Preferred candidate profile
Posted Date not available
5.0 - 9.0 years
6 - 16 Lacs
hyderabad, pune, chennai
Hybrid
Role & responsibilities

Description: Data engineering with data modeling experience, SQL, PySpark, Python, CI/CD with GitHub Actions, API development, and similar.
Long Description: SQL + Data Modelling + CI/CD + GitHub + Python.
Posted Date not available
5.0 - 10.0 years
15 - 25 Lacs
hyderabad, pune
Hybrid
Key Responsibilities:
- Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
- Cloud Data Architecture: Design, implement, and support the maintenance of data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lakehouses, data lakes, data warehouses, and distributed computing.
- DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
- Data Modelling: Build efficient data models to support required analytics/reporting.
- Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
- Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
- Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
- Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.

Skills & Qualifications:

Required:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience as a Data Engineer or in a similar role.
- Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
- Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
- Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
- Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
- Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
- Experience with data warehousing and knowledge of concepts related to OLAP and OLTP systems.
- Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
- Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
- Experience working in an Agile project environment.

Preferred:
- Experience with Apache Spark and distributed data processing in Databricks.
- Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).
Posted Date not available
5.0 - 10.0 years
16 - 27 Lacs
pune
Work from Office
Skills: Data Engineer, Python, GCP, SQL
Experience: 5-10 years
Location: Pune
Posted Date not available
4.0 - 8.0 years
6 - 16 Lacs
bengaluru
Remote
Sr. Data Engineer (SR DE-II)
Highly skilled Data Engineer with a minimum of 6+ years of relevant experience in SQL, PySpark, ETL, Data Lakes, and the Azure tech stack.

Responsibilities:
- 4+ years of experience in building data pipelines with Python/PySpark.
- 5+ years of experience in the Azure ETL stack (e.g., Blob Storage, Data Lake, Data Factory, Synapse).
- 4+ years of experience with SQL.
- Proficient understanding of code versioning tools such as Git and PM tools like Jira.
- Should have experience in leading a team.

Qualifications:
- Excellent verbal and written communication skills.
- UG: B.Sc in Any Specialization, BCA in Any Specialization, B.Tech/B.E. in Any Specialization.
- A good internet connection is a must.
- Azure certifications will be a bonus.
Posted Date not available
1.0 - 6.0 years
1 - 6 Lacs
bengaluru
Hybrid
Prepares and pipelines data for training and inference.
Posted Date not available
5.0 - 8.0 years
6 - 12 Lacs
bengaluru
Work from Office
MEC Sr. S/W Developer (Data Engineer)
Location: Bangalore
Must-Have: SQL and Python
Good to Have: ETL and DWH, RESTful API
Experience: 5+ Years
Domain: eCommerce / Financial Reconciliation

Brief JD:
We are seeking a Data Engineer to own and operate a large-scale data processing and reconciliation platform for an eCommerce enterprise. The role involves execution, support, and enhancement of real-time data ingestion and batch-processing ETL dataflows (SQL, Python, PySpark, Hive), and SLA-based execution of SQL-based financial report generation. You will be responsible for triaging reconciliation issues, performing deep root cause analysis (RCA) using data from MySQL, HBase, and HDFS, and implementing fixes with a focus on process automation, data lineage (DataHub), and platform reliability.

Key Responsibilities:
- Execute large-scale data processing and report generation in order to drive successful financial reconciliation on a monthly basis. Work closely with the Finance Controllers.
- Perform RCA, triage issues, and resolve anomalies.
- Build and monitor data quality, lineage, and SLA adherence across systems.
- Enable observability through monitoring dashboards and diagnostics.

Required Skills:
- Strong hands-on experience with SQL and Python (or PySpark) based ETL for a Data Lake environment (or a large DWH platform).
- Proficiency in SQL and Python (PySpark) performance tuning and building complex business logic.
- Fast learner, willing to learn new data analytics tools, basic Java programming, and microservices architecture.
Posted Date not available
6.0 - 10.0 years
20 - 30 Lacs
noida, gurugram, delhi / ncr
Hybrid
Role & responsibilities

As a Senior Data Engineer, you will work to solve some of the organizational data management problems that would enable them to become a data-driven organization. You will seamlessly switch between the roles of an individual contributor, team member, and Data Modeling lead as demanded by each project to define, design, and deliver actionable insights.

On a typical day, you might:
- Engage with clients and understand the business requirements to translate them into data models.
- Create and maintain a Logical Data Model (LDM) and Physical Data Model (PDM) by applying best practices to provide business insights.
- Contribute to Data Modeling accelerators.
- Create and maintain the Source to Target Data Mapping document that includes documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Gather and publish Data Dictionaries.
- Maintain data models as well as capture data models from existing databases and record descriptive information.
- Use the data modelling tool to create appropriate data models.
- Contribute to building data warehouses & data marts (on Cloud) while performing data profiling and quality analysis.
- Use version control to maintain versions of data models.
- Collaborate with Data Engineers to design and develop data extraction and integration code modules.
- Partner with the data engineers to strategize ingestion logic and consumption patterns.

Preferred candidate profile:
- 6+ years of experience in the Data space.
- Decent SQL skills.
- Significant experience in one or more RDBMS (Oracle, DB2, and SQL Server).
- Real-time experience working with OLAP & OLTP database models (dimensional models).
- Good understanding of Star schema, Snowflake schema, and Data Vault modelling; experience with any ETL tool, Data Governance, and Data Quality.
- An eye for analyzing data and comfort with following agile methodology.
- A good understanding of any of the cloud services is preferred (Azure, AWS & GCP).

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.
Posted Date not available
4.0 - 9.0 years
17 - 27 Lacs
hyderabad
Work from Office
Job Title: Data Engineer
Mandatory Skills: Data Engineer, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS

Job Summary:
We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing, data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR, Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies.

Role and Responsibilities:
- Work with business stakeholders, Business Systems Analysts, and Developers to ensure quality delivery of software.
- Interact with key business functions to confirm data quality policies and governed attributes.
- Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
- Design and manage AWS testing environments for data workflows during development and deployment of data products.
- Provide assistance to the team in Test Estimation & Test Planning.
- Design and develop reports and dashboards.
- Analyze and evaluate data sources, data volume, and business rules.
- Proficiency with SQL; familiarity with Python, Scala, Athena, EMR, Redshift, and AWS.
- NoSQL data and unstructured data experience.
- Extensive experience in programming tools like MapReduce and HiveQL.
- Experience in data science platforms like SageMaker / Machine Learning Studio / H2O.
- Well versed with the data flow and test strategy for Cloud / On-Prem ETL testing.
- Interpret and analyze data from various source systems to support data integration and data reporting needs.
- Experience in testing database applications to validate source-to-destination data movement and transformation.
- Work with team leads to prioritize business and information needs.
- Develop complex SQL scripts (primarily advanced SQL) for Cloud and On-Prem ETL.
- Develop and summarize data quality analysis and dashboards.
- Knowledge of data modeling and data warehousing concepts with emphasis on Cloud / On-Prem ETL.
- Execute testing of data analytics and data integration on time and within budget.
- Troubleshoot and determine the best resolution for data issues and anomalies.
- Experience in Functional Testing, Regression Testing, System Testing, Integration Testing & End-to-End testing.
- Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting is a must.
- Extensive experience testing Cloud / On-Prem ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS and cloud technologies.
- Extensive experience using Athena, EMR, Redshift, AWS, and cloud technologies.
- Experienced in large-scale application development testing of Cloud / On-Prem data warehouses, data lakes, and data science platforms.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API / Rest Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: implement core Java, integration, and APIs.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation/Kafka, Big Data; automation experience using Cypress.
- AWS/Cloud: Jenkins/GitLab/EC2 machines, S3, building Jenkins and CI/CD pipelines, SauceLabs.

Preferred Skills:
- API / REST API: REST APIs and microservices using JSON, SoapUI.
- Extensive experience in the DevOps/DataOps space.
- Strong experience working with DevOps and build pipelines.
- Strong experience with AWS data services including Redshift, Glue, Kinesis, Kafka (MSK), EMR/Spark, SageMaker, etc.
- Experience with technologies like Kubeflow, EKS, Docker.
- Extensive experience using NoSQL and unstructured data stores like MongoDB, Cassandra, Redis, ZooKeeper.
- Extensive experience in MapReduce using tools like Hadoop, Hive, Pig, Kafka, S4, MapR.
- Experience using Jenkins and GitLab.
- Experience using both Waterfall and Agile methodologies.
- Experience in testing storage tools like S3, HDFS.
- Experience with one or more industry-standard defect or test case management tools.
- Great communication skills (regularly interacts with cross-functional team members).
Posted Date not available
14.0 - 20.0 years
30 - 45 Lacs
gurugram, bengaluru
Hybrid
Job Title: Senior Principal Data Engineer
Location: Gurgaon, Bangalore
Work Schedule: 12:00 PM to 8:30 PM IST
Job Type: Full-Time

Position Overview:
We are seeking a Senior Principal Data Engineer with expertise in cloud-native AI/ML architecture and Generative AI. This role will be pivotal in designing innovative solutions, guiding development teams, and ensuring alignment with architectural best practices while leveraging cutting-edge technologies and AWS cloud services.

Key Responsibilities:
- Architect Generative AI solutions using AWS services: Bedrock, SageMaker, Kendra, S3, PGVector.
- Lead end-to-end solution design for AI/ML initiatives and complex data systems.
- Collaborate with cross-functional teams (onshore and offshore) to implement solutions.
- Conduct technical reviews, guide developers, and ensure code quality and best practices.
- Navigate and manage governance and architectural approval processes.
- Present solutions and provide technical advice to senior leadership and stakeholders.
- Mentor junior engineers and promote a culture of innovation and continuous learning.
- Evaluate and integrate emerging technologies like LangChain.
- Work closely with data scientists and data analysts to optimize model performance and data usage.
- Design and implement ETL pipelines and data flows, and support diverse data types (structured/unstructured).
- Lead technical workshops, documentation, and architecture planning.

Required Qualifications:
- 12-15 years of experience in software engineering, data architecture, and solution design.
- Proven track record in building and deploying AI/ML, analytics, and data-driven platforms.
- Expert-level knowledge of AWS services: Bedrock, SageMaker, Kendra, S3, PGVector.
- Strong hands-on experience in Python (primary), with additional skills in Java or Scala.
- Deep understanding of data structures, algorithms, and software design patterns.
- Strong background in data pipelines, ETL, and working with structured, semi-structured, and unstructured data.
- Familiarity with DevOps, CI/CD pipelines, and infrastructure-as-code principles.
- Awareness of AI ethics, bias mitigation, and responsible AI frameworks.
- Experience with tools/frameworks like LangChain is highly desirable.
Posted Date not available
2.0 - 5.0 years
14 - 15 Lacs
bengaluru
Work from Office
Position Purpose
The data engineer is responsible for designing, architecting, and implementing robust, scalable, and maintainable data pipelines. The data engineer will work directly with upstream stakeholders (application owners, data providers) and downstream stakeholders (data consumers, data analysts, data scientists) to define data pipeline requirements and implement solutions that serve downstream stakeholders' needs through APIs and materialized views. On a day-to-day basis, the data engineer works in conjunction with the Data Analyst for the aggregation and preparation of data. The data engineer interacts with security, continuity, and IT architecture to validate the IT assets they design and develop. Furthermore, the role involves working with the BNP Paribas international team.

Responsibilities

Direct Responsibilities:
- Work on the stages from data ingestion to analytics, encompassing integration, transformation, warehousing, and maintenance.
- Design architecture, and orchestrate, deploy, and monitor reliable data processing systems.
- Implement batch and streaming data pipelines to ingest data into the data warehouse.
- Perform undercurrent activities (data architecture, data management, DataOps, security).
- Perform data transformation and modeling to convert data from OLTP to OLAP, to speed up data querying and best align with business needs.
- Serve downstream stakeholders across the organization, whose improved access to standardized data will make them more effective at delivering use cases, building dashboards, and guiding decisions.

Technical & Behavioral Competencies:
- Master data engineering fundamental concepts (Data Warehouse, Data Lake, Data Lakehouse).
- Master Golang, Bash, SQL, Python.
- Master HTTP and REST API best practices.
- Master batch and streaming data pipelines using Kafka.
- Master code versioning with Git and best practices for continuous integration & delivery (CI/CD).
- Master writing clean and tested code following software engineering best practices (readable, modular, reusable, extensible).
- Master data modeling (3NF, Kimball, Vault).
- Knowledge of data orchestration using Airflow or Dagster.
- Knowledge of self-hosting and managing tools like Metabase and DBT.
- Knowledge of cloud principles and infrastructure management (IAM, Logging, Terraform, Ansible).
- Knowledge of data abstraction layers (object storage, relational, NoSQL, document, Trino, and graph databases).
- Knowledge of containerization and workload orchestration with Docker, Kubernetes, Artifactory.
- Background in working in an agile environment (knowledge of the methods and their limits).
Posted Date not available
5.0 - 7.0 years
20 - 30 Lacs
bengaluru
Work from Office
We are hiring for:
Role: Azure Data Engineer

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines using Databricks (PySpark, Scala, SQL).
- Implement ETL/ELT workflows for large-scale data integration across cloud and on-premise environments.
- Leverage Microsoft Fabric (Data Factory, OneLake, Lakehouse, DirectLake, etc.) to build unified data solutions.
- Collaborate with data architects, analysts, and stakeholders to deliver business-critical data models and pipelines.
- Monitor and troubleshoot performance issues in data pipelines.
- Ensure data governance, quality, and security across all data assets.
- Work with Delta Lake, Unity Catalog, and other modern data lakehouse components.
- Automate and orchestrate workflows using Azure Data Factory, Databricks Workflows, or Microsoft Fabric pipelines.
- Participate in code reviews, CI/CD practices, and agile ceremonies.

Required Skills:
- 5-7 years of experience in data engineering, with strong exposure to Databricks.
- Proficient in PySpark, SQL, and performance tuning of Spark jobs.
- Hands-on experience with Microsoft Fabric components.
- Experience with Azure Synapse, Data Factory, and Azure Data Lake.
- Understanding of Lakehouse architecture and modern data mesh principles.
- Familiarity with Power BI integration and semantic modeling (preferred).
- Knowledge of DevOps and CI/CD for data pipelines (e.g., using GitHub Actions, Azure DevOps).
- Excellent problem-solving, communication, and collaboration skills.

Muugddha Vanjarii
7822804824
mugdha.vanjari@sunbrilotechnologies.com
Posted Date not available
5.0 - 10.0 years
18 - 22 Lacs
noida, hyderabad, bengaluru
Work from Office
Job Description:
As a Data Engineer for our Large Language Model Project, you will play a crucial role in designing, implementing, and maintaining the data infrastructure. Your expertise will be instrumental in ensuring the efficient flow of data, enabling seamless integration with various components, and optimizing data processing pipelines. 5+ years of relevant experience in data engineering roles.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable and efficient data pipelines to support the training and deployment of large language models. Implement ETL processes to extract, transform, and load diverse datasets into suitable formats for model training.
- Data Integration: Collaborate with cross-functional teams, including data scientists and software engineers, to integrate data sources and ensure the availability of relevant and high-quality data. Implement solutions for real-time data processing and integration, fostering model development agility.
- Data Quality Assurance: Establish and maintain robust data quality checks and validation processes to ensure the accuracy and consistency of datasets. Troubleshoot data quality issues, identify root causes, and implement corrective measures.
- Infrastructure Management: Work closely with DevOps and IT teams to manage and optimize the data storage infrastructure, ensuring scalability and performance. Implement best practices for data security, access control, and compliance with data governance policies.
- Performance Optimization: Identify bottlenecks and inefficiencies in data processing pipelines and implement optimizations to enhance overall system performance. Continuously monitor and evaluate system performance metrics, making proactive adjustments as needed.

Skills & Tools:
- Programming Languages: Proficiency in languages such as Python for building robust data processing applications.
- Big Data Technologies: Experience with distributed computing frameworks like Apache Spark, Databricks & DBT for large-scale data processing.
- Database Systems: In-depth knowledge of both relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., vector databases, MongoDB, Cassandra, etc.).
- Data Warehousing: Familiarity with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
- ETL Tools: Hands-on experience with ETL tools like Apache NiFi, Talend, or Apache Airflow. Knowledge of NLP will be an added advantage.
- Cloud Services: Experience with cloud platforms like AWS, Azure, or Google Cloud for deploying and managing data infrastructure.
- Problem Solving: Analytical mindset with a proactive approach to identifying and solving complex data engineering challenges.
Posted Date not available
6.0 - 11.0 years
14 - 22 Lacs
pune, bengaluru
Hybrid
Skill: Data Engineer
Experience: 6+ Years
Location: Bangalore & Pune
Shift Timings: 4:00 PM to 1:00 AM IST
Mode: Hybrid
Notice Period: Immediate - 15 Days

Must-Have Skills:
- Minimum of 6 years of data engineering experience
- Must be an expert in SQL, Data Lake, Azure, ETL
- Must be an expert in data modeling
- Incorta experience is a plus
- Excellent oral and written communication skills
- Self-starter with analytical, organizational, and problem-solving skills
- Must be highly flexible and adaptable to change
- Power BI
- Strong programming skills in languages such as SQL and Python
- Experience with data modelling, data warehousing, and dimensional modelling concepts
- Familiarity with data governance and data security
- Excellent problem-solving and analytical skills
- Strong communication and collaboration abilities

Nice to Haves:
- Knowledge of big data technologies such as Apache Hadoop, Spark, or Kafka
- Azure Databricks & Azure Cosmos DB
- Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate are a plus
Posted Date not available
5.0 - 10.0 years
5 - 15 Lacs
hyderabad, chennai, bengaluru
Work from Office
Job Description:
Primary Skills: Data Engineer + AI/ML + LLM + Azure DevOps + Blob

Minimum experience:
- Minimum of 4 years of experience in data management, analysis, and AI data preparation.

Must have hands-on:
- Build scalable data pipelines and feature stores for AI/ML use cases
- Develop CI/CD pipelines for training, deployment, and monitoring of data and ML models
- Manage real-time inference pipelines for GenAI / LLM-based applications
- Automate retraining, versioning, and drift monitoring processes
- Collaborate with data scientists and AI engineers to productionize models
- Ensure data quality, lineage, and model governance across all systems
- Maintain infrastructure-as-code using Azure DevOps, Bicep, or Terraform
- Optimize compute and storage resources across Azure ML, Functions, and Blob
Posted Date not available
7.0 - 10.0 years
20 - 30 Lacs
bengaluru
Work from Office
Data Engineer - Data Warehouse: DWH Concepts, Data Modelling, Teradata Architecture, SQL, Teradata
Posted Date not available
5.0 - 10.0 years
20 - 30 Lacs
bengaluru
Work from Office
- 5+ years of experience in building and maintaining big data platforms using Spark/Scala.
- Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming, etc.
- Experience with ETL processes and data modelling.
- Problem-solving and troubleshooting skills.
- Working knowledge of Oozie/Airflow.
- Experience in writing unit test cases and shell scripting.
- Ability to work independently and as part of a team in a fast-paced environment.
- Use of version control (Git) and related software lifecycle tooling.
- Experience in Spark Streaming/Kafka streaming.
- Experience with software development methodologies including Scrum & Agile.
- Approaches for optimization, at a system level as well as at the algorithm implementation level.
Posted Date not available
5.0 - 8.0 years
5 - 15 Lacs
hyderabad
Work from Office
Role & responsibilities

Key Responsibilities:
- Coding proficiency in SQL, Python/PySpark
- AWS Glue, Redshift, Lambda
- Data Engineering (designing ETL workflows, understanding of DB concepts/architecture, handling big data volumes)
- Performance (measurement, indexes, partitioning, and tuning)
- Data modeling and design

Preferred candidate profile

Job Description:
Job Title: Data Engineer
Experience: 5+ Years
Location: Hyderabad
Job Type: Full-Time
Posted Date not available
2.0 - 5.0 years
2 - 5 Lacs
noida, delhi / ncr
Work from Office
Job Summary
We are looking for a Techno-Functional Data Engineer who is passionate about solving real-world problems through data-driven systems. While prior e-commerce experience is a plus, it is not mandatory; we welcome engineers, tinkerers, and builders who are eager to challenge themselves, build scalable systems, and work closely with product and business teams. In this role, you will be at the intersection of data engineering, automation, and product strategy, contributing to a modern SaaS platform that supports diverse and dynamic customer needs.

Key Responsibilities

Data Engineering & Automation
- Build and maintain data pipelines and automated workflows for data ingestion, transformation, and delivery.
- Integrate structured and semi-structured data from APIs, external sources, and internal systems using Python and SQL.
- Work on core platform modules like data connectors, product catalogs, inventory sync, and channel integrations.
- Implement data quality, logging, and alerting mechanisms to ensure pipeline reliability.
- Build internal APIs and microservices using Flask or Django to expose enriched datasets.

Functional & Analytical Contribution
- Collaborate with Product and Engineering teams to understand use cases and translate them into data-backed features.
- Analyze data using Pandas, NumPy, and SQL to support roadmap decisions and customer insights.
- Build bots, automation scripts, or scraping tools to handle repetitive data operations or integrate with third-party systems.
- Participate in designing reporting frameworks, dashboards, and analytics services for internal and client use.

Mindset & Growth
- Be open to learning the dynamics of e-commerce, catalog structures, order flows, and marketplace ecosystems.
- Take ownership of problems beyond your immediate knowledge area and drive them to closure.
- Engage with a product-first engineering culture where outcomes > tech stack, and impact matters most.

Required Skills & Qualifications
- 2+ years of experience in data engineering, backend development, or technical product analytics.
- Strong Python skills, with experience in:
  - Data libraries: Pandas, NumPy
  - Web frameworks: Flask, Django
  - Automation: Requests, BeautifulSoup, Scrapy, bot frameworks
  - Image processing: Pillow, OpenCV (a plus)
- Proficient in SQL and hands-on with MySQL, PostgreSQL, or MongoDB.
- Experience building or consuming REST APIs.
- Familiarity with version control tools like Git and collaborative workflows (CI/CD, Agile).
- Strong problem-solving mindset and willingness to learn domain-specific complexities.

Nice to Have (But Not Required)
- Exposure to cloud data platforms like AWS, GCP, or Azure.
- Experience with workflow orchestration tools like Airflow, DBT, or Luigi.
- Basic knowledge of BI tools (Power BI, Tableau, Looker).
- Prior work on data-centric products or SaaS tools.
Posted Date not available
6.0 - 11.0 years
0 - 0 Lacs
hyderabad, chennai
Hybrid
Data Engineer: We are looking for candidates with 6+ years of experience who can attend a face-to-face (F2F) interview in Hyderabad / Chennai. We have 3 role combinations:
- Data Engineer: AWS, Python, PySpark
- Data Engineer: Big Data, Python, PySpark
- Data Engineer: Snowflake, Python, PySpark
Posted Date not available
7.0 - 10.0 years
9 - 18 Lacs
hyderabad
Work from Office
Profile:
As a Principal Data Engineer, you will lead the architecture, design, and implementation of scalable and efficient data pipelines, data models, and data infrastructure solutions on GCP. You will collaborate with cross-functional teams, including Data Scientists, Analysts, and Product Engineers, to build data architectures that support data-driven decision-making across the organization. You will also play a crucial role in the data engineering team's efforts to manage and optimize data infrastructure, ensuring performance, reliability, and scalability on GCP.

Key Responsibilities:
- Architect Data Solutions: Lead the design and implementation of complex data architectures and pipelines on Google Cloud Platform (GCP) to meet business requirements.
- Cloud Infrastructure Management: Build and maintain cloud-based data solutions using GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, GCS, Composer, and others.
- Data Modelling and Optimization: Develop and maintain efficient, scalable data models, ensuring optimal data storage, retrieval, and processing.
- Pipeline Development: Lead the creation and optimization of ETL/ELT pipelines that integrate diverse data sources into centralized data repositories.
- Collaboration: Work closely with stakeholders from data science, business intelligence, analytics, and product teams to deliver end-to-end data solutions.
- Leadership: Mentor and guide junior data engineers and provide technical leadership in architecture, development, and best practices in cloud data engineering.
- Automation and Monitoring: Ensure automated data workflows, implement monitoring tools, and resolve issues in data pipelines to ensure data availability, integrity, and accuracy.
- Security and Compliance: Implement and ensure best practices for data security, privacy, and compliance with relevant data protection regulations on GCP.

Qualifications:
- Bachelor's degree or above in Computer Science, Information Technology, Engineering, Mathematics, or a related field.
- Relevant certifications in cloud technologies, particularly Google Cloud Professional Data Engineer or other GCP-specific certifications, are a plus.
- 7+ years of experience as a Data Engineer or in a similar role, with a strong background in cloud data platforms, particularly GCP.

Skills and Competencies:
- GCP Expertise: Hands-on experience with Google Cloud Platform services, including BigQuery, Dataflow, Pub/Sub, Dataproc, Composer, GCS, and Cloud Functions. Experience with GCP's security features, cost optimization, and best practices for data management.
- Data Engineering Skills: Expertise in designing, developing, and deploying large-scale data pipelines and data architectures in a cloud environment.
- Programming Languages: Proficiency in Python, SQL, and/or Java for developing data pipelines, transformations, and automation tasks.
- Data Modeling: Strong experience with relational and NoSQL databases, schema design, and dimensional modeling.
- ETL Tools: Experience with ETL tools like Apache Beam, Airflow, or similar for building data pipelines.
- Data Warehousing: Experience with data warehousing concepts and tools, particularly BigQuery or similar cloud data warehousing technologies.
- Version Control: Proficiency in using version control systems such as Git.
- CI/CD: Familiarity with Continuous Integration and Continuous Deployment (CI/CD) practices for data engineering pipelines.
- Leadership: Demonstrated experience in leading a team, mentoring engineers, and contributing to a collaborative work environment.
- Problem-Solving: Strong analytical and troubleshooting skills, with the ability to solve complex data problems.
Posted Date not available
5.0 - 10.0 years
10 - 16 Lacs
navi mumbai, mumbai (all areas)
Work from Office
Designation: Senior Data Engineer
Experience: 5+ Years
Location: Navi Mumbai (JUINAGAR) - WFO
Immediate joiners preferred.
Interview: Face-to-Face (Only 1 Day Process)

Job Description
We are looking for an experienced and results-driven Senior Data Engineer to join our Data Engineering team. In this role, you will design, develop, and maintain robust data pipelines and infrastructure that enable efficient data flow across our systems. As a senior contributor, you will also help define best practices, mentor junior team members, and contribute to the long-term vision of our data platform. You will work closely with cross-functional teams to deliver reliable, scalable, and high-performance data systems that support critical business intelligence and analytics initiatives.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree is a plus.
- 5+ years of experience in data warehousing, ETL development, and data modeling.
- Strong hands-on experience with one or more databases: Snowflake, Redshift, SQL Server, Oracle, Postgres, Teradata, BigQuery.
- Proficiency in SQL and scripting languages (e.g., Python, Shell).
- Deep knowledge of data modeling techniques and ETL frameworks.
- Excellent communication, analytical thinking, and troubleshooting skills.

Preferred Qualifications:
- Experience with modern data stack tools like dbt, Fivetran, Stitch, Looker, Tableau, or Power BI.
- Knowledge of data lakes, lakehouses, and real-time data streaming (e.g., Kafka).
- Agile/Scrum project experience and version control using Git.

Sincerely,
Sonia TS
Posted Date not available