11 - 21 years
25 - 40 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.

Relevant Experience: 3 - 20 Yrs
Location: Pan India
Skills: GCP, BigQuery, Cloud Composer, Cloud Data Fusion, Python, SQL

Job Description:
- 5-20 years of overall experience, mainly in the data engineering space, with 2+ years of hands-on experience in GCP cloud data implementation.
- Experience working in client-facing roles in a technical capacity as an Architect.
- Must have implementation experience of a GCP-based cloud data project/program as a solution architect.
- Proficiency in using the Google Cloud Architecture Framework in a data context.
- Expert knowledge and experience of the core GCP data stack, including BigQuery, Dataproc, Dataflow, Cloud Composer, etc. (a query sketch follows below).
- Exposure to the wider Google tech stack of Looker, Vertex AI, Dataplex, etc.
- Expert-level knowledge of Spark. Extensive hands-on experience working with data using SQL and Python.
- Strong experience and understanding of very large-scale data architecture, solutioning, and operationalization of data warehouses, data lakes, and analytics platforms (both cloud and on-premise).
- Excellent communication skills with the ability to clearly present ideas, concepts, and solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in or reach me @ 8939853050.

With Regards,
Sankar G
Sr. Executive - IT Recruitment
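For readers unfamiliar with the BigQuery work this posting references, here is a minimal, illustrative sketch of running a query from Python with the google-cloud-bigquery client. The project, dataset, and table names are hypothetical placeholders, not details from the posting.

```python
# Illustrative only: running a small analytical query against BigQuery from Python.
# Assumes `pip install google-cloud-bigquery` and application-default credentials;
# the project, dataset, and table names are hypothetical.
from google.cloud import bigquery

def daily_order_counts(project_id: str = "my-gcp-project") -> None:
    client = bigquery.Client(project=project_id)
    query = """
        SELECT DATE(order_ts) AS order_date, COUNT(*) AS orders
        FROM `my-gcp-project.sales.orders`
        WHERE order_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
        GROUP BY order_date
        ORDER BY order_date
    """
    for row in client.query(query).result():   # blocks until the job finishes
        print(row.order_date, row.orders)

if __name__ == "__main__":
    daily_order_counts()
```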
Posted 4 months ago
11 - 18 years
35 - 60 Lacs
Pune, Chennai, Delhi / NCR
Hybrid
Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 Yrs
Location: Pan India

Job Description:
- Mandatory experience: Data Program Manager with AWS; Technical Program Manager who has worked on transformation projects on private cloud.
- Manage the overall delivery.
- Resolve dependencies with stakeholders, including cross-team dependencies.
- Keep stakeholders informed and manage expectations.
- Coordinate with the Performance Engineering team for performance analysis.
- Understand the impact of design decisions on the total cost of ownership.
- Coordinate with other teams to ensure information going into designs is accurate and complete.
- Ensure that all designs are taken through the relevant security approvals.
- Engage with other teams to ensure any identified design issues are remediated accordingly.
- Able to manage CAB meetings, release management, and exception approvals.
- Well versed in SAFe Agile processes.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in.

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago
8 - 13 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi,

Greetings from Sun Technology Integrators!! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the following details ASAP:
- Current CTC
- Expected CTC
- Notice period
- Current location
- Are you serving notice period / immediately available?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM - 11:00 PM (free cab drop facility + food provided).

Please let me know if any of your friends are looking for a job change, and kindly share the references. Only serving/immediate candidates can apply.

Interview process: 1 round (virtual) + final round (face to face).
Please note: this is Work From Office only (no hybrid or work from home).

Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2
Preferred skills: SSRS, SSIS, Informatica, Shell Scripting

Venue Details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara towards K. R. Puram)
Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
Posted 4 months ago
3 - 8 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi,

Greetings from Sun Technology Integrators!! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the following details ASAP:
- Current CTC
- Expected CTC
- Notice period
- Current location
- Are you serving notice period / immediately available?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM - 11:00 PM (free cab drop facility + food provided).

Please let me know if any of your friends are looking for a job change, and kindly share the references. Only serving/immediate candidates can apply.

Interview process: 2 rounds (virtual) + final round (face to face).
Please note: this is Work From Office only (no hybrid or work from home).

Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2
Preferred skills: SSRS, SSIS, Informatica, Shell Scripting

Venue Details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara towards K. R. Puram)
Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
Posted 4 months ago
10 - 20 years
20 - 30 Lacs
Hyderabad
Remote
Note: Looking for immediate joiners; timings 5:30 PM - 1:30 AM IST (Remote).

Project Overview: This is one of the workstreams of Project Acuity. The Client Data Platform includes a centralized web application for internal platform users across the Recruitment Business to support marketing and operational use cases. Building a database at the patient level will provide significant benefit to the client's future reporting capabilities and engagement of external stakeholders.

Role Scope / Deliverables: We are looking for an experienced AWS Data Engineer to join our dynamic team, responsible for developing, managing, and optimizing data architectures. The ideal candidate will have extensive experience in integrating large-scale datasets and building scalable, automated data pipelines, along with experience using AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.

Must-Have Skills:
- Proficiency in programming languages such as Python, Scala, or similar.
- Strong experience in data classification, including the identification of PII data entities (a small sketch follows below).
- Ability to leverage AWS services (e.g., SageMaker, Comprehend, Entity Resolution) to solve complex data-related challenges.
- Strong analytical and problem-solving skills, with the ability to innovate and develop new approaches to data engineering.
- Experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.
- Experience with core AWS services, including IAM, VPC, EC2, S3, RDS, Lambda, CloudWatch, and CloudTrail.

Nice-to-Have Skills:
- Experience with data privacy and compliance requirements, especially related to PII data.
- Familiarity with advanced data indexing techniques, vector databases, and other technologies that improve the quality of outputs.
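The PII-identification requirement above can be illustrated with a minimal boto3 sketch against Amazon Comprehend's detect_pii_entities API; the region, sample text, and function name are assumptions for illustration only.

```python
# Illustrative only: detecting PII entities in free text with Amazon Comprehend.
# Assumes `pip install boto3` and AWS credentials allowed to call
# comprehend:DetectPiiEntities; the region and sample text are placeholders.
import boto3

def find_pii(text: str, region: str = "us-east-1") -> list:
    comprehend = boto3.client("comprehend", region_name=region)
    response = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    # Each entity carries a Type (e.g. NAME, EMAIL), a confidence Score,
    # and character offsets into the input text.
    return response["Entities"]

if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.com about her enrollment."
    for entity in find_pii(sample):
        print(entity["Type"], round(entity["Score"], 3),
              sample[entity["BeginOffset"]:entity["EndOffset"]])
```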
Posted 4 months ago
6 - 10 years
15 - 20 Lacs
Gurugram
Remote
Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months

Key Responsibilities:
- Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies.
- LookML proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards.
- Data modeling expertise: understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions.
- SQL knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL.
- Looker environment: familiarity with the Looker interface, including the IDE, LookML Validator, and SQL Runner, is necessary for efficient development.

Education and/or Experience:
- Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required.
- 6+ years of IT industry experience in the data management field.
Posted 4 months ago
5 - 8 years
3 - 3 Lacs
Hyderabad
Hybrid
JD: Data Engineer

Experience: 5-8 years of in-depth, hands-on expertise with ETL tools and logic, with a strong preference for IDMC (Informatica Cloud).

- Application development/support: demonstrated success in either application development or support roles.
- Python proficiency: strong understanding of Python, with practical coding experience.
- AWS: comprehensive knowledge of AWS services and their applications.
- Airflow: creating and managing Airflow DAG scheduling (a minimal DAG sketch follows below).
- Unix & SQL: solid command of Unix commands, shell scripting, and writing efficient SQL scripts.
- Analytical & troubleshooting skills: exceptional ability to analyze data and resolve complex issues.
- Development tasks: proven capability to execute a variety of development activities with efficiency.
- Insurance domain knowledge: familiarity with the insurance sector is highly advantageous.
- Production data management: significant experience in managing and processing production data.
- Work schedule flexibility: open to working in any shift, including 24/7 support, as required.
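As a rough illustration of the Airflow DAG scheduling responsibility above, here is a minimal sketch of a daily DAG with two dependent tasks; the DAG id, task logic, and schedule are hypothetical and assume a recent Airflow 2.x installation.

```python
# Illustrative only: a small daily Airflow DAG with two dependent tasks.
# Assumes Airflow 2.4+ (which accepts the `schedule` argument) and that this
# file is placed in the dags/ folder; names and logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull data from the source system")         # placeholder logic

def load() -> None:
    print("write transformed data to the warehouse")  # placeholder logic

with DAG(
    dag_id="example_daily_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task        # run load only after extract succeeds
```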
Posted 4 months ago
3 - 6 years
10 - 20 Lacs
Pune
Remote
Work closely with clients to understand business needs, design data solutions, and deliver insights through end-to-end data management. Lead project execution, handle communication and documentation, and guide team members throughout.

Required Candidate Profile: Must have hands-on experience with Python, ETL tools (Fivetran, StitchData), databases, and cloud platforms (AWS, GCP, Azure, Snowflake, Databricks). Familiarity with REST/SOAP APIs is essential.
Posted Date not available
5 - 10 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Mandatory Skills: ADF, Snowflake, Data Modeling, SQL, Python, PySpark, Databricks

Key Responsibilities:
- Design and develop robust data pipelines and ETL/ELT processes using Snowflake.
- Build and maintain dimensional and relational data models to support business intelligence and analytics.
- Develop SQL scripts and optimize queries for performance and scalability in Snowflake.
- Use Python for data processing, automation, and integration tasks as needed (a connector sketch follows below).
- Design, schedule, and manage pipelines using Azure Data Factory.
- Ensure data quality, consistency, and security across platforms.
- Work closely with data analysts, business stakeholders, and other engineers to gather requirements and deliver solutions.
- Monitor, troubleshoot, and optimize existing data workflows and systems.

Mandatory Key Skills:
- Snowflake: data warehousing, SnowSQL, performance optimization
- Azure Data Factory: pipelines, triggers, dataflows
- SQL: complex queries, performance tuning
- Python: scripting and data manipulation
- Data Modeling: star schema, snowflake schema, normalized/denormalized structures

Secondary Skills: Snowpipe, Snowpark, Snowflake Streams and Tasks, Databricks, dbt (Data Build Tool)
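To make the Snowflake-plus-Python item concrete, here is a minimal sketch using the snowflake-connector-python package to run a parameterized query; the account locator, credentials, and table name are placeholders, not details from the posting.

```python
# Illustrative only: running a parameterized query against Snowflake from Python.
# Assumes `pip install snowflake-connector-python`; the account locator,
# credentials, and table name below are placeholders.
import snowflake.connector

def recent_rows(min_date: str) -> list:
    conn = snowflake.connector.connect(
        account="xy12345.ap-south-1",   # placeholder account locator
        user="ETL_USER",                # placeholder credentials
        password="********",
        warehouse="COMPUTE_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        with conn.cursor() as cur:
            # Bind variables keep the SQL safe and reusable.
            cur.execute(
                "SELECT id, loaded_at FROM raw_orders WHERE loaded_at >= %s",
                (min_date,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```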
Posted Date not available
5 - 8 years
0 - 3 Lacs
Bengaluru
Remote
As a GCP Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Google Cloud Platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Databricks, Python, SQL, PySpark/Scala, and Informatica will be essential for performing the following key responsibilities:

Key Responsibilities:
- Designing and developing data pipelines: design and implement scalable and efficient data pipelines using GCP-native services (e.g., Cloud Composer, Dataflow, BigQuery) and tools like Databricks, PySpark, and Scala. This includes data ingestion, transformation, and loading (ETL/ELT) processes (a PySpark sketch follows below).
- Data modeling and database design: develop data models and schema designs to support efficient data storage and analytics using tools like BigQuery, Cloud Storage, or other GCP-compatible storage solutions.
- Data integration and orchestration: orchestrate and schedule complex data workflows using Cloud Composer (Apache Airflow) or similar orchestration tools. Manage end-to-end data integration across cloud and on-premises systems.
- Data quality and governance: implement data quality checks, validation rules, and governance processes to ensure data accuracy, integrity, and compliance with organizational standards and external regulations.
- Performance optimization: optimize pipelines and queries to enhance performance and reduce processing time, including tuning Spark jobs and SQL queries and leveraging caching mechanisms or parallel processing in GCP.
- Monitoring and troubleshooting: monitor data pipeline performance using the GCP operations suite (formerly Stackdriver) or other monitoring tools. Identify bottlenecks and troubleshoot ingestion, transformation, or loading issues.
- Documentation and collaboration: maintain clear and comprehensive documentation for data flows, ETL logic, and pipeline configurations. Collaborate closely with data scientists, business analysts, and product owners to understand requirements and deliver data engineering solutions.

Skills and Qualifications:
- 5+ years of experience in a Data Engineer role with exposure to large-scale data processing.
- Strong hands-on experience with Google Cloud Platform (GCP), particularly services like BigQuery, Cloud Storage, Dataflow, and Cloud Composer.
- Proficient in Python and/or Scala, with a strong grasp of PySpark.
- Experience working with Databricks in a cloud environment.
- Solid experience building and maintaining big data pipelines, architectures, and data sets.
- Strong knowledge of Informatica for ETL/ELT processes.
- Proven track record of manipulating, processing, and extracting value from large-scale, unstructured datasets.
- Working knowledge of stream processing and scalable data stores (e.g., Kafka, Pub/Sub, BigQuery).
- Solid understanding of data modeling concepts and best practices in both OLTP and OLAP systems.
- Familiarity with data quality frameworks, governance policies, and compliance standards.
- Skilled in performance tuning, job optimization, and cost-efficient cloud architecture design.
- Excellent communication and collaboration skills to work effectively in cross-functional and client-facing roles.
- Bachelor's degree in Computer Science, Information Systems, or a related field (Mathematics, Engineering, etc.).
- Bonus: experience with distributed computing frameworks like Hadoop and Spark.
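A minimal PySpark sketch of the ingestion-and-load pattern described in the first responsibility, reading CSV files from Cloud Storage and writing to BigQuery through the Spark BigQuery connector; the bucket and table names are hypothetical, and the connector is assumed to be available on the cluster (as it is by default on Dataproc).

```python
# Illustrative only: ingest CSV from Cloud Storage, apply a simple transformation,
# and load the result into BigQuery with the Spark BigQuery connector.
# Assumes the connector is on the cluster (default on Dataproc); the bucket and
# table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs_to_bigquery_example").getOrCreate()

orders = (
    spark.read.option("header", True)
    .csv("gs://example-landing-bucket/orders/*.csv")      # placeholder bucket
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .filter(F.col("order_amount") > 0)                    # drop bad rows
)

(
    orders.write.format("bigquery")
    .option("table", "example_dataset.orders_clean")      # placeholder table
    .option("temporaryGcsBucket", "example-temp-bucket")  # staging bucket for the load
    .mode("append")
    .save()
)
```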
Posted Date not available
8 - 13 years
20 - 30 Lacs
Kolkata, Chennai, Bengaluru
Hybrid
Role & Responsibilities:
- Design, build, and maintain ETL/ELT pipelines using dbt and GCP services such as BigQuery and Cloud Composer (an orchestration sketch follows below).
- Create and manage data models and transformations using dbt to ensure efficient and accurate data consumption for analytics and reporting.
- Develop and maintain a data quality framework, including automated testing and cross-dataset validation.
- Write and optimize SQL queries for efficient data processing within BigQuery.
- Work with data engineers, analysts, scientists, and business stakeholders to deliver data solutions.
- Support day-to-day incident and ticket resolution related to data pipelines.
- Create and maintain comprehensive documentation for data pipelines, configurations, and procedures.
- Utilize GCP services such as BigQuery, Cloud Composer, Cloud Functions, etc.
- Develop and maintain SQL/Python scripts for data ingestion, transformation, and automation tasks.

Preferred Candidate Profile:
- Typically requires 7-12 years of experience in data engineering or a related field.
- Strong hands-on experience with Google Cloud Platform (GCP) services, particularly BigQuery.
- Proficiency in using dbt for data transformation, testing, and documentation.
- Advanced SQL skills for data modeling, performance optimization, and querying large datasets.
- Understanding of data warehousing concepts, dimensional modeling, and star schema design.
- Experience with ETL/ELT tools and frameworks, including Apache Beam, Cloud Dataflow, Data Fusion, or Airflow/Composer.
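One common way to combine dbt with Cloud Composer, as this posting describes, is to schedule the dbt CLI from an Airflow DAG. The sketch below is illustrative only; the project path, target name, and DAG id are assumptions.

```python
# Illustrative only: orchestrating dbt from Cloud Composer (Airflow) with BashOperator.
# Assumes dbt is installed in the Composer environment and Airflow 2.4+; the
# project path, target, and DAG id are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/home/airflow/gcs/data/dbt_project"   # hypothetical project location

with DAG(
    dag_id="dbt_bigquery_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test   # only test models after they build successfully
```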
Posted Date not available
9 - 14 years
15 - 27 Lacs
Pune
Hybrid
Notice Period: Immediate joiners only

Responsibilities:
- Lead, develop, and support analytical pipelines to acquire, ingest, and process data from multiple sources.
- Debug, profile, and optimize integrations and ETL/ELT processes.
- Design and build data models that conform to our data architecture.
- Collaborate with various teams to deliver effective, high-value reporting solutions by leveraging an established DataOps delivery methodology.
- Continually recommend and implement process improvements and tools for data collection, analysis, and visualization.
- Address production support issues promptly, keeping stakeholders informed of status and resolutions.
- Partner closely with onshore and offshore technical resources.
- Provide on-call support outside normal business hours as needed.
- Provide status updates to stakeholders; identify obstacles and seek assistance with enough lead time to ensure on-time delivery.
- Demonstrate technical ability, thoroughness, and accuracy in all assignments.
- Document and communicate proper operations, standards, policies, and procedures.
- Keep abreast of new tools and technologies related to our enterprise data architecture.
- Foster a positive work environment by promoting teamwork and open communication.

Skills/Qualifications:
- Bachelor's degree in computer science with a focus on data engineering preferred.
- 6+ years of experience in data warehouse development, building and managing data pipelines in cloud computing environments.
- Strong proficiency in SQL and Python.
- Experience with Azure cloud services, including Azure Data Lake Storage, Data Factory, and Databricks.
- Expertise in Snowflake or similar cloud warehousing technologies.
- Experience with GitHub, including GitHub Actions.
- Familiarity with data visualization tools such as Power BI or Spotfire.
- Excellent written and verbal communication skills.
- Strong team player with the interpersonal skills to interact at all levels.
- Ability to translate technical information for both technical and non-technical audiences.
- Proactive mindset with a sense of urgency and initiative.
- Adaptability to changing priorities and needs.

If you are interested, share your updated resume at recruit5@focusonit.com. Also, please spread this message across your network and contacts.
Posted Date not available
5 - 10 years
10 - 20 Lacs
Pune
Work from Office
Job Role: Data Engineer
Yrs of Exp: 4+ years
Job Location: Pune
Work Model: Hybrid

Job Summary: We are seeking a highly skilled Data Engineer with strong expertise in dbt, Java, Apache Airflow, and DAG (Directed Acyclic Graph) design to join our data platform team. You will be responsible for building robust data pipelines, designing and managing workflow DAGs, and ensuring scalable data transformations to support analytics and business intelligence.

Key Responsibilities:
- Design, implement, and optimize ETL/ELT pipelines using dbt for data modeling and transformation.
- Develop backend components and data processing logic using Java.
- Build and maintain DAGs in Apache Airflow for orchestration and automation of data workflows.
- Ensure the reliability, scalability, and efficiency of data pipelines for ingestion, transformation, and storage.
- Work with cross-functional teams to understand data needs and deliver high-quality solutions.
- Troubleshoot and resolve data pipeline issues in production environments.
- Apply data quality and governance best practices, including validation, logging, and monitoring.
- Collaborate on CI/CD deployment pipelines for data infrastructure.

Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong experience with dbt for modular, testable, and version-controlled data transformation.
- Proficient in Java, especially for building custom data connectors or processing frameworks.
- Deep understanding of Apache Airflow and the ability to design and manage complex DAGs.
- Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies.
- Exposure to cloud environments like AWS, GCP, or Azure.
Posted Date not available
4 - 9 years
30 - 35 Lacs
Pune
Hybrid
Hi, Greetings!!!

We are hiring for full-time employment with a product-based company (hybrid model).

Main Skills: Data Engineering, dbt, Java, Apache Airflow, any cloud

Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong experience with dbt for modular, testable, and version-controlled data transformation.
- Proficient in Java, especially for building custom data connectors or processing frameworks.
- Deep understanding of Apache Airflow and the ability to design and manage complex DAGs.
- Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies.
- Exposure to cloud environments like AWS, GCP, or Azure.

If your profile matches the above JD, please share your CV to poornima.kondi@srsconsultinginc.com.
Posted Date not available
5 - 10 years
10 - 20 Lacs
Ahmedabad
Remote
Role & Responsibilities:
- At least 5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
- Experience in handling unstructured data processing and transformation with programming knowledge.
- Hands-on experience building data pipelines using Scala/Python.
- Big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake (a streaming sketch follows below).
- Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues.
- Familiarity with version control systems like Git and CI/CD pipelines.
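A minimal Structured Streaming sketch of the Spark-plus-Delta-Lake item above, streaming JSON events from cloud storage into a Delta table; the storage path, schema, and table name are placeholders, and Delta support is assumed to be configured (as it is on Databricks).

```python
# Illustrative only: stream JSON events from cloud storage into a Delta table.
# Assumes a Spark environment with Delta Lake configured (e.g. Databricks);
# the storage path, schema, and table name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_to_delta_example").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.schema(event_schema)
    .json("abfss://landing@examplestorage.dfs.core.windows.net/events/")
)

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # exactly-once bookkeeping
    .outputMode("append")
    .trigger(availableNow=True)   # process whatever has landed, then stop
    .toTable("bronze.events")     # hypothetical target table
)
```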
Posted Date not available
5 - 8 years
8 - 13 Lacs
Hyderabad
Work from Office
Position: Data Engineer
Location: Hyderabad
Experience: 5 years
Technical Skills: Experience with SQL and PySpark
Tenure: 6 months

Functional Knowledge:
- Background in pharmaceutical data is beneficial.
- Ability to communicate with multiple stakeholders.
- Work collaboratively with data scientists and subject-matter experts to address data requirements.
- Identify and correct data inconsistencies and irregularities.
- Design data models and prepare data artifacts to meet business objectives.
- Experience developing ETL processes in high-performance environments such as Databricks and Snowflake.
- Fulfill customer requirements as specified by agreed service-level agreements (SLAs), utilizing structured project management approaches, including appropriate documentation and communication during service delivery.

Quality Assurance:
- Verify that deliverables meet established standards for quality and accuracy.
- Ensure timely completion of projects within set deadlines.
- Assist in creating and maintaining standard operating procedures (SOPs).
- Contribute to the development and upkeep of knowledge repositories containing qualitative and quantitative reports.
Posted Date not available
8 - 12 years
7 - 11 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Job Location: Hyderabad, Indore, Ahmedabad (India)
Notice Period: Immediate joiners or within 15 days preferred
Share Your Resume With: current CTC, expected CTC, notice period, and preferred job location

Primary Skills:
- MSSQL, Redshift, Snowflake
- T-SQL, LinkSQL, stored procedures
- ETL pipeline development
- Query optimization & indexing
- Schema design & partitioning
- Data quality, SLAs, data refresh
- Source control (Git/Bitbucket), CI/CD
- Data modeling, versioning
- Performance tuning & troubleshooting

What You Will Do:
- Design scalable, partitioned schemas for MSSQL, Redshift, and Snowflake.
- Optimize complex queries, stored procedures, indexing, and performance tuning.
- Build and maintain robust data pipelines to ensure timely, reliable delivery of data.
- Own SLAs for data refreshes, ensuring reliability and consistency.
- Collaborate with engineers, analysts, and DevOps to align data models with product and business needs.
- Troubleshoot performance issues, implement proactive monitoring, and improve workflows.
- Enforce best practices for data security, governance, and compliance.
- Utilize schema migration/versioning tools for database changes.

What You'll Bring:
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 8+ years of experience in database engineering or backend data systems.
- Expertise in MySQL, Redshift, Snowflake, and schema optimization.
- Strong experience in writing functions, procedures, and robust SQL scripts.
- Proficiency with ETL processes, data modeling, and data freshness SLAs.
- Experience handling production performance issues and being the go-to database expert.
- Hands-on experience with Git, CI/CD pipelines, and data observability tools.
- Strong problem-solving, collaboration, and analytical skills.

If you're interested and meet the above criteria, please share your resume with your current CTC, expected CTC, notice period, and preferred job location. Immediate or 15-day joiners will be prioritized.
Posted Date not available
5 - 10 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Description: We are seeking a talented and experienced Senior Data Engineer to join our Catalog Management department. In this role, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines and infrastructure on Google Cloud Platform (GCP). You will work closely with data scientists, analysts, and other engineers to ensure data is readily available, reliable, and optimized for various analytical and operational needs. A strong focus on building automated testing into our data solutions is a must. The ideal candidate will have a strong background in Java development, Apache Spark, and GCP services, with a passion for building high-quality data solutions.

Responsibilities:
- Data pipeline development: design, develop, and maintain efficient and scalable data pipelines using Apache Spark (primarily with Java), Apache Beam, or Kubeflow to ingest, process, and transform large datasets from various sources.
- GCP infrastructure management: build, configure, and manage data infrastructure components on GCP, including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Cloud Functions.
- API development and maintenance: develop and maintain RESTful APIs using Spring Boot to provide secure and reliable access to processed data and data services.
- Data modeling and design: design and implement optimized data models for analytical and operational use cases, considering performance, scalability, and data integrity.
- Data quality assurance: implement comprehensive data quality checks and monitoring systems to ensure data accuracy, consistency, and reliability throughout the data lifecycle.
- Test automation: develop and maintain automated unit, integration, and end-to-end tests for data pipelines and APIs to ensure code quality and prevent regressions.
- Performance optimization and monitoring: proactively monitor system performance, reliability, and scalability. Analyze system performance metrics (CPU, memory, network) to identify bottlenecks, optimize system health, and ensure cost-efficiency.
- Collaboration and communication: collaborate effectively with data scientists, analysts, product managers, architects, and other engineers to understand data requirements, translate them into technical solutions, and deliver effective data solutions.
- Documentation: create and maintain clear, comprehensive, and up-to-date documentation for data pipelines, infrastructure, and APIs, including design specifications, operational procedures, and troubleshooting guides.
- CI/CD implementation: implement and maintain robust CI/CD pipelines for automated deployment of data solutions, ensuring rapid and reliable releases.
- Production support and incident management: provide timely and effective support for production systems, including incident management, root cause analysis, and resolution.
- Continuous learning: stay current with the latest trends and technologies in data engineering, GCP, and related fields, and proactively identify opportunities to improve existing systems and processes.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4-6 years of experience in data engineering or a related role.
- Strong proficiency in Java programming.
- Extensive experience with Apache Spark for data processing.
- Solid experience with Google Cloud Platform (GCP) services, including BigQuery, Dataflow, Dataproc, Cloud Storage, and Pub/Sub.
- Experience developing RESTful APIs using Spring Boot.
- Experience with test automation frameworks (e.g., JUnit, Mockito, REST Assured).
- Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Cloud Build).
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with other data processing technologies (e.g., Apache Beam, Flink).
- Experience with infrastructure-as-code tools (e.g., Terraform, Cloud Deployment Manager).
- Experience with data visualization tools (e.g., Tableau, Looker).
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Understanding of AI/GenAI concepts and their data requirements is a plus.
- Experience building data pipelines to support AI/ML models is a plus.
- Strong expertise in API testing tools (e.g., Postman).
- Solid experience in performance testing using JMeter.
- Proven experience with modern test automation frameworks.
- Proficient in using JUnit for unit testing.

Technical Skills:
- Strong proficiency in Java programming, including functional programming concepts for scripting and automation.
- Solid understanding of cloud platforms, with a strong preference for Google Cloud Platform (GCP).
- Proven experience with modern test automation frameworks (e.g., JUnit, Mockito).
- Familiarity with system performance monitoring and analysis (CPU, memory, network).
- Experience with monitoring and support best practices.
- Strong debugging and troubleshooting skills to identify and resolve complex technical issues.
- Strong analytical, problem-solving, and communication skills.
Posted Date not available
2 - 5 years
4 - 9 Lacs
Noida
Work from Office
We are seeking a skilled Data Engineer to design, build, and maintain high-performance data pipelines within the Microsoft Fabric ecosystem. The role involves transforming raw data into analytics-ready assets, optimising data performance across both modern and legacy platforms, and collaborating closely with Data Analysts to deliver reliable, business-ready gold tables. You will also coordinate with external vendors during build projects to ensure adherence to standards.

Key Responsibilities

Pipeline Development & Integration
- Design and develop end-to-end data pipelines using Microsoft Fabric (Data Factory, Synapse, Notebooks).
- Build robust ETL/ELT processes to ingest data from both modern and legacy sources.
- Create and optimise gold tables and semantic models in collaboration with Data Analysts.
- Implement real-time and batch processing with performance optimisation.
- Build automated data validation and quality checks across Fabric and legacy environments.
- Manage integrations with SQL Server (SSIS packages, cube processing).

Data Transformation & Performance Optimisation
- Transform raw datasets into analytics-ready gold tables following dimensional modelling principles.
- Implement complex business logic and calculations within Fabric pipelines.
- Create reusable data assets and standardised metrics with Data Analysts.
- Optimise query performance across Fabric compute engines and SQL Server.
- Implement incremental loading strategies for large datasets (a merge sketch follows below).
- Maintain and improve performance across both Fabric and legacy environments.

Business Collaboration & Vendor Support
- Partner with Data Analysts and stakeholders to understand requirements and deliver gold tables.
- Provide technical guidance to vendors during data product development.
- Ensure vendor-built pipelines meet performance and integration standards.
- Collaborate on data model design for both ongoing reporting and new analytics use cases.
- Support legacy reporting systems including Excel, SSRS, and Power BI.
- Resolve data quality issues across internal and vendor-built solutions.

Quality Assurance & Monitoring
- Write unit and integration tests for data pipelines.
- Implement monitoring and alerting for data quality.
- Troubleshoot pipeline failures and data inconsistencies.
- Maintain documentation and operational runbooks.
- Support deployment and change management processes.

Required Skills & Experience

Essential
- 2+ years of data engineering experience with Microsoft Fabric and SQL Server environments.
- Strong SQL expertise for complex transformations in Fabric and SQL Server.
- Proficiency in Python or PySpark for data processing.
- Integration experience with SSIS, SSRS, and cube processing.
- Proven performance optimisation skills across Fabric and SQL Server.
- Experience coordinating with vendors on technical build projects.
- Strong collaboration skills with Data Analysts for gold table creation.

Preferred
- Microsoft Fabric or Azure certifications (DP-600, DP-203).
- Experience with Git and CI/CD for data pipelines.
- Familiarity with streaming technologies and real-time processing.
- Background in BI or analytics engineering.
- Experience with data quality tools and monitoring frameworks.
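The incremental loading strategy mentioned above is commonly implemented as a Delta MERGE from a silver table into a gold table. The PySpark sketch below is illustrative only; the table names, watermark column, and business key are assumptions, and a Spark-with-Delta environment such as a Fabric or Databricks notebook is assumed.

```python
# Illustrative only: incremental upsert from a silver table into a gold table
# using the Delta Lake MERGE API. Assumes a Spark notebook with Delta available
# (e.g. Microsoft Fabric or Databricks); table names, the watermark column, and
# the business key are placeholders.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Only pick up rows that changed since yesterday (placeholder watermark logic).
changed = (
    spark.table("silver.customers")
    .filter(F.col("updated_at") >= F.date_sub(F.current_date(), 1))
)

gold = DeltaTable.forName(spark, "gold.dim_customer")
(
    gold.alias("t")
    .merge(changed.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customers
    .whenNotMatchedInsertAll()   # add brand-new customers
    .execute()
)
```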
Posted Date not available
5 - 10 years
15 - 22 Lacs
Bengaluru
Work from Office
HPE is seeking a Data Engineer with strong experience in machine learning workflows to build and optimize scalable data systems. You'll work closely with data scientists and data engineers to power ML-driven solutions.

Responsibilities:
- Collaborate closely with Machine Learning (ML) teams to deploy and monitor models in production, ensuring optimal performance and reliability.
- Design and implement experiments, and apply statistical analysis to validate model solutions and results.
- Lead efforts in ensuring high-quality data, proper governance practices, and excellent system performance in complex data architectures.
- Develop, maintain, and scale data pipelines, enabling machine learning and analytical models to function efficiently.
- Monitor and troubleshoot issues within data systems, resolving performance bottlenecks and implementing best practices.

Required Skills:
- 5-6 years of data engineering experience, with a proven track record in building scalable data systems.
- Proficiency in SQL and NoSQL databases, Python, and distributed processing technologies such as Apache Spark.
- Strong understanding of data warehousing concepts, data modelling, and architecture principles.
- Expertise in cloud platforms (AWS, GCP, Azure) and managing cloud-based data systems is an added advantage.
- Hands-on experience building and maintaining machine learning pipelines and utilizing tools like MLflow, Kubeflow, or similar frameworks (an MLflow sketch follows below).
- Experience with search, recommendation engines, or NLP (Natural Language Processing) technologies.
- Solid foundation in statistics and experimental design, particularly in relation to machine learning systems.
- Strong problem-solving skills and the ability to work independently and in a team-oriented environment.
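Since the role lists MLflow among the expected tooling, here is a minimal sketch of logging a training run with MLflow; the experiment name, model, and metric are placeholders, not details from the posting.

```python
# Illustrative only: logging a model training run with MLflow.
# Assumes `pip install mlflow scikit-learn`; the experiment name, model,
# and metric are placeholders.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("example-recsys-baseline")    # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                    # record hyperparameters
    mlflow.log_metric("accuracy", accuracy)      # record evaluation result
    mlflow.sklearn.log_model(model, "model")     # persist the fitted model
```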
Posted Date not available
6 - 11 years
35 - 50 Lacs
Hyderabad
Work from Office
- 7-9 years of experience with data analytics, data modeling, and database design.
- 3+ years of coding and scripting (Python, Java, Scala) and design experience.
- 3+ years of experience with the Spark framework.
- 5+ years of experience with ELT methodologies and tools.
- 5+ years of mastery in designing, developing, tuning, and troubleshooting SQL.
- Knowledge of Informatica PowerCenter and Informatica IDMC.
- Knowledge of distributed, column-oriented technology used to build high-performance databases such as Vertica and Snowflake.
- Strong data analysis skills for extracting insights from financial data.
Posted Date not available
3 - 6 years
10 - 18 Lacs
Hyderabad, Pune
Work from Office
Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness. Work with stakeholders to understand data requirements and translate them into data solutions.
Posted Date not available
7 - 12 years
0 - 3 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences to drive growth and innovation at scale.

The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs (a small API-pull sketch follows below).
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce and NetSuite systems.
- Experience in SaaS environments.
- Designed and deployed ML models.
- Experience with events and streaming data.
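The "third-party data APIs" responsibility can be illustrated with a small requests-based pull with pagination and basic error handling; the endpoint, auth scheme, and pagination parameters are hypothetical, not a real vendor API.

```python
# Illustrative only: pulling paginated records from a third-party REST API.
# Assumes `pip install requests`; the endpoint, auth header, and pagination
# scheme below are hypothetical, not a real vendor API.
import requests

BASE_URL = "https://api.example-vendor.com/v1/invoices"   # placeholder endpoint

def fetch_all(api_token: str, page_size: int = 100) -> list:
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {api_token}"
    records, page = [], 1
    while True:
        resp = session.get(
            BASE_URL,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()              # surface 4xx/5xx errors early
        batch = resp.json().get("data", [])
        if not batch:
            break                            # no more pages
        records.extend(batch)
        page += 1
    return records
```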
Posted Date not available
5 - 10 years
9 - 12 Lacs
Pune
Work from Office
Hiring for a leading MNC for the position of Data Engineer, based at Kharadi (Pune).

Designation: Data Engineer
Shift Timing: 12 PM to 9 PM (cab facility provided)
Work Mode: Work from Office

Key Responsibilities:
- Liaise with stakeholders to define data requirements.
- Manage Snowflake and SQL databases.
- Build and optimize semantic models for reporting.
- Lead modern data architecture adoption.
- Reverse engineer complex data structures.
- Mentor peers on data governance best practices.
- Champion Agile/SCRUM methodologies.

Preferred Candidates:
- Experience: 5+ years in data engineering/BI roles.
- Strong ETL, data modelling, governance, and lineage documentation skills.
- Expertise in Snowflake, Azure (SQL Server, Data Factory, Logic Apps, App Services), and Power BI.
- Advanced SQL and Python (OOP, JSON/XML).
- Experience with medallion architecture, Fivetran, and DBT.
- Application development using Python, Streamlit, Flask, Node.js, and Power Apps.
- Agile/Scrum project management.
- Bachelor's/Master's in Math, Stats, CS, IT, or Engineering.
Posted Date not available
8 - 13 years
0 - 3 Lacs
Hyderabad
Hybrid
Required:
- Bachelor's degree in computer science or engineering.
- 5+ years of experience with data analytics, data modeling, and database design.
- 3+ years of coding and scripting (Python, Java, Scala) and design experience.
- 3+ years of experience with the Spark framework.
- Experience with ELT methodologies and tools.
- Experience with Vertica or Teradata.
- Expertise in tuning and troubleshooting SQL.
- Strong data integrity, analytical, and multitasking skills.
- Excellent communication, problem-solving, organizational, and analytical skills.
- Able to work independently.

Additional / preferred skills:
- Familiarity with the agile project delivery process.
- Knowledge of SQL and its use in data access and analysis.
- Experience with Airflow.
- Ability to manage diverse projects impacting multiple roles and processes.
- Able to troubleshoot problem areas and identify data gaps and issues.
- Ability to adapt to a fast-changing environment.
- Experience with Python.
- Basic knowledge of database technologies (Vertica, Redshift, etc.).
- Experience designing and implementing automated ETL processes.
Posted Date not available