
453 Data Engineer Jobs - Page 13

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Our client is a global IT service & consulting organization.
Experience: 5+ years. Skill: Apache Spark. Locations: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida.
- Excellent knowledge of Spark: a thorough understanding of the Spark framework, performance tuning, etc.
- Excellent knowledge of, and at least 4 years of hands-on experience in, Scala or PySpark
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory
- Strong Unix and shell scripting skills
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills
- Good knowledge of any of the CSPs (Azure, AWS, or GCP) is mandatory; Azure certifications are an additional plus
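To make the Spark performance-tuning expectation above concrete, here is a minimal PySpark sketch of one common tuning pattern: reading Hive tables and broadcasting a small dimension table to avoid a shuffle join. All table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# Hive support lets Spark read managed Hive tables directly.
spark = (
    SparkSession.builder
    .appName("orders-enrichment")
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
    .enableHiveSupport()
    .getOrCreate()
)

orders = spark.table("sales.orders")        # hypothetical large fact table
countries = spark.table("ref.countries")    # hypothetical small dimension table

# Broadcasting the small side skips the shuffle a sort-merge join would need.
enriched = orders.join(broadcast(countries), on="country_code", how="left")
enriched.write.mode("overwrite").saveAsTable("sales.orders_enriched")
```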

Posted 3 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Pune, Chennai

Work from Office

Very strong in Python, PySpark, and SQL. Good experience with any cloud; the team uses AWS, but experience with any cloud provider is acceptable. They will train on other tools, but prior experience with ETL orchestration (e.g., Airflow on AWS) and cloud data platforms such as Snowflake is a plus.
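As a rough illustration of the Python/PySpark/SQL combination this role centers on, the sketch below reads raw events from S3 and aggregates them with Spark SQL; the bucket names and schema are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Hypothetical S3 locations; on AWS EMR/Glue the s3:// scheme resolves natively.
raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")
raw.createOrReplaceTempView("events")

# The SQL half of the job: aggregate raw events into a daily summary.
daily = spark.sql("""
    SELECT user_id, COUNT(*) AS event_count, MAX(ts) AS last_seen
    FROM events
    GROUP BY user_id
""")

daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_summary/")
```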

Posted 3 months ago

Apply

5.0 - 7.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Role description
We're looking for a driven, organized team member to support the Digital & Analytics team with Talent transformation projects from both a systems and process perspective. The role will primarily provide PMO support. The individual will need to demonstrate strong project management skills and be collaborative and detail-oriented to coordinate meetings and track and update project plans, risks, and decision logs. This individual will also create project materials, support design sessions and user acceptance testing, and escalate project or system issues as needed.

Work you'll do
As the Talent PMO Support you will:
- Support the Digital & Analytics Manager with Talent transformation projects.
- Track and drive closure of action items and open decisions.
- Schedule follow-up calls, take notes, and distribute action items from discussions.
- Coordinate with Talent process owners and subject matter advisors to manage change requests, risks, actions, and decisions.
- Coordinate across Talent, Technology, and Consulting teams to track and escalate issues as appropriate.
- Update Talent project plan items, the resource tracker, and the risks, actions, and decisions log as needed.
- Leverage the shared project team site and OneNote notebook to ensure structure and access to communications, materials, and documents for all project team members.
- Support testing, cut-over, training, and service rehearsal testing processes as needed.
- Collaborate with Consulting, Technology, and Talent team members to ensure project deliverables move forward.

Qualifications:
- Bachelor's degree and 5-7 years of relevant work experience
- Background and experience in project management support for implementing Talent processes, from ideation through deployment
- Strong written/verbal executive communication and presentation skills; strong listening, facilitation, and influencing skills with audiences at all management and leadership levels
- Works well in a dynamic, complex, client- and team-focused environment with minimal oversight and an agile mindset
- Excited by the prospect of working in a developing, ambiguous, and challenging situation
- Proficient Microsoft Office skills (e.g., PowerPoint, Excel, OneNote, Word, Teams)

Posted 3 months ago

Apply

5.0 - 9.0 years

13 - 22 Lacs

Hyderabad

Hybrid

Key Responsibilities:
1. Design, build, and deploy new data pipelines within our big data ecosystems using StreamSets, Talend, Informatica BDM, etc.; document new and existing pipelines and datasets.
2. Design ETL/ELT data pipelines using StreamSets, Informatica, or any other ETL processing engine. Familiarity with data pipelines, data lakes, and modern data warehousing practices (virtual data warehouse, push-down analytics, etc.)
3. Expert-level programming skills in Python
4. Expert-level programming skills in Spark
5. Cloud-based infrastructure: GCP
6. Experience with one of the ETL tools (Informatica, StreamSets) in creating complex parallel loads, cluster batch execution, and dependency creation using jobs/topologies/workflows, etc.
7. Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets; strong exposure to web service origins/targets/processors/executors, XML/JSON sources, and RESTful APIs
8. Strong exposure to relational databases (DB2, Oracle & SQL Server), including complex SQL constructs and DDL generation
9. Exposure to Apache Airflow for scheduling jobs (a minimal DAG sketch follows this listing)
10. Strong knowledge of big data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resource management, maintenance, and performance tuning
11. Create POCs to enable new workloads and technical capabilities on the platform
12. Work with platform and infrastructure engineers to implement these capabilities in production
13. Manage workloads and enable workload optimization, including managing resource allocation and scheduling across multiple tenants to fulfill SLAs
14. Participate in planning activities and data science, and perform activities to increase platform skills

Key Requirements:
1. Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets/Informatica/Talend
2. Minimum 6 years of hands-on experience with big data technologies, e.g., Hadoop, Spark, Hive
3. Minimum 3 years of experience with Spark
4. Minimum 3 years of experience in cloud environments, preferably GCP
5. Minimum 2 years in a big data service delivery (or equivalent) role focusing on the following disciplines:
6. Any experience with NoSQL and graph databases
7. Informatica or StreamSets data integration (ETL/ELT)
8. Exposure to role- and attribute-based access controls
9. Hands-on experience managing solutions deployed in the cloud, preferably on GCP
10. Experience working in a global company; working in a DevOps model is a plus
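For the Airflow exposure named in point 9, here is a minimal, hedged DAG sketch. The job names are placeholders; in practice each task would trigger a StreamSets or Informatica job rather than an echo.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical pipeline: two shell tasks standing in for real ETL job triggers.
with DAG(
    dag_id="nightly_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run at 02:00 daily
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo run extract job")
    load = BashOperator(task_id="load", bash_command="echo run load job")

    extract >> load   # dependency: load waits for extract to succeed
```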

Posted 3 months ago

Apply

5.0 - 9.0 years

13 - 20 Lacs

Bangalore Rural, Bengaluru

Hybrid

Job Title: Data Engineer
Company: Aqilea India (Client: H&M India)
Employment Type: Full Time
Location: Bangalore (Hybrid)
Experience: 4.5 to 9 years

About the client: At H&M, we welcome you to be yourself and feel like you truly belong. Help us reimagine the future of an entire industry by making everyone look, feel, and do good. We take pride in our history of making fashion accessible to everyone, and led by our values we strive to build a more welcoming, inclusive, and sustainable industry. We are privileged to have more than 120,000 colleagues in over 75 countries across the world. That's 120,000 individuals with unique experiences, skills, and passions. At H&M, we believe everyone can make an impact, and we believe in giving people responsibility and a strong sense of ownership. Our business is your business, and when you grow, we grow. Website: https://career.hm.com/

We are seeking a skilled and forward-thinking Data Engineer to join our Emerging Tech team. This role is designed for someone passionate about working with cutting-edge technologies such as AI, machine learning, IoT, and big data to turn complex data sets into actionable insights. As the Data Engineer in Emerging Tech, you will be responsible for designing, implementing, and optimizing data architectures and processes that support the integration of next-generation technologies. Your role will involve working with large-scale datasets, building predictive models, and utilizing emerging tools to enable data-driven decision-making across the business. You'll collaborate with technical and business teams to uncover insights, streamline data pipelines, and ensure the best use of advanced analytics technologies.

Key Responsibilities:
- Design and build scalable data architectures and pipelines that support machine learning, analytics, and IoT initiatives.
- Develop and optimize data models and algorithms to process and analyze large-scale, complex data sets.
- Implement data governance, security, and compliance measures to ensure high-quality data.
- Collaborate with cross-functional teams (engineering, product, and business) to translate business requirements into data-driven solutions.
- Evaluate, integrate, and optimize new data technologies to enhance analytics capabilities and drive business outcomes.
- Apply statistical methods, machine learning models, and data visualization techniques to deliver actionable insights.
- Establish best practices for data management, including data quality, consistency, and scalability.
- Conduct analysis to identify trends, patterns, and correlations within data to support strategic business initiatives.
- Stay updated on the latest trends and innovations in data technologies and emerging data management practices.

Skills Required:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, Statistics, or a related field.
- 4.5-9 years of experience in data engineering, data science, or a similar analytical role, with a focus on emerging technologies.
- Proficiency with big data frameworks (e.g., Hadoop, Spark, Kafka) and experience with modern cloud platforms (AWS, Azure, or GCP).
- Solid skills in Python, SQL, and optionally R, along with experience using machine learning libraries such as Scikit-learn, TensorFlow, or PyTorch.
- Experience with data visualization tools (e.g., Tableau, Power BI, or D3.js) to communicate insights effectively.
- Familiarity with IoT and edge computing data architectures is a plus.
- Understanding of data governance, compliance, and privacy standards.
- Ability to work with both structured and unstructured data.
- Excellent problem-solving, communication, and collaboration skills, with the ability to work in a fast-paced, cross-functional team environment.
- A passion for emerging technologies and a continuous desire to learn and innovate.

Interested candidates can share resumes with karthik.prakadish@aqilea.com
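As a toy illustration of the machine-learning-library skills listed above (Scikit-learn among them), the snippet below fits a classifier on synthetic data and scores it on a holdout set; all data and parameters are invented for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real predictive-modeling dataset.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```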

Posted 3 months ago

Apply

8.0 - 13.0 years

18 - 33 Lacs

Bengaluru

Hybrid

Warm greetings from SP Staffing!!
Role: AWS Data Engineer
Experience Required: 8 to 15 yrs
Work Location: Bangalore
Required Skills: Technical knowledge of data engineering solutions and practices. Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena. Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.
Interested candidates can send resumes to nandhini.spstaffing@gmail.com
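A skeletal AWS Glue PySpark job of the kind this role implements, sketched under the assumption of a Glue Data Catalog source; the database, table, and bucket names are placeholders.

```python
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read via the Glue Data Catalog rather than raw S3 paths.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Simple transform: de-duplicate and land curated Parquet on S3.
df = dyf.toDF().dropDuplicates(["event_id"])
df.write.mode("append").parquet("s3://example-bucket/curated/events/")
```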

Posted 3 months ago

Apply

6.0 - 11.0 years

11 - 21 Lacs

Kolkata, Pune, Chennai

Work from Office

Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.

Posted 3 months ago

Apply

6.0 - 11.0 years

8 - 18 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.

Posted 3 months ago

Apply

9.0 - 14.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Job Description:
- SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake.
- ETL/ELT Tools: Extensive experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines.
- Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
- Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
- Data Warehousing: Experience managing large datasets and data marts, and optimizing databases for performance.
- Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.

Role & responsibilities:
- Build data pipelines for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud database technologies.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Quickly analyze existing SQL code and make improvements to enhance performance, take advantage of new SQL features, close security gaps, and increase the robustness and maintainability of the code.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability, etc.
- Unit test databases and perform bug fixes.
- Develop best practices for database design and development activities.
- Take on technical leadership of database projects across various scrum teams.
- Manage exploratory data analysis to support dashboard development (desirable).

Required Skills:
- Strong experience in SQL with expertise in relational databases (PostgreSQL preferable, cloud-hosted in AWS/Azure/GCP) or any cloud-based data warehouse (like Snowflake or Azure Synapse).
- Competence in data preparation and/or ETL/ELT tools like SnapLogic, StreamSets, and DBT (preferably strong working experience in one or more) to build and maintain complex data pipelines and flows handling large volumes of data.
- Understanding of data modelling techniques and working knowledge of OLAP systems.
- Deep knowledge of databases, data marts, data warehouse enterprise systems, and handling of large datasets.
- In-depth knowledge of ingestion techniques, data cleaning, de-duplication, etc.
- Ability to fine-tune report-generating queries (see the sketch below).
- Solid understanding of normalization and denormalization of data, database exception handling, query profiling, performance counters, debugging, and database & query optimization techniques.
- Understanding of index design and performance-tuning techniques.
- Familiarity with SQL security techniques such as column-level data encryption, Transparent Data Encryption (TDE), signed stored procedures, and assignment of user permissions.
- Experience in understanding source data from various platforms and mapping it into Entity Relationship (ER) models for data integration and reporting (desirable).
- Adherence to standards for all databases, e.g., data models, data architecture, and naming conventions.
- Exposure to source control like Git and Azure DevOps.
- Understanding of Agile methodologies (Scrum, Kanban).
- Experience with NoSQL databases and migrating data into other types of databases with real-time replication (desirable).
- Experience with CI/CD automation tools (desirable).
- Programming experience in Golang, Python, or any other language, plus visualization tools (Power BI/Tableau) (desirable).
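To make the query-profiling requirement concrete, here is a small sketch that runs EXPLAIN ANALYZE against PostgreSQL from Python. The connection details, table, and the index suggested in the comment are all hypothetical.

```python
import psycopg2

# Hypothetical connection to a reporting database.
conn = psycopg2.connect("dbname=reports user=analyst host=localhost")

query = """
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT region, SUM(amount)
    FROM sales
    WHERE sold_at >= '2024-01-01'
    GROUP BY region;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for (line,) in cur.fetchall():   # each plan row comes back as a 1-tuple
        print(line)

# A sequential scan on sold_at in the plan output would suggest adding:
#   CREATE INDEX idx_sales_sold_at ON sales (sold_at);
conn.close()
```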

Posted 3 months ago

Apply

5.0 - 7.0 years

15 - 22 Lacs

Chennai

Work from Office

Role & responsibilities:
Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark/Spark SQL on cloud distributions like AWS.
Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.
Requirements:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage, and IAM concepts
• Experience working with an S3 data lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action
• Optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform
Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Note: Need only immediate joiners / candidates serving notice period. Interested candidates can apply.
Regards, HR Manager
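A minimal sketch of the Databricks-style Spark SQL work described above, assuming an S3 data-lake tier; the catalog paths and columns are illustrative only.

```python
from pyspark.sql import SparkSession

# In a Databricks notebook a `spark` session is provided automatically;
# this line just makes the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Register the S3 data-lake tier as a temp view, then work in Spark SQL.
spark.read.parquet("s3://example-lake/orders/").createOrReplaceTempView("orders")

top_customers = spark.sql("""
    SELECT customer_id, SUM(total) AS revenue
    FROM orders
    WHERE order_date >= '2024-01-01'
    GROUP BY customer_id
    ORDER BY revenue DESC
    LIMIT 100
""")
top_customers.show()
```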

Posted 3 months ago

Apply

10.0 - 12.0 years

25 - 30 Lacs

Pune, Mumbai (All Areas)

Hybrid

Design and implement state-of-the-art NLP models, including but not limited to text classification, semantic search, sentiment analysis, named entity recognition, and summary generation.
- Conduct data preprocessing and feature engineering to improve model accuracy and performance.
- Stay updated with the latest developments in NLP and ML, and integrate cutting-edge techniques into our solutions.
- Collaborate with cross-functional teams: work closely with data scientists, software engineers, and product managers to align NLP projects with business objectives.
- Deploy models into production environments and monitor their performance to ensure robustness and reliability.
- Maintain comprehensive documentation of processes, models, and experiments, and report findings to stakeholders.
- Implement and deliver high-quality software solutions and components for the Credit Risk monitoring platform.
- Leverage expertise to mentor developers; review code and ensure adherence to standards.
- Apply a broad range of software engineering practices, from analyzing user needs and developing new features to automated testing and deployment.
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- Build observability into our solutions, monitor production health, help resolve incidents, and remediate the root cause of risks and issues.
- Understand, represent, and advocate for client needs.
- Share knowledge and expertise with colleagues, help with hiring, and contribute regularly to our engineering culture and internal communities.

Expertise:
- Bachelor of Engineering or equivalent. Ideally 8-10 years of experience in NLP-based applications focused on the banking/finance sector. Preference for experience in financial data extraction and classification.
- Interested in learning new technologies and practices, reusing strategic platforms and standards, evaluating options, and making decisions with long-term sustainability in mind.
- Proficiency in programming languages such as Python and Java. Experience with frameworks like TensorFlow, PyTorch, or Keras.
- In-depth knowledge of NLP techniques and tools, including spaCy, NLTK, and Hugging Face.
- Experience with data handling and processing tools like Pandas, NumPy, and SQL.
- Prior experience in agentic AI, LLMs, prompt engineering, and generative AI is a plus.
- Backend development and microservices using Java Spring Boot, J2EE, and REST for implementing projects with high SLAs for data availability and data quality.
- Experience building cloud-ready applications and migrating applications using Azure, and understanding of Azure native cloud services, software design, and enterprise integration patterns.
- Knowledge of SQL and PL/SQL (Oracle) and UNIX: writing queries and packages, working with joins and partitions, reading execution plans, and tuning queries.
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality subject areas such as TDD, BDD, test automation, and DevOps principles.
- Experience in Azure development, including Databricks, Azure services, ADLS, etc.
- Experience using DevOps toolsets like GitLab and Jenkins.
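A minimal sketch of the Hugging Face tooling this role lists, covering two of the named tasks (sentiment analysis and named entity recognition). The default pipeline models and example sentences are illustrative, not prescribed by the posting.

```python
from transformers import pipeline

# Sentiment analysis over, e.g., credit-risk news snippets.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The issuer's outlook was downgraded amid rising defaults."))

# Named entity recognition to pull organizations and amounts from filings.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Acme Corp missed a $12 million coupon payment in March."))
```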

Posted 3 months ago

Apply

7.0 - 12.0 years

22 - 30 Lacs

Pune, Bengaluru

Hybrid

Job Role & responsibilities:
- Understand operational needs by collaborating with specialized teams and supporting key business operations. This involves architecting, designing, building, and deploying data systems, pipelines, etc.
- Design and implement agile, scalable, and cost-efficient solutions on cloud data services.
- Lead a team of developers; run sprint planning and execution to ensure timely deliveries.

Technical skills, qualification & experience required:
- 7-10 years of experience in Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python
- Hands-on experience as a Data Engineer with Azure Databricks, Data Factory, PySpark, and SQL
- Proficient in Azure cloud services
- Architect and implement ETL and data movement solutions; migrate data from traditional database systems to a cloud environment
- Strong hands-on experience working with streaming datasets (see the sketch below)
- Building complex notebooks in Databricks to achieve business transformations
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills
- Comfortable working in a multidisciplinary team within a fast-paced environment
* Immediate joiners will be preferred
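For the streaming-dataset requirement, here is a small Structured Streaming sketch in PySpark: windowed counts over a Kafka topic, written out as a Delta table as one might on Databricks. The broker, topic, and paths are placeholders, and the Kafka source assumes the spark-sql-kafka connector is on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.getOrCreate()

# Hypothetical Kafka source; the source exposes a `timestamp` column natively.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Count events per 5-minute window; the watermark bounds late data.
counts = (
    events.withWatermark("timestamp", "10 minutes")
    .groupBy(window(col("timestamp"), "5 minutes"))
    .count()
)

# Checkpointing makes the query restartable after failures.
(counts.writeStream.outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")
    .format("delta")
    .start("/tmp/tables/click_counts"))
```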

Posted 3 months ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

Hyderabad

Hybrid

Key Skills: Data Engineer, AI (Artificial Intelligence), SQL, Python, Java.

Roles and Responsibilities:
- Architect and implement modern, scalable data solutions on cloud platforms, specifically Google Cloud Platform (GCP).
- Collaborate with cross-functional teams to assess, redesign, and modernize legacy data systems.
- Design and develop efficient ETL pipelines for data extraction, transformation, and loading to support analytics and ML models.
- Ensure robust data governance by maintaining high standards of data security, integrity, and compliance with regulatory requirements.
- Monitor, troubleshoot, and optimize data workflows and pipelines for enhanced system performance and scalability.
- Provide hands-on technical expertise and guidance across data engineering projects, with a focus on cloud adoption and automation.
- Work in an agile environment and contribute to continuous delivery and improvement initiatives.

Experience Requirements:
- 5-10 years of experience designing and implementing data engineering solutions in GCP or other leading cloud platforms.
- Solid understanding of legacy data infrastructure, with demonstrated success in modernization and migration projects.
- Proficiency in programming languages such as Python and Java for building data solutions and automation scripts.
- Strong SQL skills, with experience working with both relational (SQL) and non-relational (NoSQL) databases.
- Familiarity with data warehousing concepts, tools, and practices.
- Hands-on experience with data integration tools and frameworks.
- Excellent analytical, problem-solving, and communication skills.
- Experience working in fast-paced agile environments and collaborating with multidisciplinary teams.

Education: B.Tech, M.Tech, B.Com, M.Com, MBA, or any PG.
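One plausible shape of the GCP ETL work described above: loading staged Parquet files into BigQuery with the google-cloud-bigquery client. The project, dataset, and bucket names are made up.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Replace the destination table on each run of this hypothetical load step.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/staging/customers/*.parquet",
    "example-project.analytics.customers",
    job_config=job_config,
)
load_job.result()  # block until the load completes

table = client.get_table("example-project.analytics.customers")
print(f"Loaded {table.num_rows} rows")
```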

Posted 3 months ago

Apply

5.0 - 10.0 years

16 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Greetings from Accion Labs!!!
We are looking for a Sr. Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Any references would be appreciated!
Job Description / Skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS: EMR/Glue/S3/RDS/Redshift/Lambda/SQS/Step Functions/EventBridge

Posted 3 months ago

Apply

5.0 - 10.0 years

16 - 31 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Greetings from Accion Labs!!!
We are looking for a Sr. Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Any references would be appreciated!
Job Description / Skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS: EMR/Glue/S3/RDS/Redshift/Lambda/SQS/Step Functions/EventBridge
- Real-time analytics

Posted 3 months ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Pune

Work from Office

Python + PySpark (5 to 10 yrs); joining locations: Hyderabad and Pune.

Posted 3 months ago

Apply

1.0 - 6.0 years

2 - 7 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

Position Name: Data Engineer
Total Exp: 3-5 years
Notice Period: Immediate joiner
Work Location: Mumbai, Kandivali
Work Type: Work from Office

Job Description
Must have:
- Data Engineer with 3 to 5 years of experience
- Should be an individual contributor able to deliver a feature/story within the given time and at the expected quality
- Should be good with the Agile process
- Should be strong in programming and SQL queries
- Should be capable of learning new tools and technologies to scale up on data engineering
- Should have good communication and client-interaction skills

Technical Skills:
- Must have: Data engineering using Java/Python, Spark/PySpark, big data (Hadoop, Hive, Yarn, Oozie, etc.), cloud warehouse (Snowflake), cloud services (AWS EMR, S3, Lambda, RDS/Aurora)
- Must have: Unit-testing frameworks: JUnit/Mockito/PowerMock (a PySpark-flavored test sketch follows this listing)
- Must have: Strong experience with SQL queries (MySQL/SQL Server/Oracle/Hadoop/Snowflake)
- Must have: Source control - GitHub
- Must have: Project management tool - VSTS
- Must have: Build management tool - Maven/Gradle
- Must have: CI/CD - Azure DevOps
Added advantage:
- Good to have: Shell scripting, Linux commands
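A sketch of the unit-testing expectation applied to a PySpark transform, using pytest in place of the JUnit/Mockito stack the posting names for Java. The transform under test is hypothetical.

```python
import pytest
from pyspark.sql import SparkSession


def dedupe_orders(df):
    """Hypothetical transform under test: drop duplicate order ids."""
    return df.dropDuplicates(["order_id"])


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for transform-level tests.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_dedupe_orders(spark):
    df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["order_id", "sku"])
    assert dedupe_orders(df).count() == 2
```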

Posted 3 months ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Role & responsibilities:
Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark/Spark SQL on cloud distributions like AWS.
Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.
Requirements:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage, and IAM concepts
• Experience working with an S3 data lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action
• Optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform
Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Note: Need only immediate joiners / candidates serving notice period. Interested candidates can apply.
Regards, HR Manager

Posted 3 months ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad

Work from Office

Dear all, greetings of the day!
We have an opening with one of the top MNCs.
Experience: 5-15 years
Notice Period: Immediate to 45 days

Job Description:
- Design, build, and maintain data pipelines (ETL/ELT) using BigQuery, Python, and SQL
- Optimize data flow, automate processes, and scale infrastructure
- Develop and manage workflows in Airflow/Cloud Composer and Ascend (or similar ETL tools)
- Implement data quality checks and testing strategies (see the sketch below)
- Support CI/CD (DevSecOps) processes, conduct code reviews, and mentor junior engineers
- Collaborate with QA/business teams and troubleshoot issues across environments
- DBT for transformation
- Collibra for data quality
- Working with unstructured datasets
- Strong analytical and SQL expertise

Interested candidates can revert with your updated CV to sushma.b@technogenindia.com
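A hedged sketch of the data-quality-check idea above: assert row-level invariants against BigQuery with SQL and fail loudly if violations appear. The table names and checks are placeholders; raising an exception would fail the surrounding orchestrator task (e.g., in Cloud Composer).

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical invariants for an orders table; each query counts violations.
checks = {
    "null_ids": "SELECT COUNT(*) FROM `proj.ds.orders` WHERE order_id IS NULL",
    "future_dates": "SELECT COUNT(*) FROM `proj.ds.orders` WHERE order_date > CURRENT_DATE()",
}

failures = []
for name, sql in checks.items():
    bad_rows = list(client.query(sql).result())[0][0]
    if bad_rows:
        failures.append(f"{name}: {bad_rows} offending rows")

if failures:
    # Make the pipeline step fail so downstream loads don't run on bad data.
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```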

Posted 3 months ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Pune

Work from Office

Role: Consultant/Sr. Consultant
Mandatory Skills: GCP & Hadoop
Location: Pune
Budget: 7-10 years - up to 30 LPA; 10-12 years - up to 40 LPA (non-negotiable)
Interested candidates can share resumes at: Kashif@d2nsolutions.com

Posted 3 months ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Pune

Work from Office

Experience as a Data Analyst with GCP & Hadoop is mandatory.

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Chennai

Work from Office

- 5+ years of experience in ETL development with strong proficiency in Informatica BDM
- Hands-on experience with big data platforms like Hadoop, Hive, HDFS, and Spark
- Proficiency in SQL and working knowledge of Unix/Linux shell scripting
- Experience in performance tuning of ETL jobs in a big data environment
- Familiarity with data modeling concepts and working with large datasets
- Strong problem-solving skills and attention to detail
- Experience with job scheduling tools (e.g., Autosys, Control-M) is a plus

Posted 3 months ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Pune

Work from Office

Perydot is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 3 months ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role & responsibilities
As a Senior Data Engineer, you will work to solve organizational data management problems that enable a data-driven organization, seamlessly switching between the roles of individual contributor, team member, and data modeling lead as demanded by each project to define, design, and deliver actionable insights.

On a typical day, you might:
- Engage clients and understand the business requirements to translate them into data models.
- Create and maintain Logical Data Models (LDM) and Physical Data Models (PDM) by applying best practices to provide business insights.
- Contribute to data modeling accelerators.
- Create and maintain the source-to-target data mapping document, which includes documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Gather and publish data dictionaries.
- Maintain data models, as well as capture data models from existing databases and record descriptive information.
- Use a data modeling tool to create appropriate data models.
- Contribute to building data warehouses and data marts (on the cloud) while performing data profiling and quality analysis.
- Use version control to maintain versions of data models.
- Collaborate with data engineers to design and develop data extraction and integration code modules.
- Partner with data engineers to strategize ingestion logic and consumption patterns.

Preferred candidate profile:
- 6+ years of experience in the data space.
- Decent SQL skills.
- Significant experience in one or more RDBMS (Oracle, DB2, and SQL Server).
- Real-time experience working with OLAP & OLTP database models (dimensional models).
- Good understanding of star schema, snowflake schema, and Data Vault modelling, as well as any ETL tool, data governance, and data quality.
- An eye for analyzing data and comfort with following agile methodology.
- Adept understanding of any of the cloud services is preferred (Azure, AWS & GCP).

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.

Posted 3 months ago

Apply

8.0 - 12.0 years

20 - 35 Lacs

Kolkata, Pune, Chennai

Work from Office

Senior Data Engineer
Job description:
- Demonstrate hands-on expertise in Ab Initio GDE, Metadata Hub, Co>Operating System, and Control>Center.
- Must demonstrate high proficiency in SQL.
- Develop and implement solutions for metadata management and data quality assurance.
- Able to identify, analyze, and resolve technical issues related to the Ab Initio solution.
- Perform unit testing and ensure the quality of developed solutions.
- Provide Level 3 support and troubleshoot issues with Ab Initio applications deployed in production.
- Working knowledge of Azure Databricks and Python will be an advantage.
- Any past experience working on the SAP HANA data layer would be good to have.

Other traits:
- Proficient communication skills required, as he/she will be directly engaging with client teams.
- Technical leadership; open to learning and adopting a complex landscape of data technologies in a new environment.

Posted 3 months ago

Apply