3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You are an experienced Data Engineer who will lead the end-to-end migration of the data analytics and reporting environment to Looker at Frequence. Your role will involve designing scalable data models, translating business logic into LookML, and empowering teams across the organization with self-service analytics and actionable insights. You will collaborate closely with stakeholders from data, engineering, and business teams to ensure a smooth transition to Looker and to establish best practices for data modeling, governance, and dashboard development.

Your responsibilities will include:
- Leading the migration of existing BI tools, dashboards, and reporting infrastructure to Looker
- Designing, developing, and maintaining scalable LookML data models, dimensions, measures, and explores
- Creating intuitive, actionable, and visually compelling Looker dashboards and reports
- Collaborating with data engineers and analysts to ensure consistency across data sources
- Translating business requirements into technical specifications and LookML implementations
- Optimizing SQL queries and LookML models for performance and scalability
- Implementing and managing Looker's security settings, permissions, and user roles in alignment with data governance standards
- Troubleshooting issues and supporting end users in their Looker adoption
- Maintaining version control of LookML projects using Git
- Advocating for best practices in BI development, testing, and documentation

You should have:
- Proven experience with Looker and deep expertise in LookML syntax and functionality
- Hands-on experience building and maintaining LookML data models, explores, dimensions, and measures
- Strong SQL skills, including complex joins, aggregations, and performance tuning
- Experience working with semantic layers and data modeling for analytics
- Solid understanding of data analysis and visualization best practices
- Ability to create clear, concise, and impactful dashboards and visualizations
- Strong problem-solving skills and attention to detail in debugging Looker models and queries
- Familiarity with Looker's security features and data governance principles
- Experience using version control systems, preferably Git
- Excellent communication skills and the ability to work cross-functionally
- Familiarity with modern data warehousing platforms (e.g., Snowflake, BigQuery, Redshift)
- Experience migrating from legacy BI tools (e.g., Tableau, Power BI) to Looker
- Experience working in agile data teams and managing BI projects
- Familiarity with dbt or other data transformation frameworks

At Frequence, you will be part of a dynamic, diverse, innovative, and friendly work environment that values creativity and collaboration. The company embraces differences and believes they drive creativity and innovation. The team consists of individuals from varied backgrounds who are all trail-blazing team players, thinking big and aiming to make a significant impact.

Please note that third-party recruiting agencies will not be involved in this search.
Posted 1 month ago
4.0 - 7.0 years
6 - 9 Lacs
Ahmedabad
Work from Office
Job Overview
Designation: Software Engineer
Location: Ahmedabad
Work Mode: Work from Office
Vacancy: 1
Experience: 4.0 to 7.0 years

ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 month ago
6.0 - 11.0 years
9 - 19 Lacs
Bengaluru
Hybrid
Lead: 6-8 years
- Focus on production cost for the techniques and features
- Mentoring the team on benchmarking costs and performance KPIs
- Keeping the team focused on its objectives
- Advanced proficiency in Python and/or Scala for data engineering tasks
- Proficiency in PySpark and Scala Spark for distributed data processing, with hands-on experience in Azure Databricks
- Expertise in Azure Databricks for data engineering, including Delta Lake, MLflow, and cluster management
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their big data and data warehousing services (e.g., Azure Data Factory, AWS Redshift)
- Expertise in data warehousing platforms such as Snowflake, Azure Synapse Analytics, or Redshift, including schema design, ETL/ELT processes, and query optimization
- Experience with the Hadoop ecosystem (HDFS, Hive, HBase, etc.) and Apache Airflow for workflow orchestration and scheduling
- Advanced knowledge of SQL for data warehousing and analytics, with experience in NoSQL databases (e.g., MongoDB) as a plus
- Experience with version control systems (e.g., Git) and CI/CD pipelines
- Familiarity with Java or other programming languages is a plus
Posted 1 month ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Work from Office
We are looking for Data Engineers with experience working as Quantexa developers.
Posted 2 months ago
5.0 - 10.0 years
30 - 35 Lacs
Kolkata, New Delhi, Bengaluru
Work from Office
Good hands-on experience working as a GCP Data Engineer, with very strong experience in SQL and PySpark, as well as BigQuery, Dataform, Dataplex, etc. Only immediate joiners or candidates currently serving their notice period will be considered.
Posted 2 months ago
2.0 - 4.0 years
7 - 9 Lacs
Mangaluru
Work from Office
About the Role
We're seeking a Data Engineering expert with a passion for teaching and building impactful learning experiences. This role goes beyond traditional instruction: it's about designing engaging, industry-relevant content and delivering it in a way that sparks curiosity and problem-solving among young professionals. If you're someone who thrives in a startup-like, hands-on learning environment and loves to simplify complex technical concepts, we want you on our team.

Role & Responsibilities
- Design and deliver an industry-relevant Data Engineering curriculum with a focus on solving complex, real-world problems.
- Mentor students through the process of building product-grade data solutions, from identifying the problem to deploying a prototype.
- Conduct hands-on sessions, coding labs, and data engineering workshops.
- Assess student progress through assignments, evaluations, and project reviews.
- Encourage innovation and entrepreneurship by helping students transform ideas into structured products.
- Continuously improve content based on student outcomes and industry trends.
- Be a role model who inspires, supports, and challenges learners to grow into capable tech professionals.

Preferred Candidate Profile: Key Skills & Expertise
- Strong practical experience with data engineering tools and frameworks (e.g., SQL, Python, Spark, Kafka, Airflow, Hadoop).
- Ability to design course modules that emphasize application, scalability, and problem-solving.
- Demonstrated experience in mentoring, teaching, or conducting technical workshops.
- Passion for product thinking: guiding students to go beyond code and build real solutions.
- Excellent communication and leadership skills.
- Adaptability and a growth mindset.

Contact: 91 97041 22348 / hr@singhtechservices.com
Posted 2 months ago
3.0 - 8.0 years
5 - 12 Lacs
Chennai
Work from Office
Minimum 3+ years as a Data Engineer (GenAI platform). ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory. Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, and Vespa.
Posted 2 months ago
6.0 - 11.0 years
1 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru (Hybrid)

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory, Databricks (PySpark), and Synapse Analytics.
- Implement and manage real-time data streaming solutions using Kafka.
- Build and optimize data lake architectures and ensure best practices for scalability and security.
- Develop efficient SQL and Python scripts for data transformation and cleansing.
- Collaborate with data scientists, analysts, and other stakeholders to ensure data is readily available and reliable.
- Utilize Azure DevOps and Git workflows for CI/CD and version control of data pipelines and scripts.
- Monitor and troubleshoot data ingestion, transformation, and loading issues.

Must-Have Skills:
- Azure Data Factory (ADF)
- Azure Databricks (with PySpark)
- Azure Synapse Analytics
- Apache Kafka
- Strong proficiency in SQL, Python, and Spark
- Experience with Azure DevOps and Git workflows
- Strong understanding of data lake architectures and cloud data engineering best practices

Good to Have:
- Experience with data governance tools or frameworks
- Exposure to Delta Lake, Parquet, or other data formats
- Knowledge of performance tuning in distributed data environments
- Familiarity with infrastructure-as-code (e.g., Terraform or ARM templates)
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Who We Are:
We are a digitally native company that helps organizations reinvent themselves and unleash their potential. We are the place where innovation, design, and engineering meet scale. Globant is a 20-year-old, NYSE-listed public organization with more than 33,000 employees worldwide, working out of 35 countries. www.globant.com

Job Location: Pune/Hyderabad/Bangalore
Work Mode: Hybrid
Experience: 5 to 10 years

Must-have skills:
1) AWS (EC2, EMR & EKS)
2) Redshift
3) Lambda functions
4) Glue
5) Python
6) PySpark
7) SQL
8) CloudWatch
9) NoSQL database (DynamoDB/MongoDB or any)

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.

- Design scalable data models: develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize data pipelines: work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS cloud data services: design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Optimize SQL queries for performance and scalability; expertise in writing complex SQL queries and tuning them for performance.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.
Posted 2 months ago
3.0 - 6.0 years
3 - 5 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Job Title: Data Engineer - Snowflake & ETL Specialist
Experience: 3-6 years
Employment Type: Full-time
Joining: Immediate
Location: Gurgaon
Department: Data Engineering / Analytics

Job Summary:
We are seeking a skilled Data Engineer with strong hands-on experience in Snowflake, ETL development, and AWS Glue. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and data warehouse solutions that support enterprise-level analytics and reporting needs.

Key Responsibilities:
- Develop, optimize, and manage ETL pipelines using AWS Glue, Python, and Snowflake.
- Design and implement data warehouse solutions and data models based on business requirements.
- Work closely with data analysts, BI developers, and stakeholders to ensure clean, consistent, and reliable data delivery.
- Monitor and troubleshoot performance issues related to data pipelines and queries in Snowflake.
- Participate in code reviews, documentation, and knowledge-sharing activities.
- Ensure data security, governance, and compliance with organizational policies.
Posted 2 months ago
3.0 - 8.0 years
9 - 19 Lacs
Bengaluru
Hybrid
Data Engineers: help optimize workflows handling millions of images at minimum cost.
- Every bit of optimization, when scaled to millions of images, means a lot of money saved.
- You must be extremely good at programming for cloud workflows, with a strong eye on optimization at scale.

Competencies:
- First, a belief that "the right cost is everything": optimized but fast on cloud.
- An attitude of making data flow through cloud workflows most efficiently.
- "Right cost" can even mean finding the storage solution that retrieves data cheapest and fastest.
- Extremely good at software development; not one to brag about language proficiency, but willing to learn even assembly language if required. For now, mostly Python.
Posted 2 months ago
5.0 - 10.0 years
30 - 37 Lacs
Chennai
Remote
Role & Responsibilities
- Expert in Azure Data Factory
- Proven experience in data modelling for manufacturing data sources
- Proficient in SQL design
- 5+ years of experience in data engineering roles
- Proven experience in Power BI: dashboarding, DAX calculations, star schema development, and semantic model building
- Manufacturing knowledge
- Experience with GE PPA as a data source is desirable
- API development knowledge
- Python skills

Location: nearshore or offshore, with 3 to 5 hours of overlap with the CST time zone
Posted 2 months ago
10.0 - 20.0 years
30 - 40 Lacs
Hyderabad
Hybrid
Job Title: Technical Product Manager - Enterprise Applications

Summary
We are looking for an experienced Technical Product Manager (TPM) to lead the development of enterprise-grade software products. This role bridges deep technical knowledge with strong product management expertise. You will work closely with engineering, architecture, and cross-functional teams to define, prioritize, and deliver high-impact features and platform capabilities. The ideal candidate is someone who can engage engineers on system design, translate complex technical requirements into actionable plans, and communicate product value effectively to internal and external stakeholders.

Key Responsibilities

Product Strategy & Technical Planning
- Own the product roadmap and delivery of enterprise application features with a strong technical foundation.
- Partner with engineering and architecture teams to translate product goals into scalable, performant, and secure solutions.
- Evaluate technical feasibility and actively participate in design and architecture discussions.

Requirements Management & Feature Definition
- Gather and translate complex functional and technical requirements into clear user stories and acceptance criteria.
- Own the product backlog and ensure technical integrity in prioritization and trade-off decisions.
- Define success metrics and track feature impact on platform performance, adoption, and stability.

Stakeholder Communication & Alignment
- Act as the point of contact between engineering, data, product design, and customer-facing teams.
- Drive alignment and clarity around scope, priorities, and deliverables across cross-functional teams.
- Communicate technical roadmaps and rationale effectively to both technical and business stakeholders.

Execution & Delivery Oversight
- Lead sprint planning, backlog grooming, and release coordination with agile teams.
- Proactively identify delivery risks, technical dependencies, and blockers, and work to resolve them.
- Monitor and optimize delivery velocity, system health, and platform scalability with a hands-on approach.

Required Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or a related technical discipline.
- 6-10 years of experience in product management, with at least 3 years as a Technical Product Manager.
- Experience delivering enterprise-grade software platforms, APIs, or data-intensive applications.
- Strong technical acumen: able to engage in architecture, data modeling, system design, and API discussions.
- Hands-on experience with modern cloud platforms (AWS, GCP, or Azure), microservices, DevOps practices, and CI/CD pipelines.
- Proven ability to write detailed technical product specs, define clear roadmaps, and manage stakeholder expectations.

Preferred Qualifications
- Background in software engineering or systems architecture.
- Experience working on AI/ML platforms, developer tools, or infrastructure products.
- Familiarity with observability, scalability, or performance optimization for enterprise systems.
- Proficiency with tools like Jira, Confluence, Swagger, Postman, and GitHub.
- Excellent communicator who can simplify the complex and align diverse teams toward a common goal.
Posted 2 months ago
5.0 - 10.0 years
6 - 16 Lacs
Vadodara
Work from Office
We are seeking an experienced Senior Data Engineer with a minimum of 5 years of hands-on experience to join our dynamic data team. The ideal candidate will have strong expertise in Microsoft Fabric, demonstrate readiness to adopt cutting-edge tools like SAP Data Sphere, and possess foundational AI knowledge to guide our data engineering initiatives.

Key Roles and Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes using Microsoft Fabric tools such as Azure Data Factory (ADF) and Power BI.
- Work on large-scale data processing and analytics using PySpark.
- Evaluate and implement new data engineering tools like SAP Data Sphere through training or self-learning.
- Support business intelligence, analytics, and AI/ML initiatives by building robust data architectures.
- Apply AI techniques to automate workflows and collaborate with data scientists on machine learning projects.
- Mentor junior data engineers and lead data-related projects across departments.
- Coordinate with business teams, vendors, and technology partners for smooth project delivery.
- Create dashboards and reports using tools like Power BI or Tableau, ensuring data accuracy and accessibility.
- Support self-service analytics across business units and maintain consistency in all visualizations.

Experience & Technical Skills
- 5+ years of professional experience in data engineering, with expertise in Microsoft Fabric components
- Strong proficiency in PySpark for large-scale data processing and distributed computing (mandatory)
- Extensive experience with Azure Data Factory (ADF) for orchestrating complex data workflows (mandatory)
- Proficiency in SQL and Python for data processing and pipeline development
- Strong understanding of cloud data platforms, preferably the Azure ecosystem
- Experience in data modelling, data warehousing, and modern data architecture patterns

Interested candidates can share their updated profiles at itcv@alembic.co.in.
Posted 2 months ago
6.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
Job Description:
Job Role: Data Engineer
Years of Experience: 6+ years
Job Location: Pune
Work Model: Hybrid

Job Summary:
We are seeking a highly skilled Data Engineer with strong expertise in DBT, Java, Apache Airflow, and DAG (Directed Acyclic Graph) design to join our data platform team. You will be responsible for building robust data pipelines, designing and managing workflow DAGs, and ensuring scalable data transformations to support analytics and business intelligence.

Key Responsibilities:
- Design, implement, and optimize ETL/ELT pipelines using DBT for data modeling and transformation.
- Develop backend components and data processing logic using Java.
- Build and maintain DAGs in Apache Airflow for orchestration and automation of data workflows.
- Ensure the reliability, scalability, and efficiency of data pipelines for ingestion, transformation, and storage.
- Work with cross-functional teams to understand data needs and deliver high-quality solutions.
- Troubleshoot and resolve data pipeline issues in production environments.
- Apply data quality and governance best practices, including validation, logging, and monitoring.
- Collaborate on CI/CD deployment pipelines for data infrastructure.

Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong experience with DBT for modular, testable, and version-controlled data transformation.
- Proficient in Java, especially for building custom data connectors or processing frameworks.
- Deep understanding of Apache Airflow and the ability to design and manage complex DAGs.
- Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies.
- Exposure to cloud environments such as AWS, GCP, or Azure.
Posted 2 months ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile:
Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 2 months ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Dear Candidate,

GyanSys is looking for an Azure Databricks Data Engineer for our overseas customers' consulting projects based in the Americas/Europe/APAC region. Please apply for the job role or share your CV directly with kiran.devaraj@gyansys.com, or call 8867163603 to discuss the fitment in detail.

Designation: Sr/Lead/Principal Consultant (based on experience)
Experience: 5+ years relevant
Location: Bangalore, ITPL
Notice Period: Immediate or 30 days max

Job Description:
We are seeking a Data Engineer with 5-10 years of experience in Databricks, Python, and APIs. The primary responsibility of this role is to migrate on-premises big data Spark and Impala/Hive scripts to the Databricks environment. The ideal candidate will have a strong background in data migration projects and be proficient in transforming ETL pipelines to Databricks. The role requires excellent problem-solving skills and the ability to work independently on complex data migration tasks. Experience with big data technologies and cloud platforms (Azure) is essential. Join our team to lead the migration efforts and optimize our data infrastructure on Databricks. We value excellent problem-solving skills, a passion for data accessibility, effective communication and collaboration skills, and experience with Agile methodologies.

Kindly apply only if your profile fits the above prerequisites. Also, please share this job post with your acquaintances.
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad/Secunderabad
Hybrid
Job Objective
We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you!

Required Qualifications
- Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience
- Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI, etc.)

Preferred Qualifications & Skills

Must-Have Skills:
- Expertise in relational and multi-dimensional database architectures
- Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint
- Strong experience in Power BI, MDX, SSAS, SSIS, SSRS, Tabular, and DAX queries
- Deep understanding of SQL Server Tabular Model and multidimensional database design
- Excellent SQL-based data analysis skills
- Strong hands-on experience with Azure Data Factory, Databricks, and PySpark/Python

Nice-to-Have Skills:
- Exposure to AWS or GCP
- Experience with Lakehouse architecture, real-time streaming (Kafka/Event Hubs), and infrastructure as code (Terraform/ARM)
- Familiarity with Cognos, Qlik, Tableau, MDM, DQ, and data migration
- MSBI, Power BI, or Azure certifications
Posted 2 months ago
3.0 - 6.0 years
3 - 6 Lacs
Mumbai Suburban, Thane, Navi Mumbai
Work from Office
We have an urgent opening for an Application Specialist role with one of our clients.
- Support, functionally and technically, different versions of Capital Markets applications
- Analyse, validate, and manage business requirements
- Must be strong in SQL, C#, and JavaScript

Required Candidate Profile:
- Experience: 3+ years
- Location: Powai
- Good in SQL, Java, C#, JavaScript, HTML, CSS
- Mumbai candidates preferred; 5-day work week (Sun-Thu) with 2 days off (Fri & Sat)

Share CV: snehal@peshr.com | Call: 9137306440
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data pipelines and data solutions on the Azure platform. Your primary focus will be on developing efficient data models, ETL processes, and data integration workflows to support the organization's data needs. You will collaborate with data architects, data scientists, and other stakeholders to understand requirements and translate them into technical solutions. Additionally, you will optimize data storage and retrieval for performance and cost efficiency.

In this role, you will also be involved in troubleshooting data issues, monitoring data pipelines, and ensuring data quality and integrity. You will stay current with Azure data services and best practices to continuously improve the data infrastructure.

The ideal candidate for this position will have a strong background in data engineering, experience working with Azure data services such as Azure Data Factory, Azure Databricks, and Azure SQL Database, and proficiency in SQL, Python, or other programming languages used in data processing. If you are a data professional with a passion for working with large datasets, building scalable data solutions, and driving data-driven decision-making, this role offers an exciting opportunity to contribute to the organization's data strategy and growth.
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
Wipro Limited is a leading technology services and consulting company dedicated to creating innovative solutions that cater to the complex digital transformation needs of clients. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing future-ready, sustainable businesses. Our global presence spans over 65 countries with a workforce of more than 230,000 employees and business partners, committed to supporting our customers, colleagues, and communities in adapting to an ever-evolving world. For more information, please visit our website at www.wipro.com.

As a Data Engineer with a minimum of 7 years of experience, including at least 2 years of project delivery experience on Dataiku platforms, you will be responsible for configuring and optimizing Dataiku's architecture. This includes managing data connections, security settings, and workflow optimization to ensure seamless operations. Your expertise in Dataiku recipes, Designer nodes, API nodes, and Automation nodes will be instrumental in deploying custom workflows and scripts using Python.

Collaboration is key in this role, as you will work closely with data analysts, business stakeholders, and clients to gather requirements and translate them into effective solutions within the Dataiku environment. Your ability to independently navigate a fast-paced environment and apply strong analytical and problem-solving skills will be crucial in meeting project timelines and objectives. Additionally, familiarity with agile development methodologies and experience with Azure DevOps for CR/production deployment implementation are highly desirable.

Join us in reinventing the digital landscape at Wipro, where we encourage constant evolution and empower individuals to shape their professional growth. We welcome applications from individuals with disabilities to contribute to our diverse and inclusive workforce.
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Chennai
Work from Office
Experience: 5 to 10 years
Skills: Azure Databricks, Azure Data Factory, Python, Spark
Location: Pune, Chennai
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Chennai
Work from Office
Python, AWS Glue, Data Lake, SQL, orchestration tool (e.g., Airflow), data ingestion framework
Posted 2 months ago
5.0 - 8.0 years
15 - 30 Lacs
Hyderabad
Work from Office
5+ yrs exp as a Data Engineer with a strong track record of designing and implementing complex data solutions. Expert in SQL for data manipulation, analysis, and optimization. Strong programming skills in Python for data engineering tasks.
Posted 2 months ago