15947 PySpark Jobs - Page 26

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

8 - 12 Lacs

noida

Work from Office

Must have: Minimum 5+ years of experience in a QE/Agile environment with a focus on technical, automated validations leveraging a variety of tools, technologies, and frameworks. Undergraduate degree in Technology. Very good Python coding skills, with a focus on Spark/PySpark. Strong QA or development background with a focus on delivering excellent, well-tested, quality software into production, taking on the role of gatekeeper to production. Must have implemented and be very familiar with CI/CD pipelines. Strong communicator with good organizational and time management capabilities, able to establish good working relationships across teams. Deadline-driven and results-oriented; able to meet consistentl...

Posted 5 days ago

3.0 years

0 Lacs

gurgaon, haryana, india

On-site

About the Role: We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics. Key Responsibilities: Analyze large volumes of data to identify trends, patterns, and performance drivers. Collaborate with different teams to support and influence decision-making processes. Perform root cause analysis and recommend improvements to optimize processes. Design and track key KPIs. Ensure data integrity and accuracy across reporting tools...

Posted 5 days ago

0 years

0 Lacs

chennai, tamil nadu, india

On-site

Role Description. Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one domain related to data, with a solid understanding of SCD concepts and data warehousing principles. The backend engineer will play a key role in designing and implementing APIs, integrating with data pipelines, and ensuring high performance and reliability of server-side components. Key Respo...

Posted 5 days ago

8.0 years

0 Lacs

gurugram, haryana, india

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title: Data Engineering Lead. Overall Years of Experience: 8 to 10 years. Relevant Years of Experience: 4+. The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary: Design and implement scalable data lake architect...

Posted 5 days ago

5.0 - 10.0 years

25 - 30 Lacs

hyderabad

Hybrid

Job Summary: We’re looking for a Senior Data Engineer with 5-8 years of experience to build and maintain scalable, production-grade data pipelines. The ideal candidate is a strong software engineer with hands-on experience in Spark (3.x), Scala, SQL, and Python. You’ll be responsible for designing and implementing ETL/ELT solutions, collaborating with teams to deliver data products, and mentoring junior engineers. Key Responsibilities: Design and build data pipelines using Apache Spark 3.x, Scala, Python, and SQL. Tune Spark jobs for performance and cost. Ensure code quality with unit tests, CI/CD, and code reviews. Collaborate with platform and DevOps teams to deploy and monitor pipelines. T...

Posted 5 days ago

10.0 - 15.0 years

20 - 35 Lacs

pune

Work from Office

Job Title: Big Data Architect GCP. Experience: 10–12 Years. Location: Pune. Role: We are looking for a Big Data Architect with deep GCP expertise to define, design, and lead cloud-native data platforms for enterprise clients. You will be responsible for architecting large-scale data lakes, warehouses, and pipelines on GCP, modernizing on-premise workloads, and guiding technical teams to deliver robust and scalable solutions. Key Responsibilities: Architect modern data platforms on GCP using BigQuery, Dataproc, Dataflow, Pub/Sub, Cloud Composer, GCS. Lead data modernization and migration programs from on-premise systems (Teradata, Netezza, Exadata, Hadoop) to GCP. Define end-to-end solution arch...

Posted 5 days ago

4.0 - 9.0 years

11 - 18 Lacs

new delhi, bengaluru, mumbai (all areas)

Work from Office

Opening for AWS Data Engineer. Experience: 4+ years. Skills: PySpark, Python, SQL, AWS, Airflow. Notice period: 15-30 days. Job Description: We are seeking a highly skilled Data Engineer to join our dynamic team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our cloud-based data infrastructure to support our BFSI customers. You will work at the intersection of cloud technologies, data engineering, and the BFSI domain to deliver robust and scalable data solutions. Key Responsibilities: Design, develop, and implement data pipelines, ETL processes, and data integration solutions. Collaborate with cross-functional teams to understand data requirem...

Posted 5 days ago

8.0 years

0 Lacs

kochi, kerala, india

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title: Data Engineering Lead. Overall Years of Experience: 8 to 10 years. Relevant Years of Experience: 4+. The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary: Design and implement scalable data lake architect...

Posted 5 days ago

12.0 - 15.0 years

15 - 30 Lacs

gurugram

Hybrid

Job Title: Data Architect. Location: Gurgaon. Mode of working: Hybrid (3 days work from office). Experience: 12+ years. Key Responsibilities. Architecture & Strategy: Define and drive the data architecture roadmap aligned with business and analytics goals. Design scalable and maintainable Lakehouse architectures using Azure services. Establish best practices for data modeling, storage, and processing across structured and unstructured data. Evaluate and recommend emerging technologies and architectural patterns. Technical Execution: Lead the development of data pipelines using Azure Databricks and Delta Live Tables. Architect solutions leveraging Azure Data Lake Storage, Delta Lake, Azure Functions and o...

Posted 5 days ago

8.0 - 13.0 years

14 - 24 Lacs

pune

Hybrid

Candidate should have experience in Data Engineering with AWS Databricks (not Azure Databricks).

Posted 5 days ago

7.0 - 10.0 years

18 - 24 Lacs

pune

Work from Office

Responsibilities: * Design, develop & maintain big data pipelines using PySpark and Snowflake. * Collaborate with cross-functional teams on product development. * Ensure data quality through testing & optimization. Benefits: Provident fund.

Posted 5 days ago

8.0 years

0 Lacs

trivandrum, kerala, india

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title: Data Engineering Lead. Overall Years of Experience: 8 to 10 years. Relevant Years of Experience: 4+. The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary: Design and implement scalable data lake architect...

Posted 5 days ago

5.0 - 10.0 years

15 - 30 Lacs

hyderabad

Work from Office

Lead Data Engineer Data Management Job description Company Overview Accordion is a global private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic work, rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and...

Posted 5 days ago

1.0 - 3.0 years

10 - 20 Lacs

bengaluru

Work from Office

you should apply if you: have 1-3 years of experience in ETL and data engineering; are able to read and write complex SQL; have prior experience in at least one programming language; know your way around data modeling, data warehousing and lakehouse (we use redshift and databricks); come with experience in working on cloud services, preferably AWS; are constantly learning and looking for ways to improve yourself and the processes around you; can be a team player with strong analytical, communication, and troubleshooting skills. how is life at CRED? working at CRED would instantly make you realize one thing: you are working with the best talent around you. not just in the role you occupy, but...

Posted 5 days ago

7.0 - 12.0 years

18 - 30 Lacs

bengaluru

Work from Office

• Must have at least 7+ years of experience in Data warehouse, ETL, BI projects
• Must have at least 5+ years of experience in Snowflake
• Expertise in Snowflake architecture is a must
• Must have at least 3+ years of experience and a strong hold in Python/PySpark
• Must have experience implementing complex stored procedures and standard DWH and ETL concepts
• Proficient in Oracle database, complex PL/SQL, Unix shell scripting, performance tuning and troubleshooting
• Good to have experience with AWS services and creating DevOps templates for various AWS services
• Experience in using GitHub, Jenkins
• Good communication and analytical skills
• Snowflake certification is desirable

Posted 5 days ago

6.0 - 9.0 years

0 - 3 Lacs

bengaluru

Hybrid

• Experience in designing and architecting distributed data systems.
• Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
• Work with other Big Data developers to make sure that all data solutions are consistent.
• Partner with the business community to understand requirements, determine training needs and deliver user training sessions.
• Perform technology and product research to better define requirements, resolve important issues and improve the overall capability of the analytics technology stack.
• Evaluate and provide feedback on future technologies and new releases/upgrades.
Job Specific Knowledge:
• Supports Big Data and batc...

Posted 5 days ago

5.0 - 7.0 years

0 Lacs

pune, maharashtra, india

On-site

Must Have: Bachelor's or equivalent degree in Computer Science, IT, electronics and communications. Minimum 5-7 years of professional experience in the relevant Data Engineering field and knowledge of a cloud platform, i.e., Azure. Minimum 3 years of strong experience in Databricks, PySpark and SQL. Experience in SQL stored procedures, views, functions, SAP OData etc. Experience with Azure Data Lake, Azure Synapse, Azure Data Factory, SQL Data Warehouse, Azure Blob, Azure Storage Explorer. Proficient in creating Data Factory pipelines for on-cloud ETL processing; copy activity, custom Azure development etc. Good verbal and written communication skills. Good To Have: Good to have knowledge of BI-related technologies, i.e., Po...

Posted 5 days ago

25.0 years

0 Lacs

gurugram, haryana, india

On-site

At McCormick, we bring our passion for flavor to work each day. We encourage growth, respect everyone's contributions and do what's right for our business, our people, our communities and our planet. Join us on our quest to make every meal and moment better. Founded in Baltimore, MD in 1889 in a room and a cellar by 25-year-old Willoughby McCormick with three employees, McCormick is a global leader in flavour. With over 14,000 employees around the world and more than $6 Billion in annual sales, the Company manufactures, markets, and distributes spices, seasoning mixes, condiments and other flavourful products to the entire food industry, retail outlets, food manufacturers, food service busine...

Posted 5 days ago

6.0 - 11.0 years

4 - 8 Lacs

chennai

Work from Office

Azure Data Tech Lead / Sr. Azure Data Engineer. Years of Experience: 6 to 8 years. Location: Chennai. Work Mode: Work from Office (Chennai). Shift time: UK shift. Billrate: Ieyond this range). Primary Skillset: Azure Data Factory, Azure, Python, PySpark, Databricks, SQL, Azure Synapse. No. of Positions: 2. JD: Experience with Azure Data Factory, Azure, Python, PySpark, Databricks, SQL. Key Responsibilities: Design and build ETL/ELT pipelines to ingest, transform, and load data from clinical, omics, research, and operational sources. Optimize performance and scalability of data flows using tools like Apache Spark, Databricks, or AWS Glue. Collaborate with domain experts in genomics, clinical trials, and lab science t...

Posted 5 days ago

8.0 - 13.0 years

8 - 13 Lacs

hyderabad

Work from Office

Key Responsibilities: Team Leadership: Lead and mentor a team of Azure Data Engineers, providing technical guidance and support. Foster a collaborative and innovative team environment. Conduct regular performance reviews and set development goals for team members. Organize training sessions to enhance team skills and technical capabilities. Azure Data Platform: Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics. Ensure data engineering best practices and data governance are followed. Stay up-to-date with Azure data technologies and recommend improvements to enhance dat...

Posted 5 days ago

1.0 - 4.0 years

12 - 20 Lacs

bengaluru

Remote

Responsibilities: Build and maintain complex data processing pipelines. Design scalable implementations of Data Science models. Write clean, test-driven code (TDD). Deploy pipelines in production using Continuous Delivery practices. Requirements: Strong in Java/Python; familiarity with Hadoop is a plus. Fast learner with a self-driven attitude. Ability to apply theory to practice in real projects. Good at problem-solving, data structures, and algorithms.

Posted 5 days ago

3.0 - 8.0 years

10 - 20 Lacs

chennai

Work from Office

F2F Drive on 11th Oct 2025 (Location: Chennai). Senior ETL Developer AWS | Movate Chennai. Experience: 3 to 9 Years. Location: Chennai (5 Days Office). Notice Period: Immediate to 30 Days. Mandatory Skills: ETL Development: 3+ years overall ETL experience with minimum 3+ years in AWS PySpark scripting. AWS Data Solutions: Hands-on deployment & operations using S3, Lambda, SNS, Step Functions (strong AWS services knowledge is a must). Programming: Advanced PySpark expertise and solid experience with Python packages such as NumPy, Pandas, etc. Individual Contributor: Ability to own end-to-end design, build, and deployment of data pipelines without close supervision. Data Governance: Familiarity ...

Posted 5 days ago

5.0 - 10.0 years

6 - 9 Lacs

gurugram

Work from Office

GA4 & Digital Data Integration: Design, build, and deploy robust ETL and data management processes specifically for ingesting, transforming, and loading high-volume digital analytics data from Google Analytics 4 (GA4) into BigQuery. BigQuery Data Architecture: Develop and optimize BigQuery datasets, tables, and views to support various analytical needs, ensuring efficient querying and data integrity. Data Pipeline Development: Design, build, and deploy ETL job workflows with reliable error/exception handling and rollback frameworks, primarily utilizing GCP services. GCP Resource Management: Monitor and optimize data processing and storage resources on GCP, with a focus on BigQuery perfo...

Posted 5 days ago
