23086 PySpark Jobs - Page 26

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

30 - 45 Lacs

kolkata, hyderabad

Work from Office

Work Location: Hyderabad/Kolkata. Experience: 6-10 yrs. Required Skills: Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues; experience with batch job scheduling and identifying data/job dependencies; experience with data engineering using the AWS platform and Python; familiarity with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway; familiarity with software DevOps CI/CD tools such as Git, Jenkins, Linux, and shell scripting. Thanks & Regards, Suganya R, suganya@spstaffing.in
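The "batch job scheduling and identifying data/job dependencies" requirement above boils down to ordering jobs so each one runs only after its upstreams. A minimal sketch using Python's standard-library topological sorter (job names are hypothetical, not from the posting):

```python
from graphlib import TopologicalSorter

# Hypothetical batch jobs mapped to the jobs they depend on.
deps = {
    "load_s3_raw": set(),
    "validate": {"load_s3_raw"},
    "transform": {"validate"},
    "publish_sqs": {"transform"},
}

# static_order() yields each job only after all of its dependencies.
run_order = list(TopologicalSorter(deps).static_order())
```

A real scheduler (e.g. Airflow) builds the same kind of graph from task dependencies before dispatching work.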

Posted 4 days ago

5.0 years

0 Lacs

chennai, tamil nadu, india

On-site

The Senior Data Engineer will be responsible for the architecture, design, development, and maintenance of our data platforms, with a strong focus on leveraging Python and PySpark for data processing and transformation. This role requires a strong technical leader who can work independently and as part of a team, contributing to the overall data strategy and helping to drive data-driven decision-making across the organization. Key Responsibilities Data Architecture & Design: Design, develop, and optimize data architectures, pipelines, and data models to support various business needs, including analytics, reporting, and machine learning. ETL/ELT Development (Python/PySpark Focus): Build, tes...

Posted 4 days ago

10.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Description We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS . The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, gold layers), supporting large-scale ingestion, transformation, and publishing of data. Responsibilities Key Responsibilities Define and design cloud-native data architectures on Databricks using Delta Lake, Unity Catalog, and related components. Develop and execute migration strategies for moving on-premises d...
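The medallion architecture named above (bronze, silver, gold) is a layering convention: land raw data untouched, then cleanse it, then aggregate it for publishing. A minimal plain-Python sketch of that flow (field names are illustrative assumptions; a real Databricks pipeline would use PySpark DataFrames and Delta Lake tables):

```python
# Bronze layer: raw ingested records, kept as-is (including bad rows).
bronze = [
    {"order_id": "1", "amount": "120.5", "country": "IN"},
    {"order_id": "2", "amount": "bad",   "country": "IN"},
    {"order_id": "3", "amount": "80.0",  "country": "US"},
]

def to_silver(rows):
    """Silver layer: cleanse -- drop unparseable rows, cast types."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine these rows
    return out

def to_gold(rows):
    """Gold layer: aggregate -- revenue per country, ready to publish."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

The point of the layering is that each stage is reproducible from the one before it, so bad transformations can be replayed without re-ingesting.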

Posted 4 days ago

4.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About Agoda Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world. Our Purpose - Bridging the World Through Travel We believe travel allows people to enjoy, learn and experience more ...

Posted 4 days ago

10.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Description We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS . The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, gold layers), supporting large-scale ingestion, transformation, and publishing of data. Responsibilities Key Responsibilities Define and design cloud-native data architectures on Databricks using Delta Lake, Unity Catalog, and related components. Develop and execute migration strategies for moving on-premises d...

Posted 4 days ago

5.0 - 9.0 years

0 Lacs

hyderabad, telangana, india

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collabo...

Posted 4 days ago

3.0 - 8.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Description Data Scientist Description: Degree and Qualification: BE/B.Tech, ME/M.Tech in CSE/IT, Statistics, or a related field. Master’s degree in data science, AI, or a related field is preferred. Number of Years of Experience as a Data Analyst / Scientist: 3-8 years Language Skills: Good communication skills in English, proficiency in German is an added advantage. Domain Knowledge: Strong understanding of supply chain and supplier performance evaluation processes. Familiarity with procurement, supplier management, inbound processes, and logistics concepts like goods receipt, delivery note, and part numbers. Basic knowledge of plant logistics and operational efficiency. Technical Skil...

Posted 4 days ago

6.0 - 11.0 years

17 - 32 Lacs

bengaluru

Work from Office

Opening for Data Engineer. Exp: 6-12 yrs. Location: Bangalore. Kindly share your details if you are available for an F2F interview on the coming weekends. Job Description: BigQuery; Cloud Composer or Airflow (any one is required); Dataflow or Dataproc or Datafusion (any one is required); Python or PySpark (any one is required); DBT; SQL; GKE; Looker

Posted 4 days ago

5.0 - 9.0 years

0 Lacs

pune, india

On-site

Role Overview: As a Senior Data Engineer at DATAECONOMY, you will be responsible for leading the end-to-end development of complex models for compliance and supervision. Your expertise in cloud-based infrastructure, ETL pipeline development, and financial domains will be crucial in creating robust, scalable, and efficient solutions. Key Responsibilities: - Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks. - Design, build, and optimize scalable cloud infrastructure solutions with a minimum of 5 years of experience. - Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing. - Build and maintain CI/CD pipelines for...

Posted 4 days ago

3.0 - 8.0 years

15 - 30 Lacs

hyderabad, pune, bengaluru

Hybrid

We are seeking an experienced Big Data Developer with strong expertise in Apache Spark, Scala, and Python (PySpark). The ideal candidate will design and develop data processing pipelines, ETL workflows, and distributed applications, ensuring scalability and performance in large-scale data environments. Key Responsibilities: Develop and maintain data pipelines using Apache Spark, Scala, and PySpark. Apply functional programming principles for clean and efficient Scala code. Optimize Spark jobs for performance, cost efficiency, and scalability. Collaborate with data engineers, data scientists, and cloud teams to deliver robust solutions. Integrate Spark applications with data lak...

Posted 4 days ago

3.0 - 8.0 years

4 - 9 Lacs

hyderabad, pune, bengaluru

Hybrid

We are seeking an experienced Apache Spark Developer to design, develop, and optimize big data processing pipelines using Spark. The ideal candidate will have strong expertise in distributed computing, data engineering, and performance tuning, with experience in building scalable data solutions on cloud or on-prem platforms. Key Responsibilities: Design and implement data processing pipelines using Apache Spark (Core, SQL, Streaming). Develop and optimize ETL workflows for large-scale data ingestion and transformation. Work with Spark on Hadoop, Databricks, or cloud-based Spark environments. Collaborate with data engineers, data scientists, and business teams to deliver analytics so...

Posted 4 days ago

5.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Purpose: The Senior Engineer – Data Analytics (Microsoft Fabric) is responsible for designing, developing, and managing scalable data solutions using Microsoft Fabric’s integrated services, including OneLake, Data Factory, Synapse, and Power BI. This role enables end-to-end data engineering, advanced analytics, and business intelligence to support data-driven decision-making across the organization. Key Responsibilities: Design, develop, and maintain robust data pipelines and ETL processes using Fabric’s Data Factory and Synapse Data Engineering capabilities. Manage and optimize OneLake storage to ensure data availability, consistency, and performance. Build and maintain Data Warehouses ...

Posted 4 days ago

2.0 - 6.0 years

10 - 18 Lacs

bengaluru

Work from Office

We are Hiring: Data Engineering Professionals! (Drive in Bangalore, Chennai & Gurgaon - Final Job Location: Bangalore) Are you ready to elevate your career with a fast-growing, innovation-driven team? We're on the lookout for passionate Data Engineers who love solving complex problems and turning data into powerful insights. If this sounds like you, we want to meet you! Who We're Looking For: Experience: Minimum 2+ years. Skills Required: Python, SQL, PySpark, AWS. Availability: Immediate Joiners Preferred! Drive Locations: Bangalore, Chennai, Gurgaon (final work location will be Bangalore after selection). Interested? Please fill out the form below: https://forms.gle/bnR5mGwVLZWjyKGS7 Let's build the...

Posted 4 days ago

5.0 - 7.0 years

10 - 20 Lacs

hyderabad, chennai, bengaluru

Hybrid

We are seeking a skilled Databricks Developer to design, develop, and optimize data pipelines and analytics solutions on the Databricks platform. Key Responsibilities: Design and implement ETL pipelines using Databricks and PySpark. Develop and maintain Delta Lake solutions for data reliability and performance. Optimize Spark jobs for scalability and cost efficiency. Collaborate with data scientists, analysts, and business teams to deliver data-driven insights. Ensure data security, compliance, and governance within cloud environments. Integrate Databricks with cloud services (AWS, Azure, or GCP). Implement CI/CD pipelines for data workflows and maintain version control using Git. Requi...

Posted 4 days ago

3.0 - 8.0 years

3 - 7 Lacs

bengaluru

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using Python, Java, SQL & Snowflake. * Optimize performance through Databricks & AWS. * Collaborate with cross-functional teams on data engineering projects.

Posted 4 days ago

3.0 - 5.0 years

7 - 8 Lacs

thiruvananthapuram

On-site

3 - 5 Years | 15 Openings | Trivandrum. Role description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one domain related to data, with a solid understanding of SCD concepts and data warehousing principles. Outcomes: Collaborate closely with data analysts, data scientists, and other stakeholders to ensure data accessibility, quality, and security across various data ...
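For readers unfamiliar with the "SCD concepts" this posting asks for: a Type 2 slowly changing dimension keeps history by expiring old rows instead of overwriting them. A plain-Python sketch under assumed field names (a production pipeline would express this as a Delta Lake MERGE in PySpark or SQL):

```python
def scd2_merge(dim, incoming, as_of):
    """Type 2 SCD merge: expire changed current rows, append new versions.

    Field names (key, attr, start_date, end_date, is_current) are
    illustrative assumptions, not from the posting.
    """
    current = {r["key"]: r for r in dim if r["is_current"]}
    for rec in incoming:
        old = current.get(rec["key"])
        if old and old["attr"] == rec["attr"]:
            continue  # unchanged: nothing to do
        if old:  # attribute changed: close out the old version
            old["is_current"] = False
            old["end_date"] = as_of
        dim.append({"key": rec["key"], "attr": rec["attr"],
                    "start_date": as_of, "end_date": None,
                    "is_current": True})
    return dim

# Example: a customer moves from Pune to Chennai; both rows are kept.
dim = [{"key": "c1", "attr": "Pune", "start_date": "2024-01-01",
        "end_date": None, "is_current": True}]
dim = scd2_merge(dim, [{"key": "c1", "attr": "Chennai"}], "2025-01-01")
```

The payoff is that historical queries can still join facts to the dimension values that were current at the time.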

Posted 4 days ago

3.0 - 5.0 years

0 - 1 Lacs

cochin

On-site

Our story At Alight, we believe a company’s success starts with its people. At our core, we Champion People, help our colleagues Grow with Purpose and true to our name we encourage colleagues to “Be Alight.” Our Values: Champion People – be empathetic and help create a place where everyone belongs. Grow with purpose – Be inspired by our higher calling of improving lives. Be Alight – act with integrity, be real and empower others. It’s why we’re so driven to connect passion with purpose. Alight helps clients gain a benefits advantage while building a healthy and financially secure workforce by unifying the benefits ecosystem across health, wealth, wellbeing, absence management and navigation....

Posted 4 days ago

8.0 years

0 - 1 Lacs

gurgaon

On-site

Our story At Alight, we believe a company’s success starts with its people. At our core, we Champion People, help our colleagues Grow with Purpose and true to our name we encourage colleagues to “Be Alight.” Our Values: Champion People – be empathetic and help create a place where everyone belongs. Grow with purpose – Be inspired by our higher calling of improving lives. Be Alight – act with integrity, be real and empower others. It’s why we’re so driven to connect passion with purpose. Alight helps clients gain a benefits advantage while building a healthy and financially secure workforce by unifying the benefits ecosystem across health, wealth, wellbeing, absence management and navigation....

Posted 4 days ago

10.0 years

4 - 5 Lacs

gurgaon

On-site

Lead – Supply Chain Analytics Hub Location: Gurugram, India (initial); Kuala Lumpur, Malaysia (future transition). Are you a data-driven supply chain leader ready to drive digital transformation across a global FMCG network? Nestlé is seeking a Supply Chain Analytics Lead to build and scale our Analytics Hub, driving end-to-end data and digital solutions from India, with a transition to our Malaysia Hub. About the Role This leadership role sits at the intersection of technology, analytics and supply chain strategy, driving innovation and visibility across Nestlé’s global network. You’ll lead a team of data scientists, analysts and programmers to deliver advanced analytics tools, create scalabl...

Posted 4 days ago

0 years

4 - 8 Lacs

hyderabad

On-site

CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intell...

Posted 4 days ago

50.0 years

6 - 10 Lacs

hyderabad

On-site

About Gap Inc. Our past is full of iconic moments — but our future is going to spark many more. Our brands — Gap, Banana Republic, Old Navy and Athleta — have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we’re more than the clothes that we make. We know that business can and should be a force for good, and it’s why we work hard to make product that makes people feel good, inside and out. It’s why we’re committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us. Abo...

Posted 4 days ago

5.0 years

4 - 6 Lacs

hyderabad

On-site

Job Description: Senior Data Engineer: The Senior Data Engineer will be responsible for driving the solution design and delivery of self-service BI and analytics by implementing modern data solutions on the Azure Databricks platform. The role focuses on building efficient, scalable ELT/ETL data pipelines, robust data ingestion, transformation, and orchestration processes, and enforcing data quality controls using Databricks and DBT as the core technologies—leveraging the broader Azure data ecosystem to deliver curated, analytics-ready datasets. Mandatory Requirements: 1. 5+ years of experience in Big Data technologies with strong proficiency in Azure Databricks, PySpark/SQL, and DBT for end-...
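The "data quality controls" this posting calls for typically start with not-null and uniqueness checks, which DBT expresses declaratively as `not_null` and `unique` tests on a column. A plain-Python sketch of what those two checks actually verify (column and row shapes are illustrative assumptions):

```python
def check_not_null(rows, column):
    """Return indices of rows where the column is missing or null."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return indices of rows whose column value was already seen."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        v = r.get(column)
        if v in seen:
            dupes.append(i)
        seen.add(v)
    return dupes

# Example dataset with one duplicate id and one null id.
rows = [{"id": 1}, {"id": 1}, {"id": None}]
null_failures = check_not_null(rows, "id")
dupe_failures = check_unique(rows, "id")
```

In a pipeline such checks gate promotion of a dataset: non-empty failure lists block the downstream, analytics-ready layer.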

Posted 4 days ago

15.0 years

8 - 9 Lacs

bhubaneshwar

On-site

Project Role : Data Platform Architect Project Role Description : Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills : PySpark Good to have skills : Snowflake Data Warehouse, AWS Athena, AWS Glue Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes various relevant components of the dat...

Posted 4 days ago

5.0 - 7.0 years

3 - 6 Lacs

chennai

On-site

We are seeking an experienced Databricks Developer with 5 to 7 years of expertise in big data engineering and analytics platforms. The ideal candidate will have deep experience in Apache Spark, Databricks, PySpark, and cloud platforms such as Azure or AWS, and a strong background in data pipeline development, performance optimization, and data governance. Key Responsibilities: Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks. Write efficient, production-ready PySpark or Scala code for data transformation and ETL processes. Integrate data from various structured and unstructured sources into a unified platform. Implement Delta Lake and manage data versioning updat...

Posted 4 days ago

0 years

15 - 20 Lacs

ahmedabad

On-site

Data & AI Engineer Develop and deploy AI solutions for regulatory compliance, utilizing AWS Bedrock and large language models (LLMs). ● Develop parsing and chunking strategies for various types of data sources (Files, APIs, Newsletters, Email Lists, Web pages) and embed documents into vector databases ● Manage and maintain vector databases with additional custom metadata. ● Design and implement RAG, and experiment with prompt engineering. ● Integrate AWS Bedrock and LLM APIs into the system. ● Monitor and optimize RAG model performance. ● Design and implement data storage solutions for regulatory data. ● Ensure data integrity, security, and availability. ● Optimize database performance and scala...
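The "parsing and chunking strategies" bullet above usually means splitting each document into overlapping windows before embedding them into the vector database. A minimal sketch of fixed-size overlapping chunking (size and overlap values are illustrative assumptions, not from the posting):

```python
def chunk_text(text, size=100, overlap=20):
    """Split text into overlapping character windows of at most `size`."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

# Example: a 250-character document yields 3 chunks, each sharing a
# 20-character overlap with its neighbour so context isn't cut mid-thought.
doc = "".join(str(i % 10) for i in range(250))
chunks = chunk_text(doc, size=100, overlap=20)
```

Real RAG pipelines often chunk on sentence or token boundaries instead of raw characters, but the overlap idea is the same.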

Posted 4 days ago

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
