
530 Apache Spark Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

8 - 13 Lacs

Kolkata

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databrick...

Posted 2 months ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Role: Senior Databricks Engineer. As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and st...

Posted 2 months ago

Apply

5.0 - 10.0 years

3 - 5 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office

Azure Databricks Developer Job Title: Azure Databricks Developer Experience: 5+ Years Location: PAN India (Remote/Hybrid as per project requirement) Employment Type: Full-time Job Summary: We are hiring an experienced Azure Databricks Developer to join our dynamic data engineering team. The ideal candidate will have strong expertise in building and optimizing big data solutions using Azure Databricks, Spark, and other Azure data services. Key Responsibilities: Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark. Integrate and manage large datasets using Azure Data Lake, Azure Data Factory, and other Azure services. Implement Delta Lake for efficient ...
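For orientation only, here is a minimal sketch of the kind of Azure Databricks pipeline such a role involves, assuming PySpark with Delta Lake; the storage account, container names, and columns (order_id, order_date, amount) are invented for illustration:

```python
# Illustrative sketch only -- the ADLS account, containers, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-to-delta").getOrCreate()

# Read raw CSV files landed in Azure Data Lake Storage Gen2.
raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@examplestorage.dfs.core.windows.net/sales/"))

# Basic cleansing and typing before persisting.
cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_date"))
           .withColumn("amount", F.col("amount").cast("double")))

# Persist as a Delta table partitioned by date for downstream analytics.
(cleaned.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .save("abfss://curated@examplestorage.dfs.core.windows.net/sales_delta/"))
```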

Posted 2 months ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Coimbatore

Work from Office

About the job: Exp: 5+ yrs; NP: Immediate to 15 days; Rounds: 3 (virtual). Mandate Skills: Apache Spark, Hive, Hadoop, Scala, Databricks. Job Description: The Role: - Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. - Constructing infrastructure for efficient ETL processes from various sources and storage systems. - Leading the implementation of algorithms and prototypes to transform raw data into useful information. - Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. - Creating innovative data validation methods and data analysis too...

Posted 2 months ago

Apply

5.0 - 9.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Job Description: Data Development Engineer for the Data Initiative at Global Link. The ideal candidate will: work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components; develop new application features and enhance existing ones; develop relevant documentation and diagrams; work with other teams for deployment, testing, training, and production support; integrate with Data Engineering teams; ensure that development, coding, privacy, and security standards are adhered to; write clean, quality code; be ready to work on new technologies as business demands; and bring strong communication skills and work ethic. Core/Mu...

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Bengaluru

Work from Office

About the Role: Love deep data? Does innovative thinking describe you? Then you may be our next Lead Data Scientist. In this role you'll be the Dumbledore to our team of wizards - our junior data scientists. You will be responsible for transforming scattered pieces of information into valuable data that can be used to achieve goals effectively. You will extract and mine critical bits of information and drive insightful discussions that result in app innovations. What you will do: Own and deliver solutions across multiple charters by formulating well-scoped problem statements and driving them to execution with measurable impact. Mentor a team of data scientists (DS2s and DS3s), helping them with ...

Posted 2 months ago

Apply

5.0 - 8.0 years

22 - 32 Lacs

Bengaluru

Work from Office

Work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components. Develop new application features and enhance existing ones. Develop relevant documentation and diagrams. Required candidate profile: minimum 5+ years of experience in Python development, with a focus on data-intensive applications; experience with Apache Spark and PySpark for large-scale data processing; understanding of SQL and experience working with relational databases.
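As a rough sketch of the PySpark-plus-relational-database work this profile asks for (the JDBC URL, table, and credentials below are placeholders, and a JDBC driver would need to be on the cluster classpath):

```python
# Hypothetical example: load a relational table via JDBC and aggregate it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-sql-sketch").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/sales")  # placeholder connection
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .load())

# Register the DataFrame and aggregate with plain SQL.
orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily_revenue.show()
```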

Posted 2 months ago

Apply

3.0 - 5.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Key Responsibilities : - Design, build, and maintain scalable and robust data pipelines and ETL workflows using GCP services. - Work extensively with BigQuery, Cloud Storage, Cloud Dataflow, and other GCP components to ingest, process, and transform large datasets. - Leverage big data frameworks such as Apache Spark and Hadoop to process structured and unstructured data efficiently. - Develop and optimize SQL queries and Python scripts for data transformation and automation. - Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. - Implement best practices for data quality, monitoring, and alerting for data workflows. - Ensur...
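A hedged sketch of the GCS-to-BigQuery batch processing such a role typically covers, assuming a cluster with the spark-bigquery and GCS connectors available; the project, dataset, bucket names, and event schema are made up:

```python
# Illustrative only: read JSON events from Cloud Storage, aggregate, write to BigQuery.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery").getOrCreate()

events = spark.read.json("gs://example-landing-bucket/events/*.json")

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .count())

(daily.write.format("bigquery")
 .option("table", "example_project.analytics.daily_event_counts")
 .option("temporaryGcsBucket", "example-temp-bucket")  # needed for the indirect write path
 .mode("overwrite")
 .save())
```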

Posted 2 months ago

Apply

8.0 - 10.0 years

5 - 6 Lacs

Navi Mumbai, SBI Belapur

Work from Office

ISA Non captive RTH-Y Note: 1. This position requires the candidate to work from the client's office starting from day one. 2. Ensure that you perform basic validation and gauge the interest level of the candidate before uploading their profile to our system. 3. The candidate's band will be counted as per their relevant experience; we will not entertain lesser-experience profiles for a higher band. 4. A full BGV of the candidate is required before onboarding. 5. If required, the candidate will be regularized after 6 months; hence a 6-month NOC is required from the DOJ. Mode of Interview: Face to Face (Mandatory). **JOB DESCRIPTION** Total Years of Experience: 8-10 Years. Relevant Years of Experience: ...

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Work from Office

We are seeking an experienced Databricks Developer / Data Engineer to design, develop, and optimize data pipelines, ETL workflows, and big data solutions using Databricks. The ideal candidate should have expertise in Apache Spark, PySpark, SQL, and cloud-based data platforms (Azure, AWS, GCP). This role involves working with large-scale datasets, data lakes, and data warehouses to drive business intelligence and analytics. Key Responsibilities: Design, build, and optimize ETL and ELT pipelines using Databricks and Apache Spark. Work with big data processing frameworks (PySpark, Scala, SQL) for data transformation and analytics. Implement Delta Lake architecture for data reliability, ACID tra...
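The Delta Lake reliability and ACID requirements mentioned here usually come down to upsert (MERGE) logic; a minimal sketch, with made-up table paths and join key:

```python
# Hypothetical Delta Lake upsert: merge an incremental load into a curated table.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

target = DeltaTable.forPath(spark, "/mnt/curated/customers")        # existing Delta table
updates = spark.read.parquet("/mnt/landing/customers_incremental")  # new batch

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Because the MERGE runs as a single transaction, downstream readers never see a half-applied batch, which is the practical meaning of the ACID guarantee the listing refers to.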

Posted 2 months ago

Apply

8.0 - 13.0 years

5 - 14 Lacs

Pune

Work from Office

Role & responsibilities Mandatory skills: API, Java, Databricks and AWS. Detailed JD (Roles and Responsibilities), Technical: Two or more years of API development experience (specifically REST APIs using Java, Spring Boot, Hibernate). Two or more years of Data Engineering and the respective tools and technologies (e.g., Apache Spark, Databricks, SQL DB, NoSQL DB, Data Lake concepts). Working knowledge of test-driven development. Working knowledge of DevOps and lean development principles such as Continuous Integration and Continuous Delivery/Deployment using tools like Git. Working knowledge of ETL, data modeling, data warehousing, and working with large-scale datasets. Working K...

Posted 2 months ago

Apply

10.0 - 15.0 years

96 - 108 Lacs

Bengaluru

Work from Office

Responsibilities: * Design data solutions using Java, Python & Apache Spark. * Collaborate with cross-functional teams on Azure cloud projects. * Ensure data security through Redis caching and HDFS storage.
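A loose sketch tying together the technologies this posting names (Spark over HDFS plus a Redis cache); the namenode address, paths, and key layout are assumptions:

```python
# Hypothetical: read a small reference dataset from HDFS and cache a lookup in Redis.
import redis
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hdfs-spark-redis").getOrCreate()

products = spark.read.parquet("hdfs://namenode:8020/warehouse/products")

# Push a compact id -> name lookup into Redis for fast access by online services.
r = redis.Redis(host="redis-host", port=6379, db=0)
for row in products.select("product_id", "product_name").toLocalIterator():
    r.set(f"product:{row['product_id']}", row["product_name"])
```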

Posted 2 months ago

Apply

5.0 - 10.0 years

18 - 30 Lacs

Pune

Hybrid

Title: Big Data (Apache Spark) Experience: 4+ Yrs Location: Pune (Hybrid) JD: True hands-on developer in programming languages like Java or Scala. Expertise in Apache Spark. Database modelling and working with any SQL or NoSQL database is a must. Working knowledge of scripting languages like shell/Python. Experience of working with Cloudera is preferred. Orchestration tools like Airflow or Oozie would be a value addition. Knowledge of table formats like Delta or Iceberg is a plus. Working experience with version control tools like Git and build tools like Maven is recommended. Having software development experience alongside data engineering experience is good to have. Qualifications: A b...
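For the Airflow orchestration mentioned as a value addition, a DAG submitting a Spark job might look roughly like the sketch below; the connection id, jar path, main class, and schedule are hypothetical and assume the apache-airflow-providers-apache-spark package:

```python
# Hypothetical Airflow DAG that submits a packaged Spark (Scala/Java) job once a day.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_spark_batch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_batch = SparkSubmitOperator(
        task_id="run_spark_batch",
        conn_id="spark_default",
        application="/opt/jobs/batch-pipeline.jar",   # placeholder jar
        java_class="com.example.BatchPipeline",       # placeholder main class
        application_args=["--date", "{{ ds }}"],      # Airflow passes the run date
    )
```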

Posted 2 months ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Remote

Job Title: Software Engineer GCP Data Engineering Work Mode: Remote Base Location: Bengaluru Experience Required: 4 to 6 Years Job Summary: We are seeking a Software Engineer with a strong background in GCP Data Engineering and a solid understanding of how to build scalable data processing frameworks. The ideal candidate will be proficient in data ingestion, transformation, and orchestration using modern cloud-native tools and technologies. This role requires hands-on experience in designing and optimizing ETL pipelines, managing big data workloads, and supporting data quality initiatives. Key Responsibilities: Design and develop scalable data processing solutions using Apache Beam, Spark, a...
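As an illustration of the Apache Beam portion of this role, a minimal Beam pipeline in Python might look like the following; the runner options, project, region, and bucket paths are placeholders:

```python
# Hypothetical Beam pipeline: count log lines per leading token and write the result.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",          # or "DirectRunner" for local testing
    project="example-project",
    region="us-central1",
    temp_location="gs://example-temp-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadLines" >> beam.io.ReadFromText("gs://example-landing-bucket/logs/*.txt")
     | "DropEmpty" >> beam.Filter(lambda line: line.strip())
     | "KeyByFirstToken" >> beam.Map(lambda line: (line.split()[0], 1))
     | "CountPerKey" >> beam.CombinePerKey(sum)
     | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
     | "Write" >> beam.io.WriteToText("gs://example-output-bucket/counts"))
```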

Posted 2 months ago

Apply

7.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Must have: Scala, Spark, Azure Databricks, Kubernetes. Note: Quantexa certification is a must. Good to have: Python, PySpark, Elastic, RESTful APIs. ROLE PURPOSE: The Data Engineer designs, builds, and unit tests data pipelines and jobs for projects and programmes on the Azure platform. This role is for the Quantexa Fraud platform programme; a Quantexa-certified engineer is preferred. KEY ACCOUNTABILITIES: Analyse business requirements and support and maintain the Quantexa platform. Build and deploy new/changed data mappings, sessions, and workflows in the Azure Cloud Platform – the key focus area would be the Quantexa platform on Azure. Develop both batch (using Azure Databricks) and real ...

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Pune

Work from Office

Hi, wishes from GSN! Pleasure connecting with you. We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully meeting our clients' needs for the last 20 years. We have been mandated by one of our prestigious MNC clients to identify Scala Developer professionals in Pune. Kindly find the required details below. ******** Looking for SHORT JOINERs ******** Position: Permanent. Mandatory Skill: Scala Developer. Exp Range: 5+ years. Job Role: Senior Developer / Tech Lead. Location: Only Pune. Work Mode: WFO - All 5 Days. Job Description: Bachelor's or Mas...

Posted 2 months ago

Apply

8.0 - 10.0 years

20 - 25 Lacs

Hyderabad, Pune, Chennai

Hybrid

Please Note - NP should be 0-15 days. Primary Responsibilities - Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure Create data models for analytics purposes Utilizing Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations Use Azure Data Factory and Databricks to assemble large, complex data sets Implementing data validation and cleansing procedures will ensure the quality, integrity, and dependability of the data. Ensure data security and compliance Collaborate with data...
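A small sketch of the data validation and cleansing step this listing mentions, as it might run in a Databricks notebook; the table paths, columns, and rules are illustrative assumptions:

```python
# Hypothetical rule-based validation: keep clean rows, quarantine the rest.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("validate-and-cleanse").getOrCreate()

df = spark.read.format("delta").load("/mnt/raw/transactions")

# Flag rows with missing keys or negative amounts.
validated = df.withColumn(
    "is_valid",
    F.col("transaction_id").isNotNull() & (F.col("amount") >= 0),
)

clean = validated.filter("is_valid").drop("is_valid")
rejects = validated.filter(~F.col("is_valid")).drop("is_valid")

clean.write.format("delta").mode("append").save("/mnt/curated/transactions")
rejects.write.format("delta").mode("append").save("/mnt/quarantine/transactions")
```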

Posted 2 months ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Mumbai

Work from Office

Role: Senior Databricks Engineer. As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and st...

Posted 2 months ago

Apply

5.0 - 8.0 years

32 - 35 Lacs

Bengaluru

Remote

We are seeking a MLOps Engineer to design, implement, and manage scalable machine learning infrastructure and automation pipelines. The ideal candidate will have deep hands-on expertise in Azure, AKS, Infrastructure as Code, and CI/CD, with a passion for enabling efficient and reliable deployment of machine learning models in production environments. Responsibilities:- Architect & Deploy: Design and manage scalable ML infrastructure on Azure (AKS), leveraging Infrastructure as Code principles. Automate & Accelerate: Build and optimize CI/CD pipelines with GitHub Actions for seamless software, data, and model delivery. Engineer Performance: Develop efficient and reliable data pipelines using ...

Posted 2 months ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Chennai

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databrick...

Posted 2 months ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Bengaluru

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databrick...

Posted 2 months ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Kolkata

Work from Office

Role: Senior Databricks Engineer. As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and st...

Posted 2 months ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Mumbai

Work from Office

The Role: Company Overview: Kennect is the Modern Sales Compensation Platform designed for enterprises. We are leading the way in Sales Performance Management, with everything businesses need to harness the power of sales incentives. Our mission is to deliver customised enterprise-level commission calculation and tracking software for the most innovative businesses around the world. Key Responsibilities: Writing well-designed, testable, and efficient code. Building reusable components and libraries for future use. Troubleshooting and debugging to optimize performance. Providing code documentation and other inputs to technical documents and participating in code reviews. Working on big projects wi...

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves setting up and managing infrastructure for data pipelines, storage, and processing, developing internal tools to enhance platform usability, implementing monitoring and observability, collaborating with software engineering teams for seamless integration, and driving capacity planning and cost optimization initiatives.

Posted 2 months ago

Apply

5.0 - 7.0 years

11 - 15 Lacs

Coimbatore

Work from Office

Mandate Skills: Apache Spark, Hive, Hadoop, Scala, Databricks. The Role: - Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. - Constructing infrastructure for efficient ETL processes from various sources and storage systems. - Leading the implementation of algorithms and prototypes to transform raw data into useful information. - Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. - Creating innovative data validation methods and data analysis tools. - Ensuring compliance with data governance and security policies. - Interpreting data ...

Posted 2 months ago

Apply