
92 Databricks Engineer Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7 - 12 years

25 - 35 Lacs

Hyderabad

Remote

Job Title: Data Engineer

Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating SAP IS-Auto data, optimizing performance, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities:
- Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
- Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data.
- Integrate SAP IS-Auto: Extract, transform, and load data from SAP IS-Auto into Azure-based data platforms.
- Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
- Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
- Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
- Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
- Testing and Debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free. Conduct performance optimization and security audits.

Required Skills and Qualifications:
- Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: Proficiency in Python for data processing, automation, and scripting.
- SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- SAP IS-Auto Data Handling: Experience integrating SAP IS-Auto as a data source into data pipelines.
- Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance Optimization: Expertise in query optimization, indexing, and performance tuning.
- Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications:
1. Experience with CI/CD for data pipelines using Azure DevOps.
2. Knowledge of Kafka/Event Hub for real-time data processing.
3. Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
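
For context on the kind of pipeline work this posting describes, here is a minimal, illustrative PySpark sketch of a Databricks-style ingest-and-transform step that lands data in a Delta table. The paths, table name, and column names are hypothetical and are not taken from the posting.

```python
# Minimal, illustrative Databricks-style ingest/transform step.
# All paths, table names, and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_ingest_example").getOrCreate()

# Read raw CSV files landed by an upstream tool (e.g., an ADF copy activity).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/sales/2024/"))

# Basic cleansing and typing before loading to the curated layer.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .filter(F.col("amount").isNotNull()))

# Write to a Delta table (assumes a "curated" schema already exists),
# partitioned for downstream query performance.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("curated.sales_orders"))
```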

Posted 4 months ago

Apply

7 - 12 years

20 - 35 Lacs

Hyderabad

Remote

Job Title: Databricks Data Modeler

Job Summary: We are looking for a Data Modeler to design and optimize data models supporting automotive industry analytics and reporting. The ideal candidate will work with SAP ECC as a primary data source, leveraging Databricks and Azure Cloud to design scalable and efficient data architectures. This role involves developing logical and physical data models, ensuring data consistency, and collaborating with data engineers, business analysts, and domain experts to enable high-quality analytics solutions.

Key Responsibilities:
1. Data Modeling & Architecture: Design and maintain conceptual, logical, and physical data models for structured and unstructured data.
2. SAP ECC Data Integration: Define data structures for extracting, transforming, and integrating SAP ECC data into Azure Databricks.
3. Automotive Domain Modeling: Develop and optimize industry-specific data models covering customer, vehicle, material, and location data.
4. Databricks & Delta Lake Optimization: Design efficient data models for Delta Lake storage and Databricks processing.
5. Performance Tuning: Optimize data structures, indexing, and partitioning strategies for performance and scalability.
6. Metadata & Data Governance: Implement data standards, data lineage tracking, and governance frameworks to maintain data integrity and compliance.
7. Collaboration: Work closely with business stakeholders, data engineers, and data analysts to align models with business needs.
8. Documentation: Create and maintain data dictionaries, entity-relationship diagrams (ERDs), and transformation logic documentation.

Skills & Qualifications:
1. Data Modeling Expertise: Strong experience in dimensional modeling, 3NF, and hybrid modeling approaches.
2. Automotive Industry Knowledge: Understanding of customer, vehicle, material, and dealership data models.
3. SAP ECC Data Structures: Hands-on experience with SAP ECC tables, business objects, and extraction processes.
4. Azure & Databricks Proficiency: Experience working with Azure Data Lake, Databricks, and Delta Lake for large-scale data processing.
5. SQL & Database Management: Strong skills in SQL, T-SQL, or PL/SQL, with a focus on query optimization and indexing.
6. ETL & Data Integration: Experience collaborating with data engineering teams on data transformation and ingestion processes.
7. Data Governance & Quality: Understanding of data governance principles, lineage tracking, and master data management (MDM).
8. Strong Documentation Skills: Ability to create ER diagrams, data dictionaries, and transformation rules.

Preferred Qualifications:
1. Experience with data modeling tools such as Erwin, Lucidchart, or DBT.
2. Knowledge of Databricks Unity Catalog and Azure Synapse Analytics.
3. Familiarity with Kafka/Event Hub for real-time data streaming.
4. Exposure to Power BI/Tableau for data visualization and reporting.

Posted 4 months ago

Apply

10 - 20 years

25 - 40 Lacs

Pune, Chennai

Hybrid

We are seeking a results-driven Data Project Manager (PM) to lead data initiatives leveraging Databricks and Confluent Kafka in a regulated banking environment. The ideal candidate will have a strong background in data platforms, project governance, and financial services, and will be responsible for ensuring successful end-to-end delivery of complex data transformation initiatives aligned with business and regulatory requirements.

Key Responsibilities:
- Lead planning, execution, and delivery of enterprise data projects using Databricks and Confluent.
- Develop detailed project plans, delivery roadmaps, and work breakdown structures.
- Ensure resource allocation, budgeting, and adherence to timelines and quality standards.
- Collaborate with data engineers, architects, business analysts, and platform teams to align on project goals.
- Act as the primary liaison between business units, technology teams, and vendors.
- Facilitate regular updates, steering committee meetings, and issue/risk escalations.
- Oversee solution delivery on Databricks (for data processing, ML pipelines, analytics).
- Manage real-time data streaming pipelines via Confluent Kafka.
- Ensure alignment with data governance, security, and regulatory frameworks (e.g., GDPR, CBUAE, BCBS 239).
- Ensure all regulatory reporting data flows are compliant with local and international financial standards.
- Manage controls and audit requirements in collaboration with Compliance and Risk teams.

Required Skills & Experience:
- 7+ years of experience in project management within the banking or financial services sector.
- Proven experience leading data platform projects (especially Databricks and Confluent Kafka).
- Strong understanding of data architecture, data pipelines, and streaming technologies.
- Experience managing cross-functional teams (onshore/offshore).
- Strong command of Agile/Scrum and Waterfall methodologies.
- Databricks (Delta Lake, MLflow, Spark).
- Confluent Kafka (Kafka Connect, kSQL, Schema Registry).
- Azure or AWS cloud platforms (preferably Azure).
- Integration tools (Informatica, Data Factory) and CI/CD pipelines.
- Oracle ERP implementation experience.
- PMP / Prince2 / Scrum Master certification.
- Familiarity with regulatory frameworks: BCBS 239, GDPR, CBUAE regulations.
- Strong understanding of data governance principles (e.g., DAMA-DMBOK).

Education: Bachelor's or Master's in Computer Science, Information Systems, Engineering, or a related field.

KPIs:
- On-time, on-budget delivery of data initiatives
- Uptime and SLAs of data pipelines
- User satisfaction and stakeholder feedback
- Compliance with regulatory milestones

Posted Date not available

Apply

7 - 12 years

19 - 34 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
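
As an illustration of the Unity Catalog access-control work this posting mentions, below is a minimal sketch of governance commands issued from a Databricks notebook via PySpark. The catalog, schema, table, and group names are hypothetical examples, not part of the posting.

```python
# Illustrative Unity Catalog governance steps run from a Databricks notebook.
# Catalog, schema, table, and group names are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant an analyst group read-only access to a curated table.
spark.sql("GRANT USE CATALOG ON CATALOG finance TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA finance.curated TO `analysts`")
spark.sql("GRANT SELECT ON TABLE finance.curated.transactions TO `analysts`")

# Inspect current grants as part of an access-control audit.
spark.sql("SHOW GRANTS ON TABLE finance.curated.transactions").show(truncate=False)
```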

Posted Date not available

Apply

4 - 8 years

8 - 16 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Role: Sr. Databricks Developer

Description (External): With a startup spirit and 100,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands, and we have fun doing it. Now, we're calling all you rule-breakers and risk-takers who see the world differently and are bold enough to reinvent it.

Responsibilities:
- Work closely with the Architect and lead to design solutions that meet functional and non-functional requirements.
- Participate in understanding architecture and solution design artifacts.
- Evangelize re-use through the implementation of shared assets.
- Proactively implement engineering methodologies, standards, and leading practices.
- Provide insight and direction on roles and responsibilities required for solution operations.
- Identify, communicate, and mitigate risks, assumptions, issues, and decisions throughout the full lifecycle.
- Consider the art of the possible, compare solution options based on feasibility and impact, and propose actionable plans.
- Demonstrate strong analytical and technical problem-solving skills.
- Analyze and operate at various levels of abstraction.
- Balance what is strategically right with what is practically realistic.

Qualifications we seek in you!

Minimum qualifications:
- Excellent technical skills to enable the creation of future-proof, complex global solutions.
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
- Maintains close awareness of new and emerging technologies and their potential application to service offerings and products.
- Works with architects and leads on solutioning to meet functional and non-functional requirements.
- Demonstrated knowledge of relevant industry trends and standards.
- Strong analytical and technical problem-solving skills.
- Excellent coding skills in Python or Scala, preferably Python.
- At least 5+ years of experience in the Data Engineering domain, with 7+ years of total experience.
- Implemented at least 2 projects end-to-end in Databricks.
- At least 2+ years of experience on Databricks, covering components such as:
  - Delta Lake
  - dbConnect
  - db API 2.0
  - Databricks workflows orchestration
- Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
- Strong understanding of data warehousing and of governance and security standards around Databricks.
- Knowledge of cluster optimization and its integration with various cloud services.
- Good understanding of how to create complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance optimization skills to improve efficiency and reduce cost.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Strong in writing unit tests and integration tests.
- Strong communication skills; has worked on teams of 5 or more.
- Great attitude towards learning new skills and upskilling existing skills.

Posted Date not available

Apply

2 - 7 years

20 - 35 Lacs

Hosur, Bengaluru

Work from Office

Required Skills:
- 2+ years of experience in data engineering and solution architecture.
- Expertise in Azure Data Factory (ADF) and Azure Synapse Analytics.
- Strong hands-on experience with Databricks and PySpark.
- Proficiency in SQL, including performance tuning on large datasets.
- Familiarity with Data Mesh architecture.
- Good knowledge of Microsoft Fabric.
- Understanding of CI/CD pipelines.

Good to Have Skills:
- Experience with AWS services (Glue, S3, Redshift, etc.).
- Familiarity with Airflow or other orchestration tools.
- Databricks certification.

Posted Date not available

Apply

2 - 5 years

5 - 10 Lacs

Pune

Work from Office

L&T Technology Services is Hiring!

Job Title: Data Engineer
Location: Tower 14, John Deere, Pune
Experience Required: 3-5 years in data engineering
Skills Required: Python, PySpark, SQL, AWS, Databricks, CI/CD, Docker, DevOps or DataOps

Posted Date not available

Apply

5 - 10 years

30 - 32 Lacs

Pune, Bengaluru

Work from Office

Responsibilities:
- Bachelor's degree in computer science, information systems, computer engineering, systems analysis, or a related discipline, or equivalent work experience.
- 4 to 8 years of experience building enterprise SaaS web applications using one or more of the following modern frameworks/technologies: Java/.Net/C, etc.
- Exposure to Python and familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Familiarity with an MVC framework such as Django or Rails.
- Full-stack web development experience, with hands-on experience building responsive UIs, single-page applications, and reusable components, and a keen eye for UI design and usability.
- Understanding of microservices and event-driven architecture.
- Strong knowledge of APIs and integration with the backend.
- Experience with relational SQL and NoSQL databases such as MySQL / PostgreSQL / AWS Aurora / Cassandra.
- Proven expertise in performance optimization and monitoring tools.
- Strong knowledge of cloud platforms (e.g., AWS, Azure, or GCP).
- Experience with CI/CD tooling and software delivery and bundling mechanisms.

Desired profile:
- Nice to have: Expertise in Python and familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Nice to have: Experience with Kafka or other pub-sub mechanisms.
- Nice to have: Experience with Redis or other caching mechanisms.

Posted Date not available

Apply

6 - 10 years

15 - 22 Lacs

Pune

Work from Office

Responsibilities:
- Design, develop, and maintain scalable and robust data pipelines on Databricks.
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
- Optimize and troubleshoot existing data pipelines for performance and reliability.
- Ensure data quality and integrity across various data sources.
- Implement data security and compliance best practices.
- Monitor data pipeline performance and conduct necessary maintenance and updates.
- Document data pipeline processes and technical specifications.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in data engineering.
- Proficiency with Databricks, PySpark, and Python.
- Strong SQL skills and experience with relational databases.
- Experience with big data technologies (e.g., Hadoop, Kafka).
- Knowledge of data warehousing concepts and ETL processes.
- Excellent problem-solving and analytical skills.

Posted Date not available

Apply

4 - 7 years

0 - 0 Lacs

Kolkata, Pune

Work from Office

Position: Azure Databricks Engineer with Banking Domain Experience
Location: Pune and Kolkata

Skills:
- Azure Databricks
- AML Transactions
- KYC
- Fraud Management
- Risk Management
- Credit
- SQL
- Banking domain experience

Thanks and Regards,
Rajat Kumar
Recruitment Lead (Sales and Marketing)
Savi Technologies Inc.
Cell: 7906842558
E-Mail: rajat@savi-tech.com

Posted Date not available

Apply

6 - 11 years

15 - 30 Lacs

Kochi

Hybrid

Candidates with an immediate to 30-day notice period are preferred.

Key Responsibilities:
- Design and implement general architecture for complex data systems.
- Translate business requirements into functional and technical specifications.
- Design and implement lakehouse architecture.
- Develop and manage cloud-based data architecture and reporting solutions.
- Apply data modelling principles for relational and dimensional data structures.
- Design data warehouses following established principles (e.g., Kimball, Inmon).
- Create and manage source-to-target mappings for ETL/ELT processes.
- Mentor junior engineers and contribute to architectural decisions and code reviews.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, MIS, or a related field.
- 5+ years of experience with Microsoft SQL Server and strong proficiency in T-SQL and SQL performance tuning (indexing, structure, query optimization).
- 5+ years of experience in Microsoft data platform development and implementation.
- 5+ years of experience with Power BI or other competitive technologies.
- 3+ years of experience in consulting, with a focus on analytics and data solutions.
- 2+ years of experience with Databricks, including Unity Catalog, Databricks SQL, Workflows, and Delta Sharing.
- Proficiency in Python and Apache Spark.
- Experience developing and managing Databricks notebooks for data transformation, exploration, and model deployment.
- Expertise in Microsoft Azure services, including Azure SQL, Azure Data Factory (ADF), Azure Data Warehouse (Synapse Analytics), Azure Data Lake, and Stream Analytics.

Preferred Qualifications:
- Experience with Microsoft Fabric.
- Familiarity with CI/CD pipelines and infrastructure-as-code tools like Terraform or Azure Resource Manager (ARM).
- Knowledge of taxonomies, metadata management, and master data management.
- Familiarity with data stewardship, ownership, and data quality management.
- Expertise in Big Data technologies and tools:
  - Big Data platforms: HDFS, MapReduce, Pig, Hive.
  - General DBMS experience with Oracle, DB2, MySQL, etc.
  - NoSQL databases such as HBase, Cassandra, DataStax, MongoDB, CouchDB, etc.
  - Experience with non-Microsoft reporting and BI tools, such as Qlik, Cognos, MicroStrategy, Tableau, etc.

Posted Date not available

Apply

3 - 8 years

8 - 18 Lacs

Kolkata, Chennai, Bengaluru

Work from Office

Role & responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow.

Preferred candidate profile:
- Must have a total of 6+ years of IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable to have ETL with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming data / event-based data.
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala).
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
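
As an illustration of the Airflow orchestration this posting mentions, here is a minimal DAG sketch that submits a Databricks notebook run. It assumes the apache-airflow-providers-databricks package and a configured Databricks connection; the DAG id, schedule, cluster spec, and notebook path are hypothetical.

```python
# Minimal Airflow DAG sketch for orchestrating a Databricks notebook run.
# Assumes the apache-airflow-providers-databricks package is installed and a
# "databricks_default" connection exists; names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_sales_refresh_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run nightly at 02:00
    catchup=False,
) as dag:

    run_notebook = DatabricksSubmitRunOperator(
        task_id="transform_sales",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data/transform_sales"},
    )
```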

Posted Date not available

Apply

12 - 21 years

37 - 95 Lacs

Chennai

Remote

Job Summary: As a key member of the Data business leadership team, the role will be responsible for building and expanding the Google Cloud Platform data and analytics capability within the organization. This individual will drive technical excellence, innovative solution development, and successful delivery of GCP-based data initiatives. The role requires close collaboration with clients, delivery teams, GCP alliance partners, and internal stakeholders to grow GCP offerings, build talent pipelines, and ensure delivery excellence.

Areas of Responsibility:
1. Offering and Capability Development: Design and enhance GCP-based data platform offerings and accelerators. Define architectural standards, best practices, and reusable components. Collaborate with alliance teams to strengthen the strategic partnership.
2. Technical Leadership: Provide architectural guidance for data solutions on GCP. Lead solutioning for proposals, RFIs, and RFPs that involve GCP services. Conduct technical reviews to ensure alignment with GCP architecture best practices. Act as the escalation point for complex architecture or engineering challenges.
3. Delivery Oversight: Support project delivery teams with deep technical expertise in GCP. Drive project quality, performance optimization, and technical risk mitigation. Ensure best-in-class delivery aligned with GCP's security, performance, and cost standards.
4. Talent Development: Build and lead a high-performing GCP data engineering and architecture team. Define certification and upskilling paths aligned with GCP learning programs. Mentor team members and foster a culture of technical excellence and knowledge sharing.
5. Business Development Support: Collaborate with sales and pre-sales teams to position solutions effectively. Assist in identifying new opportunities within existing and new accounts. Participate in client presentations, solution demos, and technical workshops.
6. Thought Leadership and Innovation: Develop whitepapers, blogs, and technical assets to showcase GCP leadership. Stay current on market updates and innovations in the data engineering landscape. Contribute to internal innovation initiatives and PoCs involving GCP.

Job Requirements:
- 12–15 years of experience in Data Engineering & Analytics, with 3–5 years of deep GCP expertise.
- Proven experience leading data platforms using GCP technologies (BigQuery, Dataflow, Dataproc, Vertex AI, Looker), containerization (Kubernetes, Docker), API-based microservices architecture, CI/CD pipelines, and infrastructure-as-code tools like Terraform.
- Experience with tools such as DBT, Airflow, Informatica, Fivetran, and Looker/Tableau, and programming skills in languages such as PySpark, Python, Java, or Scala.
- Architectural best practices in cloud around user management, data privacy, data security, performance, and other non-functional requirements.
- Familiarity with building AI/ML models on cloud solutions built in GCP.
- GCP certifications preferred (e.g., Professional Data Engineer, Professional Cloud Architect).
- Exposure to data governance, privacy, and compliance practices in cloud environments.
- Strong presales, client engagement, and solution architecture experience.
- Excellent communication and stakeholder management skills.
- Prior experience in IT consulting, system integration, or technology services environments.

Posted Date not available

Apply

5 - 8 years

15 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Design, implement, and optimize scalable data pipelines using Databricks and Apache Spark.
- Architect data lakes using Delta Lake, ensuring reliable and efficient data storage.
- Manage metadata, security, and lineage through Unity Catalog for governance and compliance.
- Ingest and process streaming data using Apache Kafka and real-time frameworks.
- Collaborate with ML engineers and data scientists on LLM-based AI/GenAI project pipelines.
- Apply CI/CD and DevOps practices to automate data workflows and deployments (e.g., with GitHub Actions, Jenkins, Terraform).
- Optimize query performance and data transformations using advanced SQL.
- Implement and uphold data governance, quality, and access control policies.
- Support production data pipelines and respond to issues and performance bottlenecks.
- Contribute to architectural decisions around data strategy and platform scalability.

Required Skills & Experience:
- 5+ years of experience in data engineering roles.
- Proven expertise in Databricks, Delta Lake, and Apache Spark (PySpark preferred).
- Deep understanding of Unity Catalog for fine-grained data governance and lineage tracking.
- Proficiency in SQL for large-scale data manipulation and analysis.
- Hands-on experience with Kafka for real-time data streaming.
- Solid understanding of CI/CD, infrastructure automation, and DevOps principles.
- Experience contributing to or supporting Generative AI / LLM projects with structured or unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
- Strong problem-solving, debugging, and system design skills.
- Excellent communication and collaboration abilities in cross-functional teams.
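
To illustrate the Kafka-to-Delta streaming ingestion this posting describes, here is a minimal Structured Streaming sketch intended for a Databricks environment, where the Kafka connector is bundled. The broker address, topic, schema, checkpoint path, and table name are hypothetical examples.

```python
# Illustrative Structured Streaming sketch: Kafka topic -> Delta table.
# Broker, topic, schema, checkpoint path, and table name are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_to_delta_example").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the raw Kafka stream and parse the JSON payload.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "payments")
          .load()
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Stream into a Delta table with checkpointing for fault tolerance.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/payments")
         .outputMode("append")
         .toTable("curated.payment_events"))
```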

Posted Date not available

Apply

6 - 11 years

3 - 8 Lacs

Hyderabad

Remote

5+ years of experience building Tableau dashboards and visualizations. 1+ years of hands-on experience integrating Tableau with Databricks (including SQL, Delta Lake, and Spark environments).

Posted Date not available

Apply

5 - 10 years

20 - 27 Lacs

Bhopal, Pune, Delhi / NCR

Hybrid

We're Hiring: Senior Data Engineer – GCP | Databricks | E-commerce Domain

Work Locations: Chennai | Bangalore | Hyderabad | Gurugram | Jaipur | Pune | Bhopal
Experience: 7–8+ years
Shift Timing: 2 PM – 10 PM IST
Work Mode: Hybrid – 3 days/week from office
Only immediate joiners (0–15 days notice).

Are you a seasoned Data Engineer passionate about building next-gen data platforms? We're hiring for an offshore role supporting one of our top global e-commerce clients. You'll work on large-scale consumer data and cloud-native architectures, helping create impactful data products. Join Xebia and become part of a high-performance team working with the latest technologies in data engineering.

Role Responsibilities:
- Design and build robust, scalable data pipelines.
- Develop data products and solutions using GCP, BigQuery, Databricks, DBT, Airflow, and Spark.
- Handle ETL/ELT processes across structured and unstructured data.
- Work with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra).
- Model and manage data lakes, data warehouses, and schema designs.
- Collaborate cross-functionally with data consumers and product teams.
- Contribute to best practices in data engineering, including unit testing and documentation.

Required Skills:
- 7–8+ years of relevant experience in data engineering and architecture.
- Mandatory hands-on expertise with Google Cloud Platform (GCP).
- Proficiency in Python, SQL, and cloud-based data engineering frameworks.
- Experience with Databricks, Airflow, DBT, BigQuery, and Spark.
- Strong knowledge of data modeling and pipeline orchestration.
- Solid communication and stakeholder collaboration skills.

Good to Have:
- Experience working in e-commerce or consumer data domains.
- Exposure to data visualization tools (Tableau, Looker).
- Knowledge of machine learning workflows.
- Certifications in GCP or other cloud platforms.

How to Apply: Send your updated CV to vijay.s@xebia.com along with these details:
- Full Name
- Total Experience
- Current CTC
- Expected CTC
- Current Location
- Preferred Xebia Location
- Notice Period / Last Working Day
- Primary Skills
- LinkedIn Profile

Join us and power digital transformation in the consumer-tech world.

Note: This is a hybrid role – 3 days/week from office is mandatory.

#Xebia #HiringNow #SeniorDataEngineer #GCPJobs #Databricks #BigQuery #Airflow #DBT #PythonJobs #HybridJobs #EcommerceData #ImmediateJoiners #IndiaJobs #CloudDataEngineering

Posted Date not available

Apply

7 - 12 years

50 - 60 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- 8+ years of experience as a Data Engineer, with a focus on Databricks and cloud-based data platforms, and a minimum of 4 years of experience writing unit/end-to-end tests for data pipelines and ETL processes on Databricks.
- Hands-on experience in PySpark programming for data manipulation, transformation, and analysis.
- Strong experience in SQL and writing complex queries for data retrieval and manipulation.
- Experience with Docker for containerising and deploying data engineering applications is good to have.
- Strong knowledge of the Databricks platform and its components, including Databricks notebooks, clusters, and jobs.
- Experience in designing and implementing data models to support analytical and reporting needs is an added advantage.

Preferred candidate profile: Someone with experience working as a senior individual contributor and handling a team, with hands-on experience in the above tech stacks, preferably in unit testing (strictly not manual) using tools like Jest, JUnit, or pytest. Candidates who can join within 30 days and are comfortable working 4 days from office are preferred.
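
As an illustration of the pipeline unit testing this posting asks about, below is a minimal pytest sketch for a small PySpark transformation run on a local SparkSession. The transformation, column names, and values are hypothetical.

```python
# Illustrative pytest unit test for a small PySpark transformation.
# The transformation, column names, and values are hypothetical examples.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_total_with_tax(df, rate=0.18):
    """Hypothetical transformation under test: adds a tax-inclusive total column."""
    return df.withColumn("total_with_tax", F.round(F.col("amount") * (1 + rate), 2))


@pytest.fixture(scope="session")
def spark():
    # Local SparkSession so the test runs without a Databricks cluster.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_add_total_with_tax(spark):
    df = spark.createDataFrame([("o1", 100.0), ("o2", 250.0)], ["order_id", "amount"])
    result = {r["order_id"]: r["total_with_tax"] for r in add_total_with_tax(df).collect()}
    assert result == {"o1": 118.0, "o2": 295.0}
```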

Posted Date not available

Apply