
92 Databricks Engineer Jobs

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

15 - 25 Lacs

noida, pune, gurugram

Hybrid

Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software: At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications built with the latest technologies: high-value Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris: Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best as a professional and a person. It's about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Core skills required for the role:
- 7-12 years of experience in IT
- Databricks: Advanced (5+ years)
- SQL (MS SQL Server): joins, SQL optimization, basic knowledge of stored procedures and functions
- PySpark: Advanced
- Azure Delta Lake
- Python: 4+ years
- Mandatory: Big Data - PySpark; Cloud - Azure (Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight); Learning: Python, Data Science and Machine Learning

If interested, kindly share your resume at kanika.singh@irissoftware.com. Notice period: 1 month maximum.

Perks and Benefits for Irisians: At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates and to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 1 day ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

bengaluru

Hybrid

Experience: 6-8 years of overall Data Engineering experience. At least 5-6 years of hands-on Databricks experience, including delivering 2+ end-to-end Databricks projects. 3-4 years of experience in designing a Databricks data platform.

Posted 4 days ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

pune, chennai, bengaluru

Work from Office

Responsibilities:
- AWS Cloud Dev Engineer with experience working on a development platform.
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3 (see the Glue job sketch after this listing).
- CDK scripting and hands-on development experience with services like Fargate, Redshift, Glue, Airflow, Athena, etc., plus Angular/Node.js/SQL/stored procedures/Python.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency expected: AWS Cloud Data Engineer - AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks; CDK scripting and hands-on development experience with services like Fargate, Redshift, Glue, Airflow, Athena, etc., plus Angular/Node.js/SQL/stored procedures/Python.
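For illustration, a minimal sketch of the kind of AWS Glue PySpark job this listing describes (raw S3 CSV to curated Parquet). The bucket paths, column names, and job arguments are hypothetical placeholders, not part of the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: raw orders landed in S3 (hypothetical bucket/path)
orders = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: basic typing and cleansing
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id"])
)

# Load: partitioned Parquet for downstream Redshift Spectrum / Athena queries
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```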

Posted 5 days ago

Apply

7.0 - 12.0 years

15 - 27 Lacs

bengaluru

Work from Office

Note: This opportunity is on behalf of one of our esteemed clients; we're assisting them with the hiring process. They are looking to fill this position immediately (within this week). Only candidates from Bengaluru or nearby (commutable) locations will be considered.

Role Overview: We are looking for an experienced AI Data Architect to design and implement robust, scalable, and secure data architectures that power AI/ML solutions. The role involves defining data strategies, enabling advanced analytics, ensuring data quality and governance, and optimizing infrastructure to support modern AI-driven applications.

Key Responsibilities:
- Design and implement end-to-end data architectures to support AI/ML workloads.
- Define data strategy, governance, and frameworks for structured, semi-structured, and unstructured data.
- Architect scalable data pipelines, warehouses, and lakehouses optimized for AI/ML.
- Collaborate with Data Scientists, ML Engineers, and business teams to translate requirements into data architecture solutions.
- Ensure data security, compliance, lineage, and metadata management.
- Optimize data platforms for performance, scalability, and cost efficiency.
- Guide teams on best practices for integrating data platforms with AI/ML model training, deployment, and monitoring.
- Evaluate emerging tools and technologies in the Data & AI ecosystem.

Required Skills & Experience:
- Proven experience as a Data Architect with a focus on AI/ML workloads.
- Strong expertise in cloud platforms (AWS, Azure, GCP) and cloud-native data services.
- Hands-on experience with data lakehouse architectures (Databricks, Snowflake, Delta Lake, BigQuery, Synapse).
- Proficiency in data pipeline frameworks (Apache Spark, Kafka, Airflow, DBT).
- Strong understanding of the ML data lifecycle: feature engineering, data versioning, training pipelines, MLOps.
- Knowledge of data governance frameworks, security, and compliance standards.
- Experience with SQL, Python, and distributed data systems.
- Familiarity with AI/ML platforms (SageMaker, Vertex AI, Azure ML) is a plus.
- Excellent problem-solving and stakeholder management skills.

Posted 6 days ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

pune, bengaluru, mumbai (all areas)

Hybrid

MS Fabric, Data Engineering (PySpark), DBX

Posted 6 days ago

Apply

4.0 - 8.0 years

20 - 35 Lacs

pune, jaipur, bengaluru

Hybrid

Job Title: Teradata Migration Specialist
Company: Celebal Technologies
Location: Bangalore, Hyderabad, Pune, Noida, Jaipur
Experience Required: 5+ years
Employment Type: Full-time

About Us: Celebal Technologies is a leading solution services company providing expertise in Data Science, Big Data, Enterprise Cloud, and Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance business processes. As part of our growth strategy, we are looking for a Teradata Migration Specialist with strong Databricks and SQL/Python expertise to join our dynamic team.

Job Summary: We are seeking a highly skilled Data Engineer experienced in Teradata-to-Databricks migrations, ETL development, and large-scale data validation. The role requires hands-on expertise in SQL, Python, Databricks, and orchestration frameworks, along with the ability to troubleshoot migration challenges and work with cross-functional enterprise teams.

Key Responsibilities:
- Convert Teradata ETL workloads into Databricks SQL.
- Refactor ingestion pipelines using SQL and/or Python.
- Design and implement data validation scripts for side-by-side comparisons (see the validation sketch after this listing).
- Configure orchestration and job scheduling within the Databricks environment.
- Perform unit, integration, UAT, and performance testing for migrated code.
- Validate data consistency between Teradata and Databricks environments.
- Deploy workloads in Unity Catalog-enabled workspaces and ensure governance best practices.
- Implement job monitoring using Databricks-native tools (system tables, workflows).
- Collaborate with Databricks teams and customers on code fixes, optimization, and best practices.

Required Technical Skills:
- Strong expertise in Teradata ETL development and migration.
- Proficiency in Azure Databricks, SQL, and Python.
- Deep understanding of orchestration, scheduling, and data validation techniques.
- Hands-on knowledge of Unity Catalog for permissions and governance.
- Experience in data testing: unit, integration, UAT, and performance.
- Excellent debugging, troubleshooting, and optimization skills.
- Familiarity with cloud environments: Azure / AWS / GCP.

Proven Experience In:
- Migrating large-scale Teradata workloads to Databricks.
- Implementing data validation frameworks and resolving data drift.
- Performing side-by-side data comparisons between legacy and modern systems.
- Applying performance tuning for query and job optimization.
- Collaborating with cross-functional stakeholders on enterprise-scale projects.
- Working in retail, telecom, semiconductor, or manufacturing domains (preferred).

Why Join Us?
- Opportunity to work on cutting-edge technologies.
- Exposure to large-scale migration projects and cloud-first architectures.
- Collaborative and innovation-driven work culture.
- Continuous learning and career growth opportunities.

How to Apply: Interested candidates can share their updated resumes at latha.kolla@celebaltech.com
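For illustration, a minimal PySpark sketch of the side-by-side validation described in this listing, comparing a Teradata-sourced extract against its migrated Databricks table. The table names, keys, and partition filter are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical tables: a staged Teradata extract and its migrated Databricks counterpart
legacy = spark.table("legacy_stage.orders")
migrated = spark.table("lakehouse.silver.orders")

# 1. Row-count comparison
print("legacy rows:  ", legacy.count())
print("migrated rows:", migrated.count())

# 2. Aggregate "checksum" per day: counts and amount sums should line up exactly
legacy_agg = legacy.groupBy("order_date").agg(
    F.count("*").alias("legacy_cnt"), F.sum("amount").alias("legacy_amt"))
migrated_agg = migrated.groupBy("order_date").agg(
    F.count("*").alias("migrated_cnt"), F.sum("amount").alias("migrated_amt"))

drift = (legacy_agg.join(migrated_agg, "order_date", "full_outer")
         .where("NOT (legacy_cnt <=> migrated_cnt) OR NOT (legacy_amt <=> migrated_amt)"))
drift.show(truncate=False)

# 3. Row-level diff on one partition: rows present on only one side
sample_legacy = legacy.where("order_date = '2024-01-01'")
sample_migrated = migrated.where("order_date = '2024-01-01'")
print("only in legacy:  ", sample_legacy.exceptAll(sample_migrated).count())
print("only in migrated:", sample_migrated.exceptAll(sample_legacy).count())
```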

Posted 6 days ago

Apply

3.0 - 6.0 years

0 - 1 Lacs

pune

Hybrid

Role: Databricks Engineer
Notice Period: Immediate joiners only
Location: Pune
Work Mode: Hybrid

Position Overview: We need a skilled Databricks Data Engineer with strong analytical and troubleshooting abilities. The ideal candidate should have hands-on experience building scalable data pipelines using PySpark on cloud platforms like Azure/AWS, along with solid hands-on experience with ETL tools like Informatica PowerCenter/SSIS/DataStage and relational databases. (A minimal pipeline sketch follows this listing.)

Mandatory skills needed:
1) Strong hands-on experience with Databricks - PySpark notebooks and Databricks pipeline development
2) Experience using PySpark with Databricks
3) Strong understanding of SQL and query optimization techniques
4) Hands-on experience with one of the on-prem ETL tools: Informatica PowerCenter, IBM DataStage, or SSIS
5) Strong experience working with relational databases (e.g., SQL Server, Oracle, etc.)
6) Excellent verbal and written communication skills

Good to have:
1) Basic proficiency in Unix/Linux commands and shell scripting
2) Relevant certifications, such as Databricks Data Engineer
3) DevOps

Interested candidates, share your updated resume at "Shruti.Wanjari@bitwiseglobal.com"
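For illustration, a minimal notebook-style PySpark sketch of the bronze-to-silver Databricks pipeline pattern this listing asks for. The ADLS path, table names, and columns are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# `spark` is provided in a Databricks notebook; created here so the sketch is self-contained
spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/customers/"  # hypothetical ADLS path

# Bronze: ingest the raw files as-is, stamping each row for lineage
bronze = (spark.read.option("header", "true").csv(raw_path)
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.customers")

# Silver: de-duplicate on the business key and standardize types
silver = (spark.table("bronze.customers")
          .dropDuplicates(["customer_id"])
          .withColumn("signup_date", F.to_date("signup_date")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```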

Posted 1 week ago

Apply

8.0 - 13.0 years

10 - 16 Lacs

hyderabad, bengaluru

Work from Office

Skill: Databricks Production Support - Senior Level
Experience: 8-12 years
Location: Hyderabad & Bangalore
Notice Period: Immediate to 15 days
Shift Timings: 1 PM - 10 PM

Detailed job description - skill set: We are seeking a highly skilled Databricks Platform Operations engineer to join our team, responsible for daily monitoring and resolution of data load issues, platform optimization, capacity planning, and governance management. This role is pivotal in ensuring the stability, scalability, and security of our Databricks environment while acting as a technical architect for platform best practices. The ideal candidate will bring a strong operational background, potentially with earlier experience as a Linux, Hadoop, or Spark administrator, and possess deep expertise in managing cloud-based data platforms.

Mandatory Skills: Terraform, Azure Purview, Apache Spark and SQL, data lakehouse

Posted 1 week ago

Apply

6.0 - 9.0 years

12 - 16 Lacs

hyderabad

Hybrid

Location: Hyderabad, Telangana
Mode of work: Hybrid
Kindly send your resume to 9361912009.

Full Stack Developer with data exposure (Catalog): We are seeking an experienced Full Stack Developer with strong expertise in Databricks, SQL, and modern web technologies. The ideal candidate will be responsible for designing, developing, and maintaining scalable applications that integrate data engineering, analytics, and interactive user interfaces. This role requires a blend of front-end and back-end development skills, coupled with hands-on experience in Databricks and large-scale data management.

Key Responsibilities:
- Design and develop full-stack applications using [React/Angular/Vue] for the front end and [Node.js/Java/.NET] for the back end.
- Build and optimize data pipelines, transformations, and workflows in Databricks.
- Write efficient, scalable, and maintainable SQL queries for analytics, reporting, and data integration.
- Collaborate with data engineers, analysts, and business stakeholders to translate requirements into technical solutions.
- Integrate APIs and services to support real-time and batch data-driven applications.
- Ensure code quality, security, and performance through testing, code reviews, and best practices.
- Deploy applications in cloud environments (Azure/AWS/GCP) with CI/CD pipelines.

Required Skills & Experience:
- 6+ years of experience as a Full Stack Developer.
- Strong experience with Databricks (data pipelines, notebooks, Delta Lake, Spark SQL).
- Proficiency in SQL (complex queries, performance tuning, stored procedures).
- Hands-on experience with at least one modern front-end framework (React, Angular, or Vue).
- Back-end expertise with Node.js or Java.
- Strong understanding of REST APIs, microservices, and integration patterns.
- Experience with cloud platforms (Azure preferred; AWS/GCP acceptable).
- Familiarity with CI/CD tools (GitHub Actions, Jenkins, Azure DevOps, etc.).
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Experience with data visualization tools (Power BI, Tableau, or similar).
- Knowledge of Python for data-related tasks.
- Exposure to big data technologies (Apache Spark, Kafka, Delta Lake).
- Familiarity with containerization (Docker, Kubernetes).

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

pune, chennai

Hybrid

Key Responsibilities:
- 5 to 10 years of experience designing and building data pipelines using Apache Spark, Databricks, or equivalent big-data frameworks.
- Hands-on expertise with streaming and messaging systems such as Apache Kafka (publish-subscribe architecture), Confluent Cloud, RabbitMQ, or Azure Event Hub. Experience creating producers, consumers, and topics and integrating them into downstream processing.
- Deep understanding of relational databases and CDC. Proficiency in SQL Server, Oracle, or other RDBMSs; experience capturing change events using Debezium or native CDC tools and transforming them for downstream consumption.
- Implement CDC and deduplication logic. Capture change events from source databases using Debezium, built-in CDC features of SQL Server/Oracle, or other connectors. Apply watermarking and drop-duplicate strategies based on primary keys and event timestamps (see the streaming sketch after this listing).
- Proficiency in programming languages such as Python, Scala, or Java, and solid knowledge of SQL for data manipulation and transformation.
- Cloud platform expertise: experience with Azure or AWS services for data storage, compute, and orchestration (e.g., ADLS, S3, Azure Data Factory, AWS Glue, Airflow, DBX, DLT).
- Data modelling and warehousing: knowledge of data lakehouse architectures, Delta Lake, partitioning strategies, and performance optimisation.
- Version control and DevOps: familiarity with Git and CI/CD pipelines; ability to automate deployment and manage infrastructure as code.
- Strong problem-solving and communication skills. Ability to work with cross-functional teams and articulate complex technical concepts to non-technical stakeholders.
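For illustration, a minimal Structured Streaming sketch of the watermark and drop-duplicate strategy described in this listing, applied to Debezium-style change events read from Kafka. The topic name, schema, broker address, and checkpoint path are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Simplified shape of a Debezium change event after JSON parsing
event_schema = StructType([
    StructField("order_id", StringType()),     # primary key of the source row
    StructField("op", StringType()),           # Debezium operation: c / u / d
    StructField("event_ts", TimestampType()),  # event timestamp from the source
    StructField("payload", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "dbserver1.sales.orders")
          .load()
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Watermark on the event timestamp, then drop duplicates on (primary key, event_ts)
deduped = (events
           .withWatermark("event_ts", "15 minutes")
           .dropDuplicates(["order_id", "event_ts"]))

query = (deduped.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/orders_cdc")
         .outputMode("append")
         .toTable("silver.orders_changes"))
```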

Posted 1 week ago

Apply

6.0 - 11.0 years

19 - 34 Lacs

pune

Work from Office

About Position: The MDM Lead drives enterprise-wide Master Data Management efforts, ensuring data consistency, quality, and governance. This role involves leading cross-functional teams, defining data standards, and aligning MDM strategies with business objectives.

Role: MDM Lead
Location: All Persistent locations
Experience: 6-12 years
Job Type: Full-time employment
Mandatory skills (3): more than two MDM tools, Databricks, and lead experience

What You'll Do:
- Lead and oversee MDM project execution across teams.
- Partner with stakeholders to define and refine master data requirements.
- Ensure adherence to data quality, governance, and compliance standards.
- Manage project timelines, risks, and resource allocation.
- Communicate progress and key updates to leadership regularly.

Expertise You'll Bring:
- Deep knowledge of MDM principles, tools, and best practices.
- Hands-on experience in driving data governance and quality programs.
- Strong stakeholder management and cross-functional leadership skills.
- Expertise in project execution, planning, and risk mitigation.
- Familiarity with MDM platforms like Informatica SaaS, Reltio, or similar.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Values-Driven, People-Centric & Inclusive Work Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.

Let's unleash your full potential at Persistent - persistent.com/careers. Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.

Posted 1 week ago

Apply

3.0 - 8.0 years

18 - 27 Lacs

gurugram, bengaluru

Work from Office

What would a typical day at your work be like? Design, build, and maintain robust data pipelines using Snowflake, DBT, AWS Glue, Python, SQL, Fivetran, and Snaplogic. Work closely with team leads to deliver high-quality and scalable data solutions. Implement data ingestion, transformation, and integration from diverse sources. Optimize SQL queries and ensure performance tuning for large-scale datasets. Collaborate with data scientists to integrate ML workflows into data pipelines. Ensure data quality, documentation, and adherence to project standards. What Do We Expect? 3 to 9 years of hands-on experience in data engineering. Strong expertise in Snowflake and DBT for transformation and orchestration. Proficiency in Python, SQL, AWS Glue, Fivetran, Snaplogic. Good understanding of data modeling and data warehousing concepts. Experience working with relational (Postgres, MySQL) and NoSQL (MongoDB, Cassandra) databases. Exposure to modern cloud platforms (AWS/Azure/GCP). Strong problem-solving and communication skills.

Posted 1 week ago

Apply

8.0 - 12.0 years

3 - 8 Lacs

gurugram, bengaluru

Hybrid

Role: GCP Architect
Skills cluster: GCP, AWS, Data Engineering, Databricks, Scala, Python
Cloud: AWS
Experience: 8 to 12 years
Location: Bangalore, Gurugram

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

noida, pune, gurugram

Hybrid

Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software: At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications built with the latest technologies: high-value Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris: Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best as a professional and a person. It's about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Core skills required for the role:
- 7-12 years of experience in IT
- Databricks: Advanced (4+ years)
- SQL (MS SQL Server): joins, SQL optimization, basic knowledge of stored procedures and functions
- PySpark: Advanced
- Azure Delta Lake
- Python: basic
- Mandatory: Big Data - PySpark; Cloud - Azure (Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight); Learning: Python, Data Science and Machine Learning

If interested, kindly share your resume at kanika.singh@irissoftware.com. Notice period: 1 month maximum.

Perks and Benefits for Irisians: At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates and to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

noida, bengaluru, mumbai (all areas)

Hybrid

Required Qualifications:
- 5+ years of Data Engineering experience
- 2+ years of Python development experience
- Experience with Databricks and Spark
- AWS technologies: S3, Lambda
- DataStage or similar ETL systems experience
- Experience with Airflow
- Mid- to senior-level experience, with the ability to work with minimal technical guidance

Technical Stack: Databricks, Python, Spark, SQL, AWS (S3, Lambda), healthcare data processing systems, batch processing frameworks

Posted 2 weeks ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

pune

Hybrid

Kindly apply only if you have worked on Power BI paginated reports. Preference will be given to candidates with early or immediate notice periods.

Senior Power BI Developer
Location: Pune - hybrid
Notice Period: 0 to 30 days (immediate joiners preferred)

- Must have 6+ years of experience in BI tools.
- Strong experience in Power BI development.
- Experienced in tools and systems in the MS SQL Server BI stack, including SSRS and T-SQL, Power Query, MDX, Power BI Report Builder, DAX, and data modeling.
- Must have proven experience in developing and deploying paginated reports in Power BI Report Builder.
- Advanced SQL skills, including performance tuning and complex query writing.
- Establishing and optimizing robust connections from Power BI to Databricks SQL endpoints and other data sources.
- Develop operational reports and build automated reports and dashboards with Power BI.
- Understand business requirements to set functional specifications for reporting applications.
- Able to quickly shape data into reporting and analytics solutions; create functional reporting.
- Strong experience in data visualization principles and user experience design for dashboards.
- Knowledge of database fundamentals such as multidimensional database design, relational database design, and more.

Please refer to the company website: www.maantic.com

Posted 2 weeks ago

Apply

1.0 - 4.0 years

9 - 16 Lacs

hyderabad

Remote

Job Title: Databricks Engineer (PySpark Developer) Employment Type: Full-Time | Permanent | Remote (Work From Home) Industry: IT Services & Consulting | Software Development | Data Engineering | Cloud Services Functional Area: Data Engineering | Big Data | Cloud Platforms | Analytics About Oblytech: Oblytech is a fast-growing IT consulting and software services firm, specializing in delivering cutting-edge IT solutions to clients across the United States, Canada, and Australia. We are an official Salesforce Partner and ServiceNow consulting provider, with expertise spanning ServiceNow, Salesforce, cloud platforms (AWS, Azure, Google Cloud), custom application development, AI/ML integrations, and offshore IT staff augmentation. Our clients rely on us to solve critical IT challenges around scalability, cost optimization, and digital transformation. As we expand into advanced data engineering and analytics services, we are looking to onboard skilled professionals who can architect and deliver solutions leveraging modern big data and cloud platforms. Job Description: We are seeking a highly skilled Databricks Engineer (PySpark Developer) with 2 to 5 years of hands-on experience in building, optimizing, and managing big data pipelines and analytics solutions. The ideal candidate will have strong expertise in Databricks, PySpark, and cloud platforms (Azure/AWS/GCP), along with experience in large-scale ETL, data warehousing, and performance tuning. You will work with global clients to design and implement scalable data engineering solutions that support business intelligence, machine learning, and advanced analytics use cases. This role requires a mix of technical proficiency, problem-solving ability, and communication skills to collaborate with cross-functional teams across geographies. Key Responsibilities: Design, develop, and optimize data pipelines and ETL workflows using Databricks (PySpark, Spark SQL, Delta Lake) . Work with structured, semi-structured, and unstructured data to build scalable big data solutions. Integrate Databricks with cloud platforms (AWS S3, Azure Data Lake, GCP Storage) for data ingestion, transformation, and analytics. Implement and optimize Delta Lake for data versioning, ACID transactions, and scalable storage. Collaborate with business analysts, data scientists, and product teams to deliver data-driven insights. Ensure performance tuning, monitoring, and troubleshooting of Spark jobs and pipelines. Build and maintain CI/CD pipelines for Databricks deployments using DevOps tools (Azure DevOps, GitHub Actions, Jenkins). Document workflows, maintain code repositories, and adhere to best practices in version control and data governance. Participate in client discussions to understand requirements, propose solutions, and ensure smooth project delivery. Skills and Experience Required: 2 to 5 years of professional experience in data engineering or big data development . Strong hands-on expertise in Databricks (workspace, clusters, notebooks, jobs). Proficiency in PySpark , Spark SQL, and performance tuning of Spark jobs. Experience with Delta Lake for scalable storage and data consistency. Familiarity with at least one major cloud platform (Azure/AWS/GCP) , including data services (Azure Data Factory, AWS Glue, GCP Dataflow). Good understanding of data warehousing, ETL concepts, and data modeling . Experience with Git, CI/CD pipelines, and DevOps practices . Strong English communication skills to work with international teams and clients. 
Exposure to BI/analytics tools (Power BI, Tableau) or ML workflows (MLflow, Databricks ML) is a plus. What We Offer: Competitive salary package with performance-based incentives. Opportunity to work on global big data and cloud transformation projects . Remote work flexibility from anywhere in India. Growth path into Senior Data Engineer, Solution Architect, or Cloud Data Specialist roles. Direct exposure to U.S., Canadian, and Australian clients. Mentorship from senior architects and leadership in data engineering and cloud services. Technologies & Platforms You Will Work With: Databricks (PySpark, Delta Lake, Spark SQL) Cloud Platforms : Azure, AWS, or Google Cloud Data Services : ADF, AWS Glue, GCP Dataflow DevOps Tools : Azure DevOps, Jenkins, GitHub Actions BI/ML Tools : Power BI, Tableau, MLflow Ideal Candidate Profile: Proven experience in Databricks and PySpark-based development . Strong problem-solving skills and the ability to design scalable solutions. Familiarity with data lakehouse architecture and modern ETL pipelines. Comfort working in a remote, international environment . Keen interest in learning new tools and adapting to evolving data technologies. Location: Remote (India-based) Work From Home Working Hours: Flexible, with partial overlap to U.S. and/or Australian time zones preferred. Compensation: Fixed Salary + Performance-Based Incentives
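For illustration, a minimal sketch of the Delta Lake ACID upsert and data-versioning (time travel) responsibilities mentioned in the Oblytech listing above. The table names and sample rows are hypothetical placeholders.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incoming batch of customer status changes
updates = spark.createDataFrame(
    [("C-100", "ACTIVE"), ("C-200", "CHURNED")], ["customer_id", "status"])

target = DeltaTable.forName(spark, "silver.customers")

# ACID upsert: matched rows are updated and new rows inserted in a single atomic commit
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Data versioning: time-travel query reads the table as it was at an earlier version
previous = spark.sql("SELECT * FROM silver.customers VERSION AS OF 0")
print("rows at version 0:", previous.count())
```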

Posted 2 weeks ago

Apply

3.0 - 8.0 years

6 - 14 Lacs

gurugram, chennai, bengaluru

Work from Office

Role & responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines via the Airflow scheduler (a minimal DAG sketch follows this listing).

Preferred candidate profile:
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Must have experience in the AWS/Azure stack; ETL with batch and streaming (Kinesis) is desirable.
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products: Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills; strong analytical and problem-solving skills with high attention to detail.
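For illustration, a minimal Airflow DAG sketch of the Databricks orchestration responsibility in this listing, using the Databricks provider's run-now operator. The DAG id, connection id, and job id are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_lakehouse_refresh",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",             # run the pipeline daily at 02:00
    catchup=False,
) as dag:
    run_etl = DatabricksRunNowOperator(
        task_id="run_databricks_etl_job",
        databricks_conn_id="databricks_default",  # Airflow connection to the workspace
        job_id=12345,                             # hypothetical Databricks Workflows job id
    )
```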

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

pune, chennai, bengaluru

Hybrid

We are seeking a highly skilled Data Engineer to join our team at Tredence Inc. in Pune, India. As a Data Engineer, you will be responsible for designing, developing, and maintaining large-scale data processing systems. Role & responsibilities Design and develop ETL/ELT solutions using Azure services and tools Collaborate with business stakeholders to identify and meet data requirements Implement batch & near real-time data ingestion pipelines Work on event-driven cloud platforms for cloud services and apps, data integration for building and managing pipelines, and data warehouse running on serverless infrastructure Develop workflow orchestration using Azure Cloud Data Engineering components Databricks, Synapse, etc. Requirements: In-depth technical knowledge of tools like Azure Data Factory, Databricks, Azure Synapse, SQL DB, ADLS, etc. 5-14 years of total IT experience Excellent written and oral communication skills Ability to work with large cross-functional teams and drive customer calls independently Azure Certified Data Engineer certification Good to Have: Experience on cloud migration methodologies and processes Exposure to Azure DevOps and GitHub

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 - 3 Lacs

hyderabad

Work from Office

JD:
• Design, implement, and monitor complex data pipelines using ELT processes
• Understand business requests and needs, and translate them into data engineering tasks to build useful datasets
• Work with a large internal data team to build the most efficient Cloud Data Warehouse in the world
• Investigate new technologies that may be relevant in the Stellantis context

SKILLS
Soft skills:
• Combination of business focus, analytical and problem-solving skills in order to quickly define a data-driven solution within different initiatives
• Ability to work within a team and with a proactive attitude
• Ability to communicate and popularize technical topics

Tech skills (primary):
• Strong programming skills in Python and SQL
• Strong knowledge of Database Management Systems (RDBMS, NoSQL) and Big Data architectures (Spark, Hadoop, Hive)
• Familiarity with some of the following Cloud Data Warehouses: Snowflake, Databricks, BigQuery, Amazon Redshift
• Knowledge of DevOps and orchestrator tools (code versioning, CI/CD, Docker, Kubernetes, Airflow)

Posted 2 weeks ago

Apply

2.0 - 6.0 years

9 - 19 Lacs

ahmedabad, mumbai (all areas)

Work from Office

i) Build and manage data pipelines, models, and integrations aligned with the Medallion Lake architecture, with hands-on experience in Databricks notebooks and scripting.
ii) Support cloud-based analytics delivery using Azure, Databricks, and ERP systems (SAP/Oracle).
iii) Enforce data governance and enable data-driven initiatives across industries.
(Experience: 2-5 years)

Posted 3 weeks ago

Apply

6.0 - 9.0 years

19 - 30 Lacs

ahmedabad, mumbai (all areas)

Work from Office

i) Define and deliver secure, scalable, cloud-native solutions across consulting engagements, ensuring alignment with business, technology, and regulatory needs.
ii) Lead architecture strategy and guide implementation of complex systems across domains like data platforms, AI, ERP, and digital experiences.
iii) Collaborate across teams to translate business vision into technical design, balancing innovation, compliance, and cost efficiency.
(Experience: 6-9 years)

Posted 3 weeks ago

Apply

2.0 - 6.0 years

9 - 14 Lacs

ahmedabad, mumbai (all areas)

Work from Office

i) Lead enterprise-scale analytics initiatives, designing scalable data architectures on Azure/Databricks.
ii) Work with ERP data (SAP, Oracle) and modern platforms like Databricks Unity Catalog and Medallion Lakes.
iii) Collaborate across functions to deliver data-driven transformation with strong governance and stakeholder alignment.
(Experience: 6-9 years)

Posted 3 weeks ago

Apply

10.0 - 14.0 years

25 - 37 Lacs

pune, bengaluru, mumbai (all areas)

Hybrid

Role & responsibilities:

Job Description Summary: The purpose of this role is to oversee the development of our database marketing solutions, using database technologies such as Microsoft SQL Server/Azure, Amazon Redshift, and Google BigQuery. The role will be involved in design, specifications, troubleshooting, and issue resolution. The ability to communicate with both technical and non-technical audiences is key.

Business Title: Associate Technical Architect
Years of Experience: 10+ years

Must-have skills:
1. Database (one or more of MS SQL Server, Oracle, Cloud SQL, Cloud Spanner, etc.)
2. Data Warehouse (one or more of BigQuery, Snowflake, etc.)
3. ETL tools (two or more of Cloud Data Fusion, Dataflow, Dataproc, Pub/Sub, Composer, Cloud Functions, Cloud Run, etc.)
4. Experience in cloud platforms - GCP
5. Python, PySpark, project and resource management
6. SVN, JIRA, automation workflow (Composer, Cloud Scheduler, Apache Airflow, Tidal, Tivoli, or similar)

Good-to-have skills:
1. UNIX shell scripting, Snowflake, Redshift, familiarity with NoSQL such as MongoDB, etc.
2. ETL tools (Databricks / AWS Glue / AWS Lambda / Amazon Kinesis / Amazon Firehose / Azure Data Factory (ADF) / DBT / Talend / Informatica / IICS (Informatica Cloud))
3. Experience in cloud platforms - AWS / Azure
4. Client-facing skills

Job Description: The Technical Lead / Technical Consultant is a core role and the focal point of the project team, responsible for the whole technical solution and for managing day-to-day delivery. The role focuses on the technical solution architecture, detailed technical design, coaching of the development/implementation team, and governance of the technical delivery. It carries technical ownership of the solution from bid inception through implementation to client delivery, followed by after-sales support and best-practice advice, along with interactions with internal stakeholders and clients to explain technology solutions and a clear understanding of clients' business requirements through which to guide optimal design to meet their needs.

Key responsibilities:
• Ability to design simple to medium data solutions for clients using cloud architecture on GCP
• Strong understanding of DW, data marts, data modelling, data structures, databases, and data ingestion and transformation
• Working knowledge of ETL as well as database skills
• Working knowledge of data modelling, data structures, databases, and ETL processes
• Strong understanding of relational and non-relational databases and when to use them
• Leadership and communication skills to collaborate with local leadership as well as our global teams
• Translating technical requirements into ETL/SQL application code
• Document project architecture, explain detailed design to the team, and create low-level to high-level designs
• Create technical documents for ETL and SQL developments using Visio, PowerPoint, and other MS Office packages
• Engage with Project Managers, Business Analysts, and Application DBAs to implement ETL solutions
• Perform mid- to complex-level tasks independently
• Support clients, Data Scientists, and Analytical Consultants working on marketing solutions
• Work with cross-functional internal teams and external clients
• Strong project management and organization skills; ability to lead 1-2 projects with a team size of 2-3 members
• Code management systems, including code review and deployment
• Work closely with the QA/Testing team to help identify/implement defect reduction initiatives
• Work closely with the Architecture team to make sure architecture standards and principles are followed during development
• Perform proofs of concept on new platforms and validate proposed solutions
• Work with the team to establish and reinforce disciplined software development processes, standards, and error recovery procedures
• Must understand software development methodologies, including waterfall and agile
• Distribute and manage SQL development work across the team
• The candidate must be willing to work during overlapping hours with US-based teams to ensure effective collaboration and communication, typically between [e.g., 6:00 PM to 11:00 PM IST], depending on project needs

Education Qualification: Bachelor's or Master's degree in Computer Science. Certification (must): Snowflake Associate / Core, or minimum basic-level certification in Azure.
Shift timing: GMT (UK shift) - 2 PM to 11 PM

Posted 3 weeks ago

Apply

7.0 - 12.0 years

22 - 27 Lacs

hyderabad, chennai, bengaluru

Hybrid

Role & responsibilities:
- Data Engineering & Analytics: strong background in building scalable data pipelines and analytics platforms.
- Databricks (AWS preferred): mandatory hands-on expertise in Databricks, including cluster management, notebooks, job orchestration, and optimization.
- AWS Cloud Services: proficiency in the AWS ecosystem (S3, Glue, EMR, Lambda, Redshift, IAM, CloudWatch).
- Programming: expertise in PySpark and Python for ETL, transformations, and analytics.
- GenAI & LLMs: experience with Large Language Models (LLMs), fine-tuning, and enterprise integration.
- CI/CD & DevOps: familiarity with Git-based workflows, deployment pipelines, and automation.

Preferred candidate profile:
- 8-12 years of IT experience with a strong focus on Data Engineering & Cloud Analytics.
- Minimum 4-5 years of hands-on Databricks experience (preferably on AWS).
- Strong expertise in PySpark, Python, SQL, and AWS data services.
- Experience in LLM fine-tuning, GenAI automation, and enterprise integration.
- Proven ability to lead teams, deliver projects, and engage stakeholders.
- Strong problem-solving, communication, and analytical skills.

Posted 3 weeks ago

Apply
Page 1 of 4