Home
Jobs

13 AWS Databricks Jobs

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

15 - 27 Lacs

Chennai

Hybrid

Naukri logo

Key Responsibilities:
- Implement and configure MDM solutions using Reltio, ensuring alignment with business requirements and best practices.
- Develop and maintain data models, workflows, and business rules within the MDM platform.
- Work on Reltio Workflow (DCR Workflow and Custom Workflow) to manage data approvals and role-based assignments.
- Support data integration efforts using Reltio Integration Hub (RIH) to facilitate data movement across multiple systems.
- Develop ETL pipelines using SQL, Python, and integration tools to extract, transform, and load data.
- Work with D&B, ZoomInfo, and Salesforce connectors for data enrichment and integration.
- Perform data analysis and profiling to identify data quality issues and recommend solutions for data cleansing and enrichment.
- Collaborate with stakeholders to define and document data governance policies, procedures, and standards.
- Optimize MDM workflows to enhance data stewardship and governance.

Technical Skills:
- Strong proficiency in SQL for data manipulation and querying.
- Knowledge of Python scripting for data processing and automation.
- Experience with Reltio Integration Hub (RIH) and handling API-based integrations.
- Familiarity with data modelling, matching, and survivorship concepts and methodologies.
- Experience with D&B, ZoomInfo, and Salesforce connectors for data enrichment.
- Understanding of MDM workflow configurations and role-based data governance.
- Experience with AWS Databricks, data lakes, and data warehouses.
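The matching and survivorship concept this listing asks for can be sketched in plain Python. This is a hypothetical illustration only — the trust ranking, record shape, and rules are invented for the example and are not Reltio's actual survivorship engine or API:

```python
from datetime import date

# Hypothetical source-trust ranking (not Reltio's configuration).
SOURCE_TRUST = {"D&B": 3, "Salesforce": 2, "ZoomInfo": 1}

def survive(records, attributes):
    """Merge duplicate records into one golden record: for each attribute,
    keep the value from the most trusted source, breaking ties by recency."""
    golden = {}
    for attr in attributes:
        candidates = [r for r in records if r.get(attr)]
        if not candidates:
            continue
        best = max(candidates,
                   key=lambda r: (SOURCE_TRUST.get(r["source"], 0),
                                  r["updated"]))
        golden[attr] = best[attr]
    return golden

records = [
    {"source": "ZoomInfo", "updated": date(2024, 5, 1),
     "name": "Acme Corp", "phone": "111-2222"},
    {"source": "D&B", "updated": date(2023, 1, 10),
     "name": "Acme Corporation", "phone": None},
]
print(survive(records, ["name", "phone"]))
# {'name': 'Acme Corporation', 'phone': '111-2222'}
```

The higher-trust D&B record wins the `name` attribute, while `phone` survives from ZoomInfo because D&B has no value for it.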

Posted 1 week ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Naukri logo

Role & responsibilities:

Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements — candidates must be experienced working in projects involving the following; other ideal qualifications include:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop/cloud distributions such as AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with an S3 data lake as the storage tier.
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action.
• Optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Only immediate joiners or candidates serving notice period. Interested candidates can apply.

Regards,
HR Manager
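The Spark SQL pipeline work this listing describes follows a standard extract-transform-load shape. Below is a minimal local sketch of that shape using Python's built-in sqlite3 as a stand-in for Databricks Spark SQL — the table name and columns are hypothetical; on Databricks the query would run via `spark.sql(...)` over tables backed by the S3 data lake:

```python
import sqlite3

# Stand-in for a Spark SQL session: build a tiny table and aggregate it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("south", 120.0), ("north", 80.0), ("south", 50.0)])

# The transform step: the same SQL would be passed to spark.sql() on Databricks.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 80.0), ('south', 170.0)]
```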

Posted 1 week ago

Apply

6.0 - 7.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

We are seeking an experienced Data Engineer to join our innovative data team and help build the scalable data infrastructure, software consultancy, and development services that power business intelligence, analytics, and machine learning initiatives. The ideal candidate will design, develop, and maintain robust, high-performance data pipelines and solutions while ensuring data quality, reliability, and accessibility across the organization, working with cutting-edge technologies like Python, Microsoft Fabric, Snowflake, Dataiku, SQL Server, Oracle, and PostgreSQL.

Required Qualifications:
- 5+ years of experience in a data engineering role.
- Programming languages: proficiency in Python.
- Cloud platforms: hands-on experience with Azure (Fabric, Synapse, Data Factory, Event Hubs).
- Databases: strong SQL skills and experience with both relational (Microsoft SQL Server, PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
- Version control: proficiency with Git and collaborative development workflows.
- Proven track record of building production-grade data pipelines and solutions handling large-scale data.

Desired Qualifications:
- Experience with containerization (Docker) and orchestration (Kubernetes) technologies.
- Knowledge of machine learning workflows and MLOps practices.
- Familiarity with data visualization tools (Tableau, Looker, Power BI).
- Experience with stream processing and real-time analytics.
- Experience with data governance and compliance frameworks (GDPR, CCPA).
- Contributions to open-source data engineering projects.
- Relevant cloud certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, AWS Certified Data Engineer, Google Cloud Professional Data Engineer).
- Specific experience or certifications in Microsoft Fabric, Dataiku, or Snowflake.
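A production-grade pipeline of the kind this listing describes typically includes a data-quality gate before load. Here is a minimal, hypothetical sketch of such a step in plain Python — the field names and rules are invented for illustration:

```python
def validate_rows(rows, required=("id", "email")):
    """Split incoming rows into clean and rejected sets — a typical
    data-quality step run before loading into the warehouse."""
    clean, rejected = [], []
    for row in rows:
        if all(row.get(field) for field in required):
            clean.append(row)
        else:
            rejected.append(row)  # routed to a quarantine table in practice
    return clean, rejected

rows = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": ""}]
clean, rejected = validate_rows(rows)
print(len(clean), len(rejected))  # 1 1
```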

Posted 2 weeks ago

Apply

7.0 - 8.0 years

7 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

We are seeking an experienced Data Engineer to join our innovative data team and help build the scalable data infrastructure, software consultancy, and development services that power business intelligence, analytics, and machine learning initiatives. The ideal candidate will design, develop, and maintain robust, high-performance data pipelines and solutions while ensuring data quality, reliability, and accessibility across the organization, working with cutting-edge technologies like Python, Microsoft Fabric, Snowflake, Dataiku, SQL Server, Oracle, and PostgreSQL.

Required Qualifications:
- 5+ years of experience in a data engineering role.
- Programming languages: proficiency in Python.
- Cloud platforms: hands-on experience with Azure (Fabric, Synapse, Data Factory, Event Hubs).
- Databases: strong SQL skills and experience with both relational (Microsoft SQL Server, PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
- Version control: proficiency with Git and collaborative development workflows.
- Proven track record of building production-grade data pipelines and solutions handling large-scale data.

Desired Qualifications:
- Experience with containerization (Docker) and orchestration (Kubernetes) technologies.
- Knowledge of machine learning workflows and MLOps practices.
- Familiarity with data visualization tools (Tableau, Looker, Power BI).
- Experience with stream processing and real-time analytics.
- Experience with data governance and compliance frameworks (GDPR, CCPA).
- Contributions to open-source data engineering projects.
- Relevant cloud certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, AWS Certified Data Engineer, Google Cloud Professional Data Engineer).
- Specific experience or certifications in Microsoft Fabric, Dataiku, or Snowflake.

Posted 2 weeks ago

Apply

5.0 - 6.0 years

5 - 6 Lacs

Bengaluru

Work from Office

Naukri logo

We are seeking an experienced Data Engineer to join our innovative data team and help build the scalable data infrastructure, software consultancy, and development services that power business intelligence, analytics, and machine learning initiatives. The ideal candidate will design, develop, and maintain robust, high-performance data pipelines and solutions while ensuring data quality, reliability, and accessibility across the organization, working with cutting-edge technologies like Python, Microsoft Fabric, Snowflake, Dataiku, SQL Server, Oracle, and PostgreSQL.

Required Qualifications:
- 5+ years of experience in a data engineering role.
- Programming languages: proficiency in Python.
- Cloud platforms: hands-on experience with Azure (Fabric, Synapse, Data Factory, Event Hubs).
- Databases: strong SQL skills and experience with both relational (Microsoft SQL Server, PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
- Version control: proficiency with Git and collaborative development workflows.
- Proven track record of building production-grade data pipelines and solutions handling large-scale data.

Desired Qualifications:
- Experience with containerization (Docker) and orchestration (Kubernetes) technologies.
- Knowledge of machine learning workflows and MLOps practices.
- Familiarity with data visualization tools (Tableau, Looker, Power BI).
- Experience with stream processing and real-time analytics.
- Experience with data governance and compliance frameworks (GDPR, CCPA).
- Contributions to open-source data engineering projects.
- Relevant cloud certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, AWS Certified Data Engineer, Google Cloud Professional Data Engineer).
- Specific experience or certifications in Microsoft Fabric, Dataiku, or Snowflake.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Naukri logo

Role & responsibilities:

Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements — candidates must be experienced working in projects involving the following; other ideal qualifications include:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop/cloud distributions such as AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with an S3 data lake as the storage tier.
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action.
• Optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Only immediate joiners or candidates serving notice period. Interested candidates can apply.

Regards,
HR Manager

Posted 3 weeks ago

Apply

11 - 20 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.

Relevant Experience: 11 - 20 years
Location: Pan India

Job Description: Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

12 - 18 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Naukri logo

Role & responsibilities

Job Description: Cloud Data/Information Architect. Core skillset: implementing cloud data pipelines. Tools: AWS Databricks, Snowflake, Python, Fivetran.

Requirements:
- Experience working in projects involving AWS Databricks, Python, AWS-native data architecture, and services such as S3, Lambda, Glue, EMR, and Databricks Spark.
- Experience handling the AWS cloud platform.

Responsibilities:
- Identify and define foundational business data domains and data domain elements.
- Identify and collaborate with data product owners and stewards in business circles to capture data definitions.
- Drive data source/lineage reporting and identification of reference data needs.
- Recommend data extraction and replication patterns.
- Experience with data migration from big data platforms to the AWS cloud (S3, Snowflake, Redshift).
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates the ability to break a problem down into manageable pieces and implement effective, timely solutions; identifies the problem versus the symptom.
- Manages problems that require the involvement of others to solve; reaches sound decisions quickly.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with AWS Databricks, especially S3, Snowflake, and Python.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Posted 1 month ago

Apply

12 - 16 years

20 - 35 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Cloud Data/Information Architect. Core skillset: implementing cloud data pipelines. Tools: AWS Databricks, Snowflake, Python, Fivetran.

Requirements:
- Experience working in projects involving AWS Databricks, Python, AWS-native data architecture, and services such as S3, Lambda, Glue, EMR, and Databricks Spark.
- Experience handling the AWS cloud platform.

Responsibilities:
- Identify and define foundational business data domains and data domain elements.
- Identify and collaborate with data product owners and stewards in business circles to capture data definitions.
- Drive data source/lineage reporting and identification of reference data needs.
- Recommend data extraction and replication patterns.
- Experience with data migration from big data platforms to the AWS cloud (S3, Snowflake, Redshift).
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates the ability to break a problem down into manageable pieces and implement effective, timely solutions; identifies the problem versus the symptom.
- Manages problems that require the involvement of others to solve; reaches sound decisions quickly.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with AWS Databricks, especially S3, Snowflake, and Python.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Posted 2 months ago

Apply

8 - 10 years

27 - 30 Lacs

Pune

Work from Office

Naukri logo

Role: C2H
Location: PAN India

Mandatory Skills: Big Data, Databricks, SQL, Python, PySpark, AWS, Data Warehousing, Git, CI/CD

Sr. Data Engineer JD:
- Must have 8+ years of total experience.
- Hands-on experience in SQL, Python, PySpark, and AWS Databricks.
- Good knowledge of Big Data and data warehousing concepts.
- Good knowledge of Git and CI/CD.
- Customer-focused, reacts well to change, works well with teams, and able to multi-task.
- Must be a proven performer and team player who enjoys challenging assignments in a high-energy, fast-growing, start-up workplace.
- Must be a self-starter who can work well with minimal guidance and in a fluid environment.
- Act as a technical mentor and guide/support junior team members technically.

Posted 2 months ago

Apply

6 - 11 years

15 - 27 Lacs

Hyderabad, Noida, Mumbai (All Areas)

Work from Office

Naukri logo

Primarily looking for a Data Engineer with expertise in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Posted 2 months ago

Apply

4 - 8 years

5 - 15 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office

Naukri logo

Experience:
- Data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.
- Proficiency in the Databricks platform, its management, and optimization.
- Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS.
- Proven experience in data engineering performance tuning and analytical understanding in business and program contexts.
- Solid experience in Python development, specifically PySpark within the AWS Cloud environment, including experience with Terraform.
- Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying.
- Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration.
- Understanding of continuous deployment (CI/CD) processes.
- Experience with Airflow and additional Apache Spark knowledge is advantageous.
- Exposure to ETL tools, including Informatica.

Job Responsibilities:
- Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development.
- Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment.
- Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem.
- Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters.
- Engage with and support business teams to deliver data and analytics projects effectively.
- Manage source control systems and utilize Jenkins for continuous integration.
- Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks.

Location: PAN India
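The orchestration tools this listing names (Airflow, Jenkins) all reduce to running tasks in dependency order. Below is a minimal, hypothetical sketch of that idea in plain Python — the task names are invented, and a real orchestrator would also handle retries, scheduling, and cycle detection:

```python
# Minimal dependency-ordered task runner in the spirit of an Airflow DAG
# (task names are hypothetical; assumes the dependency graph is acyclic).
def run_pipeline(tasks, deps):
    done, order = set(), []
    while len(done) < len(tasks):
        for name in tasks:
            if name not in done and deps.get(name, set()) <= done:
                tasks[name]()          # execute the task callable
                done.add(name)
                order.append(name)
    return order

log = []
tasks = {
    "extract":   lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load":      lambda: log.append("load"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
order = run_pipeline(tasks, deps)
print(order)  # ['extract', 'transform', 'load']
```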

Posted 3 months ago

Apply

8 - 10 years

20 - 25 Lacs

Pune

Hybrid

Naukri logo

- Hands-on experience with AWS Glue or Databricks, PySpark, and Python.
- Minimum of 2 years of hands-on expertise in PySpark, including Spark job performance optimization techniques.
- Minimum of 2 years of hands-on involvement with AWS Cloud.
- Hands-on experience with Step Functions, Lambda, S3, Secrets Manager, Snowflake/Redshift, RDS, and CloudWatch.
- Proficiency in crafting low-level designs for data warehousing solutions on AWS Cloud.
- Proven track record of implementing big-data solutions within the AWS ecosystem, including data lakes.
- Familiarity with data warehousing, data quality assurance, and monitoring practices.
- Demonstrated capability in constructing scalable data pipelines and ETL processes.
- Proficiency in testing methodologies and validating data pipelines.
- Experience with or working knowledge of DevOps environments.
- Practical experience in data security services.
- Understanding of data modeling, integration, and design principles.
- Strong communication and analytical skills.
- A dedicated team player with a goal-oriented mindset, committed to delivering quality work with attention to detail.
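The S3 data-lake work this listing describes usually relies on Hive-style date partitioning, which is what makes Spark partition pruning effective. Here is a small sketch of that key layout — the bucket, table, and file names are hypothetical:

```python
from datetime import date

def partition_key(bucket, table, day):
    """Build a Hive-style, date-partitioned S3 object key. Spark and Athena
    can prune partitions by reading the year=/month=/day= path segments."""
    return (f"s3://{bucket}/{table}/"
            f"year={day:%Y}/month={day:%m}/day={day:%d}/part-0000.parquet")

print(partition_key("analytics-lake", "orders", date(2024, 7, 9)))
# s3://analytics-lake/orders/year=2024/month=07/day=09/part-0000.parquet
```

A query filtered on those date columns then only scans the matching prefixes instead of the whole table.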

Posted 3 months ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies