
935 Databricks Jobs - Page 7

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 9.0 years

25 - 30 Lacs

Pune, Mumbai (All Areas)

Work from Office

Naukri logo

Role & responsibilities:
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Collaborate with cross-functional teams to deliver effective data solutions.
- Optimize ETL processes for enhanced performance and reliability.
- Ensure adherence to data quality and governance best practices.
- Deploy and manage data solutions in cloud environments (Azure/AWS).

Preferred candidate profile:
- Proven experience as a Data Engineer, with a focus on Databricks and PySpark.
- Strong proficiency in Python and SQL.
- Experience with cloud platforms such as Azure (mainly) or AWS.
- Familiarity with data warehousing and integration technologies.
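The pipeline work described above typically boils down to cleanse-and-derive steps. A minimal plain-Python sketch of such a step is below; a production Databricks job would express the same logic with PySpark DataFrame filters and `withColumn`, and all record fields here are hypothetical.

```python
# Plain-Python stand-in for a PySpark cleanse-and-derive pipeline step.
# Field names (sku, amount, quantity) are illustrative, not from any real schema.

def clean_records(records):
    """Drop rows failing data-quality rules and derive a per-unit price."""
    cleaned = []
    for rec in records:
        # Data-quality rule: skip rows with a missing amount or zero/missing quantity.
        if rec.get("amount") is None or rec.get("quantity") in (None, 0):
            continue
        out = dict(rec)
        out["unit_price"] = round(rec["amount"] / rec["quantity"], 2)
        cleaned.append(out)
    return cleaned

raw = [
    {"sku": "A1", "amount": 100.0, "quantity": 4},
    {"sku": "B2", "amount": None,  "quantity": 2},   # dropped: missing amount
    {"sku": "C3", "amount": 90.0,  "quantity": 3},
]
print(clean_records(raw))
```

In PySpark the same rules would map to a `filter` on null/zero columns followed by a derived column, which keeps the logic testable on small samples before scaling out.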

Posted 6 days ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Naukri logo

Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem will be an added advantage.

Key Responsibilities:
- Develop, design, and maintain dashboards and reports using Tableau to support business decision-making.
- Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
- Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
- Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes.
- Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
- Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
- Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
- Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications:
- Minimum 3 years of experience in data analysis, business intelligence, or related roles.
- Strong expertise in SQL for data querying and manipulation.
- Extensive experience creating dashboards and reports using Tableau and Power BI.
- Hands-on experience working with AWS Redshift and/or Databricks.
- Proven problem-solving skills with a focus on providing actionable data solutions.
- Self-motivated and able to work independently, while being a proactive team player.
- Experience or strong understanding of US healthcare systems and data-related needs will be a plus.
- Excellent communication skills with the ability to work across different teams and stakeholders.
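The role above centers on writing aggregate SQL to feed Tableau dashboards. A self-contained sketch using Python's built-in sqlite3 is below, standing in for Redshift or Databricks SQL; the `claims` table and its columns are hypothetical, chosen only to echo the healthcare context.

```python
import sqlite3

# In-memory database standing in for a Redshift/Databricks warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (
        claim_id   INTEGER PRIMARY KEY,
        state      TEXT,
        claim_date TEXT,
        amount     REAL
    );
    INSERT INTO claims (state, claim_date, amount) VALUES
        ('CA', '2025-01-15', 1200.0),
        ('CA', '2025-02-03', 800.0),
        ('TX', '2025-01-20', 500.0),
        ('TX', '2025-02-11', 700.0);
""")

# A typical dashboard-feeding aggregate: monthly claim totals per state.
rows = conn.execute("""
    SELECT state,
           substr(claim_date, 1, 7) AS month,
           SUM(amount)              AS total_amount,
           COUNT(*)                 AS n_claims
    FROM claims
    GROUP BY state, month
    ORDER BY state, month
""").fetchall()

for row in rows:
    print(row)
```

In Tableau this result set would back a bar chart or trend line directly; pushing the GROUP BY into the warehouse instead of the BI tool keeps the extract small.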

Posted 6 days ago

Apply


5.0 - 7.0 years

9 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

Seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. Requires strong skills in SQL, Python, ETL, and cloud platforms (AWS/GCP/Azure). Experience with big data tools like Spark and Kafka preferred.

Posted 6 days ago

Apply

3.0 - 5.0 years

8 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Requirements:
- Understanding the requirements and developing ADF pipelines
- Good knowledge of Databricks
- Strong understanding of the existing ADF pipelines and enhancements
- Deployment and monitoring of ADF jobs
- Good understanding of SQL concepts and strong SQL query writing
- Understanding and writing stored procedures
- Performance tuning

Roles and Responsibilities:
- Understand business and data integration requirements.
- Design, develop, and implement scalable and reusable ADF pipelines for ETL/ELT processes.
- Leverage Databricks for advanced data transformations within ADF pipelines.
- Collaborate with data engineers to integrate ADF with Azure Databricks notebooks for big data processing.
- Analyze and understand existing ADF workflows; implement improvements, optimize data flows, and incorporate new features based on evolving requirements.
- Manage deployment of ADF solutions across development, staging, and production environments.
- Set up monitoring, logging, and alerts to ensure smooth pipeline executions and troubleshoot failures.
- Write efficient and complex SQL queries to support data analysis and ETL tasks.
- Tune SQL queries for performance, especially in large-volume data scenarios.
- Design, develop, and maintain stored procedures for data transformation and business logic; ensure procedures are optimized and modular for reusability and performance.
- Identify performance bottlenecks in queries and data processing routines; apply indexing strategies, query refactoring, and execution plan analysis to enhance performance.
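The last responsibility above, indexing strategies plus execution plan analysis, can be demonstrated end to end with Python's built-in sqlite3 as a small stand-in for a production SQL engine; the `orders` table and index name are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])

# Without an index, filtering on customer_id forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()

# Indexing the filter column lets the engine seek instead of scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()

print(plan_before[-1][-1])  # a SCAN of the orders table
print(plan_after[-1][-1])   # a SEARCH using idx_orders_customer
```

The same workflow applies in SQL Server or Azure SQL with `SET SHOWPLAN` / execution plans: inspect the plan, index the predicate columns, and confirm the scan became a seek.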

Posted 6 days ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Dear Candidate,

We are hiring a Cloud Data Scientist to build and scale data science solutions in cloud-native environments. Ideal for candidates who specialize in analytics and machine learning using cloud ecosystems.

Key Responsibilities:
- Design predictive and prescriptive models using cloud ML tools
- Use BigQuery, SageMaker, or Azure ML Studio for scalable experimentation
- Collaborate on data sourcing, transformation, and governance in the cloud
- Visualize insights and present findings to stakeholders

Required Skills & Qualifications:
- Strong Python/R skills and experience with cloud ML stacks (AWS, GCP, or Azure)
- Familiarity with cloud-native data warehousing and storage (Redshift, BigQuery, Data Lake)
- Hands-on with model deployment, CI/CD, and A/B testing in the cloud
- Bonus: background in NLP, time series, or geospatial analysis

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 6 days ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

Foundit logo

Our new member - who are you?

You are driven by curiosity and are passionate about partnering with a diverse range of business and tech colleagues to deeply understand their customers, uncover new opportunities, advise and support them in the design, execution, and analysis of experiments, or to develop ML solutions for ML-driven personalisation (e.g., supervised or unsupervised) that drive substantial customer and business impact. You will use your expertise in experiment design, data science, causal inference, and machine learning to stimulate data-driven innovation. This is an incredibly exciting role with high impact. You are, like us, a team player who cares about your team members, about growing professionally and personally, about helping your teammates grow, and about having fun together.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or a related field
- 3-5 years of professional experience in designing, building, and maintaining scalable data pipelines, both in on-premises and cloud (Azure preferred) environments
- Strong expertise in working with large datasets from Salesforce, port operations, cargo tracking, enterprise systems, etc.
- Proficient in writing scalable, high-quality SQL queries and in Python coding and object-oriented programming, with a solid grasp of data structures and algorithms
- Experience in software engineering best practices, including version control (Git), CI/CD pipelines, code reviews, and writing unit/integration tests
- Familiarity with containerization and orchestration tools (Docker, Kubernetes) for data workflows and microservices
- Hands-on experience with distributed data systems (e.g., Spark, Kafka, Delta Lake, Hadoop)
- Experience in data modelling and workflow orchestration tools like Airflow
- Ability to support ML engineers and data scientists by building production-grade data pipelines
- Demonstrated experience collaborating with product managers, domain experts, and stakeholders to translate business needs into robust data infrastructure
- Strong analytical and problem-solving skills, with the ability to work in a fast-paced, global, and cross-functional environment

Preferred Qualifications:
- Experience deploying data solutions in enterprise-grade environments, especially in the shipping, logistics, or supply chain domain
- Familiarity with Databricks, Azure Data Factory, Azure Synapse, or similar cloud-native data tools
- Knowledge of MLOps practices, including model versioning, monitoring, and data drift detection
- Experience building or maintaining RESTful APIs for internal ML/data services using FastAPI, Flask, or similar frameworks
- Working knowledge of ML concepts, such as supervised learning, model evaluation, and retraining workflows
- Understanding of data governance, security, and compliance practices
- Passion for clean code, automation, and continuously improving data engineering systems to support machine learning and analytics at scale

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process.
If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .

Posted 6 days ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Noida, Pune, Bengaluru

Work from Office

Naukri logo

Desired Profile:
- Collect, analyse, and document all business and functional requirements for the Data Lake infrastructure.
- Support advancements in Business Analytics to ensure the system meets evolving business needs.
- Profile new and existing data sources to define and refine the data warehouse model.
- Collaborate with architects and stakeholders to define data workflows and strategies.
- Drive process improvements to optimize data handling and performance.
- Perform deep data analysis to ensure accuracy, consistency, and quality of data.
- Work with QA resources on test planning to ensure quality and consistency within the data lake and Data Warehouse.
- Gather data governance requirements and ensure implementation of data governance practices within the Data Lake infrastructure.
- Collaborate with functional users to gather and define metadata for the Data Lake.

Key Skills: Azure Data Factory, Synapse, Power BI, Data Lake, SQL, KQL, Azure Security, data integration, Oracle EBS, cloud computing, data visualization, CI/CD pipelines, communication skills

Please share CV at parul@mounttalent.com

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Naukri logo

Roles & Responsibilities:
- Build strong relationships and channels of communication with other team members.
- When necessary, challenge the team on their estimation values to gain a deeper understanding of the product from a business, design, and technical perspective.
- Support the team in building a trusting and respectful environment where issues can be discussed openly, calmly, and in a friendly way.
- Facilitate all reporting on scrum health and help to identify key learnings and areas of improvement.
- Actively help the team become self-organized and support them in aligning to the 12 principles of agile.
- Display strong communication skills and be comfortable with conflict resolution to facilitate continuous improvement and empowerment.
- Manage and mitigate dependencies and support the team in accomplishing the sprint goal.
- Collaborate effectively with Scrum Leads to align on standards and best practices.
- Report accurately to management, depicting the true picture and resolving impediments on a daily basis.
- Facilitate all Scrum rituals, including Daily Stand-ups, Backlog Grooming, Estimation Sessions, Sprint Planning, and Retrospectives.
- Candidates should have experience in Business Analysis and should have worked as a Business Analyst in the past.

Key Skills:
- 8+ years of experience working as a scrum master
- Experienced in working with Atlassian tools
- Experienced in assisting product owners on product backlogs
- Experienced in coaching team members
- Excellent verbal and written communication skills

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Ahmedabad

Hybrid

Naukri logo

Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL/ELT workflows using Microsoft Fabric, Azure Data Factory, and Azure Synapse Analytics.
- Implement Lakehouse and Warehouse architectures within Microsoft Fabric, supporting medallion (bronze-silver-gold) data layers.
- Collaborate with business and analytics teams to build scalable and reliable data models (star/snowflake) using Azure SQL, Power BI, and DAX.
- Utilize Azure Analysis Services, Power BI Semantic Models, and Microsoft Fabric Dataflows for analytics delivery.
- Apply very good hands-on Python experience for data transformation and processing.
- Apply CI/CD best practices and manage code through Git version control.
- Ensure data security, lineage, and quality using data governance best practices and Microsoft Purview (if applicable).
- Troubleshoot and improve performance of existing data pipelines and models.
- Participate in code reviews, testing, and deployment activities.
- Communicate effectively with stakeholders across geographies and time zones.

Required Skills:
- Hands-on experience with Microsoft Fabric (Lakehouse, Warehouse, Dataflows, Pipelines).
- Strong knowledge of Azure Synapse Analytics, Azure Data Factory, Azure SQL, and Azure Analysis Services.
- Proficiency in Power BI and DAX for data visualization and analytics modeling.
- Strong Python skills for scripting and data manipulation.
- Experience in dimensional modeling, star/snowflake schemas, and Kimball methodologies.
- Familiarity with CI/CD pipelines, DevOps, and Git-based versioning.
- Understanding of data governance, data cataloging, and quality management practices.
- Excellent verbal and written communication skills.
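The medallion (bronze-silver-gold) layering mentioned above can be sketched in a few lines of plain Python. In Microsoft Fabric these layers would be Lakehouse tables fed by Dataflows and pipelines; the field names below are illustrative only.

```python
# Hypothetical medallion flow: raw bronze rows -> cleaned silver -> gold aggregate.

bronze = [  # bronze: raw ingested events, kept exactly as received
    {"order_id": 1, "region": "west ", "amount": "120.5"},
    {"order_id": 2, "region": "east",  "amount": "80"},
    {"order_id": 2, "region": "east",  "amount": "80"},   # duplicate event
]

# Silver: typed, trimmed, de-duplicated on the business key.
seen, silver = set(), []
for row in bronze:
    if row["order_id"] in seen:
        continue
    seen.add(row["order_id"])
    silver.append({"order_id": row["order_id"],
                   "region": row["region"].strip(),
                   "amount": float(row["amount"])})

# Gold: business-level aggregate ready for a Power BI semantic model.
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)
```

The point of the layering is that each table is reproducible from the one below it: bronze is never mutated, so silver and gold can be rebuilt when cleansing rules change.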

Posted 1 week ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Naukri logo

Primarily looking for a Data Engineer with expertise in building data pipelines using Databricks, PySpark, and SQL on cloud distributions such as AWS.

Must have: AWS, Databricks
Good to have: PySpark, Snowflake, Talend

Requirements:
- Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions such as AWS EMR, Databricks, Cloudera, etc.
- Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience working with an S3 data lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Posted 1 week ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Naukri logo

Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.

Required Candidate Profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.
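This role lists CDC (change data capture, e.g. via Debezium) alongside batch ETL. The core of a CDC consumer is folding a stream of change events into table state, sketched below in plain Python; the event shape is a simplified, hypothetical version of a Debezium-style payload.

```python
# Toy CDC apply step: fold create/update/delete change events into a keyed state.
# Event fields ("op", "key", "after") are simplified stand-ins for a real
# Debezium envelope, used here for illustration only.

def apply_cdc(state, events):
    for ev in events:
        key = ev["key"]
        if ev["op"] in ("c", "u"):      # create / update: take the "after" image
            state[key] = ev["after"]
        elif ev["op"] == "d":           # delete: drop the row if present
            state.pop(key, None)
    return state

events = [
    {"op": "c", "key": 1, "after": {"name": "alice", "tier": "basic"}},
    {"op": "u", "key": 1, "after": {"name": "alice", "tier": "pro"}},
    {"op": "c", "key": 2, "after": {"name": "bob", "tier": "basic"}},
    {"op": "d", "key": 2, "after": None},
]
print(apply_cdc({}, events))
```

In production the events would arrive via Kafka and the state would be a warehouse table (e.g. a Redshift or Delta merge target), but the ordering and idempotency concerns are the same as in this sketch.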

Posted 1 week ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Naukri logo

Job Title: Senior Data Engineer (ADF | Snowflake | DBT | Databricks)
Experience: 5 to 8 Years
Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid)
Job Type: Full Time, Permanent

Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF), Snowflake, DBT, and Azure Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting.

Key Responsibilities:
- Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks
- Create parameterized, reusable components within ADF pipelines
- Perform data transformations and modeling in Snowflake using DBT
- Use Databricks for data processing using PySpark/SQL
- Collaborate with stakeholders to define and implement data solutions
- Optimize data workflows for performance, scalability, and cost-efficiency
- Ensure data quality, governance, and documentation standards

Mandatory Skills:
- Azure Data Factory (ADF)
- Snowflake
- DBT (Data Build Tool)
- Azure Databricks
- Strong SQL and data modeling experience

Good-to-Have Skills:
- Azure Data Lake, Azure Synapse, Blob Storage
- CI/CD using Azure DevOps or GitHub
- Python scripting, PySpark
- Power BI/Tableau integration
- Experience with metadata/data governance tools

Role Requirements:
- Education: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field
- Certifications: Azure or Snowflake certification is a plus
- Strong problem-solving and communication skills

Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 22 Lacs

Bengaluru

Hybrid

Naukri logo

Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem will be an added advantage.

Key Responsibilities:
- Develop, design, and maintain dashboards and reports using Tableau to support business decision-making.
- Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
- Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
- Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes.
- Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
- Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
- Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
- Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications:
- Minimum 3 years of experience in data analysis, business intelligence, or related roles.
- Strong expertise in SQL for data querying and manipulation.
- Extensive experience creating dashboards and reports using Tableau and Power BI.
- Hands-on experience working with AWS Redshift and/or Databricks.
- Proven problem-solving skills with a focus on providing actionable data solutions.
- Self-motivated and able to work independently, while being a proactive team player.
- Experience or strong understanding of US healthcare systems and data-related needs will be a plus.
- Excellent communication skills with the ability to work across different teams and stakeholders.

Additional Details:
- Work Mode: Hybrid
- Notice Period: preferably immediate joiners
- Job Location: Cessna Business Park, Kadubeesanahalli, Bangalore

Interested candidates can share an updated CV to the mail ID below.
Contact Person: Pawan
Contact No: 8951873995
Mail ID: pawanbehera@infinitiresearch.com

Posted 1 week ago

Apply

5.0 - 9.0 years

6 - 15 Lacs

Pune

Work from Office

Naukri logo

Greetings from Infosys BPM Ltd.!

You are kindly invited to the Infosys BPM Walk-In Drive on 28th June 2025 at Pune.

Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please mention your Candidate ID on top of the resume. Please use the link below to apply and register your application: https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-216785

Interview Information:
- Interview Date: 28th June 2025
- Interview Time: 10:00 AM till 01:00 PM
- Interview Venue: Taluka Mulshi, Plot No. 1, Pune, Phase 1, Building B1 Ground Floor, Hinjewadi Rajiv Gandhi Infotech Park, Pune, Maharashtra-411057

Documents to Carry:
- 2 sets of updated CV (hard copy)
- Any 2 photo identity proofs (PAN Card mandatory / Driving License / Voter's ID Card / Passport)

About the Job: We're seeking a skilled Azure Data Engineer to join our dynamic team and contribute to our data management and analytics initiatives.

Job Role: Azure Data Engineer
Job Location: Pune
Experience: 5+ Yrs
Skills: SQL + ETL + Azure + Python + PySpark + Databricks

Job Description: As an Azure Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure on the Azure platform. You will collaborate with cross-functional teams to develop robust data pipelines, optimize data workflows, and ensure data integrity and reliability.

Responsibilities:
- Design, develop, and deploy data solutions on Azure, leveraging SQL Azure, Azure Data Factory, and Databricks.
- Build and maintain scalable data pipelines to ingest, transform, and load data from various sources into Azure data repositories.
- Implement data security and compliance measures to safeguard sensitive information.
- Collaborate with data scientists and analysts to support their data requirements and enable advanced analytics and machine learning initiatives.
- Optimize and tune data workflows for performance and efficiency.
- Troubleshoot data-related issues and provide timely resolution.
- Stay updated with the latest Azure data services and technologies and recommend best practices for data engineering.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a data engineer, preferably in a cloud environment.
- Strong proficiency in SQL Azure for database design, querying, and optimization.
- Hands-on experience with Azure Data Factory for ETL/ELT workflows.
- Familiarity with Azure Databricks for big data processing and analytics.
- Experience with other Azure data services such as Azure Synapse Analytics, Azure Cosmos DB, and Azure Data Lake Storage is a plus.
- Solid understanding of data warehousing concepts, data modeling, and dimensional modeling.
- Excellent problem-solving and communication skills.

Regards,
Infosys BPM

Posted 1 week ago

Apply

0.0 - 5.0 years

0 - 108 Lacs

Kolkata

Work from Office

Naukri logo

We are an AI company, led by Adithiyaa Tulshan, with 15 years of experience in AI. We are looking for associates who are hungry to adapt to the change driven by AI and deliver value for clients. Fill in this form: https://forms.gle/BUcqTK3gBHARPcxv5

Benefits: Flexi working, Work from home, Overtime allowance, Annual bonus, Sales incentives, Performance bonus, Joining bonus, Retention bonus, Referral bonus, Career break/sabbatical

Posted 1 week ago

Apply

4.0 - 8.0 years

30 - 37 Lacs

Bengaluru

Work from Office

Naukri logo

ECMS ID/Title: 525632
Number of Openings: 1
Duration of contract: 6
Years of experience: 4-8 years (relevant)
Detailed job description - Skill Set: Attached
Mandatory Skills: Azure Data Factory, PySpark notebooks, Spark SQL, and Python
Good to Have Skills: ETL processes, SQL, Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, Databricks, etc.
Vendor Billing range: 9000-10000/Day
Remote option available: Hybrid mode
Work location: Pune and Hyderabad most preferable
Start date: Immediate
Client Interview / F2F applicable: Yes
Background check process to be followed: Before onboarding / After onboarding; BGV Agency: Post

Posted 1 week ago

Apply

6.0 - 7.0 years

14 - 17 Lacs

Hyderabad

Work from Office

Naukri logo

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Good to excellent SQL skills

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark Certified developers
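The streaming-pipeline experience this role asks for usually means stateful aggregation over micro-batches, the model Spark Structured Streaming applies at scale. Below is a minimal plain-Python sketch of that pattern; the event keys are hypothetical.

```python
# Sketch of a stateful streaming aggregation over micro-batches: each batch
# of events updates running per-key counts, the way a Structured Streaming
# groupBy().count() maintains state between triggers.

from collections import defaultdict

running_counts = defaultdict(int)

def process_batch(batch):
    """Fold one micro-batch of events into the running per-key counts."""
    for event in batch:
        running_counts[event["key"]] += 1

# Two micro-batches arriving over time (e.g. from a Kafka source).
process_batch([{"key": "login"}, {"key": "click"}, {"key": "login"}])
process_batch([{"key": "click"}])
print(dict(running_counts))
```

The essential property, also true of the Spark version, is that state persists across batches, so the output after batch N reflects every event seen so far, not just the latest batch.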

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

Naukri logo

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka technologies

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark Certified developers

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office


As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data / cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera-certified Spark developers.
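The streaming-pipeline experience this posting asks for usually reduces to windowed aggregation over a keyed event stream. The following is a conceptual sketch only, using plain Python rather than Spark Structured Streaming or a real Kafka consumer; the event data and function name are invented.

```python
# Tumbling-window counts: the core pattern behind Kafka/Spark streaming jobs,
# shown at toy scale. Events are (timestamp_seconds, key) pairs.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group events into fixed, non-overlapping windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

events = [(0, "click"), (3, "view"), (7, "click"), (11, "click")]
out = tumbling_window_counts(events, 5)
```

A production job would express the same thing declaratively, e.g. Spark's `groupBy(window(...), col("key")).count()`, with watermarking to handle late events.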

Posted 1 week ago

Apply

10.0 - 11.0 years

8 - 12 Lacs

Noida

Work from Office


What We’re Looking For: Solid understanding of data pipeline architecture, cloud infrastructure, and best practices in data engineering. Strong grip on SQL Server, Oracle, Azure SQL, and working with APIs. Skilled in data analysis: identifying discrepancies and recommending fixes. Proficient in at least one programming language: Python, Java, or C#. Hands-on experience with Azure Data Factory (ADF), Logic Apps, and Runbooks. Knowledge of PowerShell scripting and the Azure environment. Excellent problem-solving, analytical, and communication skills. Able to collaborate effectively and manage evolving project priorities.

Roles and Responsibilities: Senior Data Engineer, Azure & Databricks. Development and maintenance of data pipelines; modernisation of the cloud data platform. At least 8 years of experience in the data engineering space, at least 4 years of experience in Apache Spark / Databricks, at least 4 years of experience in Python, and at least 7 years in SQL and the ETL stack.
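Much of the SQL/ETL maintenance work this role describes comes down to idempotent loads: re-running a pipeline must update rows, not duplicate them. As a hedged sketch only, here is the upsert pattern using stdlib `sqlite3` (SQL Server would use `MERGE`; SQLite spells it `ON CONFLICT`); the table and data are invented.

```python
# Idempotent upsert: running the same load twice leaves one row per key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")

def upsert(conn, rows):
    """Insert new keys, update existing ones (SQLite ON CONFLICT clause)."""
    conn.executemany(
        "INSERT INTO dim_customer (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        rows,
    )

upsert(conn, [(1, "Acme"), (2, "Globex")])
upsert(conn, [(1, "Acme Corp")])  # re-run: updates id 1, no duplicate row
rows = conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall()
```

The same idea appears in ADF copy activities with an upsert sink, or in Databricks as `MERGE INTO` on a Delta table.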

Posted 1 week ago

Apply

4.0 - 7.0 years

25 - 35 Lacs

Hyderabad

Hybrid


Looking for immediate joiners only. Please share your updated CV, CTC, ECTC, and notice period/LWD at monika.yadav@ness.com

Job Description: Data Engineer. As a Data Engineer, you will develop and maintain data pipelines. You will be involved in the design of data solutions for ESG. You will implement and manage clusters for streaming using technologies like Postgres, Oracle, Scala, Azure Data Lake, Spark, Kafka, Databricks, ETL and advanced SQL.

You will be responsible for: Converting existing manual and semi-automated data ingress/egress processes to automated data pipelines. Creating data pipelines for AI/ML using Python/PySpark. The full operational lifecycle of the data platform, including creating a streaming platform and helping with Kafka app development. Implementing scalable solutions to meet ever-increasing data volumes, using big data / cloud technologies such as Apache Spark and Kafka.

Job Requirements: 4 years of experience in Big Data technologies. Experience in developing data processing flows using Python/PySpark. Hands-on experience with data ingestion, data cleansing, ETL, data mart creation and exposing data to consumers. Strong experience in Oracle/PostgreSQL. Experience implementing and managing large-scale clusters for streaming: Kafka, Flink, Druid, NoSQL DBs (MongoDB), etc. Experience with Elasticsearch, Splunk, Kibana or similar technologies. Good to have: experience with a Business Intelligence tool (Qlik Sense). Knowledge of microservices. Familiarity with packages such as NumPy/pandas is desirable.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a similar field (minimum qualification).
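The "data cleansing" step this posting lists typically means deduplication plus null handling before data reaches a mart. As an illustrative sketch only (plain Python rather than PySpark or pandas, with invented field names):

```python
# Toy cleansing pass: drop exact duplicates and records missing the key
# field, and default a nullable metric to 0.
def cleanse(records):
    seen, out = set(), []
    for r in records:
        key = (r.get("ticker"), r.get("score"))
        if not r.get("ticker") or key in seen:
            continue  # skip keyless rows and repeats
        seen.add(key)
        out.append({"ticker": r["ticker"], "score": r.get("score") or 0})
    return out

raw = [{"ticker": "AAA", "score": 5}, {"ticker": "AAA", "score": 5},
       {"ticker": None, "score": 2}, {"ticker": "BBB", "score": None}]
clean = cleanse(raw)
```

In PySpark the equivalents would be `dropDuplicates`, a `filter` on the key column, and `fillna`.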

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Pune

Work from Office


Job Title: Data Engineer, Data Solutions Delivery + Data Catalog & Quality Engineer

About Advanced Energy: Advanced Energy Industries, Inc. (NASDAQ: AEIS) enables design breakthroughs and drives growth for leading semiconductor and industrial customers. Our precision power and control technologies, along with our applications know-how, inspire close partnerships and innovation in thin-film and industrial manufacturing. We are proud of our rich heritage and award-winning technologies, and we value the talents and contributions of all Advanced Energy employees worldwide.

Department: Data and Analytics. Team: Data Solutions Delivery Team.

Job Summary: We are seeking a highly skilled Data Engineer to join our Data and Analytics team. As a member of the Data Solutions Delivery team, you will be responsible for designing, building, and maintaining scalable data solutions. The ideal candidate should have extensive knowledge of Databricks, Azure Data Factory, and Google Cloud, along with strong data warehousing skills from data ingestion to reporting. Familiarity with the manufacturing and supply chain domains is highly desirable. Additionally, the candidate should be well versed in data engineering, data product and data platform concepts, data mesh, medallion architecture, and establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. The candidate should also have proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc.

Key Responsibilities: Design, build, and maintain scalable data solutions using Databricks, ADF, and Google Cloud. Develop and implement data warehousing solutions, including ETL processes, data modeling, and reporting. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Ensure data integrity, quality, and security across all data platforms. Provide expertise in data engineering, data product, and data platform concepts. Implement data mesh principles and medallion architecture to build scalable data platforms. Establish and maintain enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Implement data quality practices using tools like Great Expectations, Deequ, etc. Work closely with the manufacturing and supply chain teams to understand domain-specific data requirements. Develop and maintain documentation for data solutions, data flows, and data models. Act as an individual contributor, picking up tasks from technical solution documents and delivering high-quality results.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a Data Engineer or in a similar role. In-depth knowledge of Databricks, Azure Data Factory, and Google Cloud. Strong data warehousing skills, including ETL processes, data modeling, and reporting. Familiarity with manufacturing and supply chain domains. Proficiency in data engineering, data product and data platform concepts, data mesh, and medallion architecture. Experience in establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Ability to work independently and as part of a team.

Preferred Qualifications: Master's degree in a related field. Experience with cloud-based data platforms and tools. Certification in Databricks, Azure, or Google Cloud.

As part of our total rewards philosophy, we believe in offering and maintaining competitive compensation and benefits programs for our employees to attract and retain a talented, highly engaged workforce. Our compensation programs are focused on equitable, fair pay practices, including market-based base pay and an annual pay-for-performance incentive plan, and we offer a strong benefits package in each of the countries in which we operate.

Advanced Energy is committed to diversity in its workforce, including Equal Employment Opportunity for Minorities, Females, Protected Veterans, and Individuals with Disabilities. We are committed to protecting and respecting your privacy. We take your privacy seriously and will only use your personal information to administer your application in accordance with RA No. 10173, also known as the Data Privacy Act of 2012.
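The data quality practices this role names (Great Expectations, Deequ) are built around declarative expectations evaluated against a dataset. The sketch below imitates that shape in plain Python to show the idea; it is emphatically not the Great Expectations API, and all names are invented.

```python
# Expectation-style checks: each returns success plus the failing row indices,
# mirroring (loosely) how data-quality suites report results.
def expect_column_values_not_null(rows, column):
    failures = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"success": not failures, "failed_rows": failures}

def expect_column_values_between(rows, column, low, high):
    failures = [i for i, r in enumerate(rows)
                if r.get(column) is None or not (low <= r[column] <= high)]
    return {"success": not failures, "failed_rows": failures}

data = [{"qty": 5}, {"qty": None}, {"qty": 120}]
r1 = expect_column_values_not_null(data, "qty")
r2 = expect_column_values_between(data, "qty", 0, 100)
```

In a real deployment these checks would run as a validation suite in the pipeline (e.g. at the silver layer of a medallion architecture), failing or quarantining batches whose results come back unsuccessful.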

Posted 1 week ago

Apply

1.0 - 4.0 years

9 - 13 Lacs

Bengaluru

Work from Office


Mandate 2: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter.

Analytics Engineer, Swiggy

About Swiggy: Swiggy, founded in 2014, is India's leading tech-driven on-demand delivery platform. With a vision to enhance the urban consumer's quality of life through unparalleled convenience, Swiggy connects millions of consumers with a vast network of restaurants and stores across 500+ cities. Our growth stems from cutting-edge technology, innovative thinking, and well-informed decision-making. Join the Swiggy Analytics team to collaborate on decoding hyperlocal trends and impact the entire value chain.

Role and Responsibilities: As an Analytics Engineer at Swiggy, you will be at the heart of our data-driven approach, collaborating with cross-functional teams to transform raw data into actionable insights. Your role will encompass:

Collaboration with the Data Engineering team: Partner closely with our Data Engineering team to ensure efficient data collection, storage, and processing. Collaborate in designing and optimizing data pipelines for seamless data movement. Work jointly on data architecture decisions to enhance analytics capabilities.

Performance tuning and query optimization: Dive into large, complex datasets to create efficient and optimized queries for analysis. Identify bottlenecks and optimize data processing pipelines for improved performance. Implement best practices for query optimization, ensuring swift data retrieval.

DataOps excellence: Contribute to the DataOps framework, automating data processes and enhancing data quality. Implement monitoring and alerting systems to ensure smooth data operations. Collaborate with the team to develop self-serve platforms for recurring analysis.

Qualifications and Skills: Bachelor's or Master's degree in Engineering, Mathematics, Statistics, or a related quantitative field. 2-4 years of analytics, data science, or related experience. Proficiency (2-4 years) in SQL, R, Python, Excel, etc., for effective data manipulation. Hands-on experience with Snowflake and Spark/Databricks, adept at Query Profiles and bottleneck identification.

What We Expect: Apply creative thinking to solve real-world problems using data-driven insights. Embrace a "fail fast, learn faster" approach in a dynamic, fast-paced environment. Exhibit proficient verbal and written communication skills. Thrive in an unstructured environment, demonstrating attention to detail and self-direction. Foster collaboration and partnerships across functions. Join us as an Analytics Engineer at Swiggy to contribute significantly to our data ecosystem, drive operational efficiency, and be an integral part of our data-driven journey. Your expertise will play a pivotal role in influencing our strategic decisions and reshaping the food delivery landscape.
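The query-tuning loop this role describes (inspect the plan, fix the bottleneck, confirm the plan changed) works the same way at any scale. As a hedged, toy-scale sketch using stdlib `sqlite3` rather than Snowflake Query Profiles, with an invented table:

```python
# Inspect a query plan, add an index, and confirm the plan switches from a
# full table scan to an index search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, city TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "pune" if i % 2 else "mumbai") for i in range(100)])

def plan(conn, sql):
    """Return the human-readable plan details for a query."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT count(*) FROM orders WHERE city = 'pune'"
before = plan(conn, q)                                   # full scan of orders
conn.execute("CREATE INDEX idx_orders_city ON orders(city)")
after = plan(conn, q)                                    # uses the new index
```

Snowflake's Query Profile and Spark's `explain()` expose the same information for warehouse-scale queries; the discipline of checking the plan before and after a change is identical.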

Posted 1 week ago

Apply

1.0 - 4.0 years

4 - 8 Lacs

Gurugram

Work from Office


Company Overview: Leading with our core values of Quality, Integrity, and Opportunity, MedInsight is one of the healthcare industry's most trusted solutions for healthcare intelligence. Our company purpose is to empower easy, data-driven decision-making on important healthcare questions. Through our products, education, and services, MedInsight is making an impact on healthcare by helping to drive better outcomes for patients while reducing waste. Over 300 leading healthcare organizations have come to rely on MedInsight analytic solutions for healthcare cost and care management. MedInsight is a subsidiary of Milliman, a global, employee-owned consultancy providing actuarial consulting, retirement funding and healthcare financing, enterprise risk management and regulatory compliance, data analytics and business transformation, as well as a range of other consulting and technology solutions.

Position Summary: We are seeking an entry-level Business Analyst to join our team to assist in all phases of the software development life cycle. Individuals interested in this position must have the ability to assist in the management of multiple projects in varying stages of implementation. Initially this position will focus on assisting in the development and support of our analytic products, requiring you to work closely with clinicians, actuaries, technical resources, and clients. In addition to being a self-starter with the ability to work both independently and in a cross-functional team environment, candidates interested in this position must be able to: manage multiple priorities in a fast-paced environment; work independently on assigned tasks (i.e., plan, organize, delegate, problem-solve and meet established deadlines); learn quickly; follow through; prioritize work under time pressure; and execute work with exceptional attention to detail on all project tasks.

Primary Responsibilities: Perform tasks in support of a systems development lifecycle, including requirements definition, functional specifications, use-case scenarios, test plans, documentation, training, and implementation. Gain an in-depth understanding of customer data for purposes of modeling, mapping and integrating data into a data warehouse. Work with client and market advocates (such as product management and sales/marketing) to determine system/business objectives and solutions. Train new users on the system and assist with change management for new releases; develop test plans; test new applications and maintenance releases. Work directly with end users in utilizing the client-developed products. Establish, review, and validate actions and decisions relative to system design and programming. Work with customers to evaluate and ensure the overall integrity of data and reporting results in meeting business needs and use requirements. Provide timely responses to technical and business-related questions/issues and provide consultative advice on the use of data in meeting customer objectives.

Skills and Requirements: Candidates must be team players with excellent interpersonal skills. They must also have solid, proven experience developing commercial-quality business software applications. Requirements: Bachelor's degree in Mathematics, Statistics, Engineering, Physical Sciences, Pharmacy or equivalent work experience. Ability to analyze user requests, define requirements, develop project plans and report conclusions. Ability to work creatively and flexibly, both independently and as part of a team. Working knowledge of healthcare data. Attention to fine details and work processes. Desire and ability to learn new skills. Good organizational, written and oral communication skills.

Preferred Skills: Working knowledge of Databricks, Python, SQL, and relational database systems. Working knowledge of standard clinical terminologies and coding systems. Experience with Excel and Microsoft Office products.

Posted 1 week ago

Apply