Azure Databricks (4-15 Yrs) - Bangalore

4 - 9 years

15 - 30 Lacs

Bengaluru

Posted: 1 day ago | Platform: Naukri

Skills Required

PySpark, Azure Databricks, SQL, ETL

Work Mode

Work from Office

Job Type

Full Time

Job Description

Hi,

Greetings from Happiest Minds Technologies. We are currently hiring for the positions below and are looking for immediate joiners.

1. Azure Databricks - 5 to 10 Yrs - Bangalore

As a Senior Azure Data Engineer, you will leverage Azure technologies to drive data transformation, analytics, and machine learning. You will design scalable Databricks data pipelines using PySpark, transforming raw data into actionable insights. Your role includes building, deploying, and maintaining machine learning models using MLlib or TensorFlow while optimizing cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources. You will execute large-scale data processing using Spark Pools, fine-tuning configurations for efficiency.

The ideal candidate holds a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, with 7+ years in data engineering and 3+ years specializing in Azure Databricks, PySpark, and Spark Pools. Proficiency in Python (PySpark, Pandas, NumPy, SciPy), Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow is required, along with hands-on experience in Azure Data Lake, Blob Storage, and Synapse Analytics.
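For illustration, a minimal sketch of the kind of Databricks/PySpark pipeline work described above: reading raw files from Azure Data Lake Storage, applying basic transformations, and writing a curated Delta table. The storage account, container, database, column, and table names here are hypothetical placeholders, not specifics of this role.

# Illustrative sketch only; all paths and names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Databricks notebook a SparkSession named `spark` is already provided;
# getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# Read raw CSV files from an Azure Data Lake Storage Gen2 container (example path)
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplestorageaccount.dfs.core.windows.net/sales/")
)

# Basic cleansing and enrichment (the "amount" column is an assumed example field)
clean_df = (
    raw_df
    .dropDuplicates()
    .filter(F.col("amount").isNotNull())
    .withColumn("ingest_date", F.current_date())
)

# Persist the curated result as a Delta table for downstream analytics
spark.sql("CREATE DATABASE IF NOT EXISTS curated")
(
    clean_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.sales_daily")
)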
2. Azure Data Engineer - 2 to 4 Yrs - Bangalore

We are seeking a skilled and motivated Azure Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience with Microsoft Azure cloud services and data engineering, and a strong background in designing and implementing scalable data solutions.

Responsibilities:

Data Engineering & Pipeline Development
• Design, implement, and maintain ETL processes using ADF and ADB.
• Create and manage views in ADB and SQL for efficient data access.
• Optimize SQL queries for large datasets and high performance.
• Conduct end-to-end testing and impact analysis on data pipelines.

Optimization & Performance Tuning
• Identify and resolve bottlenecks in data processing.
• Optimize SQL queries and Delta Tables for fast data processing.

Data Sharing & Integration
• Implement Delta Share, SQL Endpoints, and other data sharing methods.
• Use Delta Tables for efficient data sharing and processing.

API Integration & Development
• Integrate external systems through Databricks Notebooks and build scalable solutions.
• Experience in building APIs (good to have).

Collaboration & Documentation
• Collaborate with teams to understand requirements and design solutions.
• Provide documentation for data processes and architectures.

Qualifications:
• Strong experience with Azure Data Factory (ADF) and Azure Databricks (ADB).
• Proficient in SQL, with experience in query optimization, view creation, and working with Delta Tables.
• Hands-on experience with end-to-end testing and impact analysis for data pipelines.
• Solid understanding of data sharing approaches such as Delta Share, SQL Endpoints, etc.
• Familiarity with connecting to APIs using Databricks Notebooks.
• Experience with cloud data architecture and big data technologies.
• Ability to troubleshoot and resolve complex data issues.
• Knowledge of version control, deployment, and CI/CD practices for data pipelines.

Plus to Have:
• Experience with CI/CD processes for deploying code and automating data pipeline deployments.
• Knowledge of building APIs for data sharing and integration, enabling seamless communication between systems and platforms.

3. Azure Databricks Architect - 10 to 15 Yrs - Bangalore

Key Responsibilities:
• Architect and design end-to-end data solutions on Azure, with a focus on Databricks.
• Lead data architecture initiatives, ensuring alignment with best practices and business objectives.
• Collaborate with stakeholders to define data strategies, architectures, and roadmaps.
• Migrate and transform data from Oracle to Azure Data Lake.
• Ensure data solutions are secure, reliable, and scalable.
• Provide technical leadership and mentorship to junior team members.

Required Skills:
• Extensive experience with Azure Data Services, including Azure Data Factory and Azure SQL Data Warehouse.
• Deep expertise in Databricks, including Spark and Delta Lake.
• Strong understanding of data architecture principles and best practices.
• Proven track record of leading large-scale data projects and initiatives.
• Ability to design data integration strategies, ensuring seamless integration between Azure services and on-premises/cloud applications.
• Ability to optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
• Ability to monitor and manage cloud resources to ensure high availability, performance, and scalability.
• Experience in setting up and configuring Azure DevOps.
• Excellent communication and collaboration skills.

If you are interested, please share your updated profile along with the information below to reddemma.n@happiestminds.com for further consideration.
• Are you willing to work from the office 4 days a week? (Yes/No)
• Total experience
• Current CTC
• Expected CTC
• Notice period
• Current location
• Preferred location
• Reason for change
• Are you holding any offer? If yes, what is the reason for looking for an alternate opportunity?

Regards,
Reddemma

Happiest Minds Technologies

IT Services and IT Consulting

Bengaluru, Karnataka

5001-10000 Employees

428 Jobs

Key People
• Ashok Soota - Executive Chairman
• Nitin Achyut - CEO
