Home
Jobs

182 Azure Synapse Jobs - Page 6

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 7.0 years

12 - 18 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Naukri logo

Job Description: Azure Data Engineer
Work Location: Hybrid (Gurugram / Pune / Bengaluru)
Experience: 5 to 8 years
Apply now: aditya.rao@estrel.ai — include: Resume | CTC | ECTC | Notice (only immediate joiners considered) | LinkedIn URL

Key Responsibilities:
- Design, build, and maintain scalable data pipelines and solutions using Azure Data Engineering tools.
- Work with large-scale datasets and develop efficient data processing architectures.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Implement data governance, security, and quality frameworks as part of the solution architecture.

Technical Skills Required:
- 4+ years of hands-on experience with Azure Data Engineering tools such as Event Hub, Azure Data Factory, Cosmos DB, Synapse, Azure SQL Database, Databricks, and Azure Data Explorer.
- 3+ years of experience working with Python/PySpark, Spark, Scala, Hive, and Impala.
- Strong SQL and coding skills.
- Familiarity with additional Azure services such as Azure Data Lake Analytics, U-SQL, and Azure SQL Data Warehouse.
- Solid understanding of Modern Data Warehouse architectures, Lambda architecture, and data warehousing principles.

Other Requirements:
- Proficiency in scripting languages (e.g., Shell).
- Strong analytical and organizational abilities.
- Ability to work effectively both independently and in a team environment.
- Experience working in Agile delivery models.
- Awareness of software development best practices.
- Excellent written and verbal communication skills.
- Azure Data Engineer certification is a plus.
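The Lambda architecture this listing mentions combines a batch layer (complete, reprocessable views) with a speed layer (low-latency increments since the last batch run); a serving layer merges the two at query time. A minimal pure-Python sketch of that merge step (the function name and data are illustrative, not from any specific framework):

```python
def merge_serving_view(batch_view, speed_view):
    """Merge precomputed batch-layer counts with speed-layer
    increments that arrived after the last batch run."""
    merged = dict(batch_view)
    for key, delta in speed_view.items():
        merged[key] = merged.get(key, 0) + delta
    return merged

# Batch layer: page-view counts up to the last batch run.
batch_view = {"home": 1000, "pricing": 250}
# Speed layer: increments from events since that run.
speed_view = {"pricing": 5, "signup": 3}

print(merge_serving_view(batch_view, speed_view))
# {'home': 1000, 'pricing': 255, 'signup': 3}
```

In a real deployment the batch view would be rebuilt periodically from the full history, letting the speed layer stay small and disposable.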

Posted 4 weeks ago

Apply

12.0 - 18.0 years

50 - 80 Lacs

Hyderabad

Work from Office


Executive Director - Data Management

Company Overview:
Accordion is a global, private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic work rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and make your mark on our company.

Data & Analytics (Accordion | Data & Analytics):
Accordion's Data & Analytics (D&A) practice in India delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. D&A team members deliver data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Working at Accordion in India means joining 800+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment to transform how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Join us and experience a better way to work!
Location: Hyderabad, Telangana

Role Overview:
Accordion is looking for an experienced Enterprise Data Architect to lead the strategy, design, and implementation of data architectures across all its data management projects. The architect will be part of the technology team and will possess in-depth knowledge of distinct types of data architectures and frameworks, including distributed large-scale implementations. He/she will collaborate closely with the client partnership team to design and recommend robust, scalable data architectures to clients, and will work with engineering teams to implement them in on-premises or cloud-based environments. He/she will be a data evangelist, conducting knowledge-sharing sessions across the company on various data management topics to spread awareness of data architecture principles and improve the overall capabilities of the team. The Enterprise Data Architect will also conduct design review sessions to validate implementations, emphasize and implement best practices, and produce exhaustive documentation in line with the design philosophy. He/she will have excellent communication skills and industry-standard certification in data architecture.

What You Will Do:
- Partner with clients to understand their business and create comprehensive requirements that enable development of the optimal data architecture.
- Translate business requirements into logical and physical designs of databases, data warehouses, and data streams.
- Analyze, plan, and define the data architecture framework, including security, reference data, metadata, and master data.
- Create elaborate data management processes and procedures and consult with Senior Management to share the knowledge.
- Collaborate with client and internal project teams to devise and implement data strategies, build models, and assess stakeholder needs and goals.
- Develop application programming interfaces (APIs) to extract and store data in the most optimal manner.
- Align business requirements with the technical architecture and collaborate with the technical teams for implementation and tracking purposes.
- Research and track the latest developments in the field to maintain expertise in current industry best practices and techniques.

Ideally, You Have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges preferred.
- 12+ years of experience in a related field.
- Experience designing logical and physical data architectures in various RDBMS (SQL Server, Oracle, MySQL, etc.), non-RDBMS (MongoDB, Cassandra, etc.), and Data Warehouse (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.) environments.
- Deep knowledge of and implementation experience with Modern Data Warehouse principles using Kimball and Inmon models or Data Vault, including their application based on data quality requirements.
- In-depth knowledge of at least one cloud platform (AWS, Azure, Google Cloud) for solution design, development, and delivery (mandatory).
- Proven ability to take initiative, be innovative, and drive work through to completion.
- An analytical mind with a strong problem-solving attitude.
- Excellent communication skills, both written and verbal.
- Any Enterprise Data Architect certification is an added advantage.

Why Explore a Career at Accordion:
- High-growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: interesting and challenging work streams across industries and domains that keep you excited, motivated, and on your toes.
- Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: a non-bureaucratic, fun working environment with a strong peer group that will challenge you and accelerate your learning curve.
Other Benefits for Full-Time Employees:
- Health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps, discounted health services (including vision and dental) for employees and family members, and free doctor's consultations and counsellors.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches and company-sponsored team outings and celebrations.
- A robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related needs.
- A reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and benefit initiatives to support personal and professional learning and development.

Posted 4 weeks ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid


Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for: Senior Azure Data Engineer
Domain: BFSI

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You will work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your Responsibilities Will Include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic frontend interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms such as Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

What We're Looking For:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate
Work location: Bangalore (hybrid), Chennai
Open Positions: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com

Posted 4 weeks ago

Apply

7.0 - 12.0 years

10 - 18 Lacs

Bengaluru

Hybrid


Job Goals:
- Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance.
- Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes.
- Oversee the implementation of common data models and data transformation pipelines, ensuring alignment to standards.
- Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers.
- Regularly engage with collaborators to discuss considerations and manage the impact of changes.
- Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations.
- Identify relevant emerging trends and build compelling cases for adoption, such as tool selection.

Ideal Skills & Capabilities:
- A minimum of 6 years of experience in a comparable Data Engineer position.
- Data Engineering Expertise: proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure).
- Technical Proficiency: experience with Azure Data Factory and Databricks, and skill in Python, Apache Spark, or other distributed data programming frameworks.
- Operational Knowledge: in-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills.
- Automation & DevOps: experience using DevOps toolchains for managing CI/CD and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods.
- Data Management Principles: practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques like anonymisation, tokenisation, and pseudonymisation.
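A common building block for the "self-healing and fault-tolerant methods" this listing asks about is automatic retry with exponential backoff around transient failures. A minimal sketch in plain Python (the function names, delays, and the simulated failure are illustrative, not from any specific framework):

```python
import time

def with_retries(task, max_attempts=3, base_delay=0.01):
    """Run `task`, retrying on failure with exponential backoff.

    Re-raises the last exception once attempts are exhausted so the
    orchestrator can mark the pipeline run as failed."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise
            # Back off: 0.01s, 0.02s, 0.04s, ... between attempts.
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a flaky extraction step that succeeds on the third call.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "rows"

print(with_retries(flaky_extract))  # rows
```

Production pipelines typically retry only known-transient errors (timeouts, throttling) and cap total elapsed time rather than attempt count alone.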

Posted 4 weeks ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office


Roles and Responsibilities:
- Review and analyze structured, semi-structured, and unstructured data sources for quality, completeness, and business value.
- Design, architect, implement, and test rapid prototypes that demonstrate the value of the data, and present them to diverse audiences.
- Participate in early-stage design and feature definition activities.
- Implement robust data pipelines using the Microsoft/Databricks stack.
- Create reusable and scalable data pipelines.
- Collaborate with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- Effectively convey complex data insights to non-technical stakeholders.

Critical Skills to Possess:
- Advanced working knowledge of and experience with relational and non-relational databases.
- Advanced working knowledge of and experience with API data providers.
- Experience building and optimizing Big Data pipelines, architectures, and datasets.
- Strong analytic skills related to working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks, utilizing Spark to develop ETL pipelines.
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services.
- Azure DevOps experience deploying data pipelines through CI/CD.

Skills: Azure Databricks, Azure Data Factory, Big Data Pipelines, PySpark, Azure Synapse, Azure DevOps, Azure Data Lake Storage Gen2, Event Hub, Azure DWH, Azure API.

Experience:
- Minimum 5-7 years of practical experience as a Data Engineer.
- In-production experience with the Azure cloud stack.

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience.

Posted 4 weeks ago

Apply

7.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Hybrid


Overall 7 to 9 years of experience in cloud data and analytics platforms such as AWS, Azure, or GCP, including:
- 3+ years of experience with Azure cloud analytical tools (a must).
- 5+ years of experience working with data & analytics concepts such as SQL, ETL, ELT, reporting and report building, data visualization, data lineage, data importing & exporting, and data warehousing.
- 3+ years of experience working with general IT concepts such as integrations, encryption, authentication & authorization, batch processing, real-time processing, CI/CD, and automation.

Advanced knowledge of cloud technologies and services, specifically Azure Data Analytics tools:
- Azure Functions (Compute)
- Azure Blob Storage (Storage)
- Azure Cosmos DB (Databases)
- Azure Synapse Analytics (Databases)
- Azure Data Factory (Analytics)
- Azure Synapse Serverless SQL Pools (Analytics)

Posted 4 weeks ago

Apply

10.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Bengaluru

Hybrid


Excellent communication and interpersonal skills, with the ability to explain technical concepts to non-technical stakeholders. Azure certifications such as Azure Solutions Architect Expert (AZ-30x) or equivalent are preferred. Knowledge of hybrid cloud architecture and migration techniques.

Mandatory Skills:
- Microsoft Fabric
- Azure Data Factory
- Azure Synapse Analytics
- Azure SQL DB
- Azure Functions
- Azure Cosmos DB

Nice to Have:
- .NET experience
- Azure AI skills

Posted 4 weeks ago

Apply

5.0 - 6.0 years

8 - 13 Lacs

Hyderabad

Work from Office


About the Role:
We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team. As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform. You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.

Key Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.

Performance Optimization & Troubleshooting:
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.

Qualifications (Essential):
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
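Delta Lake's "time travel" (mentioned above) works because every committed write produces a new table version, so you can read the table as of an earlier version (in real Delta Lake, e.g. `spark.read.format("delta").option("versionAsOf", 1)`). The idea can be sketched in plain Python with an in-memory versioned table; this is a toy model, not the Delta Lake transaction-log implementation:

```python
class VersionedTable:
    """Toy model of a table where every commit creates a new version."""
    def __init__(self):
        self._versions = [[]]          # version 0 is the empty table

    def commit(self, rows):
        """Append rows atomically, producing a new table version."""
        self._versions.append(self._versions[-1] + list(rows))

    def read(self, version_as_of=None):
        """Read the latest version, or time-travel to an older one."""
        if version_as_of is None:
            version_as_of = len(self._versions) - 1
        return list(self._versions[version_as_of])

t = VersionedTable()
t.commit([{"id": 1}])              # creates version 1
t.commit([{"id": 2}])              # creates version 2
print(t.read())                    # [{'id': 1}, {'id': 2}]
print(t.read(version_as_of=1))     # [{'id': 1}]
```

Real Delta tables store each version as a set of Parquet files plus a JSON transaction log, which is what makes ACID commits and as-of reads cheap.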

Posted 4 weeks ago

Apply

3.0 - 4.0 years

10 - 20 Lacs

Hyderabad

Remote


Experience Required: 3 to 4 years
Mode of work: Remote
Skills Required: Azure Databricks, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: immediate joiners; permanent/contract role (can join by June 15th)

- 3 to 4+ years of experience with Big Data technologies.
- Experience with Databricks is a must, with Python scripting and SQL knowledge.
- Strong knowledge of and experience with the Microsoft Azure cloud platform.
- Proficiency in SQL and experience with SQL-based database systems.
- Experience with batch and data streaming.
- Hands-on experience with Azure data services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience using Azure Databricks in real-world scenarios is preferred.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Strong analytical and problem-solving skills.
- Good understanding of data engineering principles and best practices.
- Experience with programming languages such as PySpark/Python.
- Relevant certifications in Azure data services or data engineering are a plus.

Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
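The "batch" side of the batch-vs-streaming experience asked for above usually centers on incremental loads: each run picks up only rows changed since a persisted high watermark. A minimal pure-Python sketch of that pattern (the row shape and timestamps are invented for illustration):

```python
def incremental_extract(rows, last_watermark):
    """Select only rows modified since the previous batch run and
    return them with the new high watermark to persist."""
    new_rows = [r for r in rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in new_rows),
                        default=last_watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified_at": 10},
    {"id": 2, "modified_at": 20},
    {"id": 3, "modified_at": 30},
]
batch, wm = incremental_extract(source, last_watermark=15)
print([r["id"] for r in batch], wm)   # [2, 3] 30
```

In ADF or Databricks the watermark would live in a control table or pipeline variable; the key invariant is that it is only advanced after the batch lands successfully.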

Posted 4 weeks ago

Apply

3.0 - 7.0 years

20 - 25 Lacs

Bengaluru

Work from Office


Key Skills: Azure Synapse, Azure Databricks, Azure, Azure DevOps, Azure AI, Azure API, Azure AD, PL/SQL

Roles and Responsibilities:
- Design and Develop Data Pipelines: build and maintain scalable data pipelines using Azure Data Factory, ensuring efficient and reliable data movement and transformation.
- File-Based Data Management: handle data ingestion and management from various file sources, including CSV, JSON, and Parquet formats, ensuring data accuracy and consistency.
- ETL Implementation: implement and optimize ETL (Extract, Transform, Load) processes using tools such as Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Cloud Storage Management: work with Azure Data Lake Storage to manage and utilize cloud storage solutions, ensuring data is securely stored and easily accessible.
- Automation with Data Factory: leverage Azure Data Factory's automation capabilities to schedule and monitor data workflows, ensuring timely execution and error-free operations.
- Performance Monitoring: continuously monitor and optimize data pipeline performance, troubleshoot issues, and implement best practices to enhance efficiency.
- Team Collaboration: collaborate with Technical Architects, Business Analysts, and other engineers to build scalable and reliable end-to-end data solutions for reporting and analytics.
- DevOps Framework: define and implement a DevOps framework using CI/CD pipelines.
- SQL Development: write efficient, clean, and well-documented SQL queries for data extraction, manipulation, and analysis.
- SQL Performance Optimization: optimize the performance of SQL-based queries, stored procedures, and jobs in Azure environments.
- Data Security & Compliance: implement data security best practices and ensure compliance with data privacy regulations (HIPAA, etc.).
- Technical Leadership: provide technical leadership and mentoring to junior engineers and team members.
- Technology Adoption: stay current with emerging Azure technologies and trends, recommending improvements to existing systems and solutions.

Skills Required:
- Strong expertise in Data Analytics for analyzing and interpreting large datasets.
- Proficiency in Azure Boards/GitHub for managing project tasks and source code version control.
- Extensive experience with Azure Data Factory for building and managing scalable data pipelines.
- In-depth knowledge of Azure Data Lake for managing cloud storage solutions and data access.
- Hands-on experience with Azure Synapse for data integration and analytics solutions.
- Proficiency in Azure DevOps for implementing CI/CD pipelines and automating deployments.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related technical field.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

5 - 13 Lacs

Chennai

Work from Office


Roles and Responsibilities:
- Design, develop, test, deploy, and maintain Azure Data Factory (ADF) pipelines for data integration.
- Collaborate with cross-functional teams to gather requirements and design solutions using ADF.
- Develop complex data transformations using SQL Server Integration Services (SSIS), DDL/DML statements, and other tools.
- Troubleshoot issues related to pipeline failures or errors in pipeline execution.
- Optimize pipeline performance by analyzing logs, identifying bottlenecks, and implementing improvements.
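The last responsibility, analyzing run logs to find bottlenecks, can be illustrated with a small helper that ranks pipeline activities by elapsed time. This is a plain-Python sketch; the log format and activity names are hypothetical, not ADF's actual run output:

```python
def slowest_activities(run_log, top_n=2):
    """Rank activities by elapsed seconds, slowest first."""
    timed = [(a["name"], a["end_s"] - a["start_s"]) for a in run_log]
    return sorted(timed, key=lambda t: t[1], reverse=True)[:top_n]

# One pipeline run, flattened to (start, end) seconds per activity.
run_log = [
    {"name": "CopyRawFiles",   "start_s": 0,    "end_s": 120},
    {"name": "TransformSales", "start_s": 120,  "end_s": 1020},
    {"name": "PublishMart",    "start_s": 1020, "end_s": 1080},
]
print(slowest_activities(run_log))
# [('TransformSales', 900), ('CopyRawFiles', 120)]
```

In practice the same ranking over many runs (e.g. pulled from ADF's monitoring views) separates one-off incidents from activities that are consistently the bottleneck.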

Posted 4 weeks ago

Apply

8.0 - 13.0 years

16 - 27 Lacs

Hyderabad

Remote


Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai (preferred: Hyderabad)

- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python and PySpark or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Takes part in Proofs of Concept (POCs) and pilot solution preparation.
- Able to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.
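Data profiling, as asked for above, typically starts with per-column statistics such as null counts and distinct counts before any mapping or modelling. A minimal pure-Python sketch (the column names and rows are invented for illustration):

```python
def profile(rows):
    """Compute null and distinct counts per column across all rows."""
    columns = {k for row in rows for k in row}
    report = {}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        report[col] = {
            "nulls": sum(v is None for v in values),
            "distinct": len({v for v in values if v is not None}),
        }
    return report

rows = [
    {"customer_id": 1, "country": "IN"},
    {"customer_id": 2, "country": None},
    {"customer_id": 3, "country": "IN"},
]
print(profile(rows))
# {'country': {'nulls': 1, 'distinct': 1},
#  'customer_id': {'nulls': 0, 'distinct': 3}}
```

The same counts at Spark scale come from aggregations over the DataFrame rather than Python loops, but the output feeding a data catalog looks much the same.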

Posted 4 weeks ago

Apply

8.0 - 13.0 years

16 - 27 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office


Kanerika Inc. is a premier global software products and services firm specializing in innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI/ML, GenAI/LLM, and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Designation: Lead Data Engineer
Location: Hyderabad, Indore, Ahmedabad
Experience: 8 years

What You Will Do:
- Analyze business requirements.
- Analyze the data model, perform GAP analysis against business requirements and Power BI, and design and model the Power BI schema.
- Transform data in Power BI/SQL/ETL tools.
- Create DAX formulas, reports, and dashboards.
- Write SQL queries and stored procedures.
- Design effective Power BI solutions based on business requirements.
- Manage a team of Power BI developers and guide their work.
- Integrate data from various sources into Power BI for analysis.
- Optimize the performance of reports and dashboards for smooth usage.
- Collaborate with stakeholders to align Power BI projects with goals.
- Knowledge of Data Warehousing (must); Data Engineering is a plus.

What We Need:
- B.Tech in computer science or equivalent.
- Minimum 5+ years of relevant experience.

Posted 1 month ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Hi, we are excited to announce that #LTIMindtree is currently recruiting #DataEngineers!

Roles Available:
- Specialist - Data Engineering: 5 to 8 years of experience
- Senior Specialist - Data Engineering: 8 to 12 years of experience

Location: Bangalore, Pune, Mumbai, Kolkata, Hyderabad, Chennai, and Delhi NCR
Work Mode: Hybrid
Notice period: up to 60 days
Link to share your details: https://lnkd.in/daty4F25

Job Summary:
We are seeking an experienced and strategic Data Engineer to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
- Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
- Architect and implement data governance frameworks using Unity Catalog and related tools.
- Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
- Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
- Optimize queries and data structures for performance and cost-efficiency.
- Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
- Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
- Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
- Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
- Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Data Architect or Senior Data Engineer.
- Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
- Hands-on experience with data governance, security frameworks, and catalog management.
- Proficiency in cloud platforms (preferably Azure).
- Experience with CI/CD tools and version control systems like GitHub.
- Strong communication and collaboration skills.
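The Medallion Architecture named above stages data through Bronze (raw, as ingested), Silver (cleansed and conformed), and Gold (aggregated, business-ready) layers. A toy pure-Python version of those three hops (real implementations would be PySpark jobs over Delta tables; the records here are invented):

```python
# Bronze: raw ingested records, untouched (including bad rows).
bronze = [
    {"order_id": "1", "amount": "10.0", "country": "in"},
    {"order_id": "2", "amount": "not-a-number", "country": "IN"},
    {"order_id": "3", "amount": "5.5", "country": "US"},
]

def to_silver(rows):
    """Cleanse and conform: drop unparseable rows, normalize types."""
    silver = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine this row
        silver.append({"order_id": int(r["order_id"]),
                       "amount": amount,
                       "country": r["country"].upper()})
    return silver

def to_gold(rows):
    """Aggregate into a business-ready view: revenue per country."""
    gold = {}
    for r in rows:
        gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]
    return gold

print(to_gold(to_silver(bronze)))   # {'IN': 10.0, 'US': 5.5}
```

Keeping Bronze immutable is the point of the pattern: Silver and Gold can always be rebuilt from it when cleansing rules or aggregations change.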

Posted 1 month ago

Apply

7.0 - 11.0 years

15 - 20 Lacs

Mumbai

Work from Office


This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.

- Data Architecture: develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
- Business Intelligence: design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
- Data Engineering: work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
- Data Governance: establish and enforce data governance policies and standards.

Primary Skills and Experience:
- 15+ years of relevant experience in data warehousing, BI, and data governance.
- A proven track record of delivering successful data solutions on the Microsoft stack.
- Experience working with diverse teams and stakeholders.

Required Technical Skills:
- Strong proficiency in data warehousing concepts and methodologies.
- Expertise in Microsoft Power BI.
- Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Knowledge of SQL and scripting languages (Python, PowerShell).
- Strong understanding of data modeling and ETL/ELT processes.

Secondary (Soft) Skills:
- Excellent communication and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Ability to work independently and as part of a team.
- Strong attention to detail and organizational skills.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid


Greetings from Tech Mahindra!

With reference to your profile on the Naukri portal, we are contacting you to share a job opportunity for the role of SQL Developer with our organization, Tech Mahindra.

Company Profile:
Tech Mahindra is an Indian multinational information technology services and consulting company. Website: www.techmahindra.com

Job Details:
- Experience: 5+ years
- Education: Any
- Work timings: normal shift
- Mode: Hybrid (open to all locations)
- Working days: 5 days

Required Skills:
- 5+ years of experience with SQL DB/Server, including building SQL databases, database design, data modelling, and data warehousing.
- Strong experience creating complex stored procedures, functions, and dynamic SQL.
- Strong experience in performance tuning activities.
- Must have experience with Azure Data Factory V2, Azure Synapse, Azure Databricks, and SSIS.
- Strong Azure SQL Database and Azure SQL Data Warehouse concepts.
- Strong verbal and written communication skills.

Interested candidates, please forward your updated resume with the below details to ps00874998@techmahindra.com:
- Total years of experience:
- Relevant experience as an SQL developer:
- Relevant experience in Azure Data Factory:
- Relevant experience in Azure Databricks:
- Offer amount (if holding any offer):
- Location of offer:
- Reason for looking for another offer:
- Notice period (LWD if serving):
- Current location:
- Preferred location:
- CTC:
- Expected CTC:
- When are you available for the interview? (time/date):
- How soon can you join?

Best Regards,
Prerna Sharma
Business Associate | RMG
Tech Mahindra | PS00874998@TechMahindra.com
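The dynamic SQL experience asked for above usually comes down to one safety rule: identifiers (table/column names) are chosen from a whitelist, while values go through driver placeholders, so user input never lands in the query string. A small sketch in plain Python using sqlite3's standard `?` placeholders (the table and columns are hypothetical):

```python
import sqlite3

ALLOWED_SORT_COLUMNS = {"name", "created_at"}   # identifier whitelist

def build_query(sort_by):
    """Build a SELECT dynamically but safely: the identifier comes
    from a whitelist, so injection via `sort_by` is impossible."""
    if sort_by not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {sort_by}")
    return f"SELECT id, name FROM customers ORDER BY {sort_by}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, created_at TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(2, "Beta", "2024-02-01"), (1, "Alpha", "2024-01-01")])

rows = conn.execute(build_query("name")).fetchall()
print(rows)   # [(1, 'Alpha'), (2, 'Beta')]
```

The same whitelist-plus-placeholders discipline applies to `sp_executesql` with parameters in SQL Server stored procedures.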

Posted 1 month ago

Apply

4.0 - 6.0 years

10 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

Primary Responsibilities:
- Interact closely with business stakeholders to understand their business requirements and convert them into opportunities.
- Lead POCs to create breakthrough technical solutions, performing exploratory and targeted data analyses.
- Manage and support existing applications, implementing best practices in a timely manner.
- Analyze results to generate actionable insights and present findings to business users for informed decision making.
- Understand business requirements and develop dashboards to meet business needs.
- Adapt to changing business requirements and support the development and implementation of best-known methods for data analytics.
- Perform data mining that provides actionable data in response to changing business requirements.
- Migrate data into standardized platforms (Power BI) and build critical data models to improve process performance and product quality.
- Own the technical implementation and documentation associated with datasets.
- Provide updates on project progress, perform root cause analysis on completed projects, and work on identified improvement areas (process, product quality, performance, etc.).
- Provide post-implementation support and ensure target project benefits are delivered in a robust and sustainable fashion.
- Build relationships and partner effectively with cross-functional teams to ensure available data is accurate, consistent, and timely.

Mandatory Skills:
- Knowledge of the software development lifecycle; expert in translating business requirements into technical solutions; fanatical about quality, usability, security, and scalability.
- Specialist in Power Platform (Power Apps & Power Automate).
- Experience with JavaScript, Power Fx, plugin creation, custom apps (canvas/model-driven), page development, web API creation, data/cloud flows, and Dataverse.
- Expert in report and dashboard development (Power BI).
- Knowledge of SAP systems (SAP ECC T-codes & navigation).
- Experience in database development, troubleshooting, and problem solving (SQL Server, SAP HANA, Azure Synapse).
- Experience in project requirements gathering and converting business requirements into analytical and technical specs.
- Good understanding of business processes and experience in Manufacturing/Inventory Management domains.
- Knowledge of performing root cause analysis and corrective actions.
- Excellent verbal and written communication and presentation skills; able to communicate cross-functionally.

Eligibility Criteria:
- Years of experience: minimum 5-7 years.
- Job experience: expert with Power Platform (Power Apps, Power Automate & Power BI); experience with database and data warehouse technologies (Azure Synapse/SQL Server/SAP HANA); data analysis/data profiling/data visualization.

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Pune

Work from Office

Naukri logo

Work Mode: Full-time, office-based

Job Summary
Transform raw data into compelling stories that drive business decisions. You will design, build, and optimize interactive dashboards and reports with Power BI, partner with business stakeholders to define KPIs and data models, and ensure every analytic deliverable meets enterprise reporting standards for accuracy, usability, and performance.

Key Responsibilities
- Collaborate with business teams to gather requirements, identify key performance indicators, and translate them into intuitive Power BI reports and dashboards.
- Build robust semantic models (defining star/snowflake schemas, measures, and calculated tables) to support self-service analytics.
- Develop advanced DAX calculations, optimized queries, and dynamic visual interactions that deliver near-real-time insights.
- Continuously tune data models, visuals, and refresh schedules to maximise performance and minimise cost.
- Establish and maintain report governance standards (naming conventions, documentation, version control, and accessibility compliance).
- Mentor analysts and citizen developers on Power BI best practices and storytelling techniques.
- Partner with data engineering teams to validate data quality, source new data sets, and enhance the analytics pipeline.

Must-Have Skills
- 6-8 years in BI/reporting roles, with 3+ years of hands-on Power BI design and development experience.
- Expertise in data modelling concepts (star/snowflake, slowly changing dimensions) and strong command of DAX and Power Query (M).
- Proven ability to translate complex business needs into intuitive KPIs, visuals, and interactive drill-downs.
- Solid SQL skills and familiarity with data warehouse/ETL processes (Azure Synapse, Snowflake, or similar).
- Experience optimising report performance: query folding, aggregation tables, incremental refresh, composite models, etc.
- Strong understanding of data visualisation best practices, UX design, and storytelling principles.
- Excellent stakeholder management, requirements gathering, and presentation abilities.

Preferred Certifications
- Microsoft Certified: Power BI Data Analyst Associate (PL-300)
- Microsoft Certified: Azure Enterprise Data Analyst Associate (DP-500)
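The aggregation-table optimisation named in the skills above can be sketched outside Power BI: precompute a coarse summary once, then answer repeated queries from the small summary instead of rescanning the detail rows. This is a plain-Python illustration with hypothetical data and names, not Power BI's actual implementation:

```python
from collections import defaultdict

# Detail rows: (product, units). In Power BI, an aggregation table
# plays the same role: queries are answered from the small summary
# whenever it can satisfy them, instead of scanning the detail.
detail = [("widget", 3), ("widget", 5), ("gadget", 2), ("gadget", 4)]

# Build the aggregation table once, up front.
agg = defaultdict(int)
for product, units in detail:
    agg[product] += units

def total_units(product):
    # Served from the precomputed aggregate, not a detail scan.
    return agg[product]

print(total_units("widget"))  # 8
```

The trade-off is the same as in a composite model: faster queries against the summary in exchange for keeping the aggregate in sync when detail rows change, which is where incremental refresh comes in.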

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Chennai, Bengaluru

Hybrid

Naukri logo

Hiring for Big Data Lead
- Experience: 6-12+ yrs
- Work location: Chennai and Bangalore
- Shift timings: 12:30 pm - 9:30 pm
- Work mode: 5 days WFO
- Primary skills: Azure, Databricks, ADF, PySpark, SQL

Must Have
- 6+ years of IT experience in data warehousing and ETL
- Hands-on data experience with cloud technologies on Azure: ADF, Synapse, PySpark/Python
- Ability to understand design and source-to-target mapping (STTM) and create specification documents
- Flexibility to operate from client office locations
- Able to mentor and guide junior resources, as needed

Nice to Have
- Any relevant certifications
- Banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail

Kindly share the following details:
- Updated CV
- Relevant skills
- Total experience
- Current company
- Current CTC
- Expected CTC
- Notice period
- Current location
- Preferred location

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtimes
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with Big Data components such as Kafka, Spark SQL, DataFrames, and Hive implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Reading and writing data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
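A pattern commonly behind the ADF ingestion bullets above is watermark-based incremental copy: the pipeline copies only rows modified since the last run, then advances the stored watermark. A minimal stand-alone sketch in Python, where the row list and watermark value are hypothetical stand-ins for a source table and a watermark-tracking table:

```python
def incremental_load(source_rows, watermark):
    """Copy only rows newer than the last watermark, then advance it.

    source_rows: iterable of (modified_ts, payload) tuples.
    Returns (copied_rows, new_watermark).
    """
    new_rows = [r for r in source_rows if r[0] > watermark]
    new_watermark = max((r[0] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [(1, "a"), (2, "b"), (3, "c")]
copied, wm = incremental_load(rows, watermark=1)
print(copied, wm)  # [(2, 'b'), (3, 'c')] 3
```

In ADF this shape typically maps to a Lookup activity that reads the old watermark, a parameterised Copy activity filtered on the modified timestamp, and a Stored Procedure activity that writes the new watermark back.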

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DW, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse.
- Spark on Azure, as available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.

Posted 1 month ago

Apply

4.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

About The Role:
- Minimum 4 years of experience in the relevant field.
- Hands-on experience with Databricks, SQL, Azure Data Factory, and Azure DevOps.
- Strong expertise in Microsoft Azure cloud platform services (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, Azure Synapse Analytics).
- Proficient in CI/CD pipelines in Azure DevOps for automated deployments.
- Good knowledge of performance optimization techniques such as temp tables, CTEs, indexing, merge statements, and joins.
- Familiarity with advanced SQL and programming skills (e.g., Python, PySpark).
- Familiarity with data warehousing and data modelling concepts.
- Good knowledge of data management and deployment processes using Azure Data Factory, Databricks, and Azure DevOps.
- Knowledge of integrating Azure services with DevOps.
- Experience in designing and implementing scalable data architectures.
- Proficient in ETL processes and tools.
- Strong communication and collaboration skills.
- Certifications in relevant Azure technologies are a plus.

Location: Bangalore/Hyderabad

Posted 1 month ago

Apply

5.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Naukri logo

About the Role
- We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team.
- As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform.
- You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.

Key Responsibilities
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehouses and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.

Performance Optimization & Troubleshooting
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.

Qualifications (Essential)
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
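The Delta Lake work above centres on MERGE-style upserts: rows that match an existing key update it, the rest are inserted. The semantics can be sketched in plain Python with hypothetical keys and rows; real Delta adds ACID guarantees, concurrency control, and time travel on top of this basic shape:

```python
def merge_upsert(target, updates):
    """Merge updates into target keyed by id: update matches, insert the rest.

    target, updates: dicts mapping id -> row dict.
    Returns a new dict; the original target is left untouched,
    loosely mirroring Delta's copy-on-write file layout.
    """
    merged = dict(target)
    merged.update(updates)
    return merged

target = {1: {"name": "alice"}, 2: {"name": "bob"}}
updates = {2: {"name": "bobby"}, 3: {"name": "carol"}}
print(merge_upsert(target, updates))
```

In Spark SQL the equivalent is a `MERGE INTO target USING updates ON target.id = updates.id` statement with `WHEN MATCHED THEN UPDATE` and `WHEN NOT MATCHED THEN INSERT` clauses.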

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Noida, Hyderabad

Work from Office

Naukri logo

Azure Data Factory, Azure Databricks, SQL, PySpark, Python, Synapse

Posted 1 month ago

Apply

8 - 12 years

18 - 25 Lacs

Pune

Work from Office

Naukri logo

We are looking for an experienced Tech Lead with a deep understanding of the Microsoft data technology stack. The candidate should have 8-10 years of professional experience, proven leadership skills, and the ability to manage and mentor a team of 5 to 8 people.

Preferred candidate profile
- Experience: 8-10 years in the Data and Analytics domain with expertise in the Microsoft data tech stack.
- Leadership: experience in managing teams of 8-10 members.
- Technical skills: expertise in tools such as Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory, Power BI, SQL Server, and Azure Databricks; strong understanding of data architecture, pipelines, and governance. Understanding of another data platform such as Snowflake, Google BigQuery, or Amazon Redshift is a plus.
- Tech stack: DBT and Databricks or Snowflake; Microsoft BI (Power BI, Synapse, and Fabric).
- Project management: proficiency in project management methodologies (Agile, Scrum, or Waterfall).
- Communication: excellent interpersonal, written, and verbal communication skills.
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Interested candidates can forward your profile to karthik@busisol.net or WhatsApp @ 9791876677.

Posted 1 month ago

Apply