
378 Azure Synapse Jobs - Page 7

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Job Title: DD&IT Delivery Lead. Department: Digital, Data & IT, Novo Nordisk India Pvt Ltd. Are you passionate about delivering cutting-edge digital and data-driven technology solutions? Do you thrive at the intersection of technology and business, and have a knack for leading complex IT projects? If so, we have an exciting opportunity for you! Join Novo Nordisk as a Delivery Lead in our Digital, Data & IT (DDIT) team in Bangalore, India, and help us shape the future of healthcare. Read on and apply today for a life-changing career. The position: As a Delivery Lead, Digital, Data & IT, you will: Lead the full lifecycle of IT projects, from initiation and planning to execution, deployment, and post-go-live support. Define and manage project scope, timelines, budgets, and resources using Agile or hybrid methodologies, with Agile preferred. Drive sprint planning, backlog grooming, and release management in collaboration with product owners and scrum teams. Conduct architecture and solution design reviews to ensure scalability and alignment with enterprise standards. Provide hands-on guidance on solution design, data modelling, API integration, and system interoperability. Ensure compliance with IT security policies and data privacy regulations, including GDPR and local requirements. Act as the primary point of contact for business stakeholders, translating business needs into technical deliverables. Facilitate workshops and design sessions with cross-functional teams, including marketing, sales, medical, and analytics. Manage vendor relationships, ensuring contract compliance, SLA adherence, and performance reviews. Qualifications: We are looking for an experienced professional who meets the following criteria: Bachelor's degree in Computer Science, Information Technology, or a related field, or an MBA/postgraduate degree, with a minimum of 3 years of relevant experience. 6-8 years of experience in IT project delivery, with at least 3 years in a technical leadership or delivery management role. Proven experience in CRM platforms (e.g., Veeva, Salesforce), omnichannel orchestration tools, and patient engagement platforms. Proven experience in the commercial side of the business is required. Experience with data lakes and analytics platforms (e.g., Azure Synapse, Power BI) and mobile/web applications for field force enablement. Certifications in project management (PMP, PRINCE2) or Agile (Scrum Master, SAFe) are good to have. Relevant experience in managing projects can also be considered. Experience with IT governance models and technical documentation for best practices. Exposure to data privacy tools and frameworks. Familiarity with data and IT security best practices. About the department: The DDIT department is located at our headquarters, where we manage projects and programs related to business requirements and specialized technical areas. Our team is dedicated to planning, organizing, and controlling resources to achieve project objectives. We foster a dynamic and innovative atmosphere, driving the adoption of Agile processes and best practices across the organization.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Lead Data Engineer - Data Management. Job description: Company Overview: Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey. Data & Analytics (Accordion | Data & Analytics): Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more. Location: Hyderabad. Role Overview: Accordion is looking for a Lead Data Engineer. He/she will be responsible for the design, development, configuration/deployment, and maintenance of the technology stack described below. He/she must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve the performance of both on-premises and cloud-based architectures. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices, in a Business Intelligence and Data Warehousing environment. He/she should have strong organizational, critical thinking, and communication skills. What you will do: Partner with clients to understand their business and create comprehensive business requirements. Develop an end-to-end Business Intelligence framework based on requirements, including recommending appropriate architecture (on-premises or cloud), analytics, and reporting. Work closely with the business and technology teams to guide solution development and implementation. Work closely with the business teams to arrive at methodologies to develop KPIs and metrics. Work with the Project Manager to develop and execute project plans within the assigned schedule and timeline. Develop standard reports and functional dashboards based on business requirements. Conduct training programs and knowledge transfer sessions for junior developers when needed. Recommend improvements to provide optimum reporting solutions. Curiosity to learn new tools and technologies to provide futuristic solutions for clients. Ideally, you have: An undergraduate degree (B.E/B.Tech.) from a tier-1/tier-2 college (preferred). More than 5 years of experience in a related field. Proven expertise in SSIS, SSAS, and SSRS (MSBI Suite). In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.). In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.). Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight). Proven ability to take initiative and be innovative. Analytical mind with a problem-solving attitude. Why explore a career at Accordion: High-growth environment: Semi-annual performance management and promotion cycles coupled with a strong meritocratic culture enable a fast track to leadership responsibility. Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes. Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities. Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve. Other benefits for full-time employees: Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc. Corporate meal card options for ease of use and tax benefits. Team lunches, company-sponsored team outings, and celebrations. Cab reimbursement for women employees beyond a certain time of the day. Robust leave policy to support work-life balance. Specially designed leave structure to support women employees for maternity and related requests. Reward and recognition platform to celebrate professional and personal milestones. A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Ahmedabad

Work from Office

Design, develop, and maintain data pipelines and ETL processes using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. Experience with SQL, Python, or other scripting languages. Required candidate profile: ETL design; big data tools such as Hadoop or Spark; 3+ years of experience in data engineering with a focus on Azure cloud services; experience working with Azure cloud services and designing data solutions.
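For illustration, a minimal PySpark sketch of the kind of ETL step this listing describes: a Databricks job that an Azure Data Factory pipeline might invoke to read raw files, clean them, and write a Delta table. The storage paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Hypothetical raw and curated zones in ADLS Gen2.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_delta/"

# Extract: read raw CSV files landed by an upstream ADF copy activity.
orders = spark.read.option("header", True).csv(raw_path)

# Transform: basic typing, deduplication, and filtering.
cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
)

# Load: write a Delta table that Synapse serverless SQL or Power BI can query.
cleaned.write.format("delta").mode("overwrite").save(curated_path)
```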

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Gurugram

Work from Office

We Are Hiring! Sr. Azure Data Engineer at GSPANN Technologies - 5+ years of experience. Location: Hyderabad, Gurgaon. Key Skills & Experience: Azure Synapse Analytics; Azure Data Factory (ADF); PySpark; Databricks; expertise in developing and maintaining stored procedures; proven experience in designing and implementing scalable data solutions in Azure. Preferred Qualifications: Minimum 6 years of hands-on experience working with Azure Data Services; strong analytical and problem-solving skills; excellent communication skills, both verbal and written; ability to collaborate effectively in a fast-paced, cross-functional environment. Immediate Joiners Only: We are looking for professionals who can join immediately and contribute to dynamic projects. Application Process: If you are ready to take the next step in your career and be a part of a leading IT services company, please send your updated CV to heena.ruchwani@gspann.com. Join GSPANN Technologies and accelerate your career with exciting opportunities in data engineering!

Posted 1 month ago

Apply

10.0 - 20.0 years

30 - 45 Lacs

Hyderabad, Jaipur

Hybrid

Job Title: Data Architect - AI & Azure - Lead & Coach Teams. Shift Timings: 12 PM - 9 PM. Location: Jaipur, hybrid. Experience required: 10 to 20 years. Total Experience: 5+ years in data architecture and implementation. Pre-Sales Experience: Minimum 1 year in a client-facing pre-sales or technical solutioning role is mandatory. Must-Have Skills & Qualifications: Technical Expertise: In-depth knowledge of the Microsoft Azure data platform (Azure Synapse Analytics, Azure Data Factory, Azure SQL, Azure Data Lake Storage). Modern Data Platforms: Hands-on experience with Databricks and/or Snowflake. AI Acumen: Strong understanding of AI workflows and data requirements; must have a solid grasp of Gen AI applications and concepts. Leadership: Experience in mentoring, coaching, or leading technical teams or project initiation phases. Solutioning: Proven ability to create high-quality technical proposals, respond to RFPs, and design end-to-end data solutions. Communication: Exceptional English communication and presentation skills are essential for this client-facing role. If interested, please share your resume at shivam.gaurav@programmers.io

Posted 1 month ago

Apply

10.0 - 14.0 years

15 - 30 Lacs

Hyderabad, Ahmedabad

Hybrid

Experience: 9+ years. Location: Hyderabad. Job type: Permanent. Role & responsibilities: We are looking for a Lead Data Engineer to design, develop, and maintain data pipelines and ETL workflows for processing large-scale structured and unstructured data. The ideal candidate will have expertise in Azure Data Services (Azure Data Factory, Synapse, Databricks, SQL, SSIS, and Data Lake), along with big data processing, real-time analytics, cloud data integration, and team-leading experience. Key Responsibilities: 1. Data Pipeline Development & ETL/ELT: Design and build scalable data pipelines using Azure Data Factory, Synapse Pipelines, Databricks, SSIS, and ADF connectors such as Salesforce. Implement ETL/ELT workflows for structured and unstructured data processing. Optimize data ingestion, transformation, and storage strategies. 2. Cloud Data Architecture & Integration: Develop data integration solutions for ingesting data from multiple sources (APIs, databases, streaming data). Work with Azure Data Lake, Azure Blob Storage, and Delta Lake for data storage and processing. 3. Database Management & Optimization: Design and maintain cloud databases (Azure Synapse, BigQuery, Cosmos DB). Optimize SQL queries and indexing strategies for performance. Implement data partitioning, compression, and caching for efficiency. 4. Collaboration & Documentation: Document data models, pipeline architectures, and data workflows. Immediate joiners are preferred.
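As a rough sketch of the ingestion and partitioning strategies mentioned above, the following PySpark snippet shows a simple incremental load into a partitioned Delta table; the paths, the modified_ts watermark column, and the table layout are assumptions made for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

landing_path = "abfss://landing@examplelake.dfs.core.windows.net/orders/"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_delta/"

# Find the high-water mark from the existing target (None on the first run).
try:
    last_loaded = (spark.read.format("delta").load(target_path)
                   .agg(F.max("modified_ts")).first()[0])
except Exception:
    last_loaded = None

incoming = spark.read.parquet(landing_path)
if last_loaded is not None:
    incoming = incoming.filter(F.col("modified_ts") > F.lit(last_loaded))

# Append only the new slice, partitioned by date so downstream queries prune files.
(incoming.withColumn("load_date", F.to_date("modified_ts"))
    .write.format("delta").mode("append").partitionBy("load_date")
    .save(target_path))
```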

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role: Azure DevOps. Experience: 2+ years. Location: Pan India.

Posted 1 month ago

Apply

9.0 - 14.0 years

27 - 40 Lacs

Hyderabad

Remote

Experience Required: 8+ years. Mode of work: Remote. Skills Required: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark. Notice Period: Immediate joiners; permanent/contract role (can join by 14th July 2025). Responsibilities: Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks. Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights. Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks. Ensure data quality, integrity, and security throughout all stages of the data lifecycle. Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions. Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features. Provide technical guidance and expertise to junior data engineers and developers. Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering. Contribute to the continuous improvement of data engineering processes, tools, and best practices. Requirements: Bachelor's or master's degree in computer science, engineering, or a related field. 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions. Mandatory skills: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark. Strong experience with cloud platforms such as Azure or AWS. Proficiency in Apache Spark and Databricks for large-scale data processing and analytics. Experience in designing and implementing data processing pipelines using Spark and Databricks. Strong knowledge of SQL and experience with relational and NoSQL databases. Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services. Good understanding of data modelling and schema design principles. Experience with data governance and compliance frameworks. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills to work effectively in a cross-functional team. Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
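The Event Hubs/Kafka streaming stack named in this listing can be illustrated with a short Structured Streaming sketch; the namespace, topic, schema, and paths are hypothetical, and the SASL credentials that Event Hubs' Kafka-compatible endpoint requires are omitted.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("eventhub_stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("ts", TimestampType()),
])

# Event Hubs exposes a Kafka-compatible endpoint, so Spark's Kafka source can
# consume from it. SASL username/password settings are omitted for brevity.
stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")
    .option("kafka.security.protocol", "SASL_SSL")
    .load())

parsed = (stream
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Land the parsed stream in a Delta table; the checkpoint makes the sink restartable.
query = (parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/telemetry")
    .outputMode("append")
    .start("/tmp/delta/telemetry"))
```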

Posted 1 month ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Kolkata, Bengaluru, Delhi / NCR

Work from Office

Minimum 6 years of experience in building ETL pipelines using Azure Data Factory and Azure Synapse. Minimum 6 years of ETL development using PL/SQL. Exposure to Azure Databricks is an added advantage. Above-average communication skills. Individual contributor (IC) role. Can join in 2-3 weeks. Required candidate profile: Expert in SSIS and Azure Synapse ETL development. Significant experience developing in Python or PySpark. Significant experience in database and data storage platforms, including Azure Data Lake.

Posted 1 month ago

Apply

3.0 - 8.0 years

6 - 16 Lacs

Noida, Kolkata, Bengaluru

Work from Office

Job Description: Your Role and Responsibilities: Responsible for implementing a robust data estate using the Microsoft Azure stack. Responsible for creating reusable and scalable data pipelines. Responsible for developing and deploying Big Data BI solutions across industries and functions. Responsible for the development and deployment of new data platforms. Responsible for creating reusable components for rapid development of the data platform. Responsible for creating high-level architecture and data models. Responsible for guiding and mentoring the development teams. Responsible for taking data platform projects through UAT and go-live. Play an active role in team meetings and workshops with clients. Required Technical and Professional Expertise: Minimum of 4+ years of experience in Data Warehousing with Big Data or Cloud. Graduate degree in computer science or a relevant subject. Good software engineering principles. Strong in SQL queries and data models. Good understanding of OLAP and OLTP concepts and implementations. Experience working with Azure services such as Azure Data Factory, Azure Functions, Azure SQL, Azure Databricks, Azure Data Lake, Synapse Analytics, etc. Knowledge of PowerShell is good to have. Experience working in an Agile delivery model. Knowledge of Big Data technologies, such as Spark and Hadoop/MapReduce, is good to have. Knowledge and practical experience of cloud-based platforms and their ML/DL offerings (such as Google GCP, AWS, and Azure) would be advantageous. Understanding of infrastructure (including hosting, container-based deployments, and storage architectures) would be an advantage. Preferred Technical and Professional Experience: 6+ years of experience in ELT/data integration for Data Warehousing, Data Lakes, and Business Intelligence. Expertise in data storage, ETL/ELT, and data analytics tools and technologies. Proven hands-on experience in designing, modelling, testing, and optimizing data warehousing and data lake solutions. Proficiency in SQL and Spark-based data processing. Experience with Azure cloud big data technologies such as Azure Data Factory, Azure Databricks, Fabric, Synapse, ADLS Gen2, etc. Strong understanding of data and analytics architecture. Experience working with Agile methodologies (Scrum, Kanban) and in large transformational programs. Ability to document architecture and present solutions effectively. Strong data modelling skills (conceptual, logical, and physical). Experience with Python.

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 6 Lacs

Bengaluru

Work from Office

We are looking for a skilled SQL and PySpark professional with 3 to 8 years of experience to join our team. The ideal candidate will have expertise in developing data pipelines and transforming data using Databricks, Synapse notebooks, and Azure Data Factory. Roles and Responsibilities: Collaborate with technical architects and cloud solutions teams to design data pipelines, marts, and reporting solutions. Code, test, and optimize Databricks jobs for efficient data processing and report generation. Set up scalable data pipelines that integrate with a variety of data sources and cloud platforms using Databricks. Ensure best practices are followed in terms of code quality, data security, and scalability. Participate in code and design reviews to maintain high development standards. Optimize data querying layers to enhance performance and support analytical requirements. Collaborate with data scientists and analysts to support machine learning workflows and analytic needs. Stay updated with the latest developments in Databricks and associated technologies to drive innovation. Job requirements: Proficiency in PySpark or Scala and SQL for data processing tasks. Hands-on experience with Azure Databricks, Delta Lake, Delta Live Tables, Auto Loader, and Databricks SQL. Expertise with Azure Data Lake Storage (ADLS) Gen2 for optimized data storage and retrieval. Strong knowledge of data modeling, ETL processes, and data warehousing concepts. Experience with Power BI for dashboarding and reporting is a plus. Familiarity with Azure Synapse for analytics and integration tasks is desirable. Knowledge of Spark Streaming for real-time data stream processing is an advantage. MLOps knowledge for integrating machine learning into production workflows is beneficial. Familiarity with Azure Resource Manager (ARM) templates for infrastructure as code (IaC) practices is preferred. Demonstrated expertise of 4-5 years in developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory. Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2. Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation. Proficiency in building and optimizing query layers using Databricks SQL. Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions. Prior experience in developing, optimizing, and deploying Power BI reports. Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.
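A minimal sketch of the Auto Loader ingestion pattern referenced above, assuming a Databricks runtime where spark is predefined; the ADLS paths, table name, and columns are hypothetical.

```python
from pyspark.sql import functions as F

# Databricks Auto Loader: incrementally pick up new files from ADLS Gen2.
bronze = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/schemas/orders/")
    .load("abfss://landing@examplelake.dfs.core.windows.net/orders/"))

# Light transformation before writing the silver table.
silver = (bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("ingest_ts", F.current_timestamp()))

(silver.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/orders/")
    .trigger(availableNow=True)   # process whatever is new, then stop (batch-style run)
    .toTable("lakehouse.silver_orders"))
```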

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 5 to 10 years. Job Role: Data Engineer, ADF. Job Location: Remote. Job Type: FTE. JD: We are looking for an experienced Data Engineer / Architect who is hands-on with Microsoft Azure Data Factory (ADF) and SQL development, and can take complete ownership of data flow design and execution. This role requires a mix of technical skills and business understanding to analyze, define, and build scalable data pipelines that align with organizational goals. The ideal candidate will work closely with business stakeholders to understand processes, define data flow requirements, and implement robust ETL/ELT solutions using Microsoft Azure technologies. Key Responsibilities: Collaborate with business users and analysts to understand data requirements and workflows. Define and document end-to-end data flow architectures and integration strategies. Build and maintain data pipelines using Azure Data Factory (ADF) and SQL stored procedures. Design, optimize, and troubleshoot complex SQL queries and stored procedures. Translate business processes into technical solutions, ensuring alignment with data governance and enterprise architecture standards. Drive data quality, transformation logic, and load processes with efficiency and consistency. Take ownership of data integration tasks and ensure timely delivery of high-quality data. Monitor, troubleshoot, and optimize ETL workflows and data storage performance. Support cloud data platform modernization and integration projects as needed. Must-Have Skills: 6+ years of hands-on experience in Data Engineering or Architecture roles. Proven experience with Azure Data Factory (ADF): pipeline design, triggers, datasets, linked services. Advanced SQL skills with expertise in writing and optimizing stored procedures, functions, and data transformation logic. Strong experience in business engagement: ability to gather requirements, interpret business processes, and translate them into technical workflows. Familiarity with data modeling, data warehousing, and ETL/ELT pipelines. Understanding of data governance, metadata, and data lineage. Nice-to-Have Skills: Exposure to Azure Synapse Analytics, Databricks, or Power BI. Experience working in Agile/Scrum environments. Familiarity with CI/CD pipelines for data workflows. Knowledge of Python or .NET for scripting or data orchestration is a plus. Soft Skills: Excellent communication and stakeholder management abilities. Analytical mindset with attention to detail. Strong sense of ownership and ability to work independently and collaboratively. Title: Data Engineer ADF (ref: 6566357).
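Much of this role centers on ADF pipeline orchestration. Below is a small Python sketch of triggering and polling an ADF pipeline run with the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, pipeline name, and parameters are placeholders.

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off a pipeline run with a hypothetical parameter.
run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-example",
    pipeline_name="pl_load_sales",
    parameters={"load_date": "2024-01-31"},
)

# Poll the run until it reaches a terminal state.
while True:
    status = adf_client.pipeline_runs.get("rg-data-platform", "adf-example", run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status.status}")
```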

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 5 to 10 years. Job Title: SQL + ADF. Job Location: Gurgaon. Job Type: Full Time. JD: Strong experience in SQL development, along with experience in cloud (AWS) and good experience in ADF. Job Summary: We are looking for a skilled SQL + Azure Data Factory (ADF) Developer to join our data engineering team. The ideal candidate will have strong experience in writing complex SQL queries, developing ETL pipelines using Azure Data Factory, and integrating data from multiple sources into cloud-based data solutions. This role will support data warehousing, analytics, and business intelligence initiatives. Key Responsibilities: Design, develop, and maintain data integration pipelines using Azure Data Factory (ADF). Write optimized and complex SQL queries, stored procedures, and functions for data transformation and reporting. Extract data from various structured and unstructured sources and load it into Azure-based data platforms (e.g., Azure SQL Database, Azure Data Lake). Schedule and monitor ADF pipelines, ensuring data quality, accuracy, and availability. Collaborate with data analysts, data architects, and business stakeholders to gather requirements and deliver solutions. Troubleshoot data issues and implement corrective actions to resolve pipeline or data quality problems. Implement and maintain data lineage, metadata, and documentation for pipelines. Participate in code reviews, performance tuning, and optimization of ETL processes. Ensure compliance with data governance, privacy, and security standards. Hands-on experience with T-SQL / SQL Server. Experience working with Azure Data Factory (ADF) and Azure SQL. Strong understanding of ETL processes, data warehousing concepts, and cloud data architecture. Experience working with Azure services such as Azure Data Lake, Blob Storage, and Azure Synapse Analytics (preferred). Familiarity with Git/DevOps CI/CD pipelines for ADF deployments is a plus. Excellent problem-solving, analytical, and communication skills. Title: SQL + ADF (ref: 6566294).

Posted 1 month ago

Apply

5.0 - 10.0 years

18 - 25 Lacs

Mumbai, Thane

Work from Office

Role & responsibilities: Assess the current Synapse Analytics workspace, including pipelines, notebooks, datasets, and SQL scripts. Rebuild or refactor Synapse pipelines, notebooks, and data models using Fabric-native services. Collaborate with data engineers, architects, and business stakeholders to ensure functional parity post-migration. Validate data integrity and performance in the new environment. Document the migration process, architectural decisions, and any required support materials. Provide knowledge transfer and guidance to internal teams on Microsoft Fabric capabilities. Preferred candidate profile: Proven experience with Azure Synapse Analytics (workspaces, pipelines, dedicated/serverless SQL pools, Spark notebooks). 5 years of Azure Synapse cloud experience; since Fabric is still new, only 1 to 2 years of Fabric experience is expected. Hands-on experience with Microsoft Fabric (Data Factory, OneLake, Power BI integration). Strong proficiency in SQL, Python, and Spark. Solid understanding of data modeling, ETL/ELT pipelines, and data integration patterns. Familiarity with Azure Data Lake, Azure Data Factory, and Power BI. Experience with Lakehouse architecture and Delta Lake in Microsoft Fabric. Experience with CI/CD practices for data pipelines. Excellent communication skills and ability to work cross-functionally. Nice-to-Have Skills: Familiarity with DataOps or DevOps practices in Azure environments. Prior involvement in medium to large-scale cloud platform migrations. Knowledge of security and governance features in Microsoft Fabric. Knowledge of the Dynamics Dataverse link to Fabric.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai

Work from Office

Job Description: Job Purpose: The DST Business Analyst and Business Intelligence Developer for Husky will be responsible for building the business intelligence system for the company, based on the internal and external data structures. Responsible for leading the design and support of enterprise-wide business intelligence applications and architecture. Works with enterprise-wide business and IT senior management to understand and prioritize data and information requirements. Solves complex technical problems. Optimizes the performance of enterprise business intelligence tools by defining data elements that contribute to data insights that add value to the user. Creates testing methodology and criteria. Designs and coordinates a curriculum for coaching and training customers in the use of business intelligence tools to enhance business decision-making capability. Develops standards, policies, and procedures for the form, structure, and attributes of the business intelligence tools and systems. Develops data/information quality metrics. Researches new technology and develops business cases to support enterprise-wide business intelligence solutions. Key Responsibilities & Key Success Metrics: Lead BI software development, deployment, and maintenance. Perform data profiling and data analysis activities to understand data sources. Report curation, template definition, and analytical data modeling. Work with cross-functional teams to gather and document reporting requirements. Translate business requirements into specifications that will be used to implement the required reports and dashboards, created from potentially multiple data sources. Identify and resolve data reporting issues in a timely fashion, while looking for continuous improvement opportunities. Build solutions that create value and resolve business problems. Provide technical guidance to designers and other stakeholders. Work effectively with members of the Digital Solutions Team. Troubleshoot analytics tool problems and tune for performance. Develop the semantic layer and analytics query objects for end users. Translate business questions and requirements into reports, views, and analytics query objects. Ensure that quality standards are met. Support the Master Data Management strategy. Qualifications: Understanding of ERP and operational systems databases, knowledge of database programming. Highly skilled at writing SQL queries against large-scale, complex datasets. Experience in data visualization and data storytelling. Experience designing, debugging, and deploying software in an ADO (Azure DevOps) development environment. Experience with the Microsoft BI stack - Power BI and SQL Server Analysis Services. Experience working in an international business environment. Experience with Azure Data Platform resources (ADLS, ADF, Azure Synapse, Power BI Services). Basic manufacturing and sales business process knowledge. Strong communication and presentation skills. Ability to moderate meetings and constructive design sessions for effective decision making. English language skills are a requirement; German and French are considered an asset.

Posted 1 month ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight). Work with structured and unstructured data to perform data transformation, cleansing, and aggregation. Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow). Optimize PySpark jobs for performance tuning, partitioning, and caching strategies. Design and implement real-time and batch data processing solutions. Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates. Ensure data security, governance, and compliance with industry best practices. Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models. Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization. Perform unit testing and validation to ensure data integrity and reliability. Required Skills & Qualifications: 6+ years of experience in big data processing, ETL, and data engineering. Strong hands-on experience with PySpark (Apache Spark with Python). Expertise in SQL, DataFrame API, and RDD transformations. Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL). Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow). Proficiency in writing optimized queries, partitioning, and indexing for performance tuning. Experience with workflow orchestration tools like Airflow, Oozie, or Prefect. Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines. Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.). Excellent problem-solving, debugging, and performance optimization skills.
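A brief PySpark sketch of the tuning techniques this listing calls out (broadcast joins, caching of shared intermediates, and partitioned output), using hypothetical fact and dimension tables and paths.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

# Hypothetical fact and dimension tables in a data lake.
sales = spark.read.parquet("/data/fact_sales/")
stores = spark.read.parquet("/data/dim_store/")

# Broadcast the small dimension so the join avoids a full shuffle.
enriched = sales.join(F.broadcast(stores), "store_id")

# Cache when the same intermediate result feeds several aggregations.
enriched = enriched.cache()

by_region = enriched.groupBy("region").agg(F.sum("amount").alias("revenue"))

by_month = (enriched
    .withColumn("month", F.date_format("sale_ts", "yyyy-MM"))
    .groupBy("month")
    .agg(F.sum("amount").alias("revenue")))

# Partition the output on a commonly filtered column so readers can prune files.
by_month.write.mode("overwrite").partitionBy("month").parquet("/data/agg/revenue_by_month/")
```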

Posted 1 month ago

Apply

10.0 - 15.0 years

40 - 50 Lacs

Hyderabad

Hybrid

Envoy Global is a proven innovator in the global immigration space. Our mission combines our industry-leading tech platform with holistic service to streamline, simplify, and expedite the immigration process for employers and individuals. We are seeking a highly skilled Team Lead or Manager, Data Engineering within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for the end-to-end design, development, and documentation of data pipelines and ETL (Extract, Transform, Load) processes. This role focuses on enabling data migration, integration, and warehousing, encompassing the creation of ETL jobs, reports, dashboards, and data pipelines. As our Senior Data Engineering Lead or Manager, you will be required to: Lead and mentor a small team of data engineers, fostering a collaborative and innovative environment. Design, develop, and document robust data pipelines and ETL jobs. Engage in data modeling activities to ensure efficient and effective data structures. Ensure the seamless integration of data across various platforms and systems. Lead all aspects of the design, implementation, and maintenance of data engineering pipelines in our Azure environment, including integration with a variety of data sources. Collaborate with the Data Analytics and DataOps teams and other partners in the Architecture, Engineering, and DevOps teams to deliver high-quality data platforms that enable analytics solutions for the business. Ensure data engineering standards are in line with established principles of data governance, data quality, and data security. Monitor and optimize the performance of data pipelines, ensuring they meet SLAs in terms of data availability and quality. Hire, manage, and mentor a team of Data Engineers and Data Quality Engineers. Communicate clearly and effectively with stakeholders. To apply for this role, you should possess the following skills, experience, and qualifications: Proven experience in data engineering, with a strong background in designing and developing ETL processes. Excellent collaboration skills, with the ability to work effectively with cross-functional teams. Leadership experience, with a track record of managing and mentoring a team of data engineers. 8+ years of experience as a Data Engineer, with 3+ years of experience in a managerial role. Technical experience in one or more cloud-based data warehouse/data lake platforms such as AWS, Snowflake, or Azure Synapse. ETL experience using SSIS, ADF, or another equivalent tool. Knowledgeable in data modeling and data warehouse concepts. Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data. Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations. Ability to work in an Agile environment. Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application.

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 8 Lacs

Pune

Work from Office

Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with Business and IT teams to understand requirements and best leverage technologies to enable agile data delivery at scale. Note:- Although the role category in the GPP is listed as Remote, the requirement is for a Hybrid work model. Key Responsibilities: Oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies. Design high-performance, resilient, and scalable data architectures for data ingestion and processing. Provide technical guidance and mentorship to a team of data engineers. Collaborate with data scientists, business analysts, and stakeholders to integrate various data sources into the data lake/warehouse. Optimize data pipelines for speed, reliability, and cost efficiency in an Azure environment. Enforce and advocate for best practices in coding standards, version control, testing, and documentation. Work with Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage. Implement data validation and data quality checks to ensure consistency, accuracy, and integrity. Identify and resolve complex technical issues proactively. Develop reliable, efficient, and scalable data pipelines with monitoring and alert mechanisms. Use agile development methodologies, including DevOps, Scrum, and Kanban. External Qualifications and Competencies Technical Skills: Expertise in Spark, including optimization, debugging, and troubleshooting. Proficiency in Azure Databricks for distributed data processing. Strong coding skills in Python and Scala for data processing. Experience with SQL for handling large datasets. Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake. Understanding of cloud infrastructure and architecture principles, especially within Azure. Leadership & Soft Skills: Proven ability to lead and mentor a team of data engineers. Excellent communication and interpersonal skills. Strong organizational skills with the ability to manage multiple tasks and priorities. Ability to work in a fast-paced, constantly evolving environment. Strong problem-solving, analytical, and troubleshooting abilities. Ability to collaborate effectively with cross-functional teams. Competencies: System Requirements Engineering: Uses appropriate methods to translate stakeholder needs into verifiable requirements. Collaborates: Builds partnerships and works collaboratively to meet shared objectives. Communicates Effectively: Delivers clear, multi-mode communications tailored to different audiences. Customer Focus: Builds strong customer relationships and delivers customer-centric solutions. Decision Quality: Makes good and timely decisions to keep the organization moving forward. Data Extraction: Performs ETL activities and transforms data for consumption by downstream applications. Programming: Writes and tests computer code, version control, and build automation. Quality Assurance Metrics: Uses measurement science to assess solution effectiveness. Solution Documentation: Documents information for improved productivity and knowledge transfer. Solution Validation Testing: Ensures solutions meet design and customer requirements. Data Quality: Identifies, understands, and corrects data flaws. Problem Solving: Uses systematic analysis to address and resolve issues. 
Values Differences: Recognizes the value that diverse perspectives bring to an organization. Preferred Knowledge & Experience: Exposure to Big Data open-source technologies (Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, etc.). Experience with SQL and working with large datasets. Clustered compute cloud-based implementation experience. Familiarity with developing applications requiring large file movement in a cloud-based environment. Exposure to Agile software development and analytical solutions. Exposure to IoT technology. Additional Responsibilities Unique to this Position: Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. Experience: 3 to 5 years of experience in data engineering or a related field. Strong hands-on experience with Azure Databricks, Apache Spark, Python/Scala, CI/CD, Snowflake, and Qlik for data processing. Experience working with multiple file formats like Parquet, Delta, and Iceberg. Knowledge of Kafka or similar streaming technologies. Experience with data governance and data security in Azure. Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments. Deep understanding of Azure Data Services. Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies. Familiarity with data lakes, data warehouses, and modern data architectures. Experience with Qlik Replicate (optional).
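The data validation and quality checks mentioned in the responsibilities can be sketched in a few lines of PySpark; the table path, columns, and rules here are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Hypothetical curated table to validate before publishing downstream.
df = spark.read.format("delta").load("/mnt/curated/customers")

checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_customer_id": df.count() - df.dropDuplicates(["customer_id"]).count(),
    "bad_email_format": df.filter(~F.col("email").rlike(r"^[^@\s]+@[^@\s]+$")).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # In a real pipeline this would raise an alert or fail the job run.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```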

Posted 1 month ago

Apply

4.0 - 9.0 years

8 - 12 Lacs

Chennai

Remote

Expertise in ADF, Azure Databricks and Python. The ideal candidate will be responsible for developing and optimizing data pipelines, integrating cloud data services, and building scalable data processing workflows in the Azure ecosystem.

Posted 1 month ago

Apply

6.0 - 8.0 years

3 - 6 Lacs

Pune

Work from Office

Role & responsibilities: Job Title: Developer. Work Location: Pune, MH. Skill Required: Azure Data Factory. Experience Range in Required Skills: 6-8 years. Job Description: (6+ years) Azure, ADF, Databricks, Python. Essential Skills: (6+ years) Azure, ADF, Databricks, Python.

Posted 1 month ago

Apply

6.0 - 11.0 years

35 - 50 Lacs

Pune, Gurugram, Delhi / NCR

Hybrid

Role: Snowflake Data Engineer. Mandatory Skills: #Snowflake, #AZURE, #Datafactory, SQL, Python, #DBT / #Databricks. Location (hybrid): Bangalore, Hyderabad, Chennai, Pune, Gurugram & Noida. Budget: Up to 50 LPA. Notice: Immediate to 30 days (serving notice). Experience: 6-11 years. Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT. Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment. Must-Have Skills: Strong experience with Azure Cloud Platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets. Good-to-Have Skills: Experience with Azure Data Lake, Azure Synapse, or Azure Functions. Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance, metadata management, or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 5+ years of experience in data engineering roles using Azure and Snowflake. Strong problem-solving, communication, and collaboration skills.
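As a small illustration of the ELT pattern this role describes, the sketch below pushes a transformation down into Snowflake from Python using the snowflake-connector-python package; in practice DBT or ADF would usually own this SQL, and the account, credentials, and table names are placeholders.

```python
import snowflake.connector

# Connection details are placeholders; real deployments would use a vault/secret store.
conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

# An ELT-style upsert executed inside Snowflake rather than in the client.
merge_sql = """
    MERGE INTO ANALYTICS.CORE.DIM_CUSTOMER AS tgt
    USING ANALYTICS.STAGING.CUSTOMERS_RAW AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.loaded_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
      VALUES (src.customer_id, src.email, src.loaded_at)
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)
    print(f"Rows affected: {cur.rowcount}")
finally:
    cur.close()
    conn.close()
```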

Posted 1 month ago

Apply

6.0 - 11.0 years

35 - 50 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role: Snowflake Data Engineer. Mandatory Skills: #Snowflake, #AZURE, #Datafactory, SQL, Python, #DBT / #Databricks. Location (hybrid): Bangalore, Hyderabad, Chennai, Pune, Gurugram & Noida. Budget: Up to 50 LPA. Notice: Immediate to 30 days (serving notice). Experience: 6-11 years. Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT. Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment. Must-Have Skills: Strong experience with Azure Cloud Platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets. Good-to-Have Skills: Experience with Azure Data Lake, Azure Synapse, or Azure Functions. Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance, metadata management, or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 5+ years of experience in data engineering roles using Azure and Snowflake. Strong problem-solving, communication, and collaboration skills.

Posted 1 month ago

Apply

7.0 - 10.0 years

6 - 10 Lacs

Noida

Work from Office

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For™ 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. We are seeking a Staff Data Engineer with 7-10 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company. Deep knowledge and experience working with Scala and Spark. Experienced in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake. Experience working in full-stack development in .NET & Angular. Experience working with SQL and NoSQL database systems such as MongoDB and Couchbase. Experience in distributed system architecture design. Experience with cloud environments (Azure preferred). Experience with acquiring and preparing data from primary and secondary disparate data sources (real-time preferred). Experience working on large-scale data product implementation, responsible for technical delivery, mentoring, and managing peer engineers. Experience working with Databricks is preferred. Experience working with agile methodology is preferred. Healthcare industry experience is preferred. Job Responsibilities: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions. Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data. Share your passion for experimenting with and learning new technologies. Perform thorough data analysis, uncover opportunities, and address business problems. Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com. Visit us on Facebook.

Posted 1 month ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Noida

Work from Office

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For™ 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. We are seeking a Data Engineer with 4-6 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company. Deep knowledge and experience working with Scala and Spark. Experienced in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake. Experience working in full-stack development in .NET & Angular. Experience working with SQL and NoSQL database systems such as MongoDB and Couchbase. Experience in distributed system architecture design. Experience with cloud environments (Azure preferred). Experience with acquiring and preparing data from primary and secondary disparate data sources (real-time preferred). Experience working on large-scale data product implementation, responsible for technical delivery, mentoring, and managing peer engineers. Experience working with Databricks is preferred. Experience working with agile methodology is preferred. Healthcare industry experience is preferred. Job Responsibilities: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions. Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data. Share your passion for experimenting with and learning new technologies. Perform thorough data analysis, uncover opportunities, and address business problems. Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com. Visit us on Facebook.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 25 Lacs

Chennai

Remote

Skills: Azure Data Factory, Data Lake, Synapse SQL DWH, Databricks, Airflow, Python, PySpark, SQL, Terraform. Experience in designing and building highly scalable data platforms and pipelines. Hands-on experience in developing end-to-end big data pipelines using Azure, AWS, and/or open-source big data tools and technologies. Experience in data lake and data warehousing solutions. Experience in extracting data from APIs and cloud services (Salesforce, Eloqua, S3) and from SQL and NoSQL on-premises/cloud databases using Azure Data Factory, Glue, or open-source data ingestion tools. Experience in creating complex data processing pipelines (ETLs) using PySpark/Scala on Databricks, Glue, or EMR. In-depth knowledge of Spark architecture and experience in improving performance and optimization. Experience using cloud data warehouses (e.g., Synapse SQL DWH, Redshift, Snowflake) to build and manage data models and to present data securely. Good understanding of distributed data processing and MPP. Knowledge of common DevOps skills and methodologies. Experience with Azure DevOps, GitHub, GitLab. Experience in using Azure ARM templates or Infrastructure as Code (Terraform). In-depth understanding of OLTP, OLAP, data warehousing, and data modeling, plus strong analytical and problem-solving skills. Hands-on experience using MySQL, MS SQL Server, Oracle, or a similar RDBMS platform. Highly self-motivated, self-directed, and attentive to detail. Ability to effectively prioritize and execute tasks. Additional skillsets (good to have): Experience in Docker and Kubernetes. Experience working across Azure IaaS, Azure PaaS, Azure Networking, and other areas of the platform. Databricks Delta Lake and Lakehouse. Familiarity with Data Quality Management methodology and supporting technology tools. Familiarity with data visualization tools (e.g., Power BI, Tableau).
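One of the simpler patterns implied here, reading a Synapse dedicated SQL pool (SQL DWH) table into Spark over JDBC, can be sketched as follows; the server, database, table, and credentials are hypothetical, and the Microsoft SQL Server JDBC driver must be available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("synapse_read").getOrCreate()

# Hypothetical Synapse dedicated SQL pool; credentials would normally come from
# a secret scope or Key Vault rather than literals in code.
jdbc_url = (
    "jdbc:sqlserver://example-synapse.sql.azuresynapse.net:1433;"
    "database=sqlpool01;encrypt=true;loginTimeout=30;"
)

orders = (spark.read
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.FactOrders")
    .option("user", "etl_user")
    .option("password", "***")
    .load())

# Simple sanity check on the pulled table.
orders.groupBy("region").count().show()
```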

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
