
70 Synapse Analytics Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Azure Data Engineer (Databricks Specialist) at CrossAsyst, you will be part of our high-impact Data & AI team, working on critical client-facing projects. With over 5 years of experience, you will apply your expertise in Azure data services and Databricks to build robust, scalable data pipelines and drive technology innovation for our clients in Pune.

Your key responsibilities will include designing, developing, and deploying end-to-end data pipelines using Azure Databricks, Data Factory, and Synapse. You will handle data ingestion, transformation, and wrangling from various sources, optimizing Spark jobs and Databricks notebooks for performance and cost-efficiency. Implementing DevOps best practices for CI/CD, Git integration, and automated testing will be essential in your role. Collaborating with cross-functional teams such as data scientists, architects, and stakeholders, you will design scalable data lakehouse and data warehouse solutions using Delta Lake and Synapse. You will also ensure data security, access control, and compliance using Azure-native governance tools, and work closely with data science teams on feature engineering and machine learning workflows within Databricks.

A proactive mindset and strong coding ability in PySpark will be crucial for writing efficient SQL and PySpark code for analytics and transformation tasks, and for monitoring and troubleshooting data pipelines in production environments. Documenting solution architectures, workflows, and data lineage will contribute to the successful delivery of scalable, secure, and high-performance data solutions. If you are looking to make an impact by driving technology innovation and delivering better and faster outcomes, we welcome you to join our team at CrossAsyst.
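
As a loose illustration of the pipeline work described above (ingestion, wrangling, and Delta Lake output on Databricks), here is a minimal PySpark sketch; the storage path, table name, and columns are hypothetical assumptions, not details from the posting.

```python
# Illustrative sketch only: a minimal PySpark/Delta Lake transformation of the
# kind this role describes. Paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_ingest").getOrCreate()

# Ingest raw CSV landed by a Data Factory copy activity (hypothetical path)
raw = (spark.read.option("header", True)
       .csv("abfss://landing@datalake.dfs.core.windows.net/sales/"))

# Basic wrangling: type casting, deduplication, and a derived audit column
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .dropDuplicates(["order_id"])
            .withColumn("ingest_date", F.current_date()))

# Write to a Delta Lake table for downstream Synapse/BI consumption
clean.write.format("delta").mode("overwrite").saveAsTable("curated.sales_orders")
```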

Posted 2 days ago

Apply

5.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You are a seasoned Delivery Lead specializing in Azure Integration Services with over 12 years of experience. Your role involves managing and delivering enterprise-grade Azure projects, including implementations, migrations, and upgrades. As a strategic leader, you should have in-depth expertise in Azure services and a proven track record of managing enterprise customers and driving project success across Azure Integration and Data platforms.

Your key responsibilities include leading end-to-end delivery of Azure integration, data, and analytics projects, ensuring adherence to scope, timeline, and budget. You will plan and manage execution roadmaps, define milestones, handle dependencies, and oversee enterprise-level implementations, migrations, and upgrades using Azure services while ensuring compliance with best practices in security, performance, and governance.

In terms of customer and stakeholder engagement, you will collaborate with enterprise customers to understand their business needs and translate them into technical solutions. You will serve as a trusted advisor to clients, aligning technology with business objectives, and engage and manage stakeholders including business users, architects, and engineering teams.

Your technical leadership responsibilities include defining and guiding architecture, design patterns, and best practices for Azure Integration Services. You will deliver integration solutions using Azure services such as Logic Apps, APIM, Azure Functions, Event Grid, and Service Bus; leverage ADF, Azure Databricks, and Synapse Analytics for data processing and analytics; and promote an automation and DevOps culture within the team.

As the Delivery Lead, you will lead a cross-functional team of Azure developers, engineers, and architects, provide technical mentorship, and drive team performance. You will also coordinate with Microsoft and third-party vendors to ensure seamless delivery, and support pre-sales activities by contributing to solution architecture, proposals, and effort estimation.

To excel in this role, you must possess deep expertise in Azure Integration Services; hands-on experience with Azure App Services, microservices architecture, and serverless solutions; and proficiency in data platforms such as Azure Data Factory, Azure Databricks, Synapse Analytics, and ADLS Gen2. A solid understanding of Azure security and governance tools is essential, along with experience in DevOps tools like Azure DevOps, CI/CD, Terraform, and ARM templates.

In terms of professional experience, you should have at least 10 years in IT with a minimum of 5 years in Azure integration and data platforms, a proven track record of leading enterprise migration and implementation projects, sound knowledge of hybrid, on-prem, and cloud-native integration architectures, and experience delivering projects using Agile, Scrum, and DevOps frameworks. Your soft skills should include strong leadership and stakeholder engagement abilities, effective problem-solving skills, and excellent verbal and written communication, presentation, and documentation skills.

Preferred qualifications include Microsoft certifications in Azure Solutions Architecture, Integration Services, or Data Engineering; experience integrating with SAP, Salesforce, or other enterprise applications; and awareness of AI/ML use cases within Azure's data ecosystem.
This role is primarily based in Noida with a hybrid work model, and you should be willing to travel for client meetings as required.
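
For flavor, the serverless piece of the integration stack named above includes Azure Functions; a minimal sketch of an HTTP-triggered function using the Python v2 programming model might look like the following. The route and payload are hypothetical assumptions, not details from the listing.

```python
# Illustrative sketch only: an HTTP-triggered Azure Function (Python v2
# programming model). Route, auth level, and response are hypothetical.
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orders")
def get_orders(req: func.HttpRequest) -> func.HttpResponse:
    # A real integration would call downstream services (Service Bus queues,
    # APIM-fronted APIs, etc.); here we just echo a static payload.
    body = json.dumps({"status": "ok", "orders": []})
    return func.HttpResponse(body, mimetype="application/json")
```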

Posted 2 days ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune, Maharashtra

On-site

You will be responsible for architecting data warehousing and business intelligence solutions to address cross-functional business challenges. This will involve interacting with business stakeholders to gather requirements and deliver comprehensive data engineering, data warehousing, and analytics solutions, as well as collaborating with other technology teams to extract, transform, and load data from diverse sources.

You should have a minimum of 5-8 years of end-to-end data engineering development experience, preferably across industries such as Retail, FMCG, Manufacturing, Finance, and Oil & Gas. Experience in functional domains like Sales, Procurement, Cost Control, Business Development, and Finance is desirable. You are expected to have 3 to 10 years of experience in data engineering projects using Azure or AWS services, with hands-on expertise in data transformation, processing, and migration using tools such as Azure Data Lake Storage, Azure Data Factory, Databricks, AWS Glue, Redshift, and Athena. Familiarity with MS Fabric and its components will be advantageous, along with experience working with different source/target systems like Oracle Database, SQL Server Database, Azure Data Lake Storage, ERP, CRM, and SCM systems.

Proficiency in reading data from sources via APIs/web services and using APIs to write data to target systems is essential. You should also have experience in data cleanup, data cleansing, and optimization tasks, including working with non-structured data sets in Azure. Knowledge of analytics tools like Power BI and Azure Analysis Services, as well as exposure to private and public cloud architectures, will be beneficial. Excellent written and verbal communication skills are crucial for this role. Ideally, you hold a degree in M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS.

Key requirements include expertise in MS Azure Data Factory, Python and PySpark coding, Synapse Analytics, Azure Function Apps, Azure Databricks, AWS Glue, Athena, Redshift, and Databricks PySpark, along with exposure to integration with applications/systems like ERP, CRM, SCM, and web apps using APIs, cloud and on-premise systems, databases, and file systems. The role requires a minimum of 3 full-cycle data engineering implementations (5-10 years of experience) with a focus on building data warehouses and implementing data models. Exposure to the consulting industry is mandatory, along with strong verbal and written communication skills.

Your primary skills should encompass data engineering development; cloud engineering with Azure or AWS; data warehousing and BI solutions architecture; programming in Python/PySpark; data integration across various systems; consulting experience; ETL and data transformation; and knowledge of cloud architecture. Additionally, familiarity with MS Fabric, handling non-structured data, data cleanup and optimization, APIs/web services, data visualization, and industry and functional knowledge will be advantageous. The compensation package ranges from INR 12-28 LPA, subject to the candidate's performance and experience level.
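
The posting calls out reading from source systems via APIs and writing back through APIs. As a rough illustration of that kind of ingestion step, here is a minimal Python sketch; the endpoint, auth header, pagination scheme, and output path are hypothetical assumptions, not details from the listing.

```python
# Illustrative sketch only: pulling records from a REST API and landing them as
# JSON for a pipeline to pick up. Endpoint, token, and paths are hypothetical.
import json

import requests

API_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}   # hypothetical auth

def fetch_all(url: str) -> list[dict]:
    """Follow simple page-based pagination until the API returns no rows."""
    records, page = [], 1
    while True:
        resp = requests.get(url, headers=HEADERS, params={"page": page}, timeout=30)
        resp.raise_for_status()
        rows = resp.json().get("results", [])
        if not rows:
            return records
        records.extend(rows)
        page += 1

if __name__ == "__main__":
    data = fetch_all(API_URL)
    # Land the raw payload; an ADF or Databricks job would take it from here.
    with open("orders_raw.json", "w") as f:
        json.dump(data, f)
```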

Posted 3 days ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

What you will do:
- Drive analytics initiatives in close collaboration with business leaders
- Develop mathematical/analytical models to resolve complex business problems
- Be fully adept at process mapping and Lean Six Sigma tools to provide process realignment support prior to analytics solutioning
- Create dashboards/applications using Power Apps, SSIS, Power BI, Alteryx, MS Access, and advanced Excel
- Design and develop custom reports and provide inferential analysis
- Have working knowledge of Azure data engineering tools, e.g., Databricks, Azure Data Factory, Synapse Analytics, Logic Apps, etc.
- Have a good understanding of supply chain basics and provide solutioning support for a given supply chain problem statement
- Be able to pick up new tools and technologies, e.g., Python, ML, Azure Data Factory, etc.
- Exposure to Azure DevOps/JIRA for Agile project management
- Be fully qualified in and have delivered analytical solutions using Business Intelligence (BI) tools, R, Python, SQL coding, Java coding, data crunching, advanced Excel modeling, and VBA coding and macro writing

What you will need:
- Build new tools in close collaboration with the business SMEs and ensure successful deployment
- Project/program management of analytics projects with ADO
- Lead and own an analytics area and ensure product adoption for that area
- Identify key business metrics (financial and operational) and populate them on a regular basis
- Communicate with stakeholders at various levels within Stryker to collate the monthly KPI reporting data
- Deliver analytics trainings across the organization on tools like Power BI, Excel, MS Access, etc. to steer an analytics-driven culture

Stryker is a global leader in medical technologies and, together with its customers, is driven to make healthcare better. The company offers innovative products and services in MedSurg, Neurotechnology, Orthopaedics and Spine that help improve patient and healthcare outcomes. Alongside its customers around the world, Stryker impacts more than 150 million patients annually.

Posted 3 days ago

Apply

6.0 - 7.0 years

9 - 12 Lacs

Chennai

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using Azure/AWS, Synapse Analytics, Fabric & PySpark.

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

5+ years of Data and Analytics experience with a minimum of 3 years on Azure Cloud.
- Excellent communication and interpersonal skills
- Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hub, etc.
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler (a rough DAG sketch follows below)
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala
- Good experience in designing and delivering data analytics solutions using Azure Cloud native services
- Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design
- Documentation of solutions (e.g., data models, configurations, and setup)
- Well versed with Waterfall, Agile, Scrum, and similar project delivery methodologies
- Experienced in internal as well as external stakeholder management
- Experience in MDM/DQM/Data Governance technologies like Collibra, Ataccama, Alation, or Reltio will be an added advantage
- Azure Data Engineer or Azure Solution Architect certification will be an added advantage

Nice to have skills: working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.)
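
Since the profile mentions job scheduling with Oozie or Airflow, here is a minimal, hedged Airflow DAG sketch (assuming Airflow 2.4+); the DAG id, schedule, and task bodies are hypothetical.

```python
# Illustrative sketch only: a minimal Airflow DAG of the scheduling work this
# profile mentions. DAG id, schedule, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source systems")      # placeholder ingestion step

def transform():
    print("clean and load to warehouse")   # placeholder transformation step

with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task   # transform runs only after ingest succeeds
```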

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Senior Analyst - Data Analytics, you will leverage your 3+ years of experience in data analytics and reporting to design and build interactive dashboards and reports using Power BI and Microsoft Fabric. Your strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R will be instrumental in performing advanced data analysis and visualization to support business decision-making.

You will use your experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue to develop and maintain data pipelines and queries using SQL and Python. Applying data science techniques such as predictive modeling, classification, clustering, and regression, you will solve business problems and uncover actionable insights. Your hands-on experience will be crucial in building, validating, and tuning machine learning models using tools such as scikit-learn, TensorFlow, or similar frameworks.

Collaborating with stakeholders, you will translate business questions into data science problems and communicate findings in a clear, actionable manner. You will use statistical techniques and hypothesis testing to validate assumptions and support decision-making. Documenting data science workflows and maintaining the reproducibility of experiments and models will ensure the success of analytics projects. Additionally, you will support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts.

Professional certifications such as Microsoft Certified: Power BI Data Analyst Associate (PL-300), SnowPro Core Certification (Snowflake), Microsoft Certified: Azure Data Engineer Associate, and AWS Certified Data Analytics - Specialty are preferred or in progress.
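
As a rough sketch of the model-building workflow described above, here is a minimal scikit-learn example; the synthetic dataset stands in for business data pulled via SQL/Snowflake, and all parameters are illustrative assumptions.

```python
# Illustrative sketch only: a classification workflow of the kind described
# above, using scikit-learn. The dataset and hyperparameters are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a business dataset
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Validate on held-out data before communicating findings to stakeholders
print(classification_report(y_test, model.predict(X_test)))
```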

Posted 1 week ago

Apply

6.0 - 8.0 years

4 - 9 Lacs

Bhubaneswar, Hyderabad, Bengaluru

Work from Office

Job Title: Developer
Work Location: Kolkata, WB / Bangalore, KA / Hyderabad, TS / Bhubaneswar, OR
Skills Required: Azure Data Factory
Experience Range in Required Skills: 6-8 years
Job Description: Azure Data Engineer. Strictly required: a minimum of 5-6 years of hands-on experience building ETL pipelines using Azure Data Factory / Azure Synapse; a 3-4 week notice period; and average-to-excellent communication skills with experience managing client delivery independently.

Posted 1 week ago

Apply

8.0 - 12.0 years

7 - 16 Lacs

Hyderabad, Bengaluru

Work from Office

Role & responsibilities
Primary Skills: Big Data, Hadoop / Apache Hive
Must-have Skills: Synapse, Data Lake
Experience: 8 to 12 years
Location / Shift / Work Mode: Bangalore or Hyderabad, 1 PM to 10 PM shift
Preferred candidate profile:
- Strong knowledge of distributed computing and big data frameworks: Hadoop, Spark, Hive, Presto, Kafka
- Hands-on experience with cloud platforms: AWS (S3, Glue, EMR, Athena, Redshift), Azure (Synapse, ADLS, Data Factory), or GCP (BigQuery, Dataflow, Pub/Sub)
- Deep understanding of data lake, lakehouse, and warehouse design principles
- Proficiency in data modeling, schema design, partitioning strategies, and metadata management (see the sketch below)
- Experience with CI/CD, Terraform, Git, and orchestration tools like Airflow
- Familiarity with data catalog, lineage, and governance tools (e.g., DataHub, Purview, Collibra)
- Strong problem-solving, communication, and stakeholder management skills
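
As a hedged illustration of the partitioning strategies this profile mentions, here is a minimal PySpark sketch; paths and columns are hypothetical assumptions.

```python
# Illustrative sketch only: date-partitioned writes so engines such as Hive,
# Presto, or Athena can prune partitions. Paths and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition_demo").getOrCreate()

events = spark.read.parquet("s3://datalake/raw/events/")  # hypothetical source

(events.write
       .mode("overwrite")
       .partitionBy("event_date")   # one folder per day; queries filter on it
       .parquet("s3://datalake/curated/events/"))
```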

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Microsoft Fabric professional at YASH Technologies, you will leverage your 12+ years of experience in Microsoft Azure data engineering to drive analytical projects. Your expertise will be crucial in designing, developing, and deploying high-volume ETL pipelines using Azure, Microsoft Fabric, and Databricks for complex models. Your hands-on experience with Azure Data Factory, Databricks, Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database will be utilized for managing and processing large-scale data integrations.

In this role, you will be expected to optimize Databricks clusters and manage workflows to ensure cost-effective and high-performance data processing. Your knowledge of data modeling, governance, quality management, and modernization processes will be essential in developing architecture blueprints and technical design documentation for Azure-based data solutions. You will provide technical leadership on cloud architecture best practices, stay updated on emerging Azure technologies, and recommend enhancements to existing systems. Mandatory certifications are a prerequisite for this role.

At YASH Technologies, you will have the opportunity to work in an inclusive team environment where you can shape your career path. The company emphasizes continuous learning, unlearning, and relearning through career-oriented skilling models and technology-enabled collective intelligence. The workplace culture at YASH is built upon principles of flexible work arrangements, emotional positivity, self-determination, trust, transparency, and open collaboration, all aimed at supporting business goals in a stable employment environment with a great atmosphere and an ethical corporate culture.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced professional with 3-5 years in the field, you will handle technical tasks involving Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL, with your expertise in Azure Data Factory being crucial to the role.

Your primary responsibilities will include demonstrating advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. Your ability to analyze and comprehend complex data sets will play a key role in your daily tasks. Proficiency in Azure Data Lake and other Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD will be essential, along with a solid understanding of master data management, data warehousing, and business intelligence architecture.

You will be expected to have experience in data modeling and database design, with a strong grasp of SQL Server best practices. Effective communication skills, both verbal and written, will be necessary for interacting with stakeholders at all levels. A clear understanding of the data warehouse lifecycle will be beneficial, as you will be involved in preparing design documents, unit test plans, and code review reports. Experience working in an Agile environment, particularly with methodologies like Scrum, Lean, or Kanban, will be advantageous. Knowledge of big data technologies such as the Spark framework, NoSQL, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) would be a valuable asset in this role.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. You will integrate and extract data from various Guidewire modules (PolicyCenter, BillingCenter, ClaimCenter) to ensure data quality, integrity, and consistency. Building reusable components for data ingestion, transformation, and orchestration across Guidewire and Azure ecosystems will be a key part of your role, as will optimizing ADF pipelines for performance, scalability, and cost-efficiency while following industry-standard DevOps and CI/CD practices.

Collaborating with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions is essential. You will conduct thorough unit testing, data validation, and error handling across all data transformation steps and participate in end-to-end data lifecycle management. Providing technical documentation and pipeline monitoring dashboards, and ensuring production readiness, will be part of your responsibilities. You will support data migration projects involving legacy platforms to Azure cloud environments and follow Agile/Scrum practices, contributing to sprint planning, retrospectives, and stand-ups with strong ownership of deliverables.

**Mandatory Skills:**
- 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services.
- Hands-on experience in building ADF pipelines integrating with the Guidewire Insurance Suite.
- Proficiency in data transformation using SQL, Stored Procedures, and Data Flows.
- Experience working with Guidewire data models and understanding of PC/Billing/Claim schema and business entities.
- Strong understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices.
- Clear experience in integrating Guidewire systems with downstream reporting and analytics platforms.
- Excellent debugging skills to resolve complex data transformation and pipeline performance issues.

**Preferred Skills:**
- Prior experience in the Insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter.
- Familiarity with Power BI, Databricks, or Synapse Analytics.
- Working knowledge of Git-based source control, CI/CD pipelines, and deployment automation.

**Additional Requirements:**
- Work Mode: 100% onsite at the Hyderabad office (no remote/hybrid flexibility).
- Strong interpersonal and communication skills to work effectively with cross-functional teams and client stakeholders.
- Self-starter mindset with a high sense of ownership, capable of thriving under pressure and tight deadlines.

Posted 1 week ago

Apply

5.0 - 15.0 years

0 Lacs

Maharashtra

On-site

At Derevo we empower companies and people, unlocking the value of data within organizations. With more than 15 years of experience, we design end-to-end data and AI solutions, from integration into modern architectures to the implementation of intelligent models in key business processes. We are looking for your talent as a Data Engineer (MS Fabric)! It is important that you live in Mexico or Colombia.

As a Data Engineer at Derevo, your mission will be key to creating and implementing modern, high-quality data architectures, driving analytical solutions based on Big Data technologies. You will design, maintain, and optimize parallel multiprocessing systems, applying best practices for storage and management in data warehouses, data lakes, and lakehouses. You will collect, process, clean, and orchestrate large volumes of data, understanding structured and semi-structured models, to integrate and transform multiple sources effectively. You will define the optimal strategy based on business objectives and technical requirements, turning complex problems into achievable solutions that help our clients make data-driven decisions.

You will join the project and its sprints, carrying out development activities while always applying data best practices and the technologies we implement. You will identify requirements and define scope, participating in sprint planning and engineering sessions with a consultant's mindset that adds extra value. You will collaborate proactively in workshops and meetings with the internal team and with the client. You will classify and estimate activities under agile methodologies (epics, features, technical/user stories) and follow up daily to keep the sprint on track. You will meet committed delivery dates and manage risks by communicating deviations in time.

To join Derevo as a Data Engineer, you need an advanced command of English (technical and business conversations, B2+ or C1) and technical skills in query languages and programming: T-SQL / Spark SQL, Python (PySpark), JSON / REST APIs, and Microsoft Fabric.

It is also important that you identify with soft and business skills such as close communication, working in squads, proactivity and collaboration, continuous learning, responsibility and organization, data consulting, requirements management, client-aligned strategy, and client presentations. Benefits at Derevo include support for your overall well-being, the opportunity to specialize in different areas and technologies, freedom to create, participation in cutting-edge technology projects, and a flexible, structured remote work scheme. If you meet most of the requirements and the profile interests you, do not hesitate to apply to become a derevian and develop your superpower. Our Talent team will contact you!

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Azure Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines and solutions using Databricks and modern Azure data engineering tools. Your expertise in Databricks and Azure services will be crucial in delivering high-quality, secure, and efficient data platforms.

Your key skills and expertise should include strong hands-on experience with Databricks, proficiency in Azure Data Factory (ADF) for orchestrating ETL workflows, excellent programming skills in Python with advanced PySpark skills, a solid understanding of Apache Spark internals and tuning, and expertise in SQL for writing complex queries and optimizing joins. You should also be familiar with data warehousing principles and modeling techniques and have knowledge of Azure data services like Data Lake Storage, Synapse Analytics, and SQL Database.

In this role, you will design and implement robust, scalable, and efficient data pipelines using Databricks and ADF; leverage Unity Catalog for securing and governing sensitive data; optimize Databricks jobs and queries for speed, cost, and scalability; build and maintain Delta Lake tables and data models for analytics and BI; collaborate with stakeholders to define data needs and deliver business value; automate workflows to improve reliability and data quality; troubleshoot and monitor pipelines for uptime and data accuracy; and mentor junior engineers in Databricks and Azure data engineering best practices.

The ideal candidate should have at least 5 years of experience in data engineering with a focus on Azure, a demonstrated ability to work with large-scale distributed systems, and strong communication and teamwork skills; certifications in Databricks and/or Azure data engineering would be a plus.
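
As a hedged illustration of the Databricks optimization work described above, here is a minimal sketch of routine Delta Lake maintenance (assumes a Databricks environment); the table name and Z-order column are hypothetical.

```python
# Illustrative sketch only: routine Delta Lake maintenance on Databricks. The
# table name and Z-order column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows that share customer_id, which speeds
# up selective reads against that column
spark.sql("OPTIMIZE curated.sales_orders ZORDER BY (customer_id)")

# Drop data files no longer referenced by the table (default 7-day retention)
spark.sql("VACUUM curated.sales_orders")
```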

Posted 2 weeks ago

Apply

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Build and optimize data models using Power BI Desktop and DAX for performance and scalability. Integrate Power BI reports into other platforms such as SharePoint, Teams, or custom apps. Perform data extraction, transformation, and loading (ETL) using Power Query and related tools.

Required candidate profile: 5+ years of experience in reporting and analytics on MS-based data warehouse solutions; strong in Agile methodologies and iterative development cycles; a minimum of 4 years of experience in Power BI; and 3+ years in the MS data ecosystem and Azure.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

6 - 16 Lacs

Hyderabad

Work from Office

Job Title: IT Engineer / Senior Engineer
Years of Experience: 2-6 years
Mandatory Skills: Power BI, Tableau
Key Responsibilities:
- Develop on data lake technologies including SQL, Synapse, Databricks, Power BI, and Fabric.
- Provide tool usage support to business stakeholders (10-20% of the role).
- Apply data modeling in the semantic layer and understand system structures.
- Support end-user training and resolve performance and tuning issues.
Required Qualifications & Experience:
- 3 years as an Analyst in large-scale projects.
- 5 years in back-end/full-stack development with Fabric / Power BI.
- Experience in SQL environments and Agile (SCRUM) methodologies.
- Familiarity with data architectural concepts and data modeling.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

18 - 22 Lacs

Noida, Pune

Work from Office

Collaborate with stakeholders to identify and gather reporting requirements, translating them into Power BI dashboards (in collaboration with Power BI developers). Monitor, troubleshoot, and optimize data pipelines and Azure services for performance and reliability. Follow DevOps best practices to implement CI/CD pipelines. Document pipeline architecture, infrastructure changes, and operational procedures.

Required Skills:
- Strong understanding of DevOps principles and CI/CD in Azure environments
- Proven hands-on experience with Azure Data Factory, Azure Synapse Analytics, Azure Function Apps, Azure Infrastructure Services (Networking, Storage, RBAC, etc.), and PowerShell scripting
- Experience in designing data workflows for reporting and analytics, especially integrating with Azure DevOps (ADO)

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Job Title: Azure Data Engineer

Job Summary: We are looking for a Data Engineer with hands-on experience in the Azure ecosystem. You will be responsible for designing, building, and maintaining both batch and real-time data pipelines using Azure cloud services.

Key Responsibilities:
- Develop and maintain data pipelines using Azure Synapse Analytics, Data Factory, and Databricks
- Work with real-time streaming tools like Azure Event Hub, Stream Analytics, and Apache Kafka (a rough sketch follows after this list)
- Design and manage data storage using ADLS Gen2, Blob Storage, Cosmos DB, and SQL Data Warehouse
- Use Spark (Python/Scala) for data processing in Databricks
- Implement data workflows with tools like Apache Airflow and dbt
- Automate processes using Azure Functions and Python
- Ensure data quality, performance, and security

Required Skills:
- Strong knowledge of the Azure Data Platform (Synapse, ADLS Gen2, Data Factory, Event Hub, Cosmos DB)
- Experience with Spark (in Databricks) and Python or Scala
- Familiarity with tools like Azure Purview, dbt, and Airflow
- Good understanding of real-time and batch processing architectures
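
As a rough illustration of the real-time side of this role, here is a minimal Spark Structured Streaming sketch reading from Kafka and landing to Delta (assumes the spark-sql-kafka connector and a Databricks-style environment); the broker, topic, and paths are hypothetical assumptions.

```python
# Illustrative sketch only: a Structured Streaming job reading from Kafka and
# landing micro-batches as Delta. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "telemetry")
          .load())

# Kafka values arrive as bytes; cast to string before downstream parsing
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Checkpointing makes the job restartable from where it left off
(parsed.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/telemetry")
       .start("/mnt/curated/telemetry"))
```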

Posted 2 weeks ago

Apply

4.0 - 6.0 years

5 - 13 Lacs

Pune

Hybrid

Job Description: This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.

Major Responsibilities:
- Adhere to standard coding principles and standards.
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity (a small quality-gate sketch follows below).
- Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data.
- Experience with DWH, data integration, cloud, design, and data modelling.
- Proficient in developing programs in Python and SQL.
- Experience with data warehouse dimensional data modeling.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Experience designing and developing Enterprise Data Warehouse solutions.
- Proficient in writing SQL queries and programming, including stored procedures and reverse-engineering existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Check in, check out, peer review, and merge PRs into the Git repo.
- Knowledge of deployment of packages and code migrations to stage and prod environments via CI/CD pipelines.

Skills:
- 3+ years of Python coding experience.
- 5+ years of SQL Server-based development of large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark.
- Experience with a cloud data warehouse such as Synapse, ADF, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills.
- Experience with cloud-based data architectures, messaging, and analytics.
- Cloud certification(s).

Add-ons: Any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.
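
Since the description emphasizes ensuring data quality and integrity during ingestion, here is a minimal, hedged Python sketch of a pre-load quality gate; the columns and checks are hypothetical assumptions.

```python
# Illustrative sketch only: lightweight data-quality gates of the kind a
# pipeline like this might run. Columns and checks are hypothetical.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    return failures

batch = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 5.5, 7.25]})
problems = validate(batch)
if problems:
    # In a real pipeline this would fail the ADF activity and alert the team
    raise ValueError(f"Quality gate failed: {problems}")
```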

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. Your role will involve integrating and extracting data from various Guidewire modules such as PolicyCenter, BillingCenter, and ClaimCenter, ensuring data quality, integrity, and consistency. You will build reusable components for data ingestion, transformation, and orchestration across Guidewire and Azure ecosystems, and optimize ADF pipelines for performance, scalability, and cost-efficiency while following industry-standard DevOps and CI/CD practices.

Collaboration with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions is essential. You will conduct thorough unit testing, data validation, and error handling across all data transformation steps. Your involvement will span end-to-end data lifecycle management, from requirement gathering through deployment and post-deployment support. Providing technical documentation and pipeline monitoring dashboards, and ensuring production readiness, will be crucial. You will also support data migration projects involving legacy platforms to Azure cloud environments, follow Agile/Scrum practices, and contribute to sprint planning, retrospectives, and stand-ups with strong ownership of deliverables.

Your mandatory skills should include 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services. Hands-on experience in building ADF pipelines that integrate with the Guidewire Insurance Suite is a must, along with proficiency in data transformation using SQL, Stored Procedures, and Data Flows, and experience working with Guidewire data models and the PC/Billing/Claim schema and business entities. A solid understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices is expected, as is experience integrating Guidewire systems with downstream reporting and analytics platforms. Excellent debugging skills will be necessary to resolve complex data transformation and pipeline performance issues.

Preferred skills include prior experience in the Insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter, and familiarity with tools like Power BI, Databricks, or Synapse Analytics.

This position requires 100% onsite presence at the Hyderabad office, with no remote or hybrid flexibility. Strong interpersonal and communication skills are essential, as you will work with cross-functional teams and client stakeholders. A self-starter mindset with a high sense of ownership is crucial, as you must thrive under pressure and tight deadlines.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced professional with 3-5 years of experience, you will work with a range of technical skills including Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your primary focus will be on Azure Data Factory, where you will utilize your expertise to handle complex data analysis tasks effectively.

In this role, you will demonstrate advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. It is essential that you possess a solid understanding of Azure Data Lake and Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD processes. Your responsibilities will also include mastering data management, data warehousing, and business intelligence architecture, and applying your experience in data modeling and database design while ensuring compliance with SQL Server best practices.

Effective communication is key in this role, as you will engage with stakeholders at various levels and contribute to the preparation of design documents, unit test plans, and code review reports. Experience in an Agile environment, specifically with Scrum, Lean, or Kanban methodologies, will be advantageous, as will familiarity with big data technologies such as the Spark framework, NoSQL databases, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS).

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark Pools
- Leveraging Databricks Notebooks and managing workflows with MLflow (see the sketch below)

Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on skills in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake
- Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics

Location: Remote - Bengaluru, Hyderabad, Delhi/NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
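
As a loose illustration of managing model workflows with MLflow, as mentioned above, here is a minimal tracking sketch; the experiment name, model, and parameters are hypothetical assumptions.

```python
# Illustrative sketch only: tracking a model run with MLflow. The experiment
# name, model choice, and parameters are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

# Synthetic stand-in for a feature table
X, y = make_regression(n_samples=500, n_features=5, random_state=0)

mlflow.set_experiment("demand-forecast")
with mlflow.start_run():
    model = Ridge(alpha=0.5).fit(X, y)
    # Log the knobs and the outcome so runs are comparable later
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("r2", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```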

Posted 3 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a junior Azure Data Engineer at dotSolved, you will be responsible for designing, implementing, and managing scalable data solutions on Azure. Your primary focus will be on building and maintaining data pipelines, integrating data from various sources, and ensuring data quality and security. Proficiency in Azure services such as Data Factory, Databricks, and Synapse Analytics is essential as you optimize data workflows for analytics and reporting.

Your responsibilities will include designing, developing, and maintaining data pipelines and workflows using Azure services; implementing data integration, transformation, and storage solutions to support analytics and reporting; ensuring data quality, security, and compliance with organizational and regulatory standards; optimizing data solutions for performance, scalability, and cost efficiency; and collaborating with cross-functional teams to gather requirements and deliver data-driven insights.

This position is based in Chennai and Bangalore, offering you the opportunity to work in a dynamic and innovative environment where you can contribute to the digital transformation journey of enterprises across various industries.

Posted 3 weeks ago

Apply
Page 1 of 3
