
106 Synapse Analytics Jobs - Page 3

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 7.0 years

18 - 22 Lacs

Noida, Pune

Work from Office

Collaborate with stakeholders to identify and gather reporting requirements, translating them into Power BI dashboards (in collaboration with Power BI developers). Monitor, troubleshoot, and optimize data pipelines and Azure services for performance and reliability. Follow DevOps best practices to implement CI/CD pipelines. Document pipeline architecture, infrastructure changes, and operational procedures.

Required Skills:
- Strong understanding of DevOps principles and CI/CD in Azure environments
- Proven hands-on experience with Azure Data Factory, Azure Synapse Analytics, Azure Function Apps, Azure infrastructure services (networking, storage, RBAC, etc.), and PowerShell scripting
- Experience designing data workflows for reporting and analytics, especially integrating with Azure DevOps (ADO); a hedged orchestration sketch follows this listing
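For illustration only, a minimal sketch of the pipeline monitoring work this listing describes, using the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, and pipeline names are hypothetical placeholders, not part of the posting.

```python
# Trigger an Azure Data Factory pipeline run and poll it to completion.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "rg-analytics"        # hypothetical
FACTORY_NAME = "adf-reporting"         # hypothetical
PIPELINE_NAME = "pl_daily_ingest"      # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, then poll its status until it leaves the active states.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```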

Posted 2 months ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Job Title: Azure Data Engineer

Job Summary: We are looking for a Data Engineer with hands-on experience in the Azure ecosystem. You will be responsible for designing, building, and maintaining both batch and real-time data pipelines using Azure cloud services.

Key Responsibilities:
- Develop and maintain data pipelines using Azure Synapse Analytics, Data Factory, and Databricks
- Work with real-time streaming tools such as Azure Event Hub, Azure Stream Analytics, and Apache Kafka
- Design and manage data storage using ADLS Gen2, Blob Storage, Cosmos DB, and SQL Data Warehouse
- Use Spark (Python/Scala) for data processing in Databricks (a hedged batch sketch follows this listing)
- Implement data workflows with tools like Apache Airflow and dbt
- Automate processes using Azure Functions and Python
- Ensure data quality, performance, and security

Required Skills:
- Strong knowledge of the Azure data platform (Synapse, ADLS Gen2, Data Factory, Event Hub, Cosmos DB)
- Experience with Spark (in Databricks) and Python or Scala
- Familiarity with tools like Azure Purview, dbt, and Airflow
- Good understanding of real-time and batch processing architectures
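As a hedged illustration of the batch-pipeline work above: a minimal PySpark job reading raw events from ADLS Gen2 and writing a Delta aggregate. Paths and column names are hypothetical; it assumes a Databricks cluster where the `spark` session is provided and storage access is already configured.

```python
from pyspark.sql import functions as F

# Read raw JSON events from a hypothetical ADLS Gen2 container.
raw = spark.read.format("json").load(
    "abfss://raw@mydatalake.dfs.core.windows.net/events/"
)

# Aggregate to daily counts per event type.
daily_counts = (
    raw.withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write curated Delta output, partitioned for downstream reads.
(
    daily_counts.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("abfss://curated@mydatalake.dfs.core.windows.net/daily_event_counts/")
)
```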

Posted 2 months ago

Apply

4.0 - 6.0 years

5 - 13 Lacs

Pune

Hybrid

Job Description: This position is for a Cloud Data Engineer with a background in Python, dbt, SQL, and data warehousing for enterprise-level systems.

Major Responsibilities:
- Adhere to standard coding principles and standards.
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
- Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data (a hedged incremental-load sketch follows this listing).
- Apply experience of DWH, data integration, cloud, design, and data modelling.
- Develop programs in Python and SQL; apply dimensional data modeling for the data warehouse.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Design and develop Enterprise Data Warehouse solutions.
- Write SQL queries and programs, including stored procedures, and reverse-engineer existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Check in, check out, peer review, and merge PRs in the Git repo.
- Deploy packages and migrate code to stage and prod environments via CI/CD pipelines.

Skills:
- 3+ years of Python coding experience.
- 5+ years of SQL Server-based development of large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark.
- Experience with a cloud data warehouse such as Synapse, ADF, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide cloud data platform migration, with strong architectural and design skills.
- Experience with cloud-based data architectures, messaging, and analytics.
- Cloud certification(s).

Add-ons: Any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.
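A hedged sketch of one common pipeline step named above, an incremental upsert into a warehouse table using PySpark and Delta Lake on Databricks. The paths, key, and table layout are hypothetical; it assumes a cluster-provided `spark` session.

```python
from delta.tables import DeltaTable

# Hypothetical staging extract produced by an earlier ADF copy activity.
updates = spark.read.parquet("/mnt/staging/customers/")

# Existing Delta target table in the warehouse layer.
target = DeltaTable.forPath(spark, "/mnt/warehouse/dim_customer/")

# MERGE keeps the load idempotent: re-running the same batch is safe.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh changed attributes
    .whenNotMatchedInsertAll()   # insert brand-new customers
    .execute()
)
```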

Posted 2 months ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. Your role will involve integrating and extracting data from various Guidewire modules such as PolicyCenter, BillingCenter, and ClaimCenter, ensuring data quality, integrity, and consistency. You will build reusable components for data ingestion, transformation, and orchestration across the Guidewire and Azure ecosystems, and optimize ADF pipelines for performance, scalability, and cost-efficiency while following industry-standard DevOps and CI/CD practices.

Collaboration with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions is essential. You will conduct thorough unit testing, data validation, and error handling across all data transformation steps (a hedged validation sketch follows this listing). Your involvement will span end-to-end data lifecycle management, from requirement gathering through deployment and post-deployment support. Providing technical documentation and pipeline monitoring dashboards, and ensuring production readiness, will be crucial. You will also support data migration projects moving legacy platforms to Azure cloud environments, and follow Agile/Scrum practices, contributing to sprint planning, retrospectives, and stand-ups with strong ownership of deliverables.

Mandatory skills include 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services. Hands-on experience building ADF pipelines that integrate with the Guidewire Insurance Suite is a must, along with proficiency in data transformation using SQL, stored procedures, and Data Flows, and experience working with Guidewire data models and understanding the PC/Billing/Claim schema and business entities. A solid understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices is expected, as is experience integrating Guidewire systems with downstream reporting and analytics platforms. Excellent debugging skills are necessary to resolve complex data transformation and pipeline performance issues.

Preferred skills include prior experience in the insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter, and familiarity with tools like Power BI, Databricks, or Synapse Analytics.

Work mode: this position requires 100% onsite presence at the Hyderabad office, with no remote or hybrid flexibility. Strong interpersonal and communication skills are essential, as you will work with cross-functional teams and client stakeholders. A self-starter mindset with a high sense of ownership is crucial, and you must thrive under pressure and tight deadlines.
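Purely illustrative, a minimal sketch of the kind of post-load data validation this role describes: reconciling row counts between a staged Guidewire extract and the transformed target in Azure SQL via pyodbc. The connection string, schemas, and table names are hypothetical.

```python
import pyodbc

# Hypothetical Azure SQL connection using the ODBC Driver 18 AAD flow.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver.database.windows.net;Database=claims_dw;"
    "Authentication=ActiveDirectoryInteractive;"
)

def row_count(cursor, table: str) -> int:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

with pyodbc.connect(conn_str) as conn:
    cur = conn.cursor()
    staged = row_count(cur, "stg.ClaimCenter_Claims")  # landing copy of the extract
    loaded = row_count(cur, "dw.FactClaims")           # transformed target
    if staged != loaded:
        raise ValueError(f"Count mismatch: staged {staged} rows, loaded {loaded}")
```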

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced professional with 3-5 years of experience, you will work with a range of technical skills including Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your primary focus will be Azure Data Factory, where you will use your expertise to handle complex data analysis tasks effectively.

In this role, you will demonstrate advanced knowledge of Azure SQL DB and Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. It is essential that you possess a solid understanding of Azure Data Lake and Azure services such as Analysis Services, SQL databases, Azure DevOps, and CI/CD processes.

Furthermore, your responsibilities will include data management, data warehousing, and business intelligence architecture. You will apply your experience in data modeling and database design, ensuring compliance with SQL Server best practices. Effective communication is key, as you will engage with stakeholders at various levels, and you will contribute to the preparation of design documents, unit test plans, and code review reports. Experience in an Agile environment, specifically with Scrum, Lean, or Kanban methodologies, will be advantageous. Familiarity with Big Data technologies such as the Spark framework, NoSQL databases, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) will also be beneficial.

Posted 2 months ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark pools
- Leveraging Databricks notebooks and managing workflows with MLflow (a hedged tracking sketch follows this listing)

Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on skills in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake
- Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics

Location: Remote - Bengaluru, Hyderabad, Delhi/NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
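As a hedged sketch of the MLlib-plus-MLflow workflow named above: training a small Spark ML pipeline and logging it with MLflow tracking. The feature path, run name, and parameters are hypothetical; it assumes a Databricks-style `spark` session.

```python
import mlflow
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Hypothetical pre-built feature table with `features` and `label` columns.
train = spark.read.format("delta").load("/mnt/features/train/")

with mlflow.start_run(run_name="churn-baseline"):
    lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=50)
    model = Pipeline(stages=[lr]).fit(train)

    # Evaluate on the training set just for illustration.
    auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(train))
    mlflow.log_param("max_iter", 50)
    mlflow.log_metric("train_auc", auc)
    mlflow.spark.log_model(model, "model")  # persist the pipeline for later serving
```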

Posted 2 months ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Junior Azure Data Engineer at dotSolved, you will be responsible for designing, implementing, and managing scalable data solutions on Azure. Your primary focus will be on building and maintaining data pipelines, integrating data from various sources, and ensuring data quality and security. Proficiency in Azure services such as Data Factory, Databricks, and Synapse Analytics is essential as you optimize data workflows for analytics and reporting. Collaboration with stakeholders is a key aspect of this role, ensuring alignment with business goals and performance standards.

Your responsibilities will include designing, developing, and maintaining data pipelines and workflows using Azure services; implementing data integration, transformation, and storage solutions to support analytics and reporting; ensuring data quality, security, and compliance with organizational and regulatory standards; optimizing data solutions for performance, scalability, and cost efficiency; and collaborating with cross-functional teams to gather requirements and deliver data-driven insights.

This position is based in Chennai and Bangalore, offering the opportunity to work in a dynamic and innovative environment where you can contribute to the digital transformation journey of enterprises across various industries.

Posted 2 months ago

Apply

8.0 - 13.0 years

24 - 36 Lacs

Bengaluru

Work from Office

- Azure Data Factory (ADF), Synapse Analytics
- Azure SQL DB, Azure Data Lake Gen2
- ETL/ELT pipeline development
- SQL, data modeling (star/snowflake schema); a hedged fact-load sketch follows this listing
- CI/CD deployment experience
- Power BI (good to have)
- Microsoft certification (preferred)
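For illustration of the star-schema modeling listed above: a minimal PySpark fact-table load that resolves surrogate keys against a dimension. All table and column names are hypothetical; it assumes an active `spark` session.

```python
# Hypothetical staging and dimension tables.
orders = spark.table("staging.orders")
dim_customer = spark.table("dw.dim_customer")  # holds surrogate key customer_sk

# Left join keeps fact rows even when the dimension row is missing,
# so late-arriving dimensions can be reconciled later.
fact_orders = (
    orders.join(
        dim_customer.select("customer_id", "customer_sk"),
        on="customer_id",
        how="left",
    )
    .selectExpr("order_id", "customer_sk", "order_date", "order_amount")
)

fact_orders.write.format("delta").mode("append").saveAsTable("dw.fact_orders")
```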

Posted 2 months ago

Apply

6.0 - 11.0 years

5 - 15 Lacs

Tirupati

Work from Office

About the Role
We are seeking an experienced and driven Technical Project Manager / Technical Delivery Manager to lead complex, high-impact data analytics and data science projects for global clients. This role demands a unique blend of project management expertise, technical depth in cloud and data technologies, and the ability to collaborate across cross-functional teams. You will be responsible for ensuring the successful delivery of data platforms, data products, and enterprise analytics solutions that drive business value.

Key Responsibilities

Project & Delivery Management
- Lead the full project lifecycle for enterprise-scale data platforms, including requirement gathering, development, testing, deployment, and post-production support.
- Own the delivery of Data Warehousing and Data Lakehouse solutions on cloud platforms (Azure, AWS, or GCP).
- Prepare and maintain detailed project plans (Microsoft Project Plan) and align them with the Statement of Work (SOW) and client expectations.
- Utilize hybrid project methodologies (Agile + Waterfall) for managing scope, budget, and timelines.
- Monitor key project KPIs (e.g., SLA, MTTR, MTTA, MTBF) and ensure adherence using tools like ServiceNow; a worked example of the reliability KPIs follows this listing.

Data Platform & Architecture Oversight
- Collaborate with data engineers and architects to guide the implementation of scalable Data Warehouses (e.g., Redshift, Synapse) and Data Lakehouse architectures (e.g., Databricks, Delta Lake).
- Ensure data platform solutions meet performance, security, and governance standards.
- Understand and help manage data integration pipelines, ETL/ELT processes, and BI/reporting requirements.

Client Engagement & Stakeholder Management
- Serve as the primary liaison for US/UK clients; manage regular status updates, escalation paths, and expectations across stakeholders.
- Conduct WSRs, MSRs, and QBRs with clients and internal teams to drive transparency and performance reviews.
- Facilitate team meetings, highlight risks or blockers, and ensure consistent stakeholder alignment.

Technical Leadership & Troubleshooting
- Provide hands-on support and guidance in data infrastructure troubleshooting using tools like Splunk, AppDynamics, and Azure Monitor.
- Lead incident, problem, and change management processes with data platform operations in mind.
- Identify automation opportunities and propose technical process improvements across data pipelines and workflows.

Governance, Documentation & Compliance
- Create and maintain SOPs, runbooks, implementation documents, and architecture diagrams.
- Manage project compliance related to data privacy, security, and internal/external audits.
- Initiate and track Change Requests (CRs) and look for revenue expansion opportunities with clients.

Continuous Improvement & Innovation
- Participate in and lead at least three internal process optimization or innovation initiatives annually.
- Work with engineering, analytics, and DevOps teams to improve CI/CD pipelines and data delivery workflows.
- Monitor production environments to reduce deployment issues and improve time-to-insight.

Must-Have Qualifications
- 10+ years of experience in technical project delivery, with a strong focus on data analytics, BI, and cloud data platforms.
- Strong hands-on experience with SQL and data warehouse technologies like Snowflake, Synapse, Redshift, and BigQuery.
- Proven experience delivering Data Warehouse and Data Lakehouse solutions.
- Familiarity with tools such as Redshift, Synapse, BigQuery, Databricks, and Delta Lake.
- Strong cloud knowledge with Azure, AWS, or GCP.
- Proficiency in project management tools like Microsoft Project Plan (MPP), JIRA, Confluence, and ServiceNow.
- Expertise in Agile project methodologies.
- Excellent communication skills, both verbal and written, with no MTI or grammatical errors.
- Hands-on experience working with global delivery models (onshore/offshore).

Preferred Qualifications
- PMP or Scrum Master certification.
- Understanding of ITIL processes and DataOps practices.
- Experience managing end-to-end cloud data transformation projects.
- Experience in project estimation, proposal writing, and RFP handling.

Desired Skills & Competencies
- Deep understanding of SDLC, data architecture, and data governance principles.
- Strong leadership, decision-making, and conflict-resolution abilities.
- High attention to detail and accuracy in documentation and reporting.
- Ability to handle multiple concurrent projects in a fast-paced, data-driven environment.
- A passion for data-driven innovation and business impact.

Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
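A worked example (with assumed data) of two reliability KPIs named above: MTTR is the mean time to restore service after a failure, and MTBF is the mean time between failures. The incident timestamps below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (start of outage, end of outage).
incidents = [
    (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 10, 30)),
    (datetime(2024, 1, 15, 14, 0), datetime(2024, 1, 15, 14, 45)),
    (datetime(2024, 1, 28, 22, 0), datetime(2024, 1, 29, 0, 0)),
]

# MTTR: average outage duration.
repair_times = [end - start for start, end in incidents]
mttr = sum(repair_times, timedelta()) / len(repair_times)

# MTBF: average uptime between the end of one outage and the next failure.
gaps = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr}, MTBF: {mtbf}")
```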

Posted 2 months ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Bengaluru

Remote

We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects.

- 6+ years of experience working on SQL/NoSQL/data warehouse platforms, on-premise and in the cloud (AWS, Azure & GCP)
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects
- Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities
- Proficiency in SQL and data manipulation techniques
- Experience with data loading and unloading tools and techniques
- Expertise in data access management and database reliability and scalability; administer, configure, and optimize database resources and services across the organization
- Ensure high availability, replication, and failover strategies
- Implement serverless database architectures for cost-effective, scalable storage

Key Responsibilities
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse)
- Plan and execute on-prem database/Analysis Services/Reporting Services/Integration Services migrations to AWS/Azure/GCP
- Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations (a hedged health-check sketch follows this listing)
- Provide technical guidance and mentoring to junior DBAs and data engineers
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools
- Monitor and optimize the performance of virtual warehouses, queries, and overall system performance
- Optimize database performance through query tuning, indexing, and configuration
- Manage replication, backups, and disaster recovery for high availability
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure)
- Provide on-call support for critical database operations and incidents
- Provide Level 3 and 4 technical support, troubleshooting complex issues
- Participate in cross-functional teams for database design and optimization
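For illustration, a minimal sketch of one health check this role would automate: measuring PostgreSQL replication lag from the primary via psycopg2. The DSN is a placeholder, and the query assumes PostgreSQL 10 or newer with at least one streaming replica.

```python
import psycopg2

# Hypothetical DSN for the primary; credentials belong in a secret store.
dsn = "host=pg-primary.example.internal dbname=postgres user=monitor password=<secret>"

query = """
    SELECT client_addr,
           state,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
    FROM pg_stat_replication;
"""

with psycopg2.connect(dsn) as conn:
    with conn.cursor() as cur:
        cur.execute(query)
        for client_addr, state, lag_bytes in cur.fetchall():
            # Alerting thresholds would go here; we just report.
            print(f"replica={client_addr} state={state} lag={lag_bytes} bytes")
```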

Posted 2 months ago

Apply

8.0 - 13.0 years

8 - 17 Lacs

Chennai

Remote

- MS Fabric (Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics) and integration with Power BI, Synapse, and Azure Data Factory
- DevOps knowledge
- Team-leading experience

Posted 2 months ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 months ago

Apply

5.0 - 10.0 years

16 - 27 Lacs

Bengaluru

Hybrid

- 6-7 years of data and analytics experience, with a minimum of 3 years in Azure Cloud
- Excellent communication and interpersonal skills
- Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hub, etc.
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler (a minimal Airflow sketch follows this listing)
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala
- Good experience designing and delivering data analytics solutions using Azure Cloud native services
- Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design
- Documentation of solutions (e.g., data models, configurations, and setup)
- Well versed in Waterfall, Agile, Scrum, and similar project delivery methodologies
- Experienced in internal as well as external stakeholder management
- Experience in MDM/DQM/data governance technologies like Collibra, Ataccama, Alation, or Reltio is an added advantage
- Azure Data Engineer or Azure Solution Architect certification is an added advantage

Nice-to-have skills: working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.)
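A hedged sketch of the job-scheduling experience listed above: a two-task Airflow DAG chaining ingest and transform steps. Task bodies, the DAG id, and the schedule are placeholders; it assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source into the data lake")  # placeholder body

def transform():
    print("run transformations / load the warehouse")  # placeholder body

with DAG(
    dag_id="daily_analytics_load",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Transform only runs after a successful ingest.
    ingest_task >> transform_task
```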

Posted 2 months ago

Apply

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines, which must be resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage); a hedged upload sketch follows this listing.
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
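For illustration, a minimal sketch of organizing pipeline output in an Azure storage account using the azure-storage-blob SDK. The account URL, container, blob path, and local file are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical storage account; authentication via Azure AD.
service = BlobServiceClient(
    account_url="https://mydatalake.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Deterministic output path keeps pipeline re-runs idempotent.
blob = service.get_blob_client(container="curated", blob="reports/2024/kpis.parquet")

with open("kpis.parquet", "rb") as fh:
    blob.upload_blob(fh, overwrite=True)  # overwrite replaces last run's output
```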

Posted 2 months ago

Apply

4.0 - 6.0 years

12 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

- Strong hands-on experience with Azure Databricks, PySpark, and ADF
- Advanced expertise in Azure SQL DB, Synapse Analytics, and Azure Data Lake
- Familiar with Azure Analysis Services, Azure SQL, and CI/CD (Azure DevOps)
- Proficient in data modeling, SQL Server best practices, and BI/data warehousing architecture
- Agile methodologies: ADO, Scrum, Kanban, Lean
- Collaborate with business/technical teams to design scalable data solutions
- Architect and implement data pipelines and models
- Provide technical leadership, code reviews, and best-practice guidance
- Support the end-to-end lifecycle: estimation, design, development, deployment
- Risk/issue management and solution recommendation

Location: Remote, Delhi/NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 2 months ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Kochi

Work from Office

About Neudesic
Passion for technology drives us, but it's innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart from the rest is an amazing collection of people who live and lead with our core values. We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team, and conduct themselves with Integrity. If these attributes mean something to you, we'd like to hear from you.

We are currently looking for Azure Data Engineers to become members of Neudesic's Data & AI team.

Must-Have Skills:
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet
- Experience creating ADF pipelines to source and process data sets
- Experience creating Databricks notebooks to cleanse, transform, and enrich data sets
- Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and various data storage options on the cloud
- Development experience in the orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations/integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling

Good-to-Have Skills:
- Familiarity with DevOps, Agile Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical/problem-solving skills
- Strong communication skills
- Good experience with unit, integration, and UAT support
- Able to design and code reusable components and functions
- Able to review design and code, and provide review comments with justification
- Zeal to learn new tools/technologies and adopt them
- Power BI and Data Catalog experience

Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.

Neudesic is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen

Posted 2 months ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune, Chennai

Hybrid

Ciklum is looking for a Senior Microsoft Fabric Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts, and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
We are seeking a highly skilled and experienced Senior Microsoft Fabric Data Engineer to design, develop, and optimize advanced data solutions leveraging the Microsoft Fabric platform. You will be responsible for building robust, scalable data pipelines, integrating diverse and large-scale data sources, and enabling sophisticated analytics and business intelligence capabilities. This role requires extensive hands-on expertise with Microsoft Fabric, a deep understanding of Azure data services, and mastery of modern data engineering practices.

Responsibilities:
- Lead the design and implementation of highly scalable and efficient data pipelines and data warehouses using Microsoft Fabric and a comprehensive suite of Azure services (Data Factory, Synapse Analytics, Azure SQL, Data Lake)
- Develop, optimize, and oversee complex ETL/ELT processes for data ingestion, transformation, and loading from a multitude of disparate sources, ensuring high performance with large-scale datasets
- Ensure the highest level of data integrity, quality, and governance throughout the entire Fabric environment, establishing best practices for data management (a hedged quality-gate sketch follows this listing)
- Collaborate extensively with stakeholders, translating intricate business requirements into actionable, resilient, and optimized data solutions
- Proactively troubleshoot, monitor, and fine-tune data pipelines and workflows for peak performance and efficiency, particularly in handling massive datasets
- Architect and manage workspace architecture, implement robust user access controls, and enforce data security in strict compliance with privacy regulations
- Automate platform tasks and infrastructure management using advanced scripting languages (Python, PowerShell) and Infrastructure as Code (Terraform, Ansible) principles
- Document comprehensive technical solutions, enforce code modularity, and champion best practices in version control and documentation across the team
- Stay at the forefront of Microsoft Fabric updates and new features, and contribute significantly to continuous improvement initiatives and the adoption of cutting-edge technologies

Requirements:
- Minimum of 5+ years of progressive experience in data engineering, with at least 3 years of hands-on, in-depth work on Microsoft Fabric and a wide array of Azure data services
- Exceptional proficiency in SQL, Python, and advanced data transformation tools (e.g., Spark, PySpark notebooks)
- Mastery of data warehousing concepts, dimensional modeling, and advanced ETL best practices
- Extensive experience with complex hybrid cloud and on-premises data integration scenarios
- Profound understanding of data governance, security protocols, and compliance standards
- Excellent problem-solving, analytical, and communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical audiences

Desirable:
- Experience with Power BI, Azure Active Directory, and managing very large-scale data infrastructure
- Strong familiarity with Infrastructure as Code and advanced automation tools
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent extensive experience)

What's in it for you?
- Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), a Udemy licence, language courses, and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Flexibility: hybrid work mode in Chennai or Pune
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally, and fulfil your potential
- Global impact: work on large-scale projects that redefine industries, with international and fast-growing clients
- Welcoming environment: feel empowered by a friendly team, an open-door policy, an informal atmosphere within the company, and regular team-building events

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum! Experiences of tomorrow, engineered together. Interested already? We would love to get to know you! Submit your application. Can't wait to see you at Ciklum.
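A hedged sketch of the data-quality gate referenced in the responsibilities above, as it might run in a Microsoft Fabric (or any Spark) notebook before a Lakehouse write. The Lakehouse path, table, and column names are hypothetical; it assumes a notebook-provided `spark` session.

```python
from pyspark.sql import functions as F

# Hypothetical staged Delta table in the Lakehouse.
df = spark.read.format("delta").load("Tables/stg_orders")

# Two simple integrity checks on the business key.
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = df.groupBy("order_id").count().filter("count > 1").count()

# Fail the pipeline rather than publish bad data downstream.
if null_keys or dupes:
    raise ValueError(
        f"Quality gate failed: {null_keys} null keys, {dupes} duplicated keys"
    )

df.write.format("delta").mode("overwrite").saveAsTable("orders_clean")
```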

Posted 2 months ago

Apply

8.0 - 13.0 years

14 - 22 Lacs

Mumbai

Work from Office

- Designed and built ETL pipelines to support enterprise data initiatives.
- Developed and implemented databases and data collection systems.
- Acquired data from primary and secondary sources; maintained reliable data systems.
- Extensive experience in data ingestion, modelling, architecture design, and consulting for complex, cross-functional projects involving multiple stakeholders.
- Identified relevant data sources aligned with specific business requirements.
- Conducted data quality assessments and built validation processes to ensure integrity.
- Generated actionable insights from data sets, identifying key trends and patterns.
- Created detailed executive-level reports and dashboards for project teams.
- Acted as a single point of contact for all data-related issues across business units.
- Collaborated with vendors and functional, operational, and technical teams to address and support data needs.
- Ensured data consistency and accuracy from all providers and external sources.
- Supported the development of data platforms, maintaining focus on data quality and completeness.
- Created and maintained comprehensive technical documentation.
- Built for scalability and high performance in all development efforts.
- Troubleshot data infrastructure and data processing issues effectively.
- Supported data roadmap initiatives, recommending optimal technologies.
- Led the automation of multi-step, repetitive tasks, improving efficiency and reducing manual errors.

Posted 2 months ago

Apply

5.0 - 10.0 years

16 - 27 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

- 6-7 years of data and analytics experience, with a minimum of 3 years in Azure Cloud
- Excellent communication and interpersonal skills
- Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hub, etc.
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala
- Good experience designing and delivering data analytics solutions using Azure Cloud native services
- Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design
- Documentation of solutions (e.g., data models, configurations, and setup)
- Well versed in Waterfall, Agile, Scrum, and similar project delivery methodologies
- Experienced in internal as well as external stakeholder management
- Experience in MDM/DQM/data governance technologies like Collibra, Ataccama, Alation, or Reltio is an added advantage
- Azure Data Engineer or Azure Solution Architect certification is an added advantage

Nice-to-have skills: working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.)

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad, Telangana, India

On-site

An Ideal Candidate: The ideal candidate should have 3-8 years of hands-on experience implementing Azure cloud data warehouses, Azure and NoSQL databases, and hybrid data scenarios; experience developing with Azure Data Factory (covering Azure Functions, Logic Apps, Triggers, IR), Databricks (PySpark, Scala), Stream Analytics, Event Hub, and HDInsight components (a hedged Event Hub sketch follows this listing); experience working on data lake and DW solutions on Azure; experience managing Azure DevOps pipelines (CI/CD); and experience managing source data access security using Vault, configuring authentication and authorization, and enforcing data policies and standards.

Key Competencies:
1. Design, develop, and deploy solutions using different tools, design principles, and conventions.
2. Configure robotics processes and objects using core workflow principles in an efficient way; ensure they are easily maintainable and easy to understand.
3. Understand existing processes and facilitate change requirements as part of a structured change control process.
4. Solve day-to-day issues arising while running robotics processes and provide timely resolutions.
5. Maintain proper documentation for the solutions, test procedures, and scenarios during the UAT and production phases.
6. Coordinate with process owners and the business to understand the as-is process and design the automation process flow.
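An illustrative sketch of consuming Azure Event Hub messages with the azure-eventhub Python SDK, per the streaming experience listed above. The connection string and hub name are placeholders; a production consumer would also pass a checkpoint store.

```python
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Process one event; here we just print its body.
    print(f"partition {partition_context.partition_id}: {event.body_as_str()}")
    # Without a checkpoint store this is a no-op, but shows the pattern.
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hub-connection-string>",  # placeholder
    consumer_group="$Default",
    eventhub_name="telemetry",                 # hypothetical hub
)

with client:
    # "-1" starts from the beginning of the retained stream.
    client.receive(on_event=on_event, starting_position="-1")
```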

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 15 Lacs

Pune, Chennai

Hybrid

Company Description: Volante is on the leading edge of financial services technology. If you are interested in being on an innovative, fast-moving team that leverages the very best in cloud technology, our team may be right for you. By joining the product team at Volante, you will have an opportunity to shape the future of payments technology, with a focus on payment intelligence. We are a financial technology business that provides a market-leading, cloud-native payments processing platform to banks and financial institutions globally.

Education Criteria:
- B.E, M.Sc, or M.E/MS in Computer Science or a similar major
- Relevant certification courses from a reputed organization
- 3+ years of experience as a Data Engineer

Responsibilities:
- Design and develop scalable solutions and payment analytics, unlocking operational and business insight
- Own data modeling, building ETL pipelines, and enabling data-driven metrics
- Build and optimize data models for our application needs
- Design and develop data pipelines and workflows that integrate data sources (structured and unstructured data) across the payment landscape
- Assess the customer's data infrastructure landscape (payment ancillary systems including Sanctions, Fraud, AML) across cloud environments like AWS and Azure as well as on-prem, for deployment design
- Lead the enterprise application data architecture design, framework, and services, plus identify and enable the services for the SaaS environment in Azure and AWS
- Implement customizations and data processing required to transform customer datasets for processing in our analytics framework/BI models
- Monitor data processing and machine learning workflows to ensure customer data is successfully processed by our BI models, debugging and resolving any issues faced along the way
- Optimize queries, warehouse, and data lake costs
- Review and provide feedback on the Data Architecture Design Document/HLD for our SaaS application
- Collaborate across teams to successfully integrate all aspects of the Volante PaaS solution
- Mentor the development team

Skills:
- 3+ years of data engineering experience: data collection, preprocessing, ETL processes, and analytics
- Proficiency in data engineering architecture, metadata management, analytics, reporting, and database administration
- Strong in SQL/NoSQL, Python, JSON, data warehousing/data lakes, orchestration, and analytical tools
- ETL or pipeline design and implementation for large data volumes
- Experience with data technologies and frameworks like Databricks, Synapse, Kafka, Spark, and Elasticsearch (a hedged Kafka consumer sketch follows this listing)
- Knowledge of SCD, CDC, and core data warehousing to develop cost-effective, secure data collection, storage, and distribution for a SaaS application
- Experience in application deployment in AWS or Azure with containers and Kubernetes
- Strong problem-solving skills and a passion for building data at scale

Engineering Skills (Desirable):
- Knowledge of data visualization tools like Tableau
- ETL orchestration tools like Airflow and visualization tools like Grafana
- Prior experience in the banking or payments domain

Location: India (Pune or Chennai)
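Purely as illustration of the Kafka experience named above: a minimal payment-event consumer using the kafka-python package. The topic, broker address, group id, and message shape are all hypothetical.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",                                         # hypothetical topic
    bootstrap_servers=["kafka:9092"],                         # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="payments-analytics",                            # hypothetical group
)

for message in consumer:
    event = message.value
    # Hand off to the analytics pipeline; here we just inspect the payload.
    print(f"payment {event.get('id')} amount={event.get('amount')}")
```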

Posted 3 months ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Pune, Maharashtra, India

On-site

As part of a critical healthcare IT transformation, Xpress Wellness is migrating its data infrastructure from Google Cloud Platform (GCP) to Microsoft Azure and building an end-to-end ETL and reporting system to deliver key KPIs via Power BI. We are seeking a hands-on Technical Lead - Azure Data Engineering to lead the data engineering workstream of the project. The ideal candidate will have deep expertise in Azure cloud data services, strong experience in data migration from GCP to Azure, and a solid understanding of data governance, compliance, and Azure storage architectures.

Key Responsibilities:
- Lead the technical design and implementation of data pipelines and storage on Azure.
- Drive the GCP-to-Azure data migration strategy and execution (a hedged copy sketch follows this listing).
- Oversee the development of scalable ETL/ELT processes using Azure Data Factory, Synapse, or Fabric.
- Ensure alignment with data governance and healthcare compliance standards.
- Collaborate with architects, data engineers, and Power BI developers to enable accurate KPI delivery.
- Provide technical mentorship to junior engineers and ensure best practices are followed.
- Act as the primary technical point of contact for data engineering-related discussions.

Key Skills & Qualifications:
- 7-12 years of experience in data engineering, with 3-6 years in Azure Cloud.
- Strong experience with Azure Data Factory, Azure Data Lake, Microsoft Fabric, ETL, Synapse Analytics, and Azure Storage services.
- Hands-on experience in data migration projects from GCP to Azure.
- Knowledge of data governance, Microsoft Purview, privacy, and compliance (HIPAA preferred).
- Excellent communication and stakeholder management skills.
- Relevant Microsoft certifications are a plus.

Our Commitment to Diversity & Inclusion. Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group health insurance covering a family of 4
- Term insurance and accident insurance
- Paid holidays and earned leaves
- Paid parental leave
- Learning & career development
- Employee wellness
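A hedged sketch of one step in the GCP-to-Azure migration described above: copying objects from a GCS bucket into an Azure Blob/ADLS container. The bucket, container, prefix, and connection string are placeholders; it requires the google-cloud-storage and azure-storage-blob packages and ambient GCP credentials.

```python
from azure.storage.blob import ContainerClient
from google.cloud import storage

# Source: GCS, authenticated via GOOGLE_APPLICATION_CREDENTIALS.
gcs = storage.Client()
bucket = gcs.bucket("xpress-source-data")  # hypothetical bucket

# Destination: an Azure container for migrated objects.
container = ContainerClient.from_connection_string(
    "<azure-storage-connection-string>",   # placeholder
    container_name="migrated",
)

# Stream each object across; a real migration would parallelize and
# verify checksums, this just shows the shape of the copy loop.
for blob in bucket.list_blobs(prefix="warehouse/"):
    data = blob.download_as_bytes()
    container.upload_blob(name=blob.name, data=data, overwrite=True)
    print(f"copied {blob.name} ({len(data)} bytes)")
```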

Posted 3 months ago

Apply