8.0 - 13.0 years
24 - 36 Lacs
Bengaluru
Work from Office
Azure Data Factory (ADF), Synapse Analytics, Azure SQL DB, Azure Data Lake Gen2; ETL/ELT pipeline development; SQL and data modeling (star/snowflake schema); CI/CD deployment experience; Power BI (good to have); Microsoft certification (preferred).
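A minimal sketch of the kind of ELT step this skill set implies, assuming hypothetical ADLS Gen2 paths and column names: read raw data landed by ADF, shape it into a star-schema fact table, and write it to a curated zone for Synapse or Power BI to consume.

```python
# Minimal ELT sketch (PySpark). Paths, table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Extract: raw orders landed in ADLS Gen2 by an ADF copy activity (assumed layout)
raw = spark.read.parquet("abfss://raw@example.dfs.core.windows.net/orders/")

# Transform: conform to a star schema -- derive the date key, aggregate measures
fact_orders = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "customer_id", "product_id")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Load: write to the curated zone for Synapse / Power BI consumption
(fact_orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@example.dfs.core.windows.net/fact_orders/"))
```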
Posted 3 weeks ago
6.0 - 11.0 years
5 - 15 Lacs
Tirupati
Work from Office
About the Role
We are seeking an experienced and driven Technical Project Manager / Technical Delivery Manager to lead complex, high-impact data analytics and data science projects for global clients. This role demands a unique blend of project management expertise, technical depth in cloud and data technologies, and the ability to collaborate across cross-functional teams. You will be responsible for ensuring the successful delivery of data platforms, data products, and enterprise analytics solutions that drive business value.
Key Responsibilities
Project & Delivery Management: Lead the full project lifecycle for enterprise-scale data platforms, including requirement gathering, development, testing, deployment, and post-production support. Own the delivery of Data Warehousing and Data Lakehouse solutions on cloud platforms (Azure, AWS, or GCP). Prepare and maintain detailed project plans (Microsoft Project), and align them with the Statement of Work (SOW) and client expectations. Utilize hybrid project methodologies (Agile + Waterfall) for managing scope, budget, and timelines. Monitor key project KPIs (e.g., SLA, MTTR, MTTA, MTBF) and ensure adherence using tools like ServiceNow.
Data Platform & Architecture Oversight: Collaborate with data engineers and architects to guide the implementation of scalable data warehouses (e.g., Redshift, Synapse) and data lakehouse architectures (e.g., Databricks, Delta Lake). Ensure data platform solutions meet performance, security, and governance standards. Understand and help manage data integration pipelines, ETL/ELT processes, and BI/reporting requirements.
Client Engagement & Stakeholder Management: Serve as the primary liaison for US/UK clients; manage regular status updates, escalation paths, and expectations across stakeholders. Conduct WSRs, MSRs, and QBRs with clients and internal teams to drive transparency and performance reviews. Facilitate team meetings, highlight risks or blockers, and ensure consistent stakeholder alignment.
Technical Leadership & Troubleshooting: Provide hands-on support and guidance in data infrastructure troubleshooting using tools like Splunk, AppDynamics, and Azure Monitor. Lead incident, problem, and change management processes with data platform operations in mind. Identify automation opportunities and propose technical process improvements across data pipelines and workflows.
Governance, Documentation & Compliance: Create and maintain SOPs, runbooks, implementation documents, and architecture diagrams. Manage project compliance related to data privacy, security, and internal/external audits. Initiate and track Change Requests (CRs) and look for revenue expansion opportunities with clients.
Continuous Improvement & Innovation: Participate in and lead at least three internal process optimization or innovation initiatives annually. Work with engineering, analytics, and DevOps teams to improve CI/CD pipelines and data delivery workflows. Monitor production environments to reduce deployment issues and improve time-to-insight.
Must-Have Qualifications
10+ years of experience in technical project delivery, with a strong focus on data analytics, BI, and cloud data platforms. Strong hands-on experience with SQL and data warehouse technologies like Snowflake, Synapse, Redshift, BigQuery, etc. Proven experience delivering Data Warehouse and Data Lakehouse solutions. Familiarity with tools such as Redshift, Synapse, BigQuery, Databricks, Delta Lake. Strong cloud knowledge with Azure, AWS, or GCP. Proficiency in project management tools like Microsoft Project (MPP), JIRA, Confluence, and ServiceNow. Expertise in Agile project methodologies. Excellent communication skills, both verbal and written, with no MTI or grammatical errors. Hands-on experience working with global delivery models (onshore/offshore).
Preferred Qualifications
PMP or Scrum Master certification. Understanding of ITIL processes and DataOps practices. Experience managing end-to-end cloud data transformation projects. Experience in project estimation, proposal writing, and RFP handling.
Desired Skills & Competencies
Deep understanding of SDLC, data architecture, and data governance principles. Strong leadership, decision-making, and conflict-resolution abilities. High attention to detail and accuracy in documentation and reporting. Ability to handle multiple concurrent projects in a fast-paced, data-driven environment. A passion for data-driven innovation and business impact.
Why Join Us?
Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions. Work on impactful projects that make a difference across industries. Opportunities for professional growth and continuous learning. Competitive salary and benefits package.
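The KPIs named in the posting (MTTR, MTTA, MTBF) have simple operational definitions: mean time to resolve, mean time to acknowledge, and mean time between failures. A hedged Python sketch of how they might be computed from an incident log; the log and its field names are hypothetical.

```python
# Hedged sketch: computing MTTR / MTTA / MTBF from an incident log.
# The incident records and field names are hypothetical.
from datetime import datetime, timedelta

incidents = [
    {"raised":   datetime(2024, 1, 1, 9, 0),
     "acked":    datetime(2024, 1, 1, 9, 5),
     "resolved": datetime(2024, 1, 1, 10, 0)},
    {"raised":   datetime(2024, 1, 8, 14, 0),
     "acked":    datetime(2024, 1, 8, 14, 2),
     "resolved": datetime(2024, 1, 8, 15, 30)},
]

n = len(incidents)
# MTTR: average time from raise to resolution
mttr = sum(((i["resolved"] - i["raised"]) for i in incidents), timedelta()) / n
# MTTA: average time from raise to acknowledgement
mtta = sum(((i["acked"] - i["raised"]) for i in incidents), timedelta()) / n
# MTBF: average gap between failure starts over the observed window
mtbf = (incidents[-1]["raised"] - incidents[0]["raised"]) / (n - 1) if n > 1 else None

print(f"MTTR={mttr}, MTTA={mtta}, MTBF={mtbf}")
```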
Posted 3 weeks ago
6.0 - 11.0 years
10 - 18 Lacs
Bengaluru
Remote
We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects. 6+ years of experience working on SQL/NoSQL/data warehouse platforms on-premises and in the cloud (AWS, Azure & GCP). Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects. Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities. Proficiency in SQL and data manipulation techniques. Experience with data loading and unloading tools and techniques. Expertise in data access management and database reliability & scalability; administer, configure, and optimize database resources and services across the organization. Ensure high availability, replication, and failover strategies. Implement serverless database architectures for cost-effective, scalable storage.
Key Responsibilities
Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse). Plan and execute the migration of on-prem databases/Analysis Services/Reporting Services/Integration Services to AWS/Azure/GCP. Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations (a sketch follows below). Provide technical guidance and mentoring to junior DBAs and data engineers. Hands-on experience with data modelling, ETL/ELT processes, and data integration tools. Monitor and optimize the performance of virtual warehouses, queries, and overall system performance. Optimize database performance through query tuning, indexing, and configuration. Manage replication, backups, and disaster recovery for high availability. Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime. Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure). Provide on-call support for critical database operations and incidents. Provide Level 3 and 4 technical support, troubleshooting complex issues. Participate in cross-functional teams for database design and optimization.
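A minimal sketch of the kind of DBA automation script the posting mentions, assuming a PostgreSQL streaming replica and the psycopg2 driver; connection details and the threshold are placeholders.

```python
# Hedged sketch of a DBA automation task: check replication lag on a
# PostgreSQL replica and flag it if it exceeds a threshold.
# Connection details are placeholders.
import psycopg2

LAG_THRESHOLD_SECONDS = 60

conn = psycopg2.connect(host="replica.example.internal",
                        dbname="appdb", user="monitor", password="***")
try:
    with conn.cursor() as cur:
        # On a streaming replica this returns seconds since the last replayed txn;
        # COALESCE covers the primary, where the function returns NULL.
        cur.execute("""
            SELECT COALESCE(
                EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)
        """)
        lag = cur.fetchone()[0]
        if lag > LAG_THRESHOLD_SECONDS:
            print(f"ALERT: replication lag {lag:.0f}s exceeds threshold")
        else:
            print(f"OK: replication lag {lag:.0f}s")
finally:
    conn.close()
```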
Posted 3 weeks ago
8.0 - 13.0 years
8 - 17 Lacs
Chennai
Remote
MS Fabric (Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics) and integration with Power BI, Synapse, and Azure Data Factory. DevOps knowledge. Team-leading experience.
Posted 3 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.
Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 1 month ago
5.0 - 10.0 years
16 - 27 Lacs
Bengaluru
Hybrid
6-7 years of data and analytics experience, with a minimum of 3 years in Azure Cloud. Excellent communication and interpersonal skills. Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hubs, etc. Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler (see the Airflow sketch below). Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala. Good experience in designing & delivering data analytics solutions using Azure Cloud native services. Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design. Documentation of solutions (e.g. data models, configurations, and setup). Well versed with Waterfall, Agile, Scrum, and similar project delivery methodologies. Experienced in internal as well as external stakeholder management. Experience in MDM/DQM/Data Governance technologies like Collibra, Ataccama, Alation, or Reltio will be an added advantage. Azure Data Engineer or Azure Solution Architect certification will be an added advantage.
Nice to have skills: Working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.).
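For the job-scheduling requirement above, a minimal Airflow sketch; the DAG id, schedule, and task logic are illustrative placeholders, not a prescribed pipeline.

```python
# Minimal Airflow sketch for the ETL scheduling requirement mentioned above.
# DAG id, schedule, and task logic are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source system")

def load():
    print("write to ADLS / Synapse")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # every day at 02:00
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2  # extract runs before load
```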
Posted 1 month ago
9.0 - 14.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines so that they are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
Key Responsibilities:
Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. Mentor junior data engineers within the organization. Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric). Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 & Azure Blob Storage). Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize data pipelines in the Azure environment for performance, scalability, and reliability. Ensure data quality and integrity through data validation techniques and frameworks. Develop and maintain documentation for data processes, configurations, and best practices. Monitor and troubleshoot data pipeline issues to ensure timely resolution. Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge. Manage the CI/CD process for deploying and maintaining data solutions.
Posted 1 month ago
4.0 - 6.0 years
12 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Strong hands-on experience with Azure Databricks, PySpark, and ADF. Advanced expertise in Azure SQL DB, Synapse Analytics, and Azure Data Lake. Familiar with Azure Analysis Services, Azure SQL, and CI/CD (Azure DevOps). Proficient in data modeling, SQL Server best practices, and BI/Data Warehousing architecture. Agile methodologies: ADO, Scrum, Kanban, Lean. Collaborate with business/technical teams to design scalable data solutions. Architect and implement data pipelines and models. Provide technical leadership, code reviews, and best practice guidance. Support the end-to-end lifecycle: estimation, design, development, deployment. Risk/issue management and solution recommendation.
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
Posted 1 month ago
4.0 - 9.0 years
15 - 30 Lacs
Kochi
Work from Office
About Neudesic
Passion for technology drives us, but it's innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart from the rest is an amazing collection of people who live and lead with our core values. We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team, and conduct themselves with Integrity. If these attributes mean something to you, we'd like to hear from you. We are currently looking for Azure Data Engineers to become a member of Neudesic's Data & AI team.
Must Have Skills:
Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services. Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON & Parquet. Experience in creating ADF pipelines to source and process data sets. Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets (a sketch follows below). Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and various data storage options on the cloud. Development experience in orchestration of pipelines. Experience in deployment and monitoring techniques. Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources. Experience in handling operations/integration with a source repository. Must have good knowledge of data warehouse concepts and data warehouse modelling.
Good to Have Skills:
Familiarity with DevOps, Agile Scrum methodologies, and CI/CD. Domain-driven development exposure. Analytical/problem-solving skills. Strong communication skills. Good experience with unit, integration, and UAT support. Able to design and code reusable components and functions. Should be able to review design and code & provide review comments with justification. Zeal to learn new tools/technologies and adopt them. Power BI and Data Catalog experience.
* Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.
Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.
Neudesic is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
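A hedged sketch of the cleanse/transform notebook work described above, using the JSON and Parquet formats the posting names; paths and column names are hypothetical.

```python
# Hedged sketch of a Databricks-style cleanse/transform notebook cell:
# read raw JSON, standardize and deduplicate, write Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/events/")

clean = (
    raw.dropDuplicates(["event_id"])                       # remove replayed events
       .filter(F.col("event_id").isNotNull())              # drop malformed rows
       .withColumn("event_ts", F.to_timestamp("event_ts")) # normalize timestamps
       .withColumn("country", F.upper(F.trim("country")))  # standardize codes
)

(clean.write.mode("append")
      .partitionBy("country")
      .parquet("abfss://curated@example.dfs.core.windows.net/events/"))
```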
Posted 1 month ago
5.0 - 10.0 years
0 - 0 Lacs
Pune, Chennai
Hybrid
Ciklum is looking for a Senior Microsoft Fabric Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.
About the role:
We are seeking a highly skilled and experienced Senior Microsoft Fabric Data Engineer to design, develop, and optimize advanced data solutions leveraging the Microsoft Fabric platform. You will be responsible for building robust, scalable data pipelines, integrating diverse and large-scale data sources, and enabling sophisticated analytics and business intelligence capabilities. This role requires extensive hands-on expertise with Microsoft Fabric, a deep understanding of Azure data services, and mastery of modern data engineering practices.
Responsibilities:
Lead the design and implementation of highly scalable and efficient data pipelines and data warehouses using Microsoft Fabric and a comprehensive suite of Azure services (Data Factory, Synapse Analytics, Azure SQL, Data Lake). Develop, optimize, and oversee complex ETL/ELT processes for data ingestion, transformation, and loading from a multitude of disparate sources, ensuring high performance with large-scale datasets. Ensure the highest level of data integrity, quality, and governance throughout the entire Fabric environment, establishing best practices for data management. Collaborate extensively with stakeholders, translating intricate business requirements into actionable, resilient, and optimized data solutions. Proactively troubleshoot, monitor, and fine-tune data pipelines and workflows for peak performance and efficiency, particularly in handling massive datasets. Architect and manage workspace architecture, implement robust user access controls, and enforce data security in strict compliance with privacy regulations. Automate platform tasks and infrastructure management using advanced scripting languages (Python, PowerShell) and Infrastructure as Code (Terraform, Ansible) principles. Document comprehensive technical solutions, enforce code modularity, and champion best practices in version control and documentation across the team. Stay at the forefront of Microsoft Fabric updates and new features, and contribute significantly to continuous improvement initiatives and the adoption of cutting-edge technologies.
Requirements:
Minimum of 5+ years of progressive experience in data engineering, with at least 3 years of hands-on, in-depth work on Microsoft Fabric and a wide array of Azure data services. Exceptional proficiency in SQL, Python, and advanced data transformation tools (e.g., Spark, PySpark notebooks). Mastery of data warehousing concepts, dimensional modeling, and advanced ETL best practices. Extensive experience with complex hybrid cloud and on-premises data integration scenarios. Profound understanding of data governance, security protocols, and compliance standards. Excellent problem-solving, analytical, and communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical audiences.
Desirable:
Experience with Power BI, Azure Active Directory, and managing very large-scale data infrastructure. Strong familiarity with Infrastructure as Code and advanced automation tools. Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent extensive experience).
What's in it for you?
Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation. Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications. Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally. Flexibility: hybrid work mode at Chennai or Pune. Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential. Global impact: work on large-scale projects that redefine industries with international and fast-growing clients. Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events.
About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together.
Interested already? We would love to get to know you! Submit your application. Can't wait to see you at Ciklum.
Posted 1 month ago
8.0 - 13.0 years
14 - 22 Lacs
Mumbai
Work from Office
Designed and built ETL pipelines to support enterprise data initiatives. Developed and implemented databases and data collection systems. Acquired data from primary and secondary sources; maintained reliable data systems. Extensive experience in data ingestion, modelling, architecture design, and consulting for complex, cross-functional projects involving multiple stakeholders. Identified relevant data sources aligned with specific business requirements. Conducted data quality assessments and built validation processes to ensure integrity. Generated actionable insights from data sets, identifying key trends and patterns. Created detailed executive-level reports and dashboards for project teams. Acted as a single point of contact for all data-related issues across business units. Collaborated with vendors and functional, operational, and technical teams to address and support data needs. Ensured data consistency and accuracy from all providers and external sources. Supported the development of data platforms, maintaining focus on data quality and completeness. Created and maintained comprehensive technical documentation. Built for scalability and high performance in all development efforts. Troubleshot data infrastructure and data processing issues effectively. Supported data roadmap initiatives, recommending optimal technologies. Led the automation of multi-step, repetitive tasks, improving efficiency and reducing manual errors.
Posted 1 month ago
5.0 - 10.0 years
16 - 27 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
6-7 years of data and analytics experience, with a minimum of 3 years in Azure Cloud. Excellent communication and interpersonal skills. Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hubs, etc. Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler. Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala. Good experience in designing & delivering data analytics solutions using Azure Cloud native services. Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design. Documentation of solutions (e.g. data models, configurations, and setup). Well versed with Waterfall, Agile, Scrum, and similar project delivery methodologies. Experienced in internal as well as external stakeholder management. Experience in MDM/DQM/Data Governance technologies like Collibra, Ataccama, Alation, or Reltio will be an added advantage. Azure Data Engineer or Azure Solution Architect certification will be an added advantage.
Nice to have skills: Working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.).
Posted 1 month ago
5.0 - 10.0 years
5 - 10 Lacs
Hyderabad, Telangana, India
On-site
An Ideal Candidate:
The ideal candidate should have 3-8 years of hands-on experience implementing Azure cloud data warehouses, Azure SQL and NoSQL databases, and hybrid data scenarios. Experience developing with Azure Data Factory (covering Azure Functions, Logic Apps, Triggers, IR), Databricks (PySpark, Scala), Stream Analytics, Event Hubs & HDInsight components. Experience in working on data lake & DW solutions on Azure. Experience managing Azure DevOps pipelines (CI/CD). Experience managing source data access security using Vault, configuring authentication and authorization, and enforcing data policies and standards.
Key Competencies:
1. Design, develop and deploy solutions using different tools, design principles and conventions.
2. Configure robotics processes and objects using core workflow principles in an efficient way; ensure they are easily maintainable and easy to understand.
3. Understand existing processes and facilitate change requirements as part of a structured change control process.
4. Solve day-to-day issues arising while running robotics processes and provide timely resolutions.
5. Maintain proper documentation for the solutions, test procedures and scenarios during the UAT and production phases.
6. Coordinate with process owners and business to understand the as-is process and design the automation process flow.
Posted 1 month ago
3.0 - 7.0 years
6 - 15 Lacs
Pune, Chennai
Hybrid
Company Description:
Volante is on the leading edge of financial services technology. If you are interested in being on an innovative, fast-moving team that leverages the very best in cloud technology, our team may be right for you. By joining the product team at Volante, you will have an opportunity to shape the future of payments technology, with a focus on payment intelligence. We are a financial technology business that provides a market-leading, cloud-native payments processing platform to banks and financial institutions globally.
Education Criteria:
• B.E, MSc, M.E/MS in Computer Science or similar major. Relevant certification courses from reputed organizations. Experience of 3+ years as a Data Engineer.
Responsibilities:
• The role involves design and development of scalable solutions and payment analytics, unlocking operational and business insight
• Own data modeling, building ETL pipelines and enabling data-driven metrics
• Build and optimize data models for our application needs
• Design & develop data pipelines and workflows that integrate data sources (structured, unstructured data) across the payment landscape
• Assess the customer's data infrastructure landscape (payment ancillary systems including Sanctions, Fraud, AML) across cloud environments like AWS and Azure as well as on-prem, for deployment design
• Lead the enterprise application data architecture design, framework & services, plus identify and enable the services for the SaaS environment in Azure and AWS
• Implement customizations and data processing required to transform customer datasets for processing in our analytics framework/BI models
• Monitor data processing and machine learning workflows to ensure customer data is successfully processed by our BI models, debugging and resolving any issues faced along the way
• Optimize queries, warehouse, and data lake costs
• Review and provide feedback on the Data Architecture Design Document/HLD for our SaaS application
• Cross-team collaboration to successfully integrate all aspects of the Volante PaaS solution
• Mentor the development team
Skills:
• 3+ years of data engineering experience: data collection, preprocessing, ETL processes and analytics
• Proficiency in data engineering architecture, metadata management, analytics, reporting and database administration
• Strong in SQL/NoSQL, Python, JSON and data warehousing/data lakes, orchestration, analytical tools
• ETL or pipeline design & implementation for large data
• Experience with data technologies and frameworks like Databricks, Synapse, Kafka, Spark, Elasticsearch
• Knowledge of SCD, CDC, and core data warehousing to develop cost-effective, secure data collection, storage, and distribution for a SaaS application (see the SCD sketch after this list)
• Experience in application deployment in AWS or Azure with containers/Kubernetes
• Strong problem-solving skills and passion for building data at scale
Engineering Skills (Desirable):
• Knowledge of data visualization tools like Tableau
• ETL orchestration tools like Airflow and visualization tools like Grafana
• Prior experience in the Banking or Payments domain
Location: India (Pune or Chennai)
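The SCD (slowly changing dimension) knowledge the posting asks for usually means the Type 2 pattern: close out the current row when an attribute changes and insert a fresh version. A hedged sketch in Spark SQL, assuming Delta-backed tables; table and column names are hypothetical, and `staging_customers` is assumed to hold the latest CDC snapshot.

```python
# Hedged SCD Type 2 sketch in Spark SQL over Delta tables.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# 1) Close out current rows whose tracked attribute changed in the new snapshot
spark.sql("""
    MERGE INTO dim_customer d
    USING staging_customers s
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND d.address <> s.address THEN UPDATE SET
      d.is_current = false,
      d.valid_to   = current_date()
""")

# 2) Insert a fresh 'current' version for new and changed customers
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.name, s.address,
           current_date() AS valid_from,
           NULL           AS valid_to,
           true           AS is_current
    FROM staging_customers s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHERE d.customer_id IS NULL OR d.address <> s.address
""")
```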
Posted 1 month ago
7.0 - 12.0 years
7 - 12 Lacs
Pune, Maharashtra, India
On-site
As part of a critical healthcare IT transformation, Xpress Wellness is migrating its data infrastructure from Google Cloud Platform (GCP) to Microsoft Azure and building an end-to-end ETL and reporting system to deliver key KPIs via Power BI. We are seeking a hands-on Technical Lead - Azure Data Engineering to lead the data engineering workstream of the project. The ideal candidate will have deep expertise in Azure cloud data services, strong experience in data migration from GCP to Azure, and a solid understanding of data governance, compliance, and Azure storage architectures.
Key Responsibilities:
Lead the technical design and implementation of data pipelines and storage on Azure. Drive the GCP-to-Azure data migration strategy and execution. Oversee the development of scalable ETL/ELT processes using Azure Data Factory, Synapse, or Fabric. Ensure alignment with data governance and healthcare compliance standards. Collaborate with architects, data engineers, and Power BI developers to enable accurate KPI delivery. Provide technical mentorship to junior engineers and ensure best practices are followed. Act as the primary technical point of contact for data engineering-related discussions.
Key Skills & Qualifications:
7-12 years of experience in data engineering, with 3-6 years in Azure Cloud. Strong experience with Azure Data Factory, Azure Data Lake, Microsoft Fabric, ETL, Synapse Analytics, and Azure Storage services. Hands-on experience in data migration projects from GCP to Azure. Knowledge of data governance, Microsoft Purview, privacy, and compliance (HIPAA preferred). Excellent communication and stakeholder management skills. Relevant Microsoft certifications are a plus.
Our Commitment to Diversity & Inclusion:
Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer: Group Health Insurance covering a family of 4, Term Insurance and Accident Insurance, Paid Holidays & Earned Leaves, Paid Parental Leave, Learning & Career Development, Employee Wellness.
Posted 1 month ago
2.0 - 5.0 years
8 - 13 Lacs
Pune, Maharashtra, India
On-site
We are looking for a proactive and detail-oriented Junior Data Engineer with 2 to 5 years of experience to join our cloud data transformation team. The candidate will work closely with the Data Engineering Lead and Solution Architect to support data migration, pipeline development, testing, and integration efforts on the Microsoft Azure platform.
Key Responsibilities:
Data Migration Support: Assist in migrating structured and semi-structured data from GCP storage systems to Azure Blob Storage, Azure Data Lake, or Synapse. Help validate and reconcile data post-migration to ensure completeness and accuracy (a reconciliation sketch follows below).
ETL/ELT Development: Build and maintain ETL pipelines using Azure Data Factory, Synapse Pipelines, or Microsoft Fabric. Support the development of data transformation logic (SQL/ADF/Dataflows). Ensure data pipelines are efficient, scalable, and meet defined SLAs.
Data Modeling & Integration: Support the design of data models to enable effective reporting in Power BI. Prepare clean, structured datasets ready for downstream KPI reporting and analytics use cases.
Testing & Documentation: Conduct unit and integration testing of data pipelines. Maintain documentation of data workflows, metadata, and pipeline configurations.
Collaboration & Learning: Collaborate with the Data Engineering Lead, BI Developers, and other team members. Stay current with Azure technologies and best practices under the guidance of senior team members.
Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. Experience: 2 to 5 years of hands-on experience in data engineering or analytics engineering roles. Exposure to at least one cloud platform (preferably Microsoft Azure).
Technical Skills Required:
Experience with SQL and data transformation logic. Familiarity with Azure data services like Azure Data Factory, Synapse Analytics, Blob Storage, Data Lake, or Microsoft Fabric. Basic knowledge of ETL/ELT concepts and data warehousing principles, and familiarity with Unix shell scripting. Familiarity with Power BI datasets or Power Query is a plus. Good understanding of data quality and testing practices. Exposure to version control systems like Git.
Soft Skills:
Eagerness to learn and grow under the mentorship of experienced team members. Strong analytical and problem-solving skills. Ability to work in a collaborative, fast-paced team environment. Good written and verbal communication skills.
Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer: Group Health Insurance covering a family of 4, Term Insurance and Accident Insurance, Paid Holidays & Earned Leaves, Paid Parental Leave, Learning & Career Development, Employee Wellness.
Job Location: Pune, India
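A hedged sketch of the post-migration validation task described above: compare row counts and a numeric checksum between the GCP source extract and the Azure copy. Paths and the checksum column are placeholders.

```python
# Hedged sketch of a post-migration reconciliation check between a GCP
# source extract and its Azure copy. Paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

src = spark.read.parquet("gs://legacy-bucket/orders/")  # GCP source extract
dst = spark.read.parquet(
    "abfss://migrated@example.dfs.core.windows.net/orders/")  # Azure copy

checks = {
    "row_count": (src.count(), dst.count()),
    "amount_sum": (src.agg(F.sum("amount")).first()[0],
                   dst.agg(F.sum("amount")).first()[0]),
}

for name, (expected, actual) in checks.items():
    status = "OK" if expected == actual else "MISMATCH"
    print(f"{name}: source={expected} azure={actual} -> {status}")
```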
Posted 1 month ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ years
JOB TITLE: Senior Data Engineer / Data Engineer
OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.
Mandatory Skills:
Hands-on software coding or scripting for a minimum of 3 years. Experience in product management for at least 2 years. Stakeholder management experience for at least 3 years. Experience in one among the GCP, AWS, or Azure cloud platforms.
Key Responsibilities:
Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi). Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks. Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads. Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code). Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks. Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation. Collaborate with data scientists, analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use-cases. Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization. Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies. Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA). Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting. Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase. Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.
Skills & Experience:
Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar). Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.). Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity). Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming); a streaming sketch follows below. Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro). Strong SQL development skills for ETL, analytics, and performance optimization. Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments. Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing. Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management. Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA). Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core. Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus. Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.
Professional Attributes:
Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation. Ability to communicate technical designs and issues effectively with team members and stakeholders. Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments. Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.
Desirable Experience:
Contributions to open-source data engineering/tools communities. Implementing data cataloging, stewardship, and data democratization initiatives. Hands-on work with DataOps/DevOps pipelines for code and data. Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.
EDUCATIONAL QUALIFICATIONS:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience). Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks). Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes. Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
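For the streaming side of this role, a minimal Structured Streaming sketch: consume a Kafka topic and land it in a Delta table. The broker, topic, and paths are placeholders.

```python
# Hedged sketch: consume a Kafka topic with Spark Structured Streaming and
# land it in a Delta table. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker.example.internal:9092")
          .option("subscribe", "clickstream")
          .load())

# Kafka delivers key/value as binary; cast to strings for downstream parsing
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/chk/clickstream")  # exactly-once bookkeeping
         .outputMode("append")
         .start("/lake/bronze/clickstream"))
```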
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/she performs tasks within planned durations and established deadlines. This position collaborates with teams to ensure effective communication and support the achievement of objectives. He/she provides knowledge, development, maintenance, and support for applications.
Responsibilities:
Generates application documentation. Contributes to systems analysis and design. Designs and develops moderately complex applications. Contributes to integration builds. Contributes to maintenance and support. Monitors emerging technologies and products.
Technical Skills:
Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics). Data Processing: Databricks (PySpark, Spark SQL), Apache Spark. Programming Languages: Python, SQL. Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow. Other: Git, CI/CD.
Professional Experience:
Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights. Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP). Develop SQL stored procedures for data integrity. Ensure data accuracy and consistency across all layers. Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability. Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications (a sketch follows below). Implement Azure Functions to trigger and manage data processing workflows. Design and implement data pipelines to integrate various data sources and manage Databricks workflows for efficient data processing. Conduct performance tuning and optimization of data processing workflows. Provide technical support and troubleshooting for data processing issues. Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings. Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis. Effective oral and written management communication skills.
Qualifications:
Minimum 5 years of relevant experience. Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field.
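A hedged sketch of the incremental Delta Lake pattern described above: upsert one day's increment into a curated Delta table with a MERGE. Paths and the join key are hypothetical.

```python
# Hedged sketch: upsert an incremental extract into a curated Delta table.
# Paths and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One day's increment, e.g. produced by an ADF copy activity
increment = spark.read.parquet(
    "abfss://raw@example.dfs.core.windows.net/orders/2024-01-15/")

target = DeltaTable.forPath(
    spark, "abfss://curated@example.dfs.core.windows.net/orders/")

(target.alias("t")
   .merge(increment.alias("s"), "t.order_id = s.order_id")
   .whenMatchedUpdateAll()     # refresh orders that changed
   .whenNotMatchedInsertAll()  # add orders seen for the first time
   .execute())
```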
Posted 1 month ago
6.0 - 11.0 years
30 - 40 Lacs
Chennai
Work from Office
Role & responsibilities
Data Engineer with experience working on data migration projects. Experience with the Azure data stack, including Data Lake Storage, Synapse Analytics, ADF, Azure Databricks, and Azure ML. Solid knowledge of Python, PySpark, and other Python packages. Familiarity with ML workflows and collaboration with data science teams. Strong understanding of data governance, security, and compliance in financial domains. Experience with CI/CD tools and version control systems (e.g., Azure DevOps, Git). Experience modularizing and migrating ML logic.
Note: We encourage interested candidates to submit their updated CVs to mohan.kumar@changepond.com
Posted 1 month ago
11.0 - 17.0 years
20 - 35 Lacs
Indore, Hyderabad
Work from Office
Greetings of the day!
We have a job opening for Microsoft Fabric + ADF with one of our clients. If you are interested in this position, please share your updated resume at this email id: shaswati.m@bct-consulting.com.
Primary Skill: Microsoft Fabric. Secondary Skill: Azure Data Factory (ADF).
12+ years of experience in Microsoft Azure Data Engineering for analytical projects. Proven expertise in designing, developing, and deploying high-volume, end-to-end ETL pipelines for complex models, including batch and real-time data integration frameworks using Azure, Microsoft Fabric, and Databricks. Extensive hands-on experience with Azure Data Factory, Databricks (with Unity Catalog), Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database for managing and processing large-scale data integrations. Experience in Databricks cluster optimization and workflow management to ensure cost-effective and high-performance processing. Sound knowledge of data modelling, data governance, data quality management, and data modernization processes. Develop architecture blueprints and technical design documentation for Azure-based data solutions. Provide technical leadership and guidance on cloud architecture best practices, ensuring scalable and secure solutions. Keep abreast of emerging Azure technologies and recommend enhancements to existing systems. Lead proofs of concept (PoCs) and adopt agile delivery methodologies for solution development and delivery.
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Pune
Remote
Join our innovative team and architect the future of data solutions on Azure, Synapse, and Databricks!
Senior Data Engineer (Data Architect)
Additional Details: Notice Period: 30 days (maximum). Location: Remote.
About the Role
Design and implement scalable data pipelines, data warehouses, and data lakes that drive business growth. Collaborate with stakeholders to deliver data-driven insights and shape the data landscape.
Requirements
8+ years of experience in data engineering and data architecture. Strong expertise in Azure services (Synapse Analytics, Databricks, Storage, Active Directory). Proven experience in designing and implementing data pipelines, data warehouses, and data lakes. Strong understanding of data governance, data quality, and data security. Experience with infrastructure design and implementation, including DevOps practices and tools.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Nagpur, Pune
Work from Office
We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Nagpur, Pune, Gurugram
Work from Office
We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.
Posted 1 month ago