3.0 - 5.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.

- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile, high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
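The observability and SLA-monitoring duties described above can be pictured with a small sketch. This is a hypothetical, minimal freshness check in plain Python; the dataset names and thresholds are invented, and a real DataOps setup would drive this from pipeline run metadata in ADF or Databricks rather than a hard-coded dictionary:

```python
from datetime import datetime, timedelta, timezone

# Illustrative SLA table: dataset name -> maximum allowed staleness.
# These names and thresholds are made up for the example.
SLA = {"sales_daily": timedelta(hours=24), "clickstream": timedelta(minutes=30)}

def breached_slas(last_loaded: dict, now=None):
    """Return the datasets whose last successful load is older than their SLA."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, loaded_at in last_loaded.items()
        if now - loaded_at > SLA.get(name, timedelta(hours=24))
    )

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
loads = {
    "sales_daily": datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc),   # 30h old
    "clickstream": datetime(2024, 1, 2, 11, 45, tzinfo=timezone.utc), # 15m old
}
print(breached_slas(loads, now))  # ['sales_daily']
```

In production the breached list would feed an alerting or self-healing hook rather than a print statement.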
Posted 2 weeks ago
9.0 - 14.0 years
8 - 18 Lacs
Bhopal, Hyderabad, Pune
Hybrid
Urgent Opening for Data Architect

Experience: Min 9 years
Salary: As per industry
Notice period: Immediate joiners preferred
Skills: Databricks, ADF, Python, Synapse

Job Description

Work you'll do
This role is responsible for data architecture and support in a fast-paced, cross-cultural, diverse environment leveraging the Agile methodology. It requires a solid understanding of how to take business requirements and define data models and the underlying data structures that support a data architecture's design. The person in this role is expected to work closely with product owners and a cross-functional team comprising business analysts, software engineers, functional and non-functional testers, operations engineers, and project management.

Key Responsibilities
- Collaborate with product managers, designers, and fellow developers to design, develop, and maintain web-based applications and software solutions.
- Write clean, maintainable, and efficient code, adhering to coding standards and best practices.
- Perform code reviews to ensure code quality and consistency.
- Troubleshoot and debug software defects and issues, providing timely solutions.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and support.
- Stay up to date with industry trends and emerging technologies, incorporating them into the development process when appropriate.
- Mentor junior developers and provide technical guidance when needed.

Qualifications

Technical Skills:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong experience in data engineering and well versed in data architecture.
- 10+ years of professional experience in data engineering and architecture.
- Advanced understanding of data modelling and design.
- Strong database management and design skills; SQL Server preferred.
- Strong understanding of Databricks and Azure Synapse.
- Understanding of data pipeline/ETL frameworks and libraries.
- Experience with Azure Cloud components (PaaS) and DevOps is required.
- Experience working in Agile and SAFe development processes.
- Excellent problem-solving and analytical skills.

Other Skills:
- Strong organizational and communication skills.
- Flexibility, energy, and the ability to work well with others in a team environment.
- The ability to effectively manage multiple assignments and responsibilities in a fast-paced environment.
- Expert problem solver, finding simple answers to complex questions or problems.
- Able to learn and upskill on new technologies.
- Drive for results; partner with product owners to deliver on short- and long-term milestones.
- Experience working with product owners and development teams to document and clarify business and user requirements, and to manage the scope of defined features and functions during the project lifecycle.
- Critical thinking: able to think outside the box and use knowledge gained through prior experience, education, and training to resolve issues and remove project barriers.
- Strong written and verbal communication skills, with the ability to present and to collaborate with business leaders.
- Experience interfacing with external software design and development vendors preferred.
- A team player who can deliver in a high-pressure, high-demand environment.

If interested, kindly share your resume at vidya.raskar@newvision-software.com

Regards,
Vidya
Posted 2 weeks ago
10.0 - 12.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Job Information
Job Opening ID: ZR_2063_JOB
Date Opened: 17/11/2023
Industry: Technology
Work Experience: 10-12 years
Job Title: Azure Data Architect
City: Hyderabad
Province: Telangana
Country: India
Postal Code: 500003
Number of Positions: 4
Location: Coimbatore & Hyderabad

Key skills (mandatory): Azure, SQL, ADF, Databricks, design, architecture

- 10+ years of total experience in the data management area, with Azure cloud data platform experience.
- Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB, and Azure Synapse); mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB.
- Must have worked on a large Azure data platform and dealt with high-volume Azure Stream Analytics workloads.
- Experience designing cloud data platform architecture and large-scale environments.
- 5+ years of experience architecting and building cloud data lakes (Azure data analytics technologies and architecture desired), enterprise analytics solutions, and optimizing real-time 'big data' pipelines, architectures, and data sets.
Posted 2 weeks ago
3.0 - 7.0 years
10 - 20 Lacs
Kochi
Hybrid
Skills and attributes for success
- 3 to 7 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions.
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, etc.
- Hands-on programming experience in Python/PySpark.
- Good knowledge of DWH concepts and implementation knowledge of Snowflake.
- Well versed in DevOps and CI/CD deployments.
- Hands-on experience in SQL and procedural SQL languages.
- Strong analytical skills; enjoys solving complex technical problems.

Please apply on the link below for the further interview process:
https://careers.ey.com/job-invite/1537161/
Posted 2 weeks ago
8.0 - 13.0 years
8 - 18 Lacs
Pune
Remote
Data Engineer with good experience in Azure data engineering: PySpark, Python, Azure Databricks, Azure Data Factory, SQL. Note: candidates from Hyderabad and Bangalore will not be considered.
Posted 2 weeks ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Senior Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.

Responsibilities
- Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI.
- Strong skills in data pipelines; able to work with Power BI and Power BI reporting.
- Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory.
- Document the high-level design components of the Databricks data pipeline framework.
- Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan.
- Lead the design and implementation of an MVP Databricks framework.
- Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework.
- Support integrating a test-automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing.
- Support the development team's capability building by establishing an L&D and knowledge-transition approach.
- Support the implementation of data pipelines against the new framework in line with the agreed migration plan.
- Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients.

Skill Set
- Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps.
- Proficient Python, PySpark, and SQL coding skills.
- Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines.
- Experience creating data management frameworks and data pipelines that are metadata- and business-rules-driven, using Databricks.
- Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers.
- Experience with data profiling, data quality management, and data cleansing tools.

Immediate joining or short notice is required. Please call Varsha on 7200847046 for more information.

Thanks,
Varsha
7200847046
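The profiling, cleansing, and deduplication work mentioned above follows a common pattern: normalise the fields that form the dedup key, then keep one winner per key. A minimal pure-Python sketch of that logic (in the role described this would run in PySpark on Databricks; the field names are invented for the example):

```python
def cleanse(record):
    """Normalise the fields used as the dedup key (trim, lowercase, collapse spaces)."""
    return {
        "email": record["email"].strip().lower(),
        "name": " ".join(record["name"].split()).title(),
        "updated": record["updated"],
    }

def deduplicate(records):
    """Keep the most recently updated record per e-mail address."""
    best = {}
    for rec in map(cleanse, records):
        cur = best.get(rec["email"])
        if cur is None or rec["updated"] > cur["updated"]:
            best[rec["email"]] = rec
    return list(best.values())

rows = [
    {"email": " Amy@Example.com ", "name": "amy  lee", "updated": 1},
    {"email": "amy@example.com",   "name": "Amy Lee",  "updated": 2},
    {"email": "bob@example.com",   "name": "bob",      "updated": 1},
]
print(len(deduplicate(rows)))  # 2
```

In PySpark the same idea is typically expressed with a window over the key ordered by the update timestamp, keeping row number 1.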
Posted 2 weeks ago
7.0 - 9.0 years
14 - 15 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
7+ years of Azure data engineering experience, with proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting:
- Strong experience with PySpark, ADF, Databricks, Data Lake, and SQL.
- Preferably experience with MS Fabric.
- Proficiency in data warehousing concepts, methodologies, and implementation.
- Strong knowledge of Azure Synapse and Azure Databricks.
- Hands-on experience with data warehouse platforms and ETL tools (e.g., Apache Spark).
- Deep understanding of data modelling principles, data integration techniques, and data governance best practices.
- Preferably experience with Power BI and domain knowledge of Finance, Procurement, or Human Capital.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.
Posted 2 weeks ago
8.0 - 12.0 years
15 - 22 Lacs
Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer
Company: NAM Info Private Limited
Location: Bangalore
Experience: 6-8 years

Responsibilities
- Develop and optimize data pipelines using Azure Databricks and PySpark.
- Write SQL/advanced SQL queries for data transformation and analysis.
- Manage data workflows with Azure Data Factory and Azure Data Lake.
- Collaborate with teams to ensure high-quality, efficient data solutions.

Required Skills
- 6-8 years of experience in Azure Databricks and PySpark.
- Advanced SQL query skills.
- Experience with Azure cloud services, ETL processes, and data optimization.

Please send profiles for this role to narasimha@nam-it.com.
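The "advanced SQL for data transformation" the posting asks for usually means window functions and similar constructs. A small, self-contained illustration of the latest-row-per-group pattern, using SQLite purely as a stand-in for the Databricks/Azure SQL engines the role would actually target (table and column names are invented):

```python
import sqlite3

# Latest reading per sensor via ROW_NUMBER() over a partition.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL);
    INSERT INTO readings VALUES
        ('a', 1, 10.0), ('a', 3, 12.5), ('a', 2, 11.0),
        ('b', 1, 99.0), ('b', 2, 98.5);
""")
rows = con.execute("""
    SELECT sensor, ts, value FROM (
        SELECT sensor, ts, value,
               ROW_NUMBER() OVER (PARTITION BY sensor ORDER BY ts DESC) AS rn
        FROM readings
    ) WHERE rn = 1
    ORDER BY sensor
""").fetchall()
print(rows)  # [('a', 3, 12.5), ('b', 2, 98.5)]
```

The same query runs essentially unchanged on Spark SQL in Databricks, which is why window functions are a staple of this kind of pipeline work.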
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Ahmedabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft Azure Analytics Services. Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Analytics Services.
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones.
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.

Professional & Technical Skills:
- Must-have skills: strong experience with Microsoft Azure Analytics Services.
- Good-to-have skills: experience with other Azure services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Experience designing, building, and configuring applications using Microsoft Azure Analytics Services.
- Must have Databricks and PySpark skills.
- Strong understanding of data warehousing concepts and best practices.
- Experience with ETL processes and tools such as SSIS or Azure Data Factory.
- Experience with SQL and NoSQL databases.
- Experience with Agile development methodologies.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality applications.
- This position is based at our Bengaluru office.

Qualifications: BE
Posted 2 weeks ago
3.0 - 6.0 years
5 - 15 Lacs
Kochi, Thiruvananthapuram
Hybrid
Hiring for Azure Data Engineer in Kochi

Experience: 3 to 6 years
Location: Kochi

JD
- Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and good hands-on exposure to the latest ADF version.
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse).
- Project experience with Azure Data Lake / Blob (for storage).
- Basic understanding of Batch account configuration and the various control options.
- Sound knowledge of Databricks and Logic Apps.
- Able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF.

Interested candidates, please share your updated resume with the details below at Smita.Dattu.Sarwade@gds.ey.com:
Total experience -
Relevant experience -
Current location -
Preferred location -
Current CTC -
Expected CTC -
Notice period -
Posted 2 weeks ago
4.0 - 7.0 years
7 - 12 Lacs
Gurugram
Hybrid
Role & responsibilities
- Design and build effective solutions using the primary key skills required for the profile.
- Support the Enterprise Data Environment team, particularly for data quality and production support.
- Collaborate on a data migration strategy for existing systems that need to migrate to a next-generation cloud/AWS application software platform.
- Collaborate with teams as a key contributor to data architecture directives and documentation, including data models, technology roadmaps, standards, guidelines, and best practices.
- Focus on data quality throughout the ETL and data pipelines, driving improvements to data management processes, data storage, and data security to meet the needs of the business customers.

Preferred candidate profile
- Education: Bachelor's. Field of study: Information Technology.
- 4+ years of total experience in the IT industry as a developer/senior developer/data engineer.
- 3+ years of experience working extensively with Azure services such as Azure Data Factory, Azure Synapse, and Azure Data Lake.
- 3+ years of experience working extensively with Azure SQL and MS SQL Server, with good exposure to writing complex SQL queries.
- 1+ years of experience working with the production support operations team as a production support engineer.
- Good knowledge of and exposure to important SQL concepts such as query optimization, data modelling, and data governance.
- Working knowledge of CI/CD processes using Azure DevOps and Azure Logic Apps.
- Very good written and verbal communication skills.

Perks and benefits
- Transportation services: convenient and reliable commute options to ensure a hassle-free journey to and from work.
- Meal facilities: nutritious and delicious meals provided to keep you energized throughout the day.
- Career growth opportunities: clear pathways for professional development and advancement within the organization.
- Captive unit advantage: work in a stable, secure environment with long-term projects and consistent workflow.
- Continuous learning: access to training programs, workshops, and resources to support your personal and professional growth.

Link to apply: https://encore.wd1.myworkdayjobs.com/externalnew/job/Gurgaon---Candor-Tech-Space-IT---ITES-SEZ/Senior-Data-Engineer_HR-18537
Or share your CV at Anjali.panchwan@mcmcg.com
Posted 2 weeks ago
6.0 - 11.0 years
20 - 25 Lacs
Pune
Hybrid
Proficient in Power BI and related technologies, including MS Fabric, Azure SQL Database, Azure Synapse, Databricks, and other visualization tools. Hands-on experience with Power BI and with machine learning and AI services in Azure. Expertise in DAX.
Posted 2 weeks ago
4.0 - 8.0 years
4 - 8 Lacs
Gurugram
Work from Office
Capgemini Invent

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and the semantic model.
- Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams.
- Strong understanding of Delta Lake, Parquet, and distributed data systems.
- Strong programming skills in Python, PySpark, Scala, or Spark SQL/T-SQL for data transformations.

Your Profile
- Strong experience in the implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
- Understanding of machine learning algorithms, AI/ML frameworks (e.g., TensorFlow, PyTorch), and Power BI is an added advantage.
- MS Fabric and PySpark are a must.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Purpose: We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required, coupled with strong communication skills. Requirements: We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required, coupled with strong communication skills. The ideal candidate will possess Experience with Azure Data Services, including Azure Data Factory, Azure Synapse or similar tools,Experience of creating DAG's, implementing activities, and running Apache Airflow and Familiarity with DevOps practices, CI/CD pipelines and Azure DevOps. The ideal candidate should have: Key Responsibilities: Design, develop, and maintain ETL Notebook orchestration pipelines using PySpark and Microsoft Fabric. Working with Apache Delta Lake tables, Change Data Feed (CDF), Lakehouses and custom libraries Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions. Migrate and integrate data from legacy SQL Server environments into modern data platforms. Optimize data pipelines and workflows for scalability, efficiency, and reliability. Provide technical leadership and mentorship to junior developers and other team members. 
Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability. Debug code by breaking it down into testable components, identifying issues, and resolving them. Develop, maintain, and enforce data engineering best practices, coding standards, and documentation. Conduct code reviews and provide constructive feedback to improve team productivity and code quality. Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms. Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools. Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling. Experience with Microsoft Fabric or similar cloud-based data integration platforms is a plus. Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing. Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage. Experience working with both structured and unstructured data sources. Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues. Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools. Experience creating DAGs, implementing activities, and running Apache Airflow. Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.
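The Change Data Feed work called out above amounts to applying row-level change events (inserts, updates, deletes) to a target table. Below is a minimal pure-Python sketch of that idea, for illustration only: a real Fabric or Databricks pipeline would read the feed with PySpark and merge it into a Lakehouse table, and the `id`/`amount` fields here are hypothetical (the `_change_type` values, however, are the ones Delta Lake's CDF actually emits).

```python
def apply_change_feed(target, changes):
    """Apply Delta-CDF-style change rows to a dict keyed by primary key."""
    for row in changes:
        op = row["_change_type"]  # insert / update_preimage / update_postimage / delete
        key = row["id"]
        if op in ("insert", "update_postimage"):
            # Keep only data columns, dropping CDF metadata columns.
            target[key] = {k: v for k, v in row.items() if not k.startswith("_")}
        elif op == "delete":
            target.pop(key, None)
        # update_preimage rows carry the old values; nothing to apply.
    return target

target = {1: {"id": 1, "amount": 10}}
feed = [
    {"id": 1, "amount": 15, "_change_type": "update_postimage"},
    {"id": 2, "amount": 7, "_change_type": "insert"},
]
result = apply_change_feed(target, feed)
```

The same upsert-or-delete logic is what a `MERGE INTO` statement expresses declaratively on the Spark side.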
Posted 2 weeks ago
6.0 - 9.0 years
5 - 14 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities: Databricks skill set with PySpark and SQL. Strong proficiency in PySpark and SQL. Understanding of data warehousing concepts. ETL processes and data pipeline building with ADB/ADF. Experience with the Azure cloud platform and knowledge of data manipulation techniques. Experience working with business teams to convert requirements into technical stories for migration. Leading technical discussions and implementing solutions. Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination. Exposure to Unity Catalog is useful.
Posted 2 weeks ago
10.0 - 15.0 years
30 - 36 Lacs
Thiruvananthapuram
Work from Office
* Manage Azure data infrastructure using DevOps practices. * Ensure security compliance through automation and collaboration. * Develop IaC tools for efficient data management. Immediate joiners preferred
Posted 2 weeks ago
10.0 - 17.0 years
9 - 19 Lacs
Bengaluru
Remote
Azure Data Engineer Skills Required: Azure Data Engineer, Big Data, Hadoop. Develop and maintain data pipelines using Azure services such as Data Factory, PySpark, Synapse, Databricks, Spark, Scala, etc.
Posted 3 weeks ago
2.0 - 5.0 years
8 - 12 Lacs
Chennai
Work from Office
Role: Data Scientist Experience: 2 - 5 years Qualification: B.Tech/BE Location: Chennai Employment: Full-time Engage with clients to understand business problems and come up with approaches to solve them. Good communication skills, both verbal and written, to understand data needs and report results. Create dashboards and presentations that tell compelling stories about customers and their business. Stay up to date with the latest technology, techniques, and methods in AI & ML. An ideal candidate would have the below skill sets: Good understanding of statistical and data mining techniques. Hands-on experience in building machine learning models in Python. Ability to collect, clean, and engineer large amounts of data using SQL techniques. Data analysis and visualization skills in Power BI or Tableau to bring out insights from data. Good understanding of GPTs/LLMs and their use in the field. Experience working in at least one of the preferred cloud platforms: Azure, AWS, or GCP.
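The "collect, clean, and engineer data using SQL techniques" requirement can be illustrated with a self-contained toy example using Python's built-in sqlite3 module; the `orders` table and its columns are made up for the sketch.

```python
import sqlite3

# In-memory database so the example runs anywhere without setup.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", 120.0), ("ALICE ", 80.0), ("bob", None)])

# Cleaning in SQL: normalize case and whitespace, drop NULL amounts,
# then aggregate per customer.
rows = con.execute("""
    SELECT LOWER(TRIM(customer)) AS customer, SUM(amount) AS total
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY LOWER(TRIM(customer))
    ORDER BY customer
""").fetchall()
# 'alice' and 'ALICE ' collapse to one row; 'bob' is dropped (NULL amount).
```

The same normalize-filter-aggregate pattern carries over directly to warehouse SQL dialects.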
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Bengaluru
Work from Office
We are looking for a Microsoft BI & Data Warehouse Lead to design, develop, and maintain robust data warehouse & ETL solutions using the Microsoft technology stack. The ideal candidate will have extensive expertise in SQL Server development and Azure Data Factory (ADF). Benefits: Health insurance, Provident fund
Posted 3 weeks ago
8.0 - 13.0 years
16 - 31 Lacs
Pune, Chennai, Bengaluru
Hybrid
No. of years' experience: Min 8+ years (8 to 10 years); ADF min. 5 years is a must-have. Detailed job description - Skill Set: Experience with the MS Azure platform and strong knowledge of Azure Data Factory and Azure Synapse. Fluency in SQL, with software development experience. Mandatory Skills: Azure Data Factory, Azure Synapse, Strong SQL
Posted 3 weeks ago
5.0 - 6.0 years
12 - 18 Lacs
Indore, Hyderabad, Pune
Hybrid
Min. 5+ years of experience, with good work experience in the banking domain • Strong experience in Azure Databricks and PySpark - both of these skills are mandatory
Posted 3 weeks ago
10.0 - 16.0 years
27 - 37 Lacs
Hyderabad
Work from Office
Data Architect Microsoft Fabric, Snowflake & Modern Data Platforms Location: Hyderabad Employment Type: Full-Time Position Overview: We are seeking a seasoned Data Architect with strong consulting experience to lead the design and delivery of modern data solutions across global clients. This role emphasizes hands-on architecture and engineering using Microsoft Fabric and Snowflake, while also contributing to internal capability development and practice growth. The ideal candidate will bring deep expertise in data modeling, modern data architecture, and data engineering, with a passion for innovation and client impact. Key Responsibilities: Client Delivery & Architecture (75%) Serve as the lead architect for client engagements, designing scalable, secure, and high-performance data solutions using Microsoft Fabric and Snowflake. Apply modern data architecture principles including data lakehouse, ELT/ETL pipelines, and real-time streaming. Collaborate with cross-functional teams (data engineers, analysts, architects) to deliver end-to-end solutions. Translate business requirements into technical strategies with measurable outcomes. Ensure best practices in data governance, quality, and security are embedded in all solutions. Deliver scalable data modeling solutions for various use cases leveraging a modern data platform. Practice & Capability Development (25%) Contribute to the development of reusable assets, accelerators, and reference architectures. Support internal knowledge sharing and mentoring across the India-based consulting team. Stay current with emerging trends in data platforms, AI/ML integration, and cloud-native architectures. Collaborate with global teams to align on delivery standards and innovation initiatives. Qualifications: 10+ years of experience in data architecture and engineering, preferably in a consulting environment. Proven experience with Microsoft Fabric and Snowflake platforms. 
Strong skills in data modeling, data pipeline development, and performance optimization. Familiarity with Azure Synapse, Azure Data Factory, Power BI, and related Azure services. Excellent communication and stakeholder management skills. Experience working with global delivery teams and agile methodologies. Preferred Certifications: SnowPro Core Certification (preferred but not required) Microsoft Certified: Fabric Analytics Engineer Associate Microsoft Certified: Azure Solutions Architect Expert
Posted 3 weeks ago
1.0 - 5.0 years
5 - 6 Lacs
Bengaluru
Hybrid
Role & responsibilities: Ensure the consistency of data by controlling definitions and adapting to changes in the business and its environment. Work collaboratively with FinOps and IT teams, as well as functional departments, to support business projects. Develop, visualize, and maintain regular or ad-hoc operational and financial reporting, related data models, and automations. Bring existing Excel reporting into Power BI. Uphold a strict branding style and a high visual standard. Help create business metrics and KPIs. Preferred candidate profile: Working knowledge of Power BI. Working knowledge of the SQL programming language. Working knowledge of Excel (Excel Power Pivot, Excel Power Query). Working knowledge of Fabric, Azure Synapse, or Power Automate will be considered an advantage. Willingness to learn new systems and tools.
Posted 3 weeks ago
3.0 - 5.0 years
10 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Technical Requirements: 3 to 6 years of experience with IT & Azure data engineering technologies. Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services. Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON & Parquet. Experience in creating ADF pipelines to source and process data sets. Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets. Development experience in the orchestration of pipelines. Good understanding of SQL, databases, and data warehouse systems, preferably Teradata. Experience in deployment and monitoring techniques. Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources. Experience in handling operations/integration with a source repository. Must have good knowledge of data warehouse concepts and data warehouse modelling. Working knowledge of ServiceNow (SNOW), including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights. Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data. Non-technical requirements: Work with project leaders to model tables using data warehouse best practices and develop data pipelines to ensure the efficient delivery of data. Think and work agile, from estimation to development, including testing, continuous integration, and deployment. Manage numerous project tasks concurrently and strategically, prioritizing when necessary. Proven ability to work as part of a virtual team of technical consultants working from different locations (including onsite) around project delivery goals. Technologies: Azure Data Factory, Azure Databricks, Azure Synapse, PySpark/SQL, ADLS, Blob, Azure DevOps with CI/CD implementation. Nice to have skill sets: Business Intelligence tools (preferably Power BI), DP-203 certified.
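As a flavor of the "cleanse, transform, and enrich" notebook work described above, here is a deliberately simplified, pure-Python sketch: a real Databricks notebook would do this with PySpark over JSON/Parquet files in ADLS Gen2, and the field names below are hypothetical.

```python
import json

# Raw input as it might arrive from a source system: ids as strings,
# stray whitespace, missing values.
raw = '[{"id": "1", "city": " Mumbai "}, {"id": "2", "city": null}]'

def cleanse(record):
    """Cast string ids to int, trim whitespace, default missing cities."""
    return {
        "id": int(record["id"]),
        "city": (record["city"] or "unknown").strip(),
    }

clean = [cleanse(r) for r in json.loads(raw)]
```

In PySpark the same step would be expressed as column casts and `coalesce`/`trim` expressions over a DataFrame rather than a per-record Python function.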
Locations : Mumbai, Delhi / NCR, Bengaluru , Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 3 weeks ago