
1655 ADF Jobs - Page 42

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hiring: CRM Lead Consultant – Microsoft Dynamics 365 CE/CRM

We are looking for an experienced CRM Lead Consultant to serve as a technical SME and administrator for the Microsoft Dynamics 365 CE/CRM platform. This role is ideal for a highly skilled professional with deep experience in Dynamics customization, integration, reporting, and solution management.

🔧 What You’ll Do
- Lead development and maintenance of the Dynamics CRM platform
- Collaborate with business users to gather requirements and architect CRM solutions
- Build forms, views, dashboards, plugins, workflows, and reports
- Develop solutions using PowerApps, Azure Data Factory, and automation tools
- Perform solution deployments and manage GitHub source control
- Troubleshoot issues and support application performance

✅ What We’re Looking For
- 10-14 years of experience in Microsoft Dynamics 365 CE/CRM
- Proficiency in JavaScript, C#, .NET, SQL Server, MVC, FetchXML, REST/OData (an illustrative OData query follows this posting)
- Hands-on experience with Azure services (ADF, SSIS, DevOps pipelines)
- Strong knowledge of the CRM SDK, security models, and GitHub
- Bachelor's degree in Computer Science or a related STEM field

⭐ Bonus Points
- Microsoft Dynamics 365 certifications
- Familiarity with O365 tools (SharePoint, Mobile), Azure SQL, Data Export Service
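For context, a minimal Python sketch of the REST/OData work this role names: querying Dynamics 365 CE data through the Dataverse Web API. The org URL and bearer token are placeholder assumptions; a real app would obtain the token via Azure AD.

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # hypothetical environment URL
ACCESS_TOKEN = "<azure-ad-bearer-token>"      # hypothetical token (e.g., from an MSAL flow)

# Query the first five contacts, selecting two columns via OData.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/contacts",
    params={"$select": "fullname,emailaddress1", "$top": "5"},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
    },
    timeout=30,
)
resp.raise_for_status()
for contact in resp.json()["value"]:
    print(contact["fullname"], contact.get("emailaddress1"))
```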

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About tsworks: tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions, and thought leadership to drive exceptional user and customer experiences. Demonstrating this commitment, we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, and Retail (including e-commerce and omnichannel), as well as Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements.

About This Role: tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services Team. You will get hands-on experience with projects employing industry-leading technologies. The work would initially focus on the operational readiness and maintenance of existing applications and would transition into a build-and-maintain role in the long run.

Requirements
Position: Data Engineer II
Experience: 3 to 10+ years
Location: Bangalore, India

Mandatory Required Qualifications
- Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
- Expertise in DevOps and CI/CD implementation
- Good knowledge of SQL
- Excellent communication skills

In This Role, You Will
- Design, implement, and manage scalable and efficient data architecture on the Azure cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using Azure Data Factory, Azure Databricks, Snowflake's data processing capabilities, or other relevant tools (a minimal PySpark sketch follows this posting).
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.
- Ensure data models are designed for scalability, reusability, and flexibility.
- Implement data quality checks, validations, and monitoring processes to ensure data accuracy and integrity across Azure and Snowflake environments.
- Adhere to data governance standards and best practices to maintain data security and compliance.
- Handle performance optimization on the ADF and Snowflake platforms.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights.
- Provide guidance and mentorship to junior team members to enhance their technical skills.
- Maintain comprehensive documentation for data pipelines, processes, and architecture within both Azure and Snowflake environments, including best practices, standards, and procedures.

Skills & Knowledge
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Information Technology, designing, developing, and executing solutions.
- 3+ years of hands-on experience designing and executing data solutions on Azure cloud platforms as a Data Engineer.
- Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
- Familiarity with the Snowflake data platform would be an added advantage.
- Hands-on experience in data modelling and batch and real-time pipelines, using Python, Java, or JavaScript, and experience working with RESTful APIs are required.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Experience with data modelling concepts and practices.
- Familiarity with data quality, governance, and security best practices.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Able to articulate, create, and maintain technical and non-technical documentation.
- Public cloud certifications are desired.
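A minimal PySpark sketch of the kind of batch transform this role describes, as it might run on Azure Databricks. The ADLS paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical raw zone in ADLS Gen2.
raw = spark.read.json("abfss://raw@yourlake.dfs.core.windows.net/orders/")

# Basic cleansing: de-duplicate on the key, derive a date, drop bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Aggregate to a daily grain for downstream reporting.
daily = cleaned.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").parquet(
    "abfss://curated@yourlake.dfs.core.windows.net/daily_orders/"
)
```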

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Job Title: Senior Data Engineer
Experience: 6+ Years
Location: Remote
Employment Type: Full Time

Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data engineering team. The ideal candidate will have deep expertise in C#, Azure Data Factory (ADF), Databricks, SQL Server, and Python, along with a strong understanding of modern CI/CD practices. You will be responsible for designing, developing, and maintaining scalable and efficient data pipelines and solutions to support analytics, reporting, and operational systems.

Key Responsibilities: Design, develop, and optimize complex data pipelines using Azure Data Factory, Databricks, and SQL Server.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Job Purpose
To develop and optimize knitting programs for STOLL flat knitting machines (CMS, ADF series) using M1plus software, ensuring timely sample and production readiness with precision, innovation, and minimal errors.

Key Responsibilities

Programming & Pattern Development
- Create, edit, and simulate knitting programs using STOLL M1plus software.
- Translate tech packs, sketches, and knit designs into machine-readable code.
- Develop patterns, textures, intarsia, jacquards, structures, and engineered panels as per design.
- Work closely with designers and merchandisers to interpret aesthetics technically.

Sampling & Production Support
- Execute knitting trials and finalize programs for sampling and bulk.
- Fine-tune machine settings (gauge, tension, yarn paths) in coordination with senior operators.
- Document and maintain archives of all programs with fabric specs, yarn details, and machine settings.

Quality & Troubleshooting
- Evaluate knitted panels for defects, yarn compatibility, and program accuracy.
- Revise or troubleshoot patterns in case of loop distortion, miss-knit, or dimensional issues.
- Coordinate with the Quality team to implement correct shrinkage, GSM, and measurement protocols.

Collaboration & Training
- Support and train junior programmers or interns.
- Coordinate with operators for smooth handover and machine setup guidance.
- Participate in innovation sessions for new yarns, stitches, or techniques.

Required Skills & Knowledge
- Proficiency in STOLL M1plus software (must-have).
- Knowledge of the CMS machine series and ADF (Autarkic Direct Feed) is preferred.
- Understanding of yarn types, knitting structures, and garment construction.
- Ability to read tech packs, spec sheets, and design layouts.
- Detail-oriented with logical, structured programming abilities.
- Familiarity with knitting-related machine settings and gauge variants (3, 5, 7, 12, 14 GG, etc.).

Preferred Qualifications
- Degree/Diploma in Textile Engineering, Knitwear Design, or Apparel Technology.
- Minimum 3 years of experience in a flat knitting setup.
- Exposure to both domestic and export market requirements.
- Bonus: experience in Knit & Wear, 3D fully fashioned garments, or technical textiles.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Job Purpose
To upskill the knitting team (programmers, operators, and interns) on Stoll flat knitting machines (CMS, ADF series) through structured, hands-on training in machine operation, program understanding, and best practices in knitted garment production.

Key Responsibilities
- Conduct periodic on-site training sessions on: Stoll machine operation and handling; M1plus programming fundamentals and advanced techniques; program-to-machine coordination and troubleshooting.
- Train operators and programmers to understand different knit structures (e.g., Piqué, Links-Links, Ribs, Jacquard, Intarsia).
- Review and improve existing workflows and operator efficiency in sample and bulk knitting.
- Assess skill gaps and tailor training modules accordingly.
- Create easy-to-understand SOPs and visual training guides for reference.
- Support during implementation of new technologies, yarns, or machines.
- Advise management on skill development, training materials, or hiring needs in technical knitting.

Required Skills & Expertise
- Minimum 7-10 years of experience with Stoll CMS/ADF series machines.
- Proficient in M1plus software for programming and simulation.
- Hands-on understanding of both sampling and production processes.
- Strong ability to explain technical concepts clearly to semi-skilled workers.
- Experience developing or delivering workshop-based training sessions.

Engagement Terms
- Frequency of visits: as per mutual discussion.
- Session duration: as per mutual discussion.

Posted 1 month ago

Apply

0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Job Purpose
To ensure efficient, high-quality operation of Stoll knitting machines (CMS, ADF, etc.), handle machine settings and minor maintenance, and support the sampling and production process with deep technical knowledge and leadership skills.

Key Responsibilities

Machine Handling & Setup
- Operate flat knitting machines (STOLL CMS/ADF series).
- Perform machine setting and gauge and cam adjustments.
- Change needle beds, set yarn feeders, and check yarn paths.
- Conduct trials for new yarns and designs with appropriate tension and programming settings.

Knitting Execution
- Run production and sample programs as per tech pack/merchandiser instructions.
- Monitor in-progress knitting for defects (missed stitches, holes, stripes, yarn breakage).
- Achieve production targets with minimal downtime and waste.

Quality Control & Maintenance
- Inspect panels for quality and measurements before handing over to the linking team.
- Perform regular cleaning and basic preventive maintenance.
- Report major mechanical/electrical faults to maintenance promptly.

Programming Coordination
- Coordinate with programmers to understand new patterns or troubleshoot issues.
- Suggest improvements in knitting techniques, yarn selection, or settings.

Team Leadership & Training
- Guide and support junior operators/helpers.
- Maintain discipline and workflow within assigned machines.
- Assist in onboarding and training of interns or fresh operators.

Documentation & Reporting
- Maintain production logs, downtime reasons, and daily efficiency reports.
- Flag any raw material (yarn) or tech pack-related issues.

Skills & Competencies
- Expert knowledge of flatbed knitting machines (STOLL CMS/ADF).
- Ability to read and interpret knitting programs and technical designs.
- Hands-on problem-solving skills.
- Team leadership and communication.
- Basic understanding of knitting yarns (wool, cotton, synthetics, blends).
- Focus on quality and timely output.

Posted 1 month ago

Apply

0.0 - 10.0 years

0 Lacs

Thane, Maharashtra

On-site

202503220 | Thane, Maharashtra, India

Description

Summary of Role: We are seeking a Senior Full Stack Developer with 8–10 years of experience working with the Microsoft technology stack who also has experience with Python or another tech stack. The ideal candidate should have deep expertise in .NET Core, Python, C#, SQL Server, Azure Cloud Services, Angular, and other Microsoft-based development frameworks. This role involves full-cycle application development, including frontend, backend, database, and cloud integration, to build scalable, high-performance solutions.

The Role:

Full Stack Development
- Develop, optimize, and maintain applications using .NET Core, C#, ASP.NET, and Azure Functions.
- Design and implement responsive frontend UI using Angular.
- Build and maintain RESTful APIs for seamless data exchange.
- Develop solutions in Python, LangChain, and LangGraph.
- Work with connectors, AI Builder, and RPA to extend capabilities.

Database & Cloud Services
- Design and manage SQL Server databases, ensuring performance and security.
- Develop cloud-native applications leveraging Azure services such as Azure Functions, App Services, and Azure SQL.
- Implement data storage solutions using Cosmos DB or Dataverse if required.

Architecture & Integration
- Define and implement scalable, secure, and high-performing architecture.
- Integrate applications with Microsoft 365, Power Platform, SharePoint, and other third-party services.
- Optimize backend services for high-availability and low-latency performance.

Security & Best Practices
- Ensure secure coding practices, compliance, and role-based access control.
- Implement DevOps, CI/CD pipelines, and automated deployment strategies.
- Follow Microsoft best practices for application security and performance.

Collaboration & Leadership
- Work closely with business teams, architects, and UI/UX designers to deliver high-quality applications.
- Mentor junior developers and contribute to code reviews, design discussions, and technical improvements.
- Stay updated with Microsoft technologies, frameworks, and industry trends.

The Requirements:
- Bachelor's degree in information technology or a related field is required.
- 8–10 years of experience in full stack development using Microsoft technologies.
- Strong expertise in .NET Core, C#, ASP.NET MVC/Web API, Angular, and SQL Server.
- Experience with Azure Cloud Services (Azure Functions, AI Builder, App Services, Azure SQL, ADF).
- Proficiency in front-end frameworks (Angular or React) and responsive UI development.
- Solid understanding of software design patterns, microservices architecture, and API integration.
- Knowledge of DevOps practices, CI/CD pipelines, and Git-based version control.
- Excellent problem-solving, analytical, and communication skills.
- Microsoft certifications (such as AZ-204, AZ-400, or DP-900) are a plus.

Qualifications: Bachelor's degree in information technology or a related field is required.

Posted 1 month ago

Apply

0.0 - 13.0 years

0 Lacs

Thane, Maharashtra

On-site

202503219 | Thane, Maharashtra, India

Description

Summary of Role: We are seeking a Senior Full Stack Developer with 10–13 years of experience working with the Microsoft technology stack who also has experience with Python or another tech stack. The ideal candidate should have deep expertise in .NET Core, Python, C#, SQL Server, Azure Cloud Services, Angular, and other Microsoft-based development frameworks. This role involves full-cycle application development, including frontend, backend, database, and cloud integration, to build scalable, high-performance solutions.

The Role:

Full Stack Development
- Develop, optimize, and maintain applications using .NET Core, C#, ASP.NET, and Azure Functions.
- Design and implement responsive frontend UI using Angular.
- Build and maintain RESTful APIs for seamless data exchange.
- Develop solutions in Python, LangChain, and LangGraph.
- Work with connectors, AI Builder, and RPA to extend capabilities.

Database & Cloud Services
- Design and manage SQL Server databases, ensuring performance and security.
- Develop cloud-native applications leveraging Azure services such as Azure Functions, App Services, and Azure SQL.
- Implement data storage solutions using Cosmos DB or Dataverse if required.

Architecture & Integration
- Define and implement scalable, secure, and high-performing architecture.
- Integrate applications with Microsoft 365, Power Platform, SharePoint, and other third-party services.
- Optimize backend services for high-availability and low-latency performance.

Security & Best Practices
- Ensure secure coding practices, compliance, and role-based access control.
- Implement DevOps, CI/CD pipelines, and automated deployment strategies.
- Follow Microsoft best practices for application security and performance.

Collaboration & Leadership
- Work closely with business teams, architects, and UI/UX designers to deliver high-quality applications.
- Mentor junior developers and contribute to code reviews, design discussions, and technical improvements.
- Stay updated with Microsoft technologies, frameworks, and industry trends.

The Requirements:
- Bachelor's degree in information technology or a related field is required.
- 10–13 years of experience in full stack development using Microsoft technologies.
- Strong expertise in .NET Core, C#, ASP.NET MVC/Web API, Angular, and SQL Server.
- Experience with Azure Cloud Services (Azure Functions, AI Builder, App Services, Azure SQL, ADF).
- Proficiency in front-end frameworks (Angular or React) and responsive UI development.
- Solid understanding of software design patterns, microservices architecture, and API integration.
- Knowledge of DevOps practices, CI/CD pipelines, and Git-based version control.
- Excellent problem-solving, analytical, and communication skills.
- Microsoft certifications (such as AZ-204, AZ-400, or DP-900) are a plus.

Qualifications: Bachelor's degree in information technology or a related field is required.

Posted 1 month ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Role: Data Engineer – Snowflake

Job Description

Required Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 8+ years of experience in data engineering or related fields.
- Strong proficiency in SQL, Snowflake, stored procedures, and views.
- Hands-on experience with Snowflake SQL, ADF (Azure Data Factory), and Microsoft MDS (Master Data Services).
- Knowledge of data warehousing concepts.
- Experience with cloud platforms (Azure).
- Understanding of data modeling and data warehousing principles.
- Strong problem-solving and analytical skills, with attention to detail.
- Excellent communication and collaboration skills.

Bonus Skills:
- Exposure to CI/CD practices using Microsoft Azure DevOps.
- Basic knowledge or understanding of Power BI.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient ETL/ELT data pipelines using Azure Data Factory (ADF) to ingest, transform, and load data into Snowflake.
- Develop and optimize complex SQL queries, stored procedures, and views within Snowflake for data transformation, aggregation, and consumption (a minimal connector sketch follows this posting).
- Implement and manage master data solutions using Microsoft Master Data Services (MDS) to ensure data consistency and quality.
- Collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into technical specifications and data models.
- Design and implement data warehousing solutions in Snowflake, adhering to best practices for performance, scalability, and security.
- Monitor, troubleshoot, and optimize data pipeline performance and data quality issues.
- Ensure data integrity, accuracy, and reliability across all data solutions.
- Participate in code reviews and contribute to the continuous improvement of data engineering processes and standards.
- Stay updated with the latest trends and technologies in data engineering, cloud platforms (Azure), and Snowflake.
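A minimal sketch of running Snowflake SQL from Python with the official connector, assuming placeholder connection values and a hypothetical stored procedure name.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="your_account",   # hypothetical account locator
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Create or refresh a reporting view over a staged table.
    cur.execute("""
        CREATE OR REPLACE VIEW STAGING.V_DAILY_SALES AS
        SELECT order_date, SUM(amount) AS revenue
        FROM STAGING.ORDERS
        GROUP BY order_date
    """)
    # Invoke a stored procedure (hypothetical name) to refresh a mart.
    cur.execute("CALL ANALYTICS.REFRESH_MART()")
finally:
    conn.close()
```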

Posted 1 month ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Skill required: Tech for Operations - Microsoft Azure Cloud Services
Designation: App Automation Eng Senior Analyst
Qualifications: Any Graduation/12th/PUC/HSC
Years of Experience: 5 to 8 years

About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud, and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners, and communities. Visit us at www.accenture.com.

What would you do?
In our Service Supply Chain offering, we leverage a combination of proprietary technology and client systems to develop, execute, and deliver BPaaS (business process as a service) or Managed Service solutions across the service lifecycle: Plan, Deliver, and Recover. In this role, you will partner with business development and act as a Business Subject Matter Expert (SME) to help build resilient solutions that will enhance our clients' supply chains and customer experience. The Senior Azure Data Factory (ADF) Support Engineer II will be a critical member of our Enterprise Applications Team, responsible for designing, supporting, and maintaining robust data solutions. The ideal candidate is proficient in ADF and SQL and has extensive experience in troubleshooting Azure Data Factory environments, conducting code reviews, and fixing bugs. This role requires a strategic thinker who can collaborate with cross-functional teams to drive our data strategy and ensure the optimal performance of our data systems.

What are we looking for?
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience (5+ years) as an Azure Data Factory Support Engineer II.
- Expertise in ADF with a deep understanding of its data-related libraries.
- Strong experience in Azure cloud services, including troubleshooting and optimizing cloud-based environments.
- Proficiency in SQL and experience with SQL database design.
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Experience with ADF pipelines.
- Excellent problem-solving and troubleshooting skills.
- Experience in code review and debugging in a collaborative project setting.
- Excellent verbal and written communication skills.
- Ability to work in a fast-paced, team-oriented environment.
- Strong understanding of the business and a passion for the mission of Service Supply Chain.
- Hands-on experience with Jira, DevOps ticketing, and ServiceNow is good to have.

Roles and Responsibilities:
Innovate. Collaborate. Build. Create. Solve.
- Support ADF and associated systems; ensure systems meet business requirements and industry practices.
- Integrate new data management technologies and software engineering tools into existing structures.
- Recommend ways to improve data reliability, efficiency, and quality.
- Use large data sets to address business issues, and use data to discover tasks that can be automated.
- Fix bugs to ensure a robust and sustainable codebase.
- Collaborate closely with the relevant teams to diagnose and resolve issues in data processing systems, ensuring minimal downtime and optimal performance.
- Analyze and comprehend existing ADF data pipelines, systems, and processes to identify and troubleshoot issues effectively.
- Develop, test, and implement code changes to fix bugs and improve the efficiency and reliability of data pipelines.
- Review and validate change requests from stakeholders, ensuring they align with system capabilities and business objectives.
- Implement robust monitoring solutions to proactively detect and address issues in ADF data pipelines and related infrastructure (see the sketch after this posting).
- Coordinate with data architects and other team members to ensure that changes are in line with the overall architecture and data strategy.
- Document all changes, bug fixes, and updates meticulously, maintaining clear and comprehensive records for future reference and compliance.
- Provide technical guidance and support to other team members, promoting a culture of continuous learning and improvement.
- Stay updated with the latest technologies and practices in ADF to continuously improve the support and maintenance of data systems.
- Flexible work hours to include US time zones; this position may require a rotational on-call schedule, including evenings, weekends, and holiday shifts when the need arises.
- Participate in the Demand Management and Change Management processes.
- Work in partnership with internal business, external third-party technical teams, and functional teams as a technology partner in communicating and coordinating delivery of technology services from Technology for Operations (TfO).

Qualifications: Any Graduation, 12th/PUC/HSC
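A minimal sketch of the ADF pipeline-run monitoring this role describes, using the azure-mgmt-datafactory SDK. Subscription, resource group, and factory names are placeholder assumptions.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

# Authenticate and list failed pipeline runs from the last 24 hours.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
runs = client.pipeline_runs.query_by_factory(
    resource_group_name="rg-data",      # hypothetical resource group
    factory_name="adf-enterprise",      # hypothetical factory
    filter_parameters=RunFilterParameters(
        last_updated_after=now - timedelta(hours=24),
        last_updated_before=now,
    ),
)
for run in runs.value:
    if run.status == "Failed":
        print(run.pipeline_name, run.run_id, run.message)
```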

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

Remote

Role: Senior Azure / Data Engineer (ETL/data warehouse background)
Location: Remote, India
Duration: Long-Term Contract
Requires 10+ years of experience

Must-Have Skills:
• Minimum 5 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
• 10+ years of proven experience with SQL, schema design, and dimensional data modeling.
• Solid knowledge of data warehouse best practices, development standards, and methodologies.
• Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
• Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
• An independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment.
• Excellent communication and teamwork abilities.

Nice-to-Have Skills:
• Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB knowledge.
• SAP ECC/S/4 and HANA knowledge.
• Intermediate knowledge of Power BI.
• Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.

Posted 1 month ago

Apply

10.0 - 20.0 years

20 - 30 Lacs

Kochi, Kozhikode, Thiruvananthapuram

Work from Office

Expertise in Azure services including App Services, Functions, DevOps pipelines, and ADF. Expert-level knowledge of MuleSoft Anypoint Platform, API lifecycle management, and enterprise integration. Unit testing frameworks and integration testing in Java.

Required Candidate Profile: Proven skills in Java-based web applications with RDBMS or NoSQL backends. Proven skills in Python and JavaScript for backend and full-stack solutions. Object-oriented programming and design patterns.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Technologies: SQL, Azure Synapse, Azure Data Factory (ADF), ETL, Excel

We are looking for a person who will be responsible for fulfilling the needs of key business functions through the development of SQL code, Azure data pipelines, ETL, and data models. You will be involved in the development of MS-SQL queries and procedures, creating custom reports, and rolling the data up to the desired level for the client to consume (an illustrative rollup query follows this posting). You will also be responsible for designing databases and extracting data from various sources, integrating it, and ensuring its stability, reliability, and performance.

What your day would look like:
- 2-3 years of experience as a SQL Developer or in a similar role.
- Excellent understanding of SQL Server and SQL programming, with 2+ years of hands-on experience in SQL programming.
- Experience with SQL Server Integration Services (SSIS).
- Experience with Data Factory pipelines for on-cloud ETL processing is beneficial.
- In-depth skills with Azure Data Factory, Azure Synapse, and ADLS, with the ability to configure and administrate all aspects of SQL Server at a Consultant level.
- A sense of ownership and pride in your performance and its impact on the company's success.
- Excellent interpersonal and communication skills (both oral and written), with the ability to communicate at various levels with clarity and precision.
- Critical thinking and problem-solving skills; a team player with good time-management skills.
- Experience working in analytics projects in the pharma domain, deriving actionable insights and implementing them.
- Experience in longitudinal data, retail/CPG, customer-level data sets, pharma data, patient data, forecasting, and/or performance reporting.
- Intermediate to strong MS Excel and PowerPoint knowledge.
- Ability to manage and efficiently manipulate huge data sets (multi-million record complex relational databases).
- The right candidate must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
- An articulate communicator who handles challenging situations with structured thinking and a solution-minded focus, leading communication with internal and external stakeholders with minimal supervision.
- Proactive in identifying potential risks and implementing mitigation strategies to avoid potential issues downstream.
- Exposure to project management principles: breaking an approach up into smaller tasks and planning for them across resources (Consultant level).
- Ability to learn quickly in a dynamic environment; successful experience working in a global environment is an advantage.
- Previous experience in healthcare analytics is a plus.

IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence to the life sciences and healthcare industries. We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com
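A minimal sketch of the kind of rollup this role describes, run from Python against SQL Server via pyodbc. The connection string, table, and columns are hypothetical.

```python
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=pharma;"  # hypothetical
    "UID=report_user;PWD=***"
)

# Roll weekly prescription counts up to brand level for client reporting.
sql = """
    SELECT brand,
           DATEPART(ISO_WEEK, rx_date) AS iso_week,
           SUM(rx_count) AS total_rx
    FROM dbo.PatientRx
    GROUP BY brand, DATEPART(ISO_WEEK, rx_date)
    ORDER BY brand, iso_week;
"""
for brand, week, total in conn.cursor().execute(sql):
    print(brand, week, total)
conn.close()
```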

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

On-site

Love turning raw data into powerful insights? Join us! We're partnering with global brands to unlock the full potential of their data. As a Data Engineer, you'll be at the heart of these transformations: building scalable data pipelines, optimizing data flows, and empowering analytics teams to make real-time, data-driven decisions.

We are seeking a highly skilled Data Engineer with hands-on experience in Databricks to support data integration, pipeline development, and large-scale data processing for our retail or healthcare client. The ideal candidate will work closely with cross-functional teams to design robust data solutions that drive business intelligence and operational efficiency.

Key Responsibilities
- Develop and maintain scalable data pipelines using Databricks and Spark (a minimal Delta Lake upsert sketch follows this posting).
- Build ETL/ELT workflows to support data ingestion, transformation, and validation.
- Collaborate with data scientists, analysts, and business stakeholders to gather data requirements.
- Optimize data processing workflows for performance and reliability.
- Manage structured and unstructured data across cloud-based data lakes and warehouses (e.g., Delta Lake, Snowflake, Azure Synapse).
- Ensure data quality and compliance with data governance standards.

Required Qualifications
- 4+ years of experience as a Data Engineer.
- Strong expertise in Databricks, Apache Spark, and Delta Lake.
- Proficiency in Python, SQL, and data pipeline orchestration tools (e.g., Airflow, ADF).
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Familiarity with data modeling, version control, and CI/CD practices.
- Experience in the retail or healthcare domain is a plus.

Benefits
Health insurance and accident insurance. The salary will be determined based on several factors including, but not limited to, location, relevant education, qualifications, experience, technical skills, and business needs.

Additional Responsibilities
- Participate in OP monthly team meetings and team-building efforts.
- Contribute to OP technical discussions, peer reviews, etc.
- Contribute content and collaborate via the OP-Wiki/Knowledge Base.
- Provide status reports to OP Account Management as requested.

About Us
OP is a technology consulting and solutions company, offering advisory and managed services, innovative platforms, and staffing solutions across a wide range of fields, including AI, cyber security, enterprise architecture, and beyond. Our most valuable asset is our people: dynamic, creative thinkers who are passionate about doing quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, and technologies, and innovative training and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.
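A minimal sketch of the Databricks/Delta Lake upsert pattern such pipelines commonly use. Paths and the join key are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable  # pip install delta-spark

spark = SparkSession.builder.getOrCreate()

# Hypothetical incoming batch and Silver-layer Delta target.
updates = spark.read.parquet("/mnt/landing/customers/")
target = DeltaTable.forPath(spark, "/mnt/silver/customers/")

# Merge (upsert): update matching rows, insert new ones.
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```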

Posted 1 month ago

Apply

3.0 - 6.0 years

0 - 2 Lacs

Pune

Work from Office

Greetings of the Day!!!

We have a job opening for Data Warehouse + ADF + ETL with one of our clients. If you are interested in this role, kindly share your updated resume along with the below details to this email id: shaswati.m@bct-consulting.com

Job Description: Senior Data Engineer
As a Senior Data Engineer, you will support the European World Area using the Windows & Azure suite of Analytics & Data platforms. The focus of the role is on the technical aspects and implementation of data gathering, integration, and database design. We look forward to seeing your application!

In This Role, Your Responsibilities Will Be:
- Data Ingestion and Integration: Collaborate with Product Owners and analysts to understand data requirements; design, develop, and maintain data pipelines for ingesting, transforming, and integrating data from various sources into Azure Data Services.
- Migration of existing ETL packages: Migrate existing SSIS packages to Synapse pipelines.
- Data Modelling: Assist in designing and implementing data models, data warehouses, and databases in Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.
- Data Transformation: Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or other relevant tools to prepare data for analysis and reporting.
- Data Quality and Governance: Implement data quality checks and data governance practices to ensure the accuracy, consistency, and security of data assets.
- Monitoring and Optimization: Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency.
- Documentation: Maintain comprehensive documentation of processes, including data lineage, data dictionaries, and pipeline schedules.
- Collaboration: Work closely with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data needs and deliver solutions accordingly.
- Azure Services: Stay updated on Azure data services and best practices to recommend and implement improvements in our data architecture and processes.

For This Role, You Will Need:
- 3-5 years of experience in Data Warehousing with on-premises or cloud technologies.
- Strong practical experience with Synapse pipelines / ADF.
- Strong practical experience developing ETL packages using SSIS.
- Strong practical experience with T-SQL or any variant from another RDBMS.
- A graduate degree in computer science or a relevant subject.
- Strong analytical and problem-solving skills.
- Strong communication skills in dealing with internal customers from a range of functional areas.
- Willingness to work flexible working hours according to project requirements.
- Technical documentation skills.
- Fluency in English.

Preferred Qualifications That Set You Apart:
- Oracle PL/SQL.
- Experience working with Azure services like Azure Synapse Analytics and Azure Data Lake.
- Working experience with Azure DevOps, paired with knowledge of Agile and/or Scrum methods of delivery.
- Languages: French, Italian, or Spanish would be an advantage.
- Agile certification.

Thanks,
Shaswati

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Senior Data Engineer (Remote, 6-Month Contract): Databricks, ADF, and PySpark

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering.
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults.
- Core expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance.
- Agile, SDLC, containerization (Docker), clean coding practices.

Good-to-Have Skills
- Event Hubs, Logic Apps.
- Power BI.
- Strong logic building and a competitive programming background.

Location: Remote, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks, including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Architect and develop secure REST APIs in C# to support advanced attribution models and marketing analytics pipelines.
- Implement cryptographic hashing (e.g., SHA-256); a minimal sketch follows this posting.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with C# and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
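A minimal sketch of the SHA-256 hashing step the posting names. Meta's and Google's server-side APIs expect identifiers such as email addresses to be normalized (trimmed, lowercased) before hashing; the sample address is illustrative.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize then SHA-256 hash an email address for server-side
    tracking payloads (Meta CAPI, Google Enhanced Conversions)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Same digest regardless of case or surrounding whitespace.
print(hash_email("  User@Example.COM "))  # 64-char hex for 'user@example.com'
```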

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks, including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with Fabric and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks, including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience in Python and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Proficiency with Azure cloud technologies, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users. This will provide you the correct visibility and understanding of the criticality of your developments.

Responsibilities
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope.
- Active contributor to code and development in projects and services.
- Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption.
- Partner with ML engineers working on industrialization.
- Communicate with business stakeholders in the process of service design, training, and knowledge transfer.
- Support large-scale experimentation and build data-driven models.
- Refine requirements into modelling problems.
- Influence product teams through data-based recommendations.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create reusable packages or libraries.
- Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
- Leverage big data technologies to help process data and build scaled data pipelines (batch to real time).
- Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines (a minimal MLflow tracking sketch follows this posting).
- Automate ML model deployments.

Qualifications
- BE/B.Tech in Computer Science, Maths, or technical fields.
- Overall 2-4 years of experience working as a Data Scientist.
- 2+ years' experience building solutions in the commercial or supply chain space.
- 2+ years working in a team to deliver production-level analytic solutions.
- Fluent in git (version control); understanding of Jenkins and Docker is a plus.
- Fluent in SQL syntax.
- 2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems.
- 2+ years' experience in developing business-problem-related statistical/ML modeling with industry tools, with a primary focus on Python or PySpark development.
- Data science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus.
- Programming skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL.
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators.
- Cloud (Azure): experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage.
- Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool.
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities.
- Experience with Agile methodology for teamwork and analytics 'product' creation.
- Experience in Reinforcement Learning is a plus.
- Experience in simulation and optimization problems in any space is a plus.
- Experience with Bayesian methods, causal inference, NLP, Responsible AI, or distributed machine learning is a plus.
- Experience in DevOps, with hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred).
- Model deployment experience is a plus; experience with version control systems like GitHub and CI/CD tools.
- Experience in exploratory data analysis.
- Knowledge of MLOps/DevOps and deploying ML models is preferred; experience using MLflow, Kubeflow, etc. is preferred; experience executing and contributing to MLOps automation infrastructure is good to have.
- Exceptional analytical and problem-solving skills.
- Stakeholder engagement: BUs, vendors.
- Experience building statistical models in the retail or supply chain space is a plus.
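A minimal sketch of the MLflow experiment-tracking workflow the posting mentions, using a synthetic dataset and scikit-learn; parameter names and values are illustrative. The same pattern runs unchanged on Azure Databricks, where a tracking server is built in.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data.
X, y = make_regression(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    # Log the hyperparameter, the holdout metric, and the model artifact.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")
```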

Posted 1 month ago

Apply

3.0 - 7.0 years

22 - 25 Lacs

Bengaluru

Hybrid

Role & Responsibilities
- 3-6 years of experience in data engineering pipeline ownership and quality assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
- Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing large-scale datasets.
- Strong experience in writing SQL queries and performing data validation, data profiling, and schema checks (a minimal validation sketch follows this posting).
- Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
- Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
- Deep understanding of the Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
- Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows.
- Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
- Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
- Solid data governance and data security practices within the Azure ecosystem.

Additional Requirements
- Ownership of data pipelines, ensuring end-to-end execution, monitoring, and troubleshooting failures proactively.
- Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
- Time flexibility to align with global teams for efficient communication and collaboration.
- Excellent problem-solving skills, with the ability to simulate and test edge cases in data processing environments.
- Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
- Experience with Unity Catalog, or willingness to learn.

Preferred Candidate Profile
Immediate joiners.
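A minimal PySpark sketch of the automated validation this role describes: a schema check, null and duplicate key counts, and a range check against a Silver-layer table. Table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("silver.orders")  # hypothetical table

# Schema check: required columns must be present.
expected_cols = {"order_id", "customer_id", "amount", "order_date"}
missing = expected_cols - set(df.columns)
assert not missing, f"Schema check failed, missing columns: {missing}"

# Integrity checks: null keys, duplicate keys, out-of-range values.
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = df.count() - df.dropDuplicates(["order_id"]).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

for name, count in [("null keys", null_keys),
                    ("duplicate keys", dupe_keys),
                    ("negative amounts", bad_amounts)]:
    print(f"{name}: {count}")
    assert count == 0, f"Data quality check failed: {name} = {count}"
```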

Posted 1 month ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Oracle Management Level Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Job Description & Summary Managing business performance in today’s complex and rapidly changing business environment is crucial for any organization’s short-term and long-term success. However ensuring streamlined E2E Oracle fusion Technical to seamlessly adapt to the changing business environment is crucial from a process and compliance perspective. As part of the Technology Consulting -Business Applications - Oracle Practice team, we leverage opportunities around digital disruption, new age operating model and best in class practices to deliver technology enabled transformation to our clients Responsibilities: Extensive experience in Oracle ERP/Fusion SaaS/PaaS project implementations as a technical developer . 
Completed at least 2 full Oracle Cloud (Fusion) implementations
Extensive knowledge of the database structure for ERP/Oracle Cloud (Fusion)
Extensive hands-on work with BI Publisher reports, FBDI/OTBI, and Oracle Integration Cloud (OIC) (see the invocation sketch after this posting)

Mandatory skill sets: BI Publisher reports, FBDI/OTBI, and Oracle Integration Cloud (OIC)
Preferred skill sets: database structure for ERP/Oracle Cloud (Fusion)
Years of experience required: minimum 3 years of Oracle Fusion experience
Educational Qualification: BE/BTech, MBA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Oracle Fusion Middleware (OFM)
Optional Skills: Accepting Feedback, Active Listening, Business Transformation, Communication, Design Automation, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Strategic Technology Planning, Teamwork, Well Being
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
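Since the posting centres on OIC integrations, a brief illustration may help. The sketch below shows one common pattern: calling an OIC integration that has been exposed as a REST endpoint, e.g. to trigger an FBDI-style load. The host, endpoint path, payload, and credentials are all illustrative assumptions, not details from the posting or any real instance.

```python
# Minimal sketch, assuming an OIC integration exposed over REST.
# All names below are hypothetical placeholders.
import requests

OIC_HOST = "https://example-oic.integration.ocp.oraclecloud.com"  # hypothetical instance
ENDPOINT = f"{OIC_HOST}/ic/api/integration/v1/flows/rest/IMPORT_INVOICES/1.0/run"  # hypothetical flow

payload = {"batchName": "AP_INVOICES_BATCH", "source": "ERP_STAGING"}  # hypothetical payload

response = requests.post(
    ENDPOINT,
    json=payload,
    auth=("integration_user", "integration_password"),  # placeholder credentials
    timeout=60,
)
response.raise_for_status()
print("Integration accepted:", response.status_code, response.json())
```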

Posted 1 month ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity (see the profiling sketch after this posting).
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot pipeline failures and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing
To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfort working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: experience working with hierarchical data storage in Data Lake.
Azure Synapse: familiarity with distributed data querying and data warehousing.
Azure Data Factory: hands-on experience in orchestrating and monitoring data pipelines.
ETL process understanding: knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.

Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for DataOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
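The "Data Quality & Validation" responsibility above is easy to make concrete. Below is a minimal PySpark profiling sketch that computes per-column null rates and distinct counts and flags columns that breach a threshold; the table path, columns, and 5% threshold are illustrative assumptions.

```python
# Minimal sketch of data profiling / validation with PySpark:
# per-column null rates and distinct counts, plus a simple threshold check.
# The Delta path and the 5% threshold are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()

df = spark.read.format("delta").load("/mnt/lake/silver/holdings")  # hypothetical path

total_rows = df.count()
profile = []
for column in df.columns:
    nulls = df.filter(F.col(column).isNull()).count()
    distinct = df.select(column).distinct().count()
    profile.append((column, total_rows, nulls, round(nulls / max(total_rows, 1), 4), distinct))

profile_df = spark.createDataFrame(
    profile, ["column", "rows", "nulls", "null_rate", "distinct_values"]
)
profile_df.show(truncate=False)

# Simple validation gate: surface any column whose null rate exceeds 5%.
flagged = profile_df.filter(F.col("null_rate") > 0.05).collect()  # threshold is an assumption
if flagged:
    print("Columns failing the null-rate check:", [r["column"] for r in flagged])
```

In a DataOps setting, output like this would usually be written to a monitoring table or surfaced in Azure Monitor rather than printed.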

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Engineer - ETL
Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market.

As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts toward creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing
What will your essential responsibilities include?
Act as a data engineering expert and partner to Global Technology and data consumers in controlling the complexity and cost of the data platform, while enabling performance, governance, and maintainability of the estate.
Understand current and future data consumption patterns and architecture (at a granular level), and partner with Architects to ensure optimal design of the data layers.
Apply best practices in data architecture: for example, the balance between materialization and virtualization, the optimal level of de-normalization, caching and partitioning strategies, the choice of storage and querying technology, and performance tuning.
Lead hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and its implications for data consumers.
Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
Design prototypes and work in a fast-paced, iterative solution-delivery model.
Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables (see the upsert sketch after this posting). Use Harness for the deployment pipeline.
Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
Diagnose system performance issues related to data processing and implement solutions to address them.
Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
Maintain integrity and quality across all pipelines and environments.
Understand and follow secure coding practices to ensure code is not vulnerable.
You will report to the Application Manager.

What You Will Bring
We're looking for someone who has these abilities and skills:

Required Skills And Abilities
Effective communication skills.
Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
Relevant years of programming experience using Databricks.
Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
Solid knowledge of network and firewall concepts.
Solid experience writing, optimizing, and analyzing SQL.
Relevant years of experience with Python.
Ability to break complex data requirements down and architect solutions into achievable targets.
Robust familiarity with Software Development Life Cycle (SDLC) processes and workflows, especially Agile.
Experience using Harness.
Technical lead responsible for both individual and team deliveries.

Desired Skills And Abilities
Experience in big data migration projects.
Experience in performance tuning at both the database and big data platform levels.
Ability to interpret complex data requirements and architect solutions.
Distinctive problem-solving and analytical skills combined with robust business acumen.
Strong fundamentals in Parquet and Delta file formats.
Effective knowledge of the Azure cloud computing platform.
Familiarity with reporting software - Power BI is a plus.
Familiarity with dbt is a plus.
Passion for data and experience working within a data-driven organization.
You care about what you do, and what we do.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another - and our business - to move forward and succeed.
Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
Robust support for Flexible Working Arrangements
Enhanced family-friendly leave benefits
Named to the Diversity Best Practices Index
Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability.
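The posting's core technical responsibility - ETL pipelines in Azure Databricks using PySpark and Delta tables - can be illustrated briefly. The sketch below shows a common incremental pattern: upserting a new batch into a Delta target with MERGE. The paths, the key column, and the batch source are illustrative assumptions, not AXA XL's actual implementation.

```python
# Minimal sketch of an incremental ETL step with PySpark and Delta tables:
# upsert a new batch into a Delta target via MERGE. All names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-upsert").getOrCreate()

# Hypothetical incremental batch extracted from a source system.
updates_df = spark.read.format("parquet").load("/mnt/landing/policies/batch")

target_path = "/mnt/lake/curated/policies"  # hypothetical Delta table location

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # Upsert: update rows whose key already exists, insert the rest.
    (
        target.alias("t")
        .merge(updates_df.alias("s"), "t.policy_id = s.policy_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First run: create the Delta table from the initial batch.
    updates_df.write.format("delta").mode("overwrite").save(target_path)
```

A deployment tool such as Harness, as named in the posting, would typically promote a job like this across environments rather than run it directly.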

Posted 1 month ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Oracle
Management Level: Manager

Job Description & Summary
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing the Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Managing business performance in today's complex and rapidly changing business environment is crucial for any organization's short-term and long-term success. However, ensuring that end-to-end (E2E) Oracle Fusion Finance can seamlessly adapt to the changing business environment is equally crucial from a process and compliance perspective. As part of the Technology Consulting - Business Applications - Oracle Practice team, we leverage opportunities around digital disruption, new-age operating models, and best-in-class practices to deliver technology-enabled transformation to our clients.

Responsibilities:
We are seeking an experienced Oracle Fusion Finance Consultant to join our finance team.
Oracle Fusion Finance functional: minimum 2 implementations in the Oracle Fusion ERP package - Finance modules as listed below.
Implementation, configuration, and maintenance of Oracle Fusion Financials modules, such as General Ledger, Accounts Payable, Accounts Receivable, Fixed Assets, Lease Accounting, Tax, and Cash Management.
Ensure that financial systems and processes are designed and maintained in accordance with industry best practices and company policies.
Coordinate with cross-functional teams to ensure that financial systems are integrated with other enterprise systems.
Coordinate with other functional tracks on the accounting/financial impact of transactions, SLA rules, etc.
Mandatory skill sets
Modules: AP, AR, GL, FA & Lease Accounting, CM, and Tax modules of Fusion

Preferred skill sets
Provide hypercare/AMS support post go-live.
Ability to work independently with minimal oversight.
Carries a can-do attitude and a mindset of diversity and equality.
Proficient in MS Excel.

Years of experience required: minimum 8 years of Oracle Fusion experience
Educational Qualification: BE/BTech, MBA, CA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Oracle Integration Cloud (OIC)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Transformation, Coaching and Feedback, Communication, Creativity, Design Automation, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Professional Courage, Relationship Building, Self-Awareness {+ 4 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
