
1640 ADF Jobs - Page 2

Set Up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade: T5. Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities - what your main responsibilities are:
Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing, including collecting, parsing, managing, analyzing, and visualizing large sets of data.
Data Quality Management - Cleanse data and improve its quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms.
Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes.
Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications: Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies - Hadoop, Hive, distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience with Databricks, Delta Lake, and Workflows. Knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience with a BI tool like Power BI (good to have). Cloud migration experience (good to have). Cloud and data engineering certification (good to have). Experience working in an Agile environment. 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage.

What We Are Looking For - Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred. Knowledge, Skills and Abilities: fluency in English; analytical skills; accuracy and attention to detail; numerical skills; planning and organizing skills; presentation skills; data modeling and database design; ETL (Extract, Transform, Load) skills; programming skills.

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
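To illustrate the kind of cleanse-and-transform ETL work this posting describes, here is a minimal PySpark sketch; the file paths, column names, and logic are illustrative assumptions, not part of the role description.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_journey_etl").getOrCreate()

# Illustrative source path; real pipelines would typically read from ADLS or Hive tables.
raw = spark.read.option("header", True).csv("/data/raw/customer_events.csv")

clean = (
    raw.dropDuplicates(["event_id"])                        # de-duplicate records
       .filter(F.col("customer_id").isNotNull())            # drop incomplete rows
       .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalise types
       .withColumn("channel", F.lower(F.trim("channel")))   # standardise values
)

# Sink into a queryable, partitioned structure for downstream analysis (ELT-style load).
clean.write.mode("overwrite").partitionBy("channel").parquet("/data/curated/customer_events")
```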
Our Company: FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network thanks to our outstanding FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy: The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, returning these profits back into the business and investing back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture: Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.

Posted 2 days ago

Apply

4.0 - 7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade: T5. Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities - what your main responsibilities are:
Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing, including collecting, parsing, managing, analyzing, and visualizing large sets of data.
Data Quality Management - Cleanse data and improve its quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms.
Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes.
Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications: Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies - Hadoop, Hive, distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience with Databricks, Delta Lake, and Workflows. Knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience with a BI tool like Power BI (good to have). Cloud migration experience (good to have). Cloud and data engineering certification (good to have). Experience working in an Agile environment. 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage.

What We Are Looking For - Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred. Knowledge, Skills and Abilities: fluency in English; analytical skills; accuracy and attention to detail; numerical skills; planning and organizing skills; presentation skills; data modeling and database design; ETL (Extract, Transform, Load) skills; programming skills.

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company: FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network thanks to our outstanding FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy: The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, returning these profits back into the business and investing back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture: Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

What You'll Do: Avalara is an AI-first company. We expect every employee to actively leverage AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows and products, and success at Avalara requires embracing AI as an essential capability, not an optional tool.

We are looking for an experienced Oracle Cloud ERP Techno-Functional Consultant to join our team. You have experience with Oracle Cloud ERP and Oracle EBS, specifically the Cash, Procure to Pay, and Tax modules. You have an understanding of core Oracle technology, Oracle business processes, and multiple integration tools, and the ability to collaborate with partners. You will report to the Senior Technical Lead.

What Your Responsibilities Will Be:
Technical Expertise - Programming skills in relevant technologies such as Java, SQL, PL/SQL, XML, RESTful APIs, JavaScript, ADF, and web services. Develop custom solutions, extensions, and integrations to meet our specific requirements.
Reports and Analytics - Proficiency in creating custom reports, dashboards, and analytics using Oracle BI Publisher, Oracle OTBI (Oracle Transactional Business Intelligence), and other reporting tools. Experience reviewing code to find and address potential issues and defects; hands-on experience with BI Publisher, OTBI, and Data Models.
Data Integration and Migration - Experience in data integration between Oracle Fusion applications and other systems, and in data migration from legacy systems to Oracle Fusion. Knowledge of ETL (Extract, Transform, Load) tools.
Customization and Extensions - Customize and extend Oracle Fusion applications using tools like Oracle JDeveloper, Oracle ADF, and Oracle Application Composer to tailor the software to meet needs.
Oracle Fusion Product Knowledge - Expertise in Oracle Fusion Financials, Oracle Fusion SCM (Supply Chain Management), Oracle Fusion Procurement, and Oracle Fusion Tax.
Security and Access Control - Knowledge of security models, user roles, and access controls within Oracle Fusion applications to ensure data integrity and compliance.
Performance Tuning and Optimization - Skills in diagnosing and resolving performance issues, optimizing database queries, and ensuring smooth operation of Oracle Fusion applications.
Problem Troubleshooting - Experience approaching a problem from different angles and analyzing the pros and cons of different solutions to identify and address technical issues, system errors, and integration challenges. Experience communicating updates and resolutions to customers and other partners, working with clients, gathering requirements, explaining technical solutions to non-technical partners, and collaborating with teams.

What You'll Need To Be Successful - Qualifications: Minimum of 5 years of experience with Oracle Cloud ERP Financials. Minimum of 5 years of experience with Oracle EBS Financials. Bachelor's degree in Computer Science, Information Technology, or a related field. Previous experience implementing Tax modules in Oracle Cloud ERP and Oracle EBS. Experience in, and desire to work in, a global delivery environment. Experience with the latest integration methodologies. Proficiency in CI/CD tools (Jenkins, GitLab, etc.).

How We'll Take Care Of You - Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
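As a rough illustration of the REST-based integration work this role mentions, here is a minimal Python sketch; the host, endpoint path, and credentials are placeholders I am assuming for illustration, not a documented Oracle or Avalara configuration.

```python
import requests

BASE_URL = "https://example-fusion-host.oraclecloud.com"  # placeholder host (assumption)
ENDPOINT = "/example/rest/resource"                        # hypothetical endpoint path

def fetch_records(params: dict) -> list:
    """Pull records from a hypothetical REST resource using basic auth."""
    resp = requests.get(
        BASE_URL + ENDPOINT,
        params=params,
        auth=("integration_user", "********"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    # Many REST collection responses wrap rows in an "items" list; an assumption here.
    return resp.json().get("items", [])

if __name__ == "__main__":
    for rec in fetch_records({"limit": 10}):
        print(rec)
```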
Inclusive Culture and Diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara: We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion-dollar business - and we're not slowing down until we've achieved our mission: to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, one that empowers our people to win. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer: Supporting diversity and inclusion is a cornerstone of our company; we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

The BI Data Engineer is a key role within the Enterprise Data team. We are looking for an expert Azure data engineer with deep data engineering, ADF integration, and database development experience. This is a unique opportunity to be involved in delivering leading-edge business analytics using cutting-edge BI tools, such as cloud-based databases, self-service analytics, and leading visualisation tools, enabling the company's aim to become a fully digital organisation.

Job Description - Key Responsibilities:
Build enterprise data engineering and integration solutions using the latest Azure platform: Azure Data Factory, Azure SQL Database, Azure Synapse, and Azure Fabric.
Develop enterprise ETL and integration routines using ADF.
Evaluate emerging data engineering technologies, standards, and capabilities.
Partner with business stakeholders, product managers, and data scientists to understand business objectives and translate them into technical solutions.
Work with DevOps, engineering, and operations teams to implement CI/CD pipelines and ensure smooth deployment of data engineering solutions.

Required Skills and Experience:
Technical Expertise - Expertise in the Azure platform, including Azure Data Factory, Azure SQL Database, Azure Synapse, and Azure Fabric. Exposure to Databricks and lakehouse architectures and technologies. Extensive knowledge of data modeling, ETL processes, and data warehouse design principles. Experience in machine learning and AI services in Azure.
Professional Experience - 5+ years of experience in database development using SQL. 5+ years of integration and data engineering experience. 5+ years of experience using Azure SQL DB, ADF, and Azure Synapse. 2+ years of experience using Power BI. Comprehensive understanding of data modelling. Relevant certifications in data engineering, machine learning, or AI.
Key Competencies - Expertise in data engineering and database development. Familiarity with the Microsoft Fabric technologies, including OneLake, Lakehouse, and Data Factory. Strong understanding of data governance, compliance, and security frameworks. Proven ability to drive innovation in data strategy and cloud solutions. A deep understanding of business intelligence workflows and the ability to align technical solutions with them. Strong database design skills, including an understanding of both normalised-form and dimensional-form databases. In-depth knowledge and experience of data-warehousing strategies and techniques, e.g., Kimball data warehousing. Experience with cloud-based data integration tools like Azure Data Factory. Experience with Azure DevOps or Jira is a plus. Experience working with finance data is highly desirable. Familiarity with agile development techniques and objectives.

Location: Pune. Brand: Dentsu. Time Type: Full time. Contract Type: Permanent.
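For context on the kind of ADF orchestration work described above, here is a hedged sketch of triggering and monitoring a pipeline run with the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, pipeline, and parameter names are all assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All names below are illustrative assumptions.
subscription_id = "<subscription-id>"
resource_group = "rg-enterprise-data"
factory_name = "adf-enterprise"
pipeline_name = "pl_daily_load"

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off an ETL/integration pipeline with runtime parameters
# (the parameter must exist on the pipeline; assumed here).
run = adf_client.pipelines.create_run(
    resource_group, factory_name, pipeline_name,
    parameters={"load_date": "2025-07-31"},
)

# Poll the run status; a real deployment would alert on failure instead of printing.
status = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id)
print(status.status)  # e.g. Queued / InProgress / Succeeded / Failed
```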

Posted 2 days ago

Apply

2.0 - 3.0 years

0 Lacs

Telangana

On-site

Role: ML Engineer (Associate / Senior)
Experience: 2-3 years (Associate), 4-5 years (Senior)
Mandatory skills: Python, MLOps, Docker and Kubernetes, FastAPI or Flask, CI/CD, Jenkins, Spark, SQL, RDB, Cosmos, Kafka, ADLS, API, Databricks
Location: Bangalore
Notice period: less than 60 days
Other skills: Azure, LLMOps, ADF, ETL

Job Description: We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.

Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions. Experience deploying ML models to production. Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders. Integrate machine learning models seamlessly into existing production systems. Continuously monitor and evaluate model performance, and retrain models automatically or periodically. Streamline existing ML pipelines to increase throughput. Identify and address security vulnerabilities in existing applications proactively. Design, develop, and implement machine learning models, preferably for insurance-related applications. Well versed in the Azure ecosystem. Knowledge of NLP and Generative AI techniques; relevant experience will be a plus. Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus. Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.
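As a minimal sketch of the real-time inferencing API work mentioned above, here is a FastAPI example assuming a scikit-learn-style classifier artifact; the service name, model path, and feature names are illustrative assumptions.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="claims-risk-scoring")        # illustrative service name
model = joblib.load("model/claims_risk.joblib")   # assumed pre-trained artifact

class Features(BaseModel):
    policy_age_years: float
    claim_amount: float
    prior_claims: int

@app.post("/predict")
def predict(features: Features) -> dict:
    # Field order must match the order used at training time (an assumption here).
    row = [[features.policy_age_years, features.claim_amount, features.prior_claims]]
    score = float(model.predict_proba(row)[0][1])
    return {"risk_score": score}
```

Run locally with, for example, `uvicorn app:app --reload`, then POST JSON features to `/predict`.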

Posted 2 days ago

Apply

10.0 years

8 - 10 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing - whatever your ambitions.

Senior Software Engineer - MLOps
We are looking for a highly skilled Senior Software Engineer - MLOps with deep expertise in building and managing production-grade ML pipelines in AWS and Azure cloud environments. This role requires a strong foundation in software engineering, DevOps principles, and ML model lifecycle automation to enable reliable and scalable machine learning operations across the organization.

Key responsibilities include:
Design and build robust MLOps pipelines for model training, validation, deployment, and monitoring.
Automate workflows using CI/CD tools such as GitHub Actions, Azure DevOps, Jenkins, or Argo Workflows.
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
Design secure and cost-efficient ML architecture leveraging cloud-native services.
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
Implement cost optimization and performance tuning for cloud workloads.
Package ML models using Docker, and orchestrate deployments with Kubernetes on EKS/AKS.
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Integrate observability tools for model performance, drift detection, and lineage tracking (e.g., Fiddler, MLflow, Prometheus, Grafana, Azure Monitor, CloudWatch).
Ensure model reproducibility, versioning, and compliance with audit and regulatory requirements.
Collaborate with data scientists, software engineers, DevOps, and cloud architects to operationalize AI/ML use cases.
Mentor junior MLOps engineers and evangelize MLOps best practices across teams.

Required qualifications:
Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
10 years in DevOps, with 2+ years in MLOps.
Proficient with MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
Experience with feature stores (e.g., Feast), model registries, and experiment tracking.
Proficiency in DevOps and MLOps automation: CloudFormation/Terraform/Bicep.

Requisition ID: 610750

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most - united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do - as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
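To illustrate the experiment tracking and model registry work referenced above, here is a small MLflow sketch; the experiment and model names are assumptions, and registering a model assumes a tracking backend with registry support.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-mlops-pipeline")  # illustrative experiment name

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Registering enables versioning and controlled promotion between stages
    # (requires a registry-capable tracking server; name is an assumption).
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-risk-model")
```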

Posted 2 days ago

Apply

7.0 years

8 - 10 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing - whatever your ambitions.

Software Engineer - MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key responsibilities include:
Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring.
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators.
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required qualifications:
Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
7 years in DevOps, with 2+ years in MLOps.
Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
Proficient in Python and familiar with bash scripting.
Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most - united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do - as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
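As a minimal sketch of the pytest-style model/data validation mentioned above, the thresholds, column names, and file path below are assumptions for illustration only.

```python
# test_model_quality.py -- illustrative checks; thresholds and paths are assumptions.
import pandas as pd
import pytest

PREDICTIONS_PATH = "artifacts/predictions.csv"  # assumed pipeline output

@pytest.fixture()
def predictions() -> pd.DataFrame:
    return pd.read_csv(PREDICTIONS_PATH)

def test_no_missing_scores(predictions):
    assert predictions["score"].notna().all(), "every row must be scored"

def test_scores_are_probabilities(predictions):
    assert predictions["score"].between(0.0, 1.0).all()

def test_minimum_accuracy(predictions):
    # Assumes a binary 0/1 label column and a 0.5 decision threshold.
    accuracy = (predictions["label"] == (predictions["score"] > 0.5)).mean()
    assert accuracy >= 0.80, f"accuracy {accuracy:.2f} below agreed threshold"
```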

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview: Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.

Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities: Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications: 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
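To illustrate the pipeline monitoring and observability work this role describes, here is a hedged sketch that queries recent ADF pipeline runs for failures using the azure-mgmt-datafactory SDK; the subscription, resource group, and factory names are assumptions.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

# Illustrative names only.
adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
resource_group, factory = "rg-dataops", "adf-enterprise"

window = RunFilterParameters(
    last_updated_after=datetime.now(timezone.utc) - timedelta(hours=24),
    last_updated_before=datetime.now(timezone.utc),
)

# Surface failed runs from the last 24 hours; a real setup would raise alerts or tickets.
runs = adf.pipeline_runs.query_by_factory(resource_group, factory, window)
for run in runs.value:
    if run.status == "Failed":
        print(run.pipeline_name, run.run_id, run.message)
```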

Posted 2 days ago

Apply

8.0 years

0 Lacs

India

Remote

Azure Data Engineer
Location: Remote
Shift: 6am - 3pm, US Central time zone

Job Summary: We are seeking a highly skilled Data Engineer with strong experience in PostgreSQL and SQL Server, as well as hands-on expertise in Azure Data Factory (ADF) and Databricks. The ideal candidate will be responsible for building scalable data pipelines, optimizing database performance, and designing robust data models and schemas to support enterprise data initiatives.

Key Responsibilities: Design and develop robust ETL/ELT pipelines using Azure Data Factory and Databricks. Develop and optimize complex SQL queries and functions in PostgreSQL. Develop and optimize complex SQL queries in SQL Server. Perform performance tuning and query optimization for PostgreSQL. Design and implement data models and schema structures aligned with business and analytical needs. Collaborate with data architects, analysts, and business stakeholders to understand data requirements. Ensure data quality, integrity, and security across all data platforms. Monitor and troubleshoot data pipeline issues and implement proactive solutions. Participate in code reviews, sprint planning, and agile ceremonies.

Required Skills & Qualifications: 8+ years of experience in data engineering or a related field. Strong expertise in PostgreSQL and SQL Server development, performance tuning, and schema design. Experience in data migration from SQL Server to PostgreSQL. Hands-on experience with Azure Data Factory (ADF) and Databricks. Proficiency in SQL, Python, or Scala for data processing. Experience with data modeling techniques (e.g., star/snowflake schemas, normalization). Familiarity with CI/CD pipelines, version control (Git), and agile methodologies. Excellent problem-solving and communication skills.

If interested, share your resume on aditya.dhumal@leanitcorp.com
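As a small sketch of the PostgreSQL query-tuning work described above, the example below inspects a query plan with EXPLAIN ANALYZE via psycopg2; the connection string, tables, and columns are assumptions.

```python
import psycopg2

# Placeholder DSN; real credentials would come from a secret store such as Key Vault.
conn = psycopg2.connect("dbname=analytics user=etl_user password=*** host=localhost")

QUERY = """
    SELECT c.customer_id, SUM(o.amount) AS total_spend
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= %s
    GROUP BY c.customer_id
    ORDER BY total_spend DESC
    LIMIT 50;
"""

with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE exposes the actual plan and timings used for tuning decisions,
    # e.g. whether an index on order_date is being used.
    cur.execute("EXPLAIN ANALYZE " + QUERY, ("2025-01-01",))
    for (line,) in cur.fetchall():
        print(line)
```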

Posted 2 days ago

Apply

2.0 years

3 - 10 Lacs

India

Remote

Job Title: Sr. Data Engineer
Experience: 2+ years
Location: Indore (on-site)
Industry: IT
Job Type: Full time

Roles and Responsibilities:
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch after this posting).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge:
1. Core skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including:
○ Entity-Relationship (ER) diagrams
○ Dimensional modeling
○ Data normalization
● Familiarity with ETL processes and tools like:
○ Azure Data Factory (ADF)
○ SSIS (SQL Server Integration Services)
2. Cloud expertise:
● AWS services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big data and workflow automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements:
● Education: Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome. 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders. Ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
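Referenced from item 7 above: a minimal Apache Airflow sketch of an orchestrated ingestion workflow (Airflow 2.4+ style); the DAG id, schedule, and task logic are illustrative assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")       # placeholder logic

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="daily_ingestion",                     # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load.
    t1 >> t2 >> t3
```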

Posted 2 days ago

Apply

0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Azure Data Engineer (ADF, ADB, PySpark, Synapse)
Must-have: PySpark, SQL, Azure services (ADF, Databricks, Synapse)
Good-to-have: Python, Azure Key Vault
Experience: 10 to 12 years
Location: Bhubaneswar

Posted 2 days ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Skill required: Tech for Operations - Microsoft Azure Cloud Services
Designation: App Automation Eng Senior Analyst
Qualifications: Any Graduation/12th/PUC/HSC
Years of Experience: 5 to 8 years

About Accenture: Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song - all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com

What would you do? In our Service Supply Chain offering, we leverage a combination of proprietary technology and client systems to develop, execute, and deliver BPaaS (business process as a service) or Managed Service solutions across the service lifecycle: Plan, Deliver, and Recover. In this role, you will partner with business development and act as a Business Subject Matter Expert (SME) to help build resilient solutions that will enhance our clients' supply chains and customer experience. The Senior Azure Data Factory (ADF) Support Engineer II will be a critical member of our Enterprise Applications Team, responsible for designing, supporting, and maintaining robust data solutions. The ideal candidate is proficient in ADF and SQL and has extensive experience in troubleshooting Azure Data Factory environments, conducting code reviews, and bug fixing. This role requires a strategic thinker who can collaborate with cross-functional teams to drive our data strategy and ensure the optimal performance of our data systems.

What are we looking for? Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven experience (5+ years) as an Azure Data Factory Support Engineer II. Expertise in ADF with a deep understanding of its data-related libraries. Strong experience in Azure cloud services, including troubleshooting and optimizing cloud-based environments. Proficient in SQL and experience with SQL database design. Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. Experience with ADF pipelines. Excellent problem-solving and troubleshooting skills. Experience in code review and debugging in a collaborative project setting. Excellent verbal and written communication skills. Ability to work in a fast-paced, team-oriented environment.
Strong understanding of the business and a passion for the mission of Service Supply Chain. Hands-on experience with Jira, DevOps ticketing, and ServiceNow is good to have.

Roles and Responsibilities: Innovate. Collaborate. Build. Create. Solve.
ADF and associated systems: Ensure systems meet business requirements and industry practices. Integrate new data management technologies and software engineering tools into existing structures. Recommend ways to improve data reliability, efficiency, and quality. Use large data sets to address business issues. Use data to discover tasks that can be automated. Fix bugs to ensure a robust and sustainable codebase. Collaborate closely with the relevant teams to diagnose and resolve issues in data processing systems, ensuring minimal downtime and optimal performance. Analyze and comprehend existing ADF data pipelines, systems, and processes to identify and troubleshoot issues effectively. Develop, test, and implement code changes to fix bugs and improve the efficiency and reliability of data pipelines. Review and validate change requests from stakeholders, ensuring they align with system capabilities and business objectives. Implement robust monitoring solutions to proactively detect and address issues in ADF data pipelines and related infrastructure. Coordinate with data architects and other team members to ensure that changes are in line with the overall architecture and data strategy. Document all changes, bug fixes, and updates meticulously, maintaining clear and comprehensive records for future reference and compliance. Provide technical guidance and support to other team members, promoting a culture of continuous learning and improvement. Stay updated with the latest technologies and practices in ADF to continuously improve the support and maintenance of data systems.

Flexible work hours to include US time zones: flexible working hours; however, this position may require you to work a rotational on-call schedule, evenings, weekends, and holiday shifts when the need arises. Participate in the Demand Management and Change Management processes. Work in partnership with internal business, external third-party technical teams, and functional teams as a technology partner in communicating and coordinating delivery of technology services from Technology for Operations (TfO).

Posted 2 days ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing - whatever your ambitions.

Software Engineer - MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key responsibilities include:
Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring.
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators.
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required qualifications:
Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
7 years in DevOps, with 2+ years in MLOps.
Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
Proficient in Python and familiar with bash scripting.
Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most - united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do - as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Java Full Stack & Oracle ADF Developer

Additional Skills: Experience with WebLogic Server

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4 - 8 Years
Role: Senior Software Engineer
Employment Type: Full Time Employee
Company: NewVision
Company Name: New Vision Softcom & Consultancy Pvt. Ltd
Function: Business Units (BU)
Department/Practice: Data / Data Engineering
Region: APAC
Country: India
Base Office Location: Pune
Working Model: Hybrid
Weekly Off: Pune Office Standard
State: Maharashtra
Skill: Azure Databricks
Highest Education: Graduation/equivalent course
Certifications: DP-201 (Designing an Azure Data Solution), DP-203T00 (Data Engineering on Microsoft Azure)
Working Language: English

Job Description - Position Summary: We are seeking a talented Databricks Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities: Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools. Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics. Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks. Collaborate with data analysts to understand data requirements and ensure data quality. Hands-on experience in designing and developing data lakes and warehouses. Implement data governance practices to ensure data security and compliance. Monitor and maintain data pipelines for optimal performance and troubleshoot any issues. Develop and maintain unit tests for data pipeline code. Work collaboratively with other engineers and data professionals in an Agile development environment.

Preferred Skills & Experience: Good knowledge of PySpark and working knowledge of Python. Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics). Experience with large dataset handling. Hands-on experience in designing and developing data lakes and warehouses.

New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
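To illustrate the Databricks pipeline work described above, here is a small PySpark/Delta Lake sketch of an incremental upsert into a lakehouse table; the paths, table, and column names are assumptions.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Illustrative incremental batch, e.g. landed by an upstream ADF copy activity.
updates = spark.read.parquet("/mnt/landing/customers/2025-07-31/")

target = DeltaTable.forName(spark, "curated.customers")  # assumed existing Delta table

# Upsert: update existing customers, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```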

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Pricing Revenue Growth Consultant, your primary role will be to advise on building a pricing and promotion tool for a Consumer Packaged Goods (CPG) client. This tool will encompass pricing strategies, trade promotions, and revenue growth initiatives. You will be responsible for developing analytics and machine learning models to analyze price elasticity, promotion effectiveness, and trade promotion optimization (a minimal elasticity example follows this posting). Collaboration with the CPG business, marketing, data scientists, and other teams will be essential for the successful delivery of the project and tool.

Your business domain skills will be crucial in this role, including expertise in Trade Promotion Management (TPM), Trade Promotion Optimization (TPO), promotion depth and frequency forecasting, price pack architecture, competitive price tracking, revenue growth management, and financial modeling. Additionally, you will need proficiency in AI and machine learning for pricing, and in dynamic pricing implementation.

Key Responsibilities: Utilize consulting skills for hypothesis-driven problem solving, go-to-market pricing, and revenue growth execution. Conduct advisory presentations and data storytelling. Provide project leadership and execution.

Technical Requirements: Proficiency in programming languages such as Python and R for data manipulation and analysis. Expertise in machine learning algorithms and statistical modeling techniques. Familiarity with data warehousing, data pipelines, and data visualization tools like Tableau or Power BI. Experience with cloud platforms and services such as ADF, Databricks, Azure, and Azure AI services.

Additional Responsibilities: Working collaboratively with cross-functional teams across sales, marketing, and product development. Managing stakeholders and leading teams. Thriving in a fast-paced environment focused on delivering timely insights to support business decisions. Demonstrating excellent problem-solving skills and the ability to address complex technical challenges. Communicating effectively with cross-functional teams and stakeholders. Managing multiple projects simultaneously and prioritizing tasks based on business impact.

Qualifications: A degree in Data Science or Computer Science with a specialization in data science. A Master's in Business Administration and Analytics is preferred.

Preferred Skills: Experience in Technology, Big Data, and Text Analytics.
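Referenced above: a compact sketch of price-elasticity estimation using a log-log regression, where the fitted slope approximates elasticity. The data here is synthetic and the column semantics are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic weekly data: unit price and units sold, generated with a known elasticity.
rng = np.random.default_rng(0)
price = rng.uniform(2.0, 5.0, size=104)
true_elasticity = -1.8
units = 500 * price ** true_elasticity * rng.lognormal(0, 0.05, size=104)

# In a log-log model, log(units) = a + b*log(price); the slope b estimates price elasticity.
X = np.log(price).reshape(-1, 1)
y = np.log(units)

model = LinearRegression().fit(X, y)
print(f"estimated elasticity: {model.coef_[0]:.2f}")  # should be close to -1.8
```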

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be a seasoned Senior ETL/DB Tester with expertise in data validation and database testing across modern data platforms. Your role will involve designing, developing, and executing comprehensive test plans for ETL and database validation processes. You will validate data transformations and integrity across multiple stages and systems such as Talend, ADF, Snowflake, and Power BI.

Your responsibilities will include performing manual testing and defect tracking using tools like Zephyr or Tosca. You will analyze business and data requirements to ensure full test coverage, and write and execute complex SQL queries for data reconciliation (a minimal reconciliation sketch follows this posting). Identifying data-related issues and conducting root cause analysis in collaboration with developers will be crucial aspects of your role. You will track and manage bugs and enhancements through appropriate tools, and optimize testing strategies for performance, scalability, and accuracy in ETL processes.

Your skills should include proficiency in ETL tools such as Talend and ADF, experience working on data platforms like Snowflake, and experience with reporting/analytics tools like Power BI and VPI. You should also have expertise in testing tools like Zephyr or Tosca, manual testing experience, and strong SQL skills for validating complex data. Exposure to API testing and familiarity with advanced features of Power BI such as dashboards, DAX, and data modeling will be beneficial for this role.
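Referenced above: a minimal source-to-target reconciliation sketch comparing simple aggregates on both sides; the connection strings, table, and column names are assumptions, and the Snowflake engine URL assumes the snowflake-sqlalchemy dialect is installed.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection strings; real ones would come from a secrets manager.
source = create_engine("mssql+pyodbc://user:***@source_dsn")
target = create_engine("snowflake://user:***@account/db/schema")  # assumed Snowflake target

CHECKS = {
    "row_count": "SELECT COUNT(*) AS v FROM sales.orders",
    "amount_sum": "SELECT SUM(amount) AS v FROM sales.orders",
}

# Compare aggregates on both sides; mismatches become defects to investigate.
# Exact equality is assumed for illustration; float tolerance handling is omitted.
for name, sql in CHECKS.items():
    src = pd.read_sql(sql, source)["v"].iloc[0]
    tgt = pd.read_sql(sql, target)["v"].iloc[0]
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{name}: source={src} target={tgt} -> {status}")
```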

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

Adient is a leading global automotive seating supplier, supporting all major automakers in the differentiation of their vehicles through superior quality, technology, and performance. We are seeking a Sr. Data Analytics Lead to help build Adients data and analytics foundation, directly benefitting our internal business units, and our Consumers. You are self-motivated and data-curious, especially about how data can be used to optimize business opportunities. In this role you will own projects end-to-end, from conception to operationalization, demonstrating your comprehensive understanding of the full data product development lifecycle. You will employ various analytical techniques to solve complex problems, drive scalable cloud data architectures, and deliver data products to enhance decision making across the organization. In this role, you will also own the technical support for released applications being used by internal Adient teams. This includes the daily triage of problem tickets and change requests. You will have 2-3 developer direct reports to accommodate this support as well as new development. The successful candidate can lead medium to large scale analytics projects requiring minimal direction, is highly proficient in SQL and cloud-based technologies, has good communication skills, takes the initiative to explore and tackle problems, and is an effective people leader. The ideal candidate will be working within Adients Advanced Analytics team. You will be a part of an empowered, highly capable team collaborating with Business Relationship Managers, Product Owners, Data Engineers, Production Support, and Visualization Developers within multiple business units to understand the data analytics needs and translate those requirements into world-class solution architectures. You will lead and mentor a team of solution architects to research, analyze, implement, and support scalable data product solutions that power Adients analytics across the enterprise, and deliver on business priorities. Own technical support for released internal analytics applications. This includes the daily triage of problem tickets and change requests. Lead development and execution of reporting and analytics products to enable data-driven business decisions that will drive performance and lead to the accomplishment of annual goals. You will be leading, hiring, developing, and evolving the Analytics team and providing them technical direction with the support of other leads and architects. Understand the road ahead and ensure the team has the skills and tools necessary to succeed. Drive the team to develop operationally efficient analytic solutions. Manage resources/budget and partner with functional and business teams. Advocate sound software development practices and help develop and evangelize great engineering and organizational practices. You will be leading the team that designs and builds highly scalable data pipelines using new generation tools and technologies like Azure, Snowflake, Spark, Databricks, SQL, Python to induct data from various systems. Work with product owners to ensure priorities are understood and direct the team to support the vision of the larger Analytics organization. Translate complex business problem statements into analysis requirements and work with internal customers to define data product details based on expressed partner needs. 
Work closely with business and technical teams to deliver enterprise-grade datasets that are reliable, flexible, scalable, and provide a low cost of ownership. Develop SQL queries and data visualizations to fulfill internal customer application reporting requirements, as well as ad-hoc analysis requests, using tools such as PowerBI. Thoroughly document business requirements, data architecture solutions, and processes for business and technical audiences. Serve as a domain specialist on data and business processes within your area of focus and find solutions to operational or data issues in the data pipelines. Grow the technical ability of the team.
QUALIFICATIONS
- Bachelor's degree or equivalent with 8+ years of experience in data engineering, computer science, or statistics, with at least 2+ years of experience in leadership/management.
- Experience in developing Big Data cloud-based applications using the following technologies: SQL, Azure, Snowflake, PowerBI.
- Experience building complex ADF data pipelines and Data Flows to ingest data from on-prem sources, transform it, and sink it into Snowflake (a minimal sketch of this pattern follows this posting). Good understanding of ADF pipeline activities.
- Familiarity with the various Azure connectors used to establish on-prem data-source connectivity, as well as Snowflake data-warehouse connectivity over a private network.
- Ability to lead and work with hybrid teams and communicate effectively, both written and verbal, with technical and non-technical multi-functional teams.
- Ability to translate complex business requirements into scalable technical solutions meeting data warehousing design standards. Solid understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency.
- Experience with data visualization and dashboarding techniques to make complex data more accessible, understandable, and usable to drive business decisions and outcomes. Proficient in PowerBI.
- Extensive experience in data architecture, defining and maintaining data assets, and developing data architecture strategies to support reporting and data visualization tools.
- Understands common analytical data models like Kimball. Ensures physical data models align with best practice and requirements.
- Thrives in a dynamic environment, keeping composure and a positive attitude.
- Experience in distribution or manufacturing organizations is a plus.
PREFERRED
- Experience with the Snowflake cloud data warehouse.
- Experience with Azure PaaS services.
- Experience with T-SQL, SQL Server, Azure SQL, Snowflake SQL, Oracle SQL.
- Experience with Azure Storage account connectivity.
- Experience developing visualizations with PowerBI and BusinessObjects.
- Experience with Databricks.
- Experience with ADLS Gen2.
- Experience with Azure VNet private endpoints on a private network.
- Proficient with Spark and Python.
- Advanced proficiency in SQL, including joining multiple data sets across different data grains, query optimization, and pivoting data.
- MS Azure certifications.
- Snowflake certifications.
- Experience with other leading commercial cloud platforms like AWS.
- Experience with installing and configuring ODBC and JDBC drivers on Windows.
- Candidate resides in the Plymouth, MI area.
PRIMARY LOCATION
Pune Tech Center.
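As a hedged illustration of the pipeline pattern referenced in the qualifications (ingest on-prem extracts, transform, and sink into Snowflake), the sketch below uses PySpark with the Snowflake Spark connector. The storage path, table names, column names, and connection options are assumptions for illustration only, and the connector is presumed to be available on the cluster:

```python
# Minimal PySpark sketch of one pipeline stage: read raw plant data, apply a
# basic cleansing step, and sink it into Snowflake. Paths, table names, and
# connection options are illustrative placeholders, not Adient specifics.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("plant_orders_to_snowflake").getOrCreate()

raw = spark.read.parquet("abfss://landing@storageacct.dfs.core.windows.net/plant_orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("order_amount").isNotNull())
)

snowflake_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",   # placeholder account URL
    "sfUser": "svc_analytics",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "LOAD_WH",
    "dbtable": "PLANT_ORDERS",
}

(cleaned.write
        .format("snowflake")      # Snowflake Spark connector
        .options(**snowflake_options)
        .mode("append")
        .save())
```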

Posted 2 days ago

Apply

15.0 years

0 Lacs

India

On-site

Job Summary
As part of the data leadership team, the Capability Lead – Databricks will be responsible for building, scaling, and delivering Databricks-based data and AI capabilities across the organization. This leadership role spans technical vision, solution architecture, team building, partnership development, and delivery excellence using the Databricks Unified Analytics Platform across industries. The individual will collaborate with clients, alliance partners (Databricks, Azure, AWS), internal stakeholders, and sales teams to drive adoption of lakehouse architectures, data engineering best practices, and AI/ML modernization.
Areas of Responsibility
1. Offering and Capability Development: Develop and enhance Databricks-based data platform offerings and accelerators. Define best practices, architectural standards, and reusable frameworks for Databricks. Collaborate with alliance teams to strengthen the partnership with Databricks.
2. Technical Leadership: Provide architectural guidance for Databricks solution design and implementation. Lead solutioning efforts for proposals, RFIs, and RFPs involving Databricks. Conduct technical reviews and ensure adherence to design standards. Act as a technical escalation point for complex project challenges.
3. Delivery Oversight: Support delivery teams with technical expertise across Databricks projects. Drive quality assurance, performance optimization, and project risk mitigation. Review project artifacts and ensure alignment with Databricks best practices. Foster a culture of continuous improvement and delivery excellence.
4. Talent Development: Build and grow a high-performing Databricks capability team. Define skill development pathways and certification goals for team members. Mentor architects, developers, and consultants on Databricks technologies. Drive community-of-practice initiatives to share knowledge and innovations.
5. Business Development Support: Engage with sales and pre-sales teams to position Databricks capabilities. Contribute to account growth by identifying new Databricks opportunities. Participate in client presentations, workshops, and technical discussions.
6. Thought Leadership and Innovation: Build thought leadership through whitepapers, blogs, and webinars. Stay updated with Databricks product enhancements and industry trends.
This role is highly collaborative and will work extremely closely with cross-functional teams to fulfill the above responsibilities.
Job Requirements
12–15 years of experience in data engineering, analytics, and AI/ML. 3–5 years of strong hands-on experience with Databricks (on Azure, AWS, or GCP). Expertise in Spark (PySpark/Scala), Delta Lake, Unity Catalog, MLflow, and Databricks notebooks. Experience designing and implementing lakehouse architectures at scale. Familiarity with data governance, security, and compliance frameworks (GDPR, HIPAA, etc.). Experience with real-time and batch data pipelines (Structured Streaming, Auto Loader, Kafka, etc.); a minimal ingestion sketch follows this posting. Strong understanding of MLOps and the AI/ML lifecycle. Certifications in Databricks (e.g., Databricks Certified Data Engineer Professional, ML Engineer Associate) are preferred. Experience with hyperscaler ecosystems (Azure Data Lake, AWS S3, GCP GCS, ADF, Glue, etc.). Experience managing large, distributed teams and working with CXO-level stakeholders. Strong problem-solving, analytical, and decision-making skills. Excellent verbal, written, and client-facing communication.
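The streaming ingestion requirement above (Auto Loader, Structured Streaming, Delta Lake) can be illustrated with a short PySpark sketch. The paths and the three-level table name are assumptions, and a Databricks runtime with Unity Catalog is presumed:

```python
# Minimal sketch of a lakehouse ingestion pattern: Auto Loader picks up new files
# and writes them to a Delta table via Structured Streaming. Paths and table
# names are illustrative assumptions (Databricks runtime assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream
         .format("cloudFiles")                       # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
         .load("/mnt/lake/raw/events/")
)

(events.writeStream
       .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
       .trigger(availableNow=True)                   # incremental, batch-style run
       .toTable("main.bronze.events"))               # assumed Unity Catalog table
```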

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

India

Remote

Position: Oracle Cloud Technical
Location: Gurugram, Kolkata, Mumbai, Chennai, Hyderabad, Bangalore, Pune (Hybrid – 2 days on-site, 3 days remote)
Duration: Permanent
Role Description
Candidate should have at least 5-8 years of experience, with one or more project implementation experiences on Cloud ERP.
Expertise in web services: developing and consuming REST and SOAP based services; consuming Oracle Cloud services including ERP Integration Service, External Report Service, HCM Data Loader, Generic SOAP Port, and other common Cloud services (a schematic consumption example follows this posting); web service security.
Should have expertise in one of the following areas: customization and integrations using PaaS technologies such as JCS, JCS-SX, ADF, OAF, OAIC, OIC, ICS, etc.
Should have expertise in data conversions using FBDI templates and Rapid Implementation Sheets, with the ability to reconcile the data and correct it using ADFdi.
Should have expertise in developing BIP reports, OTBI reports, SmartView reports, and FRS reports.
Should have expertise in creating and scheduling Oracle Cloud ESS jobs and BIP jobs.
Should have experience with integration patterns in the Cloud.
Should have exposure to the onsite/offshore model in order to gather requirements, design the solution, and create technical specifications for individual requirements.
Should know about cloud configuration and setup, and about Application Composer and Page Composer to build UI extensions.
Should know about Oracle SaaS security, including privileges, roles, and data access.
Should know how to migrate SaaS configurations (DFFs, lookups, and other related functional setups).
Should know best practices for migrating PaaS solutions, including DBCS PL/SQL and SQL code, ICS integrations, SOA composite deployments, and others.
Knowledge of ADF, OAF, MAF, and Oracle EBS on-premise will be an added advantage.
Should possess excellent verbal and written communication skills.
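A schematic Python sketch of consuming one of the Oracle Cloud SOAP services mentioned above (running a BIP report through the External Report service). The host, endpoint path, namespaces, credentials, and report path are assumptions; the target instance's WSDL remains the source of truth for the actual contract and security policy:

```python
# Schematic sketch of calling an Oracle Cloud SOAP service with plain requests.
# Host, service path, namespaces, and report path are assumed placeholders.
import requests

HOST = "https://fa-xxxx.oraclecloud.com"                              # placeholder pod URL
ENDPOINT = f"{HOST}/xmlpserver/services/ExternalReportWSSService"     # assumed service path

envelope = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
  <soapenv:Body>
    <pub:runReport>
      <pub:reportRequest>
        <pub:reportAbsolutePath>/Custom/Finance/AP_Invoices_Report.xdo</pub:reportAbsolutePath>
        <pub:attributeFormat>csv</pub:attributeFormat>
      </pub:reportRequest>
    </pub:runReport>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=envelope,
    headers={"Content-Type": "text/xml; charset=utf-8"},
    auth=("integration.user", "***"),     # basic auth shown; WS-Security is also common
    timeout=120,
)
response.raise_for_status()
print(response.status_code, len(response.content), "bytes of SOAP response")
# The report payload comes back base64-encoded inside the SOAP body and would be
# extracted and decoded before any downstream processing or reconciliation.
```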

Posted 3 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
8-10 years of experience as an Azure Data Engineer with expertise in Databricks and Azure Data Factory. Programming expertise in SQL, Spark, and Python is mandatory. 2+ years of experience with medical claims in healthcare and/or managed care is required. Expertise in developing ETL/ELT pipelines for BI and data visualization. Familiarity with normalized, dimensional, star-schema, and snowflake-schema models is mandatory. Prior experience using version control to manage code changes.
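A minimal PySpark sketch of the dimensional-modeling work described above: de-duplicating raw medical claims and joining them to a member dimension to build a star-schema fact table. The table and column names are illustrative assumptions rather than a real claims model:

```python
# Minimal sketch of a star-schema load: de-duplicate claim lines and attach the
# member dimension's surrogate key. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

claims_raw = spark.read.table("bronze.medical_claims")
dim_member = spark.read.table("gold.dim_member")          # surrogate key: member_sk

# Keep the latest version of each claim line (late-arriving adjustments win).
latest = Window.partitionBy("claim_id", "claim_line").orderBy(F.col("load_ts").desc())
claims_dedup = (
    claims_raw.withColumn("rn", F.row_number().over(latest))
              .filter("rn = 1")
              .drop("rn")
)

fact_claims = (
    claims_dedup.join(dim_member, on="member_id", how="left")
                .select("claim_id", "claim_line", "member_sk",
                        F.to_date("service_date").alias("service_date_key"),
                        "allowed_amount", "paid_amount")
)

fact_claims.write.mode("overwrite").saveAsTable("gold.fact_medical_claims")
```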

Posted 3 days ago

Apply

10.0 years

0 Lacs

India

Remote

This is a contract role from August 2025 to March 2026, fully remote in India.
Required Skills & Experience
- 10+ years of overall Data Science experience
- 2+ years as a Lead Data Scientist communicating with stakeholders, participating in roadmap and design reviews, and guiding the team
- Proven experience in delivering successful IBP demand planning forecasting models for enterprise-level organizations, i.e., end-to-end testing, data validation and data quality, model adoption, etc.
- Expertise in Python and PySpark
- Experience with various algorithms including, but not limited to, forecasting, econometrics, and curve-fitting methods; demand forecasting is the most important
- 3+ years in a development role focused on ML model implementation, experimentation, and related software engineering
- Experience working in the Azure ecosystem: ADF, ADLS, Databricks, Blob, etc.
- ADO board experience
- Strong SQL background
- Expertise in CI/CD
- Excellent problem-solving and analytical skills; strong communication and teamwork abilities; ability to work in a fast-paced and dynamic environment
- Excellent written and verbal communication
- Ability to work in a fast-paced, global environment
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Nice to Have Skills & Experience
- Industry experience in CPG, F&B, or FMCG
Job Description
A Fortune 50 client is looking for a Lead Data Scientist Engineer to join their team in support of the Integrated Business Planning program for their European sector. As the Lead Data Scientist, you will be working on the client's large Integrated Business Planning program, helping them change their data hierarchy by reducing their 13k demand forecasting models to 7-8k in an effort to create fewer but more efficient models. This role requires someone who understands IBP demand planning forecasting and can work with the Forecasting Analyst and business stakeholders to create the roadmap and guide the delivery of mapping the old existing data to the new data models. You will be hands-on in leading the end-to-end data testing and model adoption efforts, guiding the team to delivery, performing data validation and data quality checks, understanding the machine learning engine (model selection logic), and providing mentorship to other team members. Making business recommendations and creating solutions for problems as they arise will be key to success. An ideal candidate will have a passion for solving problems, enjoy working with business stakeholders, and be comfortable working in a smaller team within a global organization. This role allows candidates to be remote in India; however, you must be available to work either 10am-7pm IST or 11am-8pm IST.
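As a hedged baseline for the demand-forecasting work described above, the sketch below fits one Holt-Winters model per series with statsmodels and produces a 13-week forecast. The file name, column names, and weekly grain are assumptions; an actual IBP engine would layer model selection, validation, and hierarchy mapping on top of something like this:

```python
# Baseline demand-forecast sketch: one Holt-Winters model per series.
# Input layout (series_id, week, units) and the weekly grain are assumed.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

history = pd.read_parquet("weekly_demand.parquet")   # assumed columns: series_id, week, units

forecasts = []
for series_id, grp in history.groupby("series_id"):
    y = grp.sort_values("week")["units"].reset_index(drop=True)
    # Additive trend + 52-week additive seasonality; needs at least two years of history.
    model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=52).fit()
    fc = pd.DataFrame({
        "series_id": series_id,
        "week_ahead": range(1, 14),
        "forecast_units": model.forecast(13).to_numpy(),
    })
    forecasts.append(fc)

forecast_df = pd.concat(forecasts, ignore_index=True)
print(forecast_df.head())
```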

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities
About the role: As a Junior/Senior Data Engineer, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization. Responsibilities include architecting and designing complex data systems and pipelines; leading and mentoring junior data engineers and team members; collaborating with cross-functional teams to define data requirements; implementing advanced data quality checks and ensuring data integrity (a minimal sketch appears at the end of this posting); optimizing data processes for efficiency and scalability; overseeing data security and compliance measures; evaluating and recommending new technologies to enhance data infrastructure; and providing technical expertise and guidance for critical data projects.
Required Skills & Experience
Proficiency in designing and building complex data pipelines and data processing systems. Leadership and mentorship capabilities to guide junior data engineers and foster skill development. Strong expertise in data modeling and database design for optimal performance. Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness. Knowledge of data governance principles, ensuring data quality, security, and compliance. Familiarity with big data technologies like Hadoop, Spark, or NoSQL. Expertise in implementing robust data security measures and access controls. Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills: Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Years of Experience Required: 4-7 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 19 more}
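A minimal sketch of the advanced data-quality checks mentioned in this posting, written in PySpark so it runs on any of the listed platforms (Databricks, Glue, etc.). The dataset path, column names, and rules are hypothetical placeholders:

```python
# Minimal data-quality gate: run simple rule checks and fail the job on violations.
# The dataset and the rules themselves are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
orders = spark.read.parquet("/data/curated/orders/")

checks = {
    "order_id_not_null": orders.filter(F.col("order_id").isNull()).count() == 0,
    "order_id_unique": orders.count() == orders.select("order_id").distinct().count(),
    "amount_non_negative": orders.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```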

Posted 3 days ago

Apply

