0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility: Develop scalable data processing solutions using Azure Databricks, ensuring efficient data workflows and optimized performance. Build and maintain ETL/ELT pipelines, leveraging PySpark/Spark and SQL to transform and process large datasets. Implement data security, access controls, and governance standards, ensuring data quality and integrity throughout ETL processes. Cloud Lead with experience in Azure ADF, Databricks, and PySpark. Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions and estimation on AWS and Azure Cloud. Experience in Azure Databricks, ADF, Azure Synapse, and PySpark. Experience with integration of different data sources with Data Warehouse and Data Lake is required.
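For readers unfamiliar with the PySpark/Spark and SQL pipeline work this posting describes, the following is a minimal, hedged sketch of a Databricks-style ETL step. The mount paths, column names, and schema are hypothetical placeholders, not details taken from the role.

```python
# Hypothetical sketch of an extract-transform-load step in PySpark.
# Paths and column names are placeholders for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in the data lake
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Transform: cast types, drop rows without a key, derive a partition column
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned output for downstream consumers
orders.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders/")
```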
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Engineer (Azure Data Factory and Azure Databricks) Job Description We seek a Data Engineer with at least three years of experience in Azure Databricks, Azure Data Lake Storage Gen2 (ADLS), and Azure Data Factory (ADF). The candidate will design, build, and maintain data pipelines using Azure ADF/Databricks and should be able to handle data modelling and governance. They will also work closely with our data science and engineering teams to develop and deploy machine learning models and other advanced analytics solutions. Primary skills: Azure Data Factory, Azure Databricks, SQL, ADLS, Spark SQL, Python (Pandas), PySpark or Scala. Secondary skills: Basics of Azure security (RBAC, Azure AD), Hadoop, HDFS, ADLS, Azure DBFS; Power BI and Tableau visualization tools are a plus. Responsibilities · Design, build, and maintain data pipelines using Azure ADF/Databricks. · Strong expertise in Azure Data Factory, Databricks, and related technologies such as Azure Synapse Analytics and Azure Functions. · Hands-on experience in designing and implementing data pipelines and workflows in Azure Data Factory and Databricks. · Experience in extraction, transformation, and loading of data from multiple data sources into target databases, using Azure Databricks, Spark SQL, PySpark, and Azure SQL. · Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks. · Sound working experience in cleansing, transformation, business logic, incremental transformation of data, and merging the data with datamart tables (a brief sketch follows below). · Developing scalable and reusable frameworks for ingesting data sets. · Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times. · Interacting with stakeholders and leaders to understand business goals and data requirements. · Experience in working with Agile (Scrum, Sprint) and Waterfall methodologies. · Collaborate with data engineers, data architects, and business analysts to understand and translate business requirements into technical designs. · Provide technical guidance and support to junior team members. · Design, develop, and maintain SQL databases, including creating database schemas, tables, views, stored procedures, and triggers. · Self-starter and team player with excellent communication, problem-solving, and interpersonal skills, and a good aptitude for learning.
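Because the responsibilities above call out incremental transformation and merging into datamart tables, here is a hedged sketch of that upsert pattern using Delta Lake on Databricks. The table names, join key, and watermark literal are assumptions made purely for illustration.

```python
# Illustrative incremental merge (upsert) into a datamart table with Delta Lake.
# Table names, the join key, and the watermark value are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Incremental slice: only rows changed since the last successful load
updates = (
    spark.table("staging.customer_orders")
         .filter(F.col("modified_ts") > F.lit("2024-01-01 00:00:00"))  # placeholder watermark
)

# Merge into the datamart: update matched keys, insert new ones
target = DeltaTable.forName(spark, "mart.customer_orders")
(
    target.alias("t")
          .merge(updates.alias("s"), "t.order_id = s.order_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)
```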
Posted 3 weeks ago
5.0 years
6 - 9 Lacs
Hyderābād
On-site
Data Analytics Engineer – CL4 Role Overview: As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities: Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations. Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support. Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications and write high-quality, supportable, scalable code, ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams. Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production, with all quality checks through the SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of the full lifecycle of product development, focusing on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes and leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications: § A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. § Strong data engineering foundation with deep understanding of data structures, algorithms, code instrumentation, etc. § 5+ years of proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools) and data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). § 5+ years of experience with cloud-native engineering, using FaaS/PaaS/micro-services on cloud hyper-scalers like Azure, AWS, and GCP. § Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. § Strong preference will be given to candidates with experience in AI/ML and GenAI. § Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work. Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306141
Posted 3 weeks ago
5.0 - 7.0 years
8 - 10 Lacs
Thiruvananthapuram
On-site
5 - 7 Years, 1 Opening, Trivandrum. Role description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures of Outcomes: Adherence to engineering processes and standards; adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction of recurrence of known defects; quick turnaround of production bugs; completion of applicable technical/domain certifications; completion of all mandatory training requirements; efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times); average time to detect, respond to, and resolve pipeline failures or data issues; number of data security incidents or compliance breaches. Outputs Expected: Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers. Documentation: Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects.
Identify defect trends and take proactive measures to improve quality. Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions (a short sketch appears at the end of this posting). Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments: UST is seeking a highly skilled and motivated Lead Data Engineer to join our Telecommunications vertical, leading impactful data engineering initiatives for US-based Telco clients. The ideal candidate will have 6–8 years of experience in designing and developing scalable data pipelines using Snowflake, Azure Data Factory, and Azure Databricks. Proficiency in Python, PySpark, and advanced SQL is essential, with a strong focus on query optimization, performance tuning, and cost-effective architecture. A solid understanding of data integration, real-time and batch processing, and metadata management is required, along with experience in building robust ETL/ELT workflows. Candidates should demonstrate a strong commitment to data quality, validation, and consistency; working knowledge of data governance, RBAC, encryption, and compliance frameworks is considered a plus. Familiarity with Power BI or similar BI tools is also advantageous, enabling effective data visualization and storytelling.
The role demands the ability to work in a dynamic, fast-paced environment, collaborating closely with stakeholders and cross-functional teams while also being capable of working independently. Strong communication skills and the ability to coordinate across multiple teams and stakeholders are critical for success. In addition to technical expertise, the candidate should bring experience in solution design and architecture planning, contributing to scalable and future-ready data platforms. A proactive mindset, eagerness to learn, and adaptability to the rapidly evolving data engineering landscape—including AI integration into data workflows—are highly valued. This is a leadership role that involves mentoring junior engineers, fostering innovation, and driving continuous improvement in data engineering practices. Skills Azure Databricks,Snowflake,python,Data Engineering About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
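One of the knowledge areas listed above is SQL analytics with windowing functions. As a hedged illustration, the short PySpark sketch below computes a per-subscriber rank and a 7-day rolling average; the table and column names are invented for the example and are not part of the posting.

```python
# Windowing-function example in PySpark; source table and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
usage = spark.table("telco.daily_usage")  # placeholder source table

# Rank each subscriber's days by data volume, and add a 7-day rolling average
w_rank = Window.partitionBy("subscriber_id").orderBy(F.col("gb_used").desc())
w_roll = Window.partitionBy("subscriber_id").orderBy("usage_date").rowsBetween(-6, 0)

result = (
    usage.withColumn("usage_rank", F.row_number().over(w_rank))
         .withColumn("gb_used_7d_avg", F.avg("gb_used").over(w_roll))
)
result.show(10)
```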
Posted 3 weeks ago
5.0 years
3 - 5 Lacs
Ahmedabad
On-site
We are looking for a data engineering professional with strong experience in designing and implementing end-to-end ETL solutions using Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks, and SSIS. The candidate should be proficient in SQL, REST API integration, and automation through CI/CD pipelines using Azure DevOps and Git. They should also have a solid understanding of maintaining and optimizing data pipelines, warehouses, and reporting within the Microsoft SQL stack. Job Title: Sr. Data Engineer. Location: Ahmedabad/Pune. Experience: 5+ Years. Educational Qualification: UG: BS/MS in Computer Science or other engineering/technical degree. Roles and responsibilities: Design and implement end-to-end data solutions using Microsoft Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks, and SSIS. Develop complex transformation logic using SQL Server, SSIS, and ADF, and develop ETL jobs/pipelines to execute those mappings concurrently. Maintain and enhance existing ETL pipelines, warehouses, and reporting leveraging the traditional MS SQL stack. Understanding of REST API principles and creating ADF pipelines to handle HTTP requests for APIs (a simplified Python stand-in for this pattern is sketched below). Well-versed with best practices for development and deployment of SSIS packages, SQL jobs, and ADF pipelines. Implement and manage source control practices using Git within Azure DevOps to ensure code integrity and facilitate collaboration. Participate in the development and maintenance of CI/CD pipelines for automated testing and deployment of BI solutions. Preferred skills, but not required: Understanding of the Azure environment and developing Azure Logic Apps and Azure Function Apps. Understanding of code deployment, Git, CI/CD, and deployment of developed ETL code (SSIS, ADF).
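The posting mentions ADF pipelines that handle HTTP requests for REST APIs. ADF itself is configured through the portal or JSON pipeline definitions rather than code, so the snippet below is only a simplified Python stand-in for the same extract pattern; the endpoint, credential, pagination scheme, and output file are assumptions for illustration.

```python
# Simplified stand-in for an API-ingestion step; endpoint, auth, and paging are hypothetical.
import json
import requests

API_URL = "https://api.example.com/v1/invoices"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder credential

rows, page = [], 1
while True:
    resp = requests.get(API_URL, headers=HEADERS, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("items", [])
    if not batch:
        break
    rows.extend(batch)
    page += 1

# Land the raw payload for downstream transformation (e.g., in Databricks or SSIS)
with open("invoices_raw.json", "w", encoding="utf-8") as f:
    json.dump(rows, f)
```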
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps. Responsibilities Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. 
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability. Qualifications 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
Posted 3 weeks ago
15.0 - 20.0 years
50 - 55 Lacs
Bengaluru
Work from Office
Mode: Contract. As an Azure Data Architect, you will: Lead architectural design and migration strategies, especially from Oracle to Azure Data Lake. Architect and build end-to-end data pipelines leveraging Databricks, Spark, and Delta Lake. Design secure, scalable data solutions integrating ADF, SQL Data Warehouse, and on-prem/cloud systems. Optimize cloud resource usage and pipeline performance. Set up CI/CD pipelines with Azure DevOps. Mentor team members and align architecture with business needs. Qualifications: 10-15 years in Data Engineering/Architecture roles. Extensive hands-on experience with Databricks, Azure Data Factory, and Azure SQL Data Warehouse; data integration, migration, cluster configuration, and performance tuning; Azure DevOps and cloud monitoring tools. Excellent interpersonal and stakeholder management skills.
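As a hedged illustration of one step in the Oracle-to-Azure Data Lake migration described above, the sketch below bulk-copies an Oracle table into Delta Lake with Spark's JDBC reader. The connection string, credentials, table, and partition column are placeholders, and the Oracle JDBC driver is assumed to be available on the cluster; this is not a prescribed design.

```python
# Hypothetical Oracle-to-lake copy using Spark JDBC; all connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle_to_lake").getOrCreate()

source = (
    spark.read.format("jdbc")
         .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB")  # placeholder
         .option("dbtable", "SALES.TRANSACTIONS")                        # placeholder
         .option("user", "etl_user")
         .option("password", "<from-key-vault>")
         .option("driver", "oracle.jdbc.OracleDriver")
         .option("fetchsize", 10000)
         .load()
)

# Land the table as Delta in the lake, partitioned for downstream pipelines
(
    source.write.format("delta")
          .mode("overwrite")
          .partitionBy("TXN_DATE")
          .save("/mnt/lake/bronze/sales_transactions")
)
```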
Posted 3 weeks ago
0 years
0 Lacs
Raurkela, Odisha, India
On-site
We are currently hiring for a Data Strategy Management role with a leading global company, based in Pune and Bangalore. We are looking for immediate joiners or candidates with up to 15 days' notice. Key focus areas (1st preference): Master Data Management (MDM), Data Governance, Data Quality. Additional desirable experience: Data Ingestion, ETL, Dimensional Modelling, Data Migration, Data Warehousing, Data Modelling, Data Visualization. Tools: Informatica MDM, IDQ, Informatica PC/IICS, Talend, Collibra. Cloud tech: ADB, ADF, Synapse. Experience implementing ETL/BI/Analytics/Data Management on cloud platforms. Preferred certifications: PMP/CSM, cloud certifications (Azure/AWS/GCP), CDMP. We’re looking for someone with strong analytical, communication, and negotiation skills who can drive strategic data initiatives end to end. If this sounds interesting, I’d be happy to share more details and discuss how this could align with your career goals. Could we connect or schedule a quick call? Looking forward to hearing from you!
Posted 3 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
🚀 We're Hiring – Data Engineers | Hybrid Roles 🚀 Join our team of passionate technologists working on cutting-edge data platforms! 🔹 Position 1: Data Engineer 📍 Locations: Noida, Gurgaon, Jaipur, Indore, Hyderabad (Hybrid) 🧠 Experience: 8+ Years 🕒 Shift: 11:00 AM – 8:30 PM 💼 Skills: Azure Architecture, Azure Functions, App Services, Python, CI/CD 🔹 Position 2: Data Engineer / Full Stack Engineer 📍 Locations: Noida, Gurgaon, Hyderabad, Bangalore (Hybrid) 🧠 Experience: 8+ Years 🕒 Shift: 11:00 AM – 8:30 PM 💼 Skills: PySpark, Databricks, ADF, Big Data, Hadoop, Hive If you’re ready to take the next step in your data engineering career, we’d love to connect! 📩 Share your resume at kumar.unnati@cloudstakes.com or DM me directly.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Bachelor's degree in business, computer science, or a related field. Experience in creation of BIP, OTBI, HCM Extracts, HDL, and Fast Formula in an Oracle Cloud environment. Completed at least 2 full ERP Oracle Cloud (Fusion) project implementations. Experience in integrating ERP with other systems using SOA and Web Services. Extensive knowledge of the underlying database structure. Excellent Oracle technical skills with the ability to build complex Oracle components using PL/SQL and SQL. Experience in ADF and Web Services in an Oracle Cloud environment. Strong problem-solving skills with the ability to work cross-functionally in a fast-paced and rapidly changing work environment, either on a team or as an individual contributor. Proven ability to design and optimize business processes and to integrate business processes across disparate systems. Ability to multi-task and perform effectively in a fast-paced environment. Able to work independently and consider cross-functional and downstream impacts with close attention to detail. Proven ability to work remotely and independently in support of clients/business. Qualifications: Excellent verbal and written communication skills.
Posted 3 weeks ago
8.0 - 10.0 years
10 - 12 Lacs
Chennai
Hybrid
Job Title: Product Owner / Subject Matter Expert (AI & Data) Experience Required: 10+ years Location: The selected candidate is required to work onsite for the initial 1 to 3-month project training and execution period at either our Kovilpatti or Chennai location, which will be confirmed during the onboarding process. After the initial period, remote work opportunities will be offered. Job Description: The Product Owner / Subject Matter Expert (AI & Data) will lead the definition, prioritization, and successful delivery of intelligent, data-driven products by aligning business needs with AI/ML and data platform capabilities. Acting as a bridge between stakeholders, data engineering teams, and AI developers, this role ensures that business goals are translated into actionable technical requirements. The candidate will manage product backlogs, define epics and features, and guide cross-functional teams throughout the product development lifecycle. They will play a crucial role in driving innovation, ensuring data governance, and realizing value through AI-enhanced digital solutions. Key Responsibilities: Define and manage the product roadmap across AI and data domains based on business strategy and stakeholder input. Translate business needs into technical requirements, user stories, and use cases for AI and data-driven applications. Collaborate with data scientists, AI engineers, and data engineers to prioritize features, define MVPs, and validate solution feasibility. Lead backlog refinement, sprint planning, and iteration reviews across multidisciplinary teams. Drive the adoption of AI models (e.g., LLMs, classification, prediction, recommendation) and data pipelines that support operational goals. Ensure inclusion of data governance, lineage, and compliance requirements in product development. Engage with business units to define KPIs and success metrics for AI and analytics products. Document product artifacts such as PRDs, feature definitions, data mappings, model selection criteria, and risk registers. Facilitate workshops, stakeholder demos, and solution walkthroughs to ensure ongoing alignment. Support responsible AI practices and secure data sharing standards. Technical Skills: Product Management Tools: Azure DevOps, Jira, Confluence AI/ML Concepts: LLMs, NLP, predictive analytics, computer vision, generative AI AI Tools: OpenAI, Azure OpenAI, MLflow, LangChain, prompt engineering Data Platforms: Azure Data Factory, Databricks, Synapse Analytics, Purview, SQL, NoSQL Data Governance: Metadata management, data lineage, PII handling, classification standards Documentation: PRDs, data dictionaries, process flows, KPI dashboards Methodologies: Agile/Scrum, backlog management, MVP delivery Qualification: Bachelors or Master’s in Computer Science, Data Science, Information Systems, or a related field. Preferred Certifications: Microsoft Certified (Azure AI Engineer Associate / Azure Data Fundamentals / Azure Data Engineer Associate). 10+ years of experience in product ownership, business analysis, or solution delivery in AI and data-centric environments. Proven success in delivering AI-enabled products and scalable data platforms. Strong communication, stakeholder facilitation, and technical documentation skills.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing and inclusion and diversity is demonstrated through prestigious recognitions, with R1 India being ranked amongst Best in Healthcare, Top 100 Best Companies for Women by Avtar & Seramount, and amongst Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India with presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. Position Title: Senior Specialist. Reports to: Program Manager, Analytics BI. Position Summary: A Specialist shall work with the development team and be responsible for development tasks as an individual contributor. He/she should be able to mentor the team and help in resolving issues. He/she should be technically sound and able to communicate clearly with the client. Key Duties & Responsibilities: Work as Lead Developer on data engineering projects for E2E Analytics. Ensure project delivery on time. Mentor other teammates and guide them. Take requirements from the client and communicate them as well. Ensure timely document creation for the knowledge base, user guides, and other various communication systems. Ensure delivery against business needs, team goals, and objectives, i.e., meeting commitments and coordinating the overall schedule. Work with large datasets in various formats, integrity/QA checks, and reconciliation for accounting systems. Lead efforts to troubleshoot and solve process or system related issues. Understand, support, enforce, and comply with company policies, procedures, and Standards of Business Ethics and Conduct. Experience working with Agile methodology. Experience, Skills and Knowledge: Bachelor's degree in Computer Science or equivalent experience is required; B.Tech/MCA preferable. Minimum 5-7 years' experience. Excellent communication and a strong commitment to delivering the highest level of service. Technical Skills: Expert knowledge and experience working with Spark and Scala. Experience in Azure Data Factory, Azure Databricks, and Data Lake. Experience working with SQL and Snowflake. Experience with data integration tools such as SSIS and ADF. Experience with programming languages such as Python, Spark, and Scala. Expert in Astronomer Airflow (a minimal DAG sketch follows below). Experience or exposure to Microsoft Azure Data Fundamentals. Key Competency Profile: Own your development by implementing and sharing your learnings. Motivate each other to perform at our highest level. Work the right way by acting with integrity and living our values every day. Succeed by proactively identifying problems and solutions for yourself and others. Communicate effectively if there is any challenge. Demonstrate accountability and responsibility. Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.
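Since the skills list calls out Astronomer Airflow, here is a minimal, hedged DAG sketch using the standard Airflow Python API (Astronomer-hosted Airflow uses the same DAG model). The DAG id, schedule, and task bodies are placeholders rather than anything specified by the role.

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and task logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")         # placeholder for the real extract step

def transform():
    print("run the Spark/Databricks transformation")   # placeholder for the real transform step

with DAG(
    dag_id="analytics_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```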
Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate and create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com Visit us on Facebook,
Posted 3 weeks ago
7.0 - 12.0 years
15 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Primary: Azure, Databricks, ADF, PySpark/Python. Secondary: Data Warehouse, SAS/Alteryx. Must Have: • 8+ years of IT experience in Data Warehouse and ETL • Hands-on data experience with cloud technologies on Azure, ADF, Synapse, PySpark/Python • Ability to understand design and source-to-target mapping (STTM) and create specification documents • Flexibility to operate from client office locations • Able to mentor and guide junior resources, as needed. Nice to Have: • Any relevant certifications • Banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Technical Skills: o Proficiency in programming languages like Python and R for data manipulation and analysis o Expertise in machine learning algorithms and statistical modeling techniques o Familiarity with data warehousing and data pipelines o Experience with data visualization tools like Tableau or Power BI o Experience in cloud platforms (e.g., ADF, Databricks, Azure) and their AI services. Consulting Skills: o Hypothesis-driven problem solving o Go-to-market pricing and revenue growth execution o Advisory, presentation, and data storytelling o Project leadership and execution. Typical Work Environment: Collaborative work with cross-functional teams across sales, marketing, and product development. Stakeholder management and team handling. Fast-paced environment with a focus on delivering timely insights to support business decisions. Excellent problem-solving skills and ability to address complex technical challenges. Effective communication skills to collaborate with cross-functional teams and stakeholders. Potential to work on multiple projects simultaneously, prioritizing tasks based on business impact. Qualification: Degree in Data Science, or Computer Science with a data science specialization; Master's in Business Administration and Analytics preferred.
Posted 3 weeks ago
2.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Experience - 2-5 Years. Work Location - Hyderabad/Gurugram/Noida. Job description: Develops and operationalizes data pipelines to make data available for consumption (BI, advanced analytics, services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality, and orchestration. Designs, develops, and implements ETL/ELT processes using Azure services such as Azure Databricks, Azure SQL Synapse, ADLS, etc., to improve and speed up delivery of our data products and services. Implements solutions by developing scalable data processing platforms to drive high-value insights for the organization. Identifies, designs, and implements internal process improvements: automating manual processes and optimizing data delivery. Identifies ways to improve data reliability, efficiency, and quality of data management. Communicates technical concepts to non-technical audiences in both written and verbal form. If a lead, performs peer reviews of other data engineers' work. Good understanding of data integration: onboarding and integration of data from external and internal data sources through API management, SFTP processes, and others using Synapse pipelines. Deep expertise in core data platforms: Azure, Data Lakehouse design, and big data concepts using the Spark architecture. Strong knowledge of integration technologies (PySpark, Python, ADF, Databricks) and of conceptual, logical, and physical database modeling. T-SQL knowledge and experience working with relational databases, query authoring, stored procedure development, and debugging and optimizing SQL queries (a small stored-procedure call sketch follows below). Proven success as a technical lead and individual contributor. Familiarity with project management methodologies: Agile, DevOps. Qualification: Bachelor's degree (or equivalent) in computer science, information technology, engineering, or a related discipline. Experience in building or maintaining ETL processes. Professional certification.
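For the T-SQL and stored-procedure work mentioned above, the following is a hedged sketch of calling a SQL Server stored procedure and spot-checking the result from Python with pyodbc. The driver name, connection details, procedure, and table are hypothetical placeholders.

```python
# Hypothetical stored-procedure call against SQL Server via pyodbc; all names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-server.example.com;DATABASE=analytics;UID=etl_user;PWD=<secret>"
)
cursor = conn.cursor()

# Run the (hypothetical) daily refresh procedure for a given load date
cursor.execute("{CALL dbo.usp_refresh_daily_sales (?)}", "2024-01-31")
conn.commit()

# Spot-check the result with a parameterized query
cursor.execute("SELECT TOP 5 * FROM dbo.daily_sales WHERE load_date = ?", "2024-01-31")
for row in cursor.fetchall():
    print(row)

conn.close()
```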
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Years of relevant experience – 5-7. Job Description: Key Responsibilities. Data Management: Collect, analyze, and interpret large datasets to provide actionable insights. Data Solutions: Develop and manage advanced analytics solutions, including BI dashboard design, reports, and digital solutions. Stakeholder Management: Collaborate with stakeholders to understand business needs and translate them into technical requirements. Project Management: Lead and manage data and advanced analytics projects, ensuring timely delivery and alignment with business goals. Documentation: Create and maintain documentation, including design specifications and user manuals. Continuous Improvement: Identify opportunities for process improvements and recommend digital solutions. Experience: Must have experience with Databricks for big data processing and analytics. Strong skills in SQL for database querying, data modeling, and DWH (must). Develop design documentation by translating business requirements into source-to-target mappings (must). Must have experience in Power BI and Qlik Sense; a development background is an advantage. Experience with Azure services for data storage, processing, and analytics. Knowledge of data fabric architecture and implementation. Azure Data Factory (ADF): Expertise in data integration and orchestration using ADF (advantage). Power Platform: Proficiency in using Power Apps, Power Automate, and Power Virtual Agents to create and manage digital solutions. AI Technologies: Knowledge of AI tools and frameworks to develop predictive models and automate data analysis. Three must-haves: AI 4/5, DWH 4/5, Data management 3/5.
Posted 3 weeks ago
7.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Demonstrated expertise in working with Databricks for data processing and analysis. Experience in implementing and optimizing data solutions using multiple programming languages in an Azure environment. 7-14 years' experience in data modelling, with hands-on experience in developing data pipelines and managing data infrastructure within Azure. Python and SQL experience required. Worked on end-to-end data product deployment. Hands-on data engineering, data modelling, ADF, ADL, Python, PySpark. Digital logic: it is important to possess this skill to clean and organize an unstructured set of data. Computer architecture and organization: a solid understanding of computer architecture and organization will enable you to maximize efficiency when working with data. Data representation: this allows for easier gathering, manipulation, and analysis of data, which can save valuable time and money. Memory architecture: the most important part of memory architecture is being able to find the method that best combines speed, durability, reliability, and cost-effectiveness while not compromising the integrity of the data. Familiarity with the Erwin modeling tool. Ability to adapt to new modeling methods. SQL language and its implementation. Sufficient experience using database systems: Relational Database Management Systems (RDBMS) that possess big data handling capabilities, such as the ability to quickly store and fetch data.
Posted 3 weeks ago
3.0 years
6 - 9 Lacs
Hyderābād
On-site
Data Analytics Engineer – CL3 Role Overview: As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities: Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations. Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support. Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications and write high-quality, supportable, scalable code, ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams. Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production, with all quality checks through the SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of the full lifecycle of product development, focusing on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team : US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes that leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications : § A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. § Strong data engineering foundation with deep understanding of data-structure, algorithms, code instrumentations, etc. § 3+ years proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools), data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). § 3+ years of experience with cloud-native engineering, using FaaS/PaaS/micro-services on cloud hyper-scalers like Azure, AWS, and GCP. § Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. § Strong preference will be given to candidates with experience in AI/ML and GenAI. § Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. 
This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306373
Posted 3 weeks ago
5.0 years
6 - 9 Lacs
Hyderābād
On-site
Data Analytics Engineer – CL4 Role Overview : As a Data Analytics Engineer , you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities : Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations. Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support. Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications, and write high-quality, supportable, scalable code ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams. Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps to deliver daily product deployments using full automation from code check-in to production with all quality checks through SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of the full lifecycle product development, focusing on continuous improvement, and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff. 
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team : US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes that leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications : § A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. § Strong data engineering foundation with deep understanding of data-structure, algorithms, code instrumentations, etc. § 5+ years proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools), data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). § 5+ years of experience with cloud-native engineering, using FaaS/PaaS/micro-services on cloud hyper-scalers like Azure, AWS, and GCP. § Strong understanding of methodologies & tools like, XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. § Strong preference will be given to candidates with experience in AI/ML and GenAI. § Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. 
This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306372
Posted 3 weeks ago
5.0 - 9.0 years
15 - 25 Lacs
Hyderābād
On-site
Job Role – Sr ADF Developer Experience – 5 to 9 Years Location – Hyderabad Only Work Mode – Hybrid Note – Immediate Joiners Job Description Interpret requirements and design solutions that satisfy customer needs as well as the need for standardization, usability, and maintainability of the different agent-facing applications. Break down the solution into components that can be assigned and tracked in pursuit of a fast and cost-effective solution. Must have experience in SQL/PL-SQL. Should have implemented ADF Business Components / Web Services / Oracle Objects calls that provide the data access layer for the agent application processes. Implement the ADF View Controller components, including Task Flows, Beans, and JSF pages, that allow successful interaction between the agents and the applications. Supervise tasks performed by other members of the solution team to ensure a cohesive approach that satisfies the external (customer) requirements and internal (technology) requirements. Design and develop enhancements using an exit-point-based architecture within the loan origination/servicing system. Design and implement custom business applications using the Oracle Application Development Framework. Produce, present, and validate AS-IS and TO-BE landscape models for technical approval, showing the migration path. Produce and validate solution design documentation for projects, ensuring a consistent quality approach is maintained. Provide input into the Technical Environment Plans (TEP) for projects. Work directly with business users in all aspects of design and development. Produce code that meets quality, coding, and performance standards according to local IT guidelines and policies, including security, auditing, and SOX requirements. Ensure technical and supporting documentation is written to company standards. Undertake unit testing, making full use of available automation tools and working with the system test team to ensure tests are integrated into the overall test plan. Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹2,500,000.00 per year Benefits: Health insurance Schedule: Monday to Friday Application Question(s): Are you available to join immediately? How many years of experience do you have in the relevant field as mentioned in the Job Description? The job location is Hyderabad (Onsite - hybrid). Are you interested? Experience: total work: 5 years (Required) Work Location: In person
Posted 3 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns, and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures Of Outcomes Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers. Documentation Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results. Configuration Define and govern the configuration management plan. Ensure compliance within the team. Testing Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management Manage the delivery of modules effectively. Defect Management Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation Create and provide input for effort and size estimation for projects. Knowledge Management Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team. Release Management Execute and monitor the release process to ensure smooth transitions. Design Contribution Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models. Customer Interface Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments UST is seeking a highly skilled and motivated Lead Data Engineer to join our Telecommunications vertical, leading impactful data engineering initiatives for US-based Telco clients. The ideal candidate will have 6–8 years of experience in designing and developing scalable data pipelines using Snowflake, Azure Data Factory, and Azure Databricks. Proficiency in Python, PySpark, and advanced SQL is essential, with a strong focus on query optimization, performance tuning, and cost-effective architecture. A solid understanding of data integration, real-time and batch processing, and metadata management is required, along with experience in building robust ETL/ELT workflows. Candidates should demonstrate a strong commitment to data quality, validation, and consistency; working knowledge of data governance, RBAC, encryption, and compliance frameworks is considered a plus. Familiarity with Power BI or similar BI tools is also advantageous, enabling effective data visualization and storytelling.
The role demands the ability to work in a dynamic, fast-paced environment, collaborating closely with stakeholders and cross-functional teams while also being capable of working independently. Strong communication skills and the ability to coordinate across multiple teams and stakeholders are critical for success. In addition to technical expertise, the candidate should bring experience in solution design and architecture planning, contributing to scalable and future-ready data platforms. A proactive mindset, eagerness to learn, and adaptability to the rapidly evolving data engineering landscape—including AI integration into data workflows—are highly valued. This is a leadership role that involves mentoring junior engineers, fostering innovation, and driving continuous improvement in data engineering practices. Skills: Azure Databricks, Snowflake, Python, Data Engineering
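As a purely illustrative example of the advanced SQL and ELT work described in this role, a windowing query that keeps only the latest record per key before loading a curated table; the table and column names are assumptions, not part of the job description:

```python
# Illustrative only; table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("latest_record_per_key").getOrCreate()

# Keep only the most recent record per customer from a change feed,
# a common ELT step before loading a curated dimension table.
latest = spark.sql("""
    SELECT customer_id, email, updated_at
    FROM (
        SELECT
            customer_id,
            email,
            updated_at,
            ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY updated_at DESC
            ) AS rn
        FROM raw.customer_changes
    ) ranked
    WHERE rn = 1
""")

latest.write.mode("overwrite").saveAsTable("curated.customer_latest")
```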
Posted 3 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Required Skills: Proven experience with Azure Databricks and Azure Data Factory. Strong programming skills in PySpark, SQL, and Python. Experience with semi-structured data and building generic frameworks for data pipelines. Familiarity with Azure cloud fundamentals and best practices. Experience with CI/CD pipelines for deploying ADF and ADB artifacts using Azure DevOps. Preferred knowledge of streaming technologies such as Kafka. Excellent problem-solving skills and attention to detail. Willingness to provide on-call support, including during non-office hours, for critical systems. Nice to have: Knowledge of Cosmos DB is a plus. Experience with Python FastAPI for API development. Experience with real-time data processing and streaming. Familiarity with data governance and compliance standards Required Experience : 7+ years Location : Noida and Bangalore
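A minimal sketch of what a generic, parameter-driven ingestion step for semi-structured data could look like in Azure Databricks; the function, entity names, storage paths, and flattening approach are illustrative assumptions, not the team's actual framework:

```python
# Illustrative sketch; entity names, paths, and options are hypothetical.
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.appName("generic_json_ingest").getOrCreate()

def ingest_entity(entity: str, landing_root: str, target_schema: str) -> DataFrame:
    """Load one semi-structured (JSON) entity and land it as a Delta table."""
    df = (spark.read
          .option("multiLine", "true")
          .json(f"{landing_root}/{entity}/"))
    # Flatten one level of nesting generically; deeper structures would need recursion.
    flat_cols = [c for c, t in df.dtypes if not t.startswith("struct")]
    struct_cols = [c for c, t in df.dtypes if t.startswith("struct")]
    flattened = df.select(*flat_cols, *[f"{s}.*" for s in struct_cols])
    flattened.write.format("delta").mode("append").saveAsTable(f"{target_schema}.{entity}")
    return flattened

# Example call, as an ADF-triggered Databricks job might parameterise it.
ingest_entity("orders", "abfss://landing@examplelake.dfs.core.windows.net", "bronze")
```

A framework like this would normally take its parameters from ADF pipeline variables, which is what makes it reusable across entities.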
Posted 3 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview We are PepsiCo PepsiCo is one of the world's leading food and beverage companies with more than $79 Billion in Net Revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 Billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with Purpose. For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com. The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Machine Learning Services and Pipelines. PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo’s leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise. The Data Science Pillar in DA&AI will be the organization where Data Scientists and ML Engineers report in the broader D+A Organization. DS will also lead, facilitate, and collaborate with the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI Products, and will support “pre-engagement” activities as requested and validated by the prioritization framework of DA&AI. Data Scientist: Hyderabad and Gurugram You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will give you the right visibility into, and understanding of, the criticality of your developments. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope. Active contributor to code and development in projects and services. Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption. Partner with ML engineers working on industrialization. Communicate with business stakeholders in the process of service design, training and knowledge transfer. Support large-scale experimentation and build data-driven models. Refine requirements into modelling problems. Influence product teams through data-based recommendations.
Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create reusable packages or libraries. Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards. Leverage big data technologies to help process data and build scaled data pipelines (batch to real time). Implement the end-to-end ML lifecycle with Azure Machine Learning and Azure Pipelines. Automate ML model deployments. Qualifications BE/B.Tech in Computer Science, Maths, or technical fields. Overall 5+ years of experience working as a Data Scientist. 4+ years' experience building solutions in the commercial or supply chain space. 4+ years working in a team to deliver production-level analytic solutions. Fluent in git (version control). Understanding of Jenkins and Docker is a plus. Fluent in SQL syntax. 4+ years' experience in Statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems. 4+ years' experience developing business-problem-related statistical/ML modeling with industry tools, with a primary focus on Python or PySpark development. Skills, Abilities, Knowledge: Data Science – Hands-on experience and strong knowledge of building machine learning models – supervised and unsupervised. Knowledge of time series/demand forecast models is a plus. Programming Skills – Hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL. Statistics – Good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators. Cloud (Azure) – Experience in Databricks and ADF is desirable. Familiarity with Spark, Hive, and Pig is an added advantage. Business storytelling and communicating data insights in a business-consumable format. Fluent in one visualization tool. Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities. Experience with Agile methodology for teamwork and analytics ‘product’ creation.
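For illustration, a minimal supervised-learning sketch of the kind of model delivery this role covers; the dataset, features, and model choice are hypothetical, and production work would wrap such a step in Azure Machine Learning pipelines as described above:

```python
# Illustrative sketch only; features, target, and data source are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Assume a prepared modelling table with engineered features and a binary target.
df = pd.read_parquet("features/promo_response.parquet")
X = df[["recency_days", "frequency_90d", "avg_basket_value"]]
y = df["responded"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate before handing the artefact to MLOps for registration and deployment.
print(classification_report(y_test, model.predict(X_test)))
```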
Posted 3 weeks ago
0.0 - 2.0 years
5 - 12 Lacs
Pune, Maharashtra
On-site
Company Name: PibyThree Consulting Pvt Ltd. Job Title: Team Lead - Data Migration and Snowflake Skill: Azure Data Factory, Databricks, PySpark, Snowflake & Data Migration Location: Pune, Maharashtra. Website: PibyThree Start Date About Us: Πby3 is a cloud transformation company enabling enterprises for the future. We are a nimble, highly dynamic, and focused team with a passion for serving our clients with utmost trust and ownership. Our expertise in technology, backed by vast experience over the years, helps clients get solutions with optimized cost and reduced risk. Job Description: We are looking for an experienced Team Lead – Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI, and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark. Key Responsibilities: Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms. Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting. Develop and schedule robust data pipelines using Azure Data Factory and Databricks. Write efficient and maintainable PySpark code for batch and real-time data processing. Design and develop dashboards and reports using Power BI to support business insights. Ensure data accuracy, security, and consistency throughout the project lifecycle. Collaborate with stakeholders to understand data and reporting requirements. Mentor and lead a team of data engineers and BI developers. Manage project timelines, deliverables, and team performance effectively. Must-Have Skills: Data Migration: Hands-on experience with large-scale data migration, reconciliation, and transformation. Snowflake: Data modeling, performance tuning, ELT/ETL development, role-based access control. Azure Data Factory: Pipeline development, integration services, linked services. Databricks: Spark SQL, notebooks, cluster management, orchestration. PySpark: Advanced transformations, error handling, and optimization techniques. Power BI: Data visualization, DAX, Power Query, dashboard/report publishing and maintenance. Preferred Skills: Familiarity with Agile methodologies and sprint-based development. Experience working with CI/CD for data workflows. Ability to lead client discussions and manage stakeholder expectations. Strong analytical and problem-solving abilities. Job Type: Full-time Pay: ₹500,000.00 - ₹1,200,000.00 per year Schedule: Day shift Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: total work: 4 years (Preferred) PySpark: 2 years (Required) Azure Data Factory: 2 years (Required) Databricks: 2 years (Required) Work Location: In person
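A small, purely illustrative sketch of the reconciliation step called out under the data migration skills above; the source extract, target table, and business key are assumptions:

```python
# Illustrative reconciliation check; locations and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migration_reconciliation").getOrCreate()

# Legacy warehouse extract staged as Parquet, and the migrated target table.
source = spark.read.parquet("abfss://staging@examplelake.dfs.core.windows.net/legacy/sales/")
target = spark.table("analytics.sales")

src_count, tgt_count = source.count(), target.count()
print(f"source rows={src_count}, target rows={tgt_count}")

# Row-level spot check on the business key to catch silent drops or duplicates.
missing = source.select("sale_id").exceptAll(target.select("sale_id"))
extra = target.select("sale_id").exceptAll(source.select("sale_id"))
print(f"missing in target={missing.count()}, unexpected in target={extra.count()}")
```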
Posted 3 weeks ago