5.0 years
20 - 24 Lacs
Pune, Maharashtra, India
On-site
Exp: 5 - 12 Yrs | Work Mode: Hybrid | Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT (see the sketch after this posting). Design and optimize high-performance data architectures.
Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
Expertise in Snowflake for data warehousing and ELT processes.
Strong proficiency in SQL for relational databases and writing complex queries.
Experience with Informatica PowerCenter for data integration and ETL development.
Experience using Power BI for data visualization and business intelligence reporting.
Experience with Fivetran for automated ELT pipelines.
Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
Strong data analysis, requirement gathering, and mapping skills.
Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF); exposure to AWS or GCP is a plus.
Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
Proficiency in Python for data processing (other languages like Java and Scala are a plus).
Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling, business intelligence, python, dbt, performance tuning, airflow, informatica, azkaban, luigi, power bi, etl, dwh, fivetran, data quality, snowflake, sql
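For context on the SCD Type-2 technique this posting names: in the role itself this would typically live in a DBT snapshot, but the underlying pattern is two set-based statements, close out changed rows and insert new versions. Below is a minimal Python sketch against Snowflake; all table and column names (dim_customers, stg_customers, attrs_hash) are hypothetical, not from the posting.

```python
# Minimal SCD Type-2 sketch: close out changed current rows, then insert
# new versions. Assumes a staging table carrying a precomputed attrs_hash.
import snowflake.connector

SCD2_CLOSE = """
UPDATE dim_customers d
SET effective_to = CURRENT_TIMESTAMP(), is_current = FALSE
FROM stg_customers s
WHERE d.is_current
  AND s.customer_id = d.customer_id
  AND s.attrs_hash <> d.attrs_hash   -- row changed upstream
"""

SCD2_INSERT = """
INSERT INTO dim_customers (customer_id, name, city, attrs_hash,
                           effective_from, effective_to, is_current)
SELECT s.customer_id, s.name, s.city, s.attrs_hash,
       CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customers s
LEFT JOIN dim_customers d
  ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL          -- new key, or key whose current row was just closed
"""

def run_scd2() -> None:
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="ANALYTICS", schema="DIM",
    )
    try:
        cur = conn.cursor()
        cur.execute("BEGIN")
        cur.execute(SCD2_CLOSE)   # expire changed versions first
        cur.execute(SCD2_INSERT)  # then insert fresh current rows
        cur.execute("COMMIT")
    finally:
        conn.close()

if __name__ == "__main__":
    run_scd2()
```

Running the close step first means changed keys no longer have a current row, so the insert's anti-join picks them up alongside brand-new keys.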
Posted 1 month ago
5.0 years
20 - 24 Lacs
Gurugram, Haryana, India
On-site
Exp: 5 - 12 Yrs | Work Mode: Hybrid | Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
Expertise in Snowflake for data warehousing and ELT processes.
Strong proficiency in SQL for relational databases and writing complex queries.
Experience with Informatica PowerCenter for data integration and ETL development.
Experience using Power BI for data visualization and business intelligence reporting.
Experience with Fivetran for automated ELT pipelines.
Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
Strong data analysis, requirement gathering, and mapping skills.
Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF); exposure to AWS or GCP is a plus.
Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
Proficiency in Python for data processing (other languages like Java and Scala are a plus).
Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling, business intelligence, python, dbt, performance tuning, airflow, informatica, azkaban, luigi, power bi, etl, dwh, fivetran, data quality, snowflake, sql
Posted 1 month ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Description
Experience: 4+ years of experience

Key Responsibilities
Help define and improve actionable, decision-driving management information (MI)
Ensure streamlining, consistency and standardization of MI within the handled domain
Build and operate flexible processes/reports that meet changing business needs
Prepare detailed documentation of schemas
Any other duties commensurate with position or level of responsibility

Desired Profile
Prior experience in insurance companies/the insurance sector would be an added advantage
Experience with Azure technologies (including SSAS, SQL Server, Azure Data Lake, Synapse)
Hands-on experience with SQL and Power BI
Excellent understanding of developing stored procedures, functions, views and T-SQL programs (a sketch of this style of work follows the posting)
Develop and maintain ETL (data extraction, transformation and loading) mappings using ADF to extract data from multiple source systems
Analyze existing SQL queries for performance improvements
Excellent written and verbal communication skills
Ability to create and design MI for the team
Expected to handle multiple projects with stringent timelines
Good interpersonal skills
Actively influences strategy through creative and unique ideas
Exposure to documentation activities

Key Competencies
Technical Learning - Can learn new skills and knowledge as the business requires
Action Oriented - Enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others
Decision Quality - Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment
Detail Orientation - High attention to detail, especially in data quality, documentation, and reporting
Communication & Collaboration - Strong interpersonal skills, effective in team discussions and stakeholder interactions

Qualifications
B.Com/BE/BTech/MCA from a reputed college/institute
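As a concrete illustration of the T-SQL/ADF work described above: an ADF stored-procedure activity ultimately reduces to executing a procedure against SQL Server, which can be exercised locally with pyodbc. This is a hedged sketch; the procedure name usp_load_mi_summary, connection details, and parameter are all hypothetical.

```python
# Hypothetical example: refresh an MI summary table by calling a T-SQL
# stored procedure, the kind of object an ADF pipeline would invoke.
import datetime
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mi_reporting;UID=etl_user;PWD=***"
)

def refresh_mi_summary(run_date: datetime.date) -> None:
    # autocommit=True so the proc's own transaction handling applies.
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cur = conn.cursor()
        # A real proc would do a set-based MERGE/INSERT from staging
        # into the reporting schema for the given run date.
        cur.execute("EXEC dbo.usp_load_mi_summary @RunDate = ?", run_date)

if __name__ == "__main__":
    refresh_mi_summary(datetime.date.today())
```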
Posted 1 month ago
7.0 - 10.0 years
0 Lacs
Greater Madurai Area
On-site
Job Requirements

Why work for us?
Alkegen brings together two of the world’s leading specialty materials companies to create one new, innovation-driven leader focused on battery technologies, filtration media, and specialty insulation and sealing materials. Through global reach and breakthrough inventions, we are delivering products that enable the world to breathe easier, live greener, and go further than ever before. With over 60 manufacturing facilities and a global workforce of over 9,000 of the industry’s most experienced talent, including insulation and filtration experts, Alkegen is uniquely positioned to help customers impact the environment in meaningful ways. Alkegen offers a range of dynamic career opportunities with a global reach. From production operators to engineers, technicians to specialists, sales to leadership, we are always looking for top talent ready to bring their best. Come grow with us!

Key Responsibilities
Lead and manage the Data Operations team, including BI developers and ETL developers, to deliver high-quality data solutions.
Oversee the design, development, and maintenance of data models, data transformation processes, and ETL pipelines.
Collaborate with business stakeholders to understand their data needs and translate them into actionable data insights solutions.
Ensure the efficient and reliable operation of data pipelines and data integration processes.
Develop and implement best practices for data management, data quality, and data governance.
Utilize SQL, Python, and Microsoft SQL Server to perform data analysis, data manipulation, and data transformation tasks.
Build and deploy data insights solutions using tools such as Power BI, Tableau, and other BI platforms.
Design, create, and maintain data warehouse environments using Microsoft SQL Server and the data vault design pattern (see the sketch after this posting).
Design, create, and maintain ETL packages using Microsoft SQL Server and SSIS.
Work closely with cross-functional teams in a matrix organization to ensure alignment with business objectives and priorities.
Lead and mentor team members, providing guidance and support to help them achieve their professional goals.
Proactively identify opportunities for process improvements and implement solutions to enhance data operations.
Communicate effectively with stakeholders at all levels, presenting data insights and recommendations in a clear and compelling manner.
Implement and manage CI/CD pipelines to automate the testing, integration, and deployment of data solutions.
Apply Agile methodologies and Scrum practices to ensure efficient and timely delivery of projects.

Skills & Qualifications
Master’s or Bachelor’s degree in Computer Science, Data Science, Information Technology, or a related field.
7 to 10 years of experience in data modelling, data transformation, and building and managing ETL processes.
Strong proficiency in SQL, Python, and Microsoft SQL Server for data manipulation and analysis.
Extensive experience in building and deploying data insights solutions using BI tools such as Power BI and Tableau.
At least 2 years of experience leading BI developers or ETL developers.
Experience working in a matrix organization and collaborating with cross-functional teams.
Proficiency in cloud platforms such as Azure, AWS, and GCP.
Familiarity with data engineering tools such as ADF, Databricks, Power Apps, Power Automate, and SSIS.
Strong stakeholder management skills with the ability to communicate complex data concepts to non-technical audiences.
Proactive and results-oriented, with a focus on delivering value aligned with business objectives.
Knowledge of CI/CD pipelines and experience implementing them for data solutions.
Experience with Agile methodologies and Scrum practices.
Relevant certifications in Data Analytics, Data Architecture, Data Warehousing, and ETL are highly desirable.

At Alkegen, we strive every day to help people – ALL PEOPLE – breathe easier, live greener and go further than ever before. We believe that diversity and inclusion is central to this mission and to our impact. Our diverse and inclusive culture drives our growth & innovation and we nurture it by actively embracing our differences and using our varied perspectives to solve the complex challenges facing our changing and diverse world. Employment selection and related decisions are made without regard to sex, race, ethnicity, nation of origin, religion, color, gender identity and expression, age, disability, education, opinions, culture, languages spoken, veteran’s status, or any other protected class.
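The data vault design pattern mentioned above hinges on deterministic hash keys: a hub key derived from the business key and a satellite hash-diff for change detection. Below is a small, self-contained Python sketch of that key generation; every name and value (hash_key, CUST-0042, the record fields) is illustrative, not from the posting.

```python
# Data-vault key sketch: hash the business key for the hub row, and hash the
# full descriptive payload (in a fixed order) for the satellite's change check.
import hashlib
from datetime import datetime, timezone

def hash_key(*business_key_parts: str) -> str:
    """Deterministic hub hash key: normalize, join, then SHA-256."""
    normalized = "||".join(p.strip().upper() for p in business_key_parts)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def hash_diff(record: dict) -> str:
    """Satellite hash-diff over all descriptive attributes, sorted for stability."""
    payload = "||".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

row = {"name": "Acme Filtration", "country": "IN", "segment": "Industrial"}
hub_record = {
    "hub_customer_hk": hash_key("CUST-0042"),
    "customer_bk": "CUST-0042",
    "load_dts": datetime.now(timezone.utc),
    "record_source": "ERP",
}
sat_record = {**hub_record, "hash_diff": hash_diff(row), **row}
print(hub_record["hub_customer_hk"][:16], sat_record["hash_diff"][:16])
```

Because both hashes are deterministic, the satellite load only inserts a new row when hash_diff changes, which is what makes the pattern cheap to reload.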
Posted 1 month ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Role & responsibilities
8-12 years of professional work experience in a relevant field
Proficient in Azure Databricks, ADF, Delta Lake, SQL Data Warehouse, Unity Catalog, MongoDB, Python
Experience/prior knowledge of semi-structured data and Structured Streaming (see the sketch after this posting), Azure Synapse Analytics, data lakes and data warehouses
Proficient in creating Azure Data Factory pipelines for ETL/ELT processing: copy activity, custom Azure development, etc.
Lead a technical team of 4-6 resources
Prior knowledge of Azure DevOps and CI/CD processes, including GitHub
Good knowledge of SQL and Python for data manipulation, transformation, and analysis; knowledge of Power BI would be beneficial
Understand business requirements to set functional specifications for reporting applications
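For the Structured Streaming item above, here is a minimal PySpark sketch of semi-structured JSON ingestion into a Delta table. It assumes a Databricks/Spark runtime with Delta Lake available; the storage paths, schema, and watermark interval are all invented for illustration.

```python
# Stream JSON events into Delta with event-time dedup. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (
    spark.readStream.format("json")
    .schema("event_id STRING, payload STRING, event_ts TIMESTAMP")
    .load("abfss://raw@mylake.dfs.core.windows.net/events/")
)

cleaned = (
    events.withWatermark("event_ts", "1 hour")          # bound dedup state
          .dropDuplicates(["event_id", "event_ts"])
          .withColumn("ingest_dt", F.to_date("event_ts"))
)

query = (
    cleaned.writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://chk@mylake.dfs.core.windows.net/events/")
    .partitionBy("ingest_dt")                           # prune by day downstream
    .start("abfss://curated@mylake.dfs.core.windows.net/events/")
)
query.awaitTermination()
```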
Posted 1 month ago
6.0 - 8.0 years
3 - 8 Lacs
Bengaluru
On-site
Country/Region: IN
Requisition ID: 26961
Location: INDIA - BENGALURU - HP
Title: Technical Lead - Data Engg

Area(s) of responsibility
Azure Data Lead - 5A (HP Role – Senior Data Engineer)
Experience: 6 to 8 Years
Azure Lead with experience in Azure ADF, ADLS Gen2, Databricks, PySpark and Advanced SQL
Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions and estimation on Azure Cloud
4 years of experience in Azure Databricks and PySpark
Experience in performance tuning (see the sketch after this posting)
Experience with integration of different data sources with Data Warehouse and Data Lake is required
Experience in creating data warehouses and data lakes
Understanding of data modelling and data architecture concepts
Able to clearly articulate the pros and cons of various technologies and platforms
Experience with supporting tools: GitHub, Jira, Teams, Confluence
Collaborate with clients to understand their business requirements and translate them into technical solutions that leverage AWS and Azure cloud platforms
Mandatory Skillset: Azure Databricks, PySpark and Advanced SQL
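Two of the most common PySpark tuning moves behind the "performance tuning" requirement are broadcasting small dimensions to avoid shuffle joins and right-sizing shuffle partitions. The sketch below shows both under assumed Delta table paths; the tables, columns, and the partition count of 64 are illustrative, not prescriptive.

```python
# Illustrative PySpark tuning: broadcast join plus explicit shuffle sizing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("perf-tuning-demo")
    # Size shuffle partitions to the cluster instead of the default 200.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

facts = spark.read.format("delta").load("/mnt/lake/silver/sales")
dims = spark.read.format("delta").load("/mnt/lake/silver/products")

# Broadcast hint ships the small dimension to every executor, so the large
# fact table is never shuffled for the join.
joined = facts.join(F.broadcast(dims), "product_id")

daily = (
    joined.groupBy("sale_date", "category")
          .agg(F.sum("amount").alias("revenue"))
)
daily.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_revenue")
```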
Posted 1 month ago
12.0 - 14.0 years
9 - 10 Lacs
Bengaluru
On-site
Country/Region: IN
Requisition ID: 26981
Location: INDIA - BENGALURU - AUTOMOTIVE
Title: Project Manager

Area(s) of responsibility
Job description - Azure Tech Project Manager
Experience Required: 12-14 years
The Project Manager will be responsible for driving project management activities in Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2)
Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization
Experience in Azure DevOps
Experience in Data Warehouse, Data Lake and Visualizations
Project management skills including time and risk management, resource prioritization and project structuring
Responsible for end-to-end project execution and delivery across multiple clients
Understand ITIL processes related to incident management, problem management, application life cycle management and operational health management
Strong in Agile and the Jira tool
Strong customer service, problem solving, organizational and conflict management skills
Should be able to prepare weekly/monthly reports for both internal and client management
Should be able to help team members with technical issues
Should be a good learner, open to learning new functionalities
Posted 1 month ago
0 years
1 - 9 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

This role is crucial for us on the Cloud Data Engineering team (Data Exchange) for all the cloud development, migration, and support work related to WellMed Data Services. The team maintains and supports the EDW and IS cloud modernization in WellMed, which involves cloud development of data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation (a Kafka consumption sketch follows this posting).

Primary Responsibility:
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Hands-on experience in Cloud Data Engineering
Contributions across cloud development, migration, and support work
Proven ability to independently maintain and support the EDW and cloud modernization, SQL development, Azure cloud development, and ETL using Azure Data Factory
Proven success implementing data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
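To make the Kafka streaming requirement concrete, here is a hedged consumer sketch using the kafka-python client. The topic name, brokers, group id, and handling logic are all placeholders; in the pipeline described above the events would land in Databricks or Snowflake rather than be printed.

```python
# Minimal Kafka consumer sketch with manual offset commits for at-least-once
# processing. All names here are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "member-events",                       # hypothetical topic
    bootstrap_servers=["broker1:9092"],
    group_id="edw-ingest",
    auto_offset_reset="earliest",
    enable_auto_commit=False,              # commit only after we handle the event
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Stand-in for the real sink (Databricks/Snowflake load).
    if "member_id" in event:
        print(message.topic, message.partition, message.offset, event["member_id"])
    consumer.commit()                      # acknowledge after successful handling
```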
Posted 1 month ago
12.0 - 14.0 years
0 Lacs
Greater Bengaluru Area
On-site
Area(s) of responsibility
Job Description - Azure Tech Project Manager
Experience Required: 12-14 years
The Project Manager will be responsible for driving project management activities in Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2)
Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization
Experience in Azure DevOps
Experience in Data Warehouse, Data Lake and Visualizations
Project management skills including time and risk management, resource prioritization and project structuring
Responsible for end-to-end project execution and delivery across multiple clients
Understand ITIL processes related to incident management, problem management, application life cycle management and operational health management
Strong in Agile and the Jira tool
Strong customer service, problem solving, organizational and conflict management skills
Should be able to prepare weekly/monthly reports for both internal and client management
Should be able to help team members with technical issues
Should be a good learner, open to learning new functionalities
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities
• Build and optimize ETL/ELT pipelines using Databricks and ADF, ingesting data from diverse sources including APIs, flat files, and operational databases.
• Develop and maintain scalable PySpark jobs for batch and incremental data processing across Bronze, Silver, and Gold layers (see the sketch after this posting).
• Write clean, production-ready Python code for data processing, orchestration, and integration tasks.
• Contribute to the medallion architecture design and help implement data governance patterns across data layers.
• Collaborate with analytics, data science, and business teams to design pipelines that meet performance and data quality expectations.
• Monitor, troubleshoot, and continuously improve pipeline performance and reliability.
• Support CI/CD for data workflows using Git, Databricks Repos, and optionally Terraform for infrastructure-as-code.
• Document pipeline logic, data sources, schema transformations, and operational playbooks.

Required Qualifications
• 3–5 years of experience in data engineering roles with increasing scope and complexity.
• Strong hands-on experience with Databricks, including Spark, Delta Lake, and SQL-based transformations.
• Proficiency in PySpark and Python for large-scale data manipulation and pipeline development.
• Hands-on experience with Azure Data Factory for orchestrating data workflows and integrating with Azure services.
• Solid understanding of data modeling concepts and modern warehousing principles (e.g., star schema, slowly changing dimensions).
• Comfortable with Git-based development workflows and collaborative coding practices.

Preferred / Bonus Qualifications
• Experience with Terraform to manage infrastructure such as Databricks workspaces, ADF pipelines, or storage resources.
• Familiarity with Unity Catalog, Databricks Asset Bundles (DAB), or Delta Live Tables (DLT).
• Experience with Azure DevOps or GitHub Actions for CI/CD in a data environment.
• Knowledge of data governance, role-based access control, and data quality frameworks.
• Exposure to real-time ingestion using tools like Event Hubs, Azure Functions, or Autoloader.
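A typical Bronze-to-Silver hop in the medallion layout above uses Databricks Auto Loader for incremental file discovery plus a light quality gate. The sketch below only runs on a Databricks runtime; the mount paths, columns, and quality rule are invented for illustration.

```python
# Bronze -> Silver with Auto Loader (cloudFiles): incremental discovery,
# null-key filtering, type casting, and dedup before the Delta write.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
    .load("/mnt/lake/bronze/orders")
)

silver = (
    bronze.where(F.col("order_id").isNotNull())        # basic quality gate
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .dropDuplicates(["order_id"])
)

(
    silver.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_chk/orders_silver")
    .trigger(availableNow=True)                        # incremental batch run
    .start("/mnt/lake/silver/orders")
)
```

The availableNow trigger makes the same job usable as a scheduled incremental batch, which suits the ADF-orchestrated pattern this posting describes.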
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
1. Strategy, Framework, and Governance Operating Model
- Develop and maintain enterprise-wide data governance strategies, standards, and policies.
- Align governance practices with business goals like regulatory compliance and analytics readiness.
- Define roles and responsibilities within the governance operating model.
- Drive governance maturity assessments and lead change management initiatives.

2. Stakeholder Alignment & Organizational Enablement
- Collaborate across IT, legal, business, and compliance teams to align governance priorities.
- Define stewardship models and create enablement, training, and communication programs.
- Conduct onboarding sessions and workshops to promote governance awareness.

3. Architecture Design for Data Governance Platforms
- Design scalable and modular data governance architecture.
- Evaluate tools like Microsoft Purview, Collibra, Alation, BigID, Informatica.
- Ensure integration with metadata, privacy, quality, and policy systems.

4. Microsoft Purview Solution Architecture
- Lead end-to-end implementation and management of Microsoft Purview.
- Configure RBAC, collections, metadata scanning, business glossary, and classification rules.
- Implement sensitivity labels, insider risk controls, retention, data map, and audit dashboards.

5. Metadata, Lineage & Glossary Management
- Architect metadata repositories and ingestion workflows.
- Ensure end-to-end lineage (ADF → Synapse → Power BI).
- Define governance over business glossary and approval workflows.
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
This is a full-time remote contract position (so no freelancing or moonlighting is possible). You may need to provide a few hours of overlapping time with US time zones. You may need to go through a background verification process in which your claimed experience, education certificates and references will be verified, so please don't apply if you are not comfortable with this verification process. This is a client-facing role, hence excellent communication in English is a MUST.

Min. Experience: 5+ years

About the role: Our client is about to start an ERP replacement. They plan to move away from the AWS platform and move to an Azure data lake feeding Snowflake. We need a resource who can be a Snowflake thought leader and who has Microsoft Azure data engineering expertise.

Key Responsibilities:
Data Ingestion & Orchestration (Transformation & Cleansing):
- Design and maintain Azure Data Factory (ADF) pipelines: extract data from sources like ERPs (SAP, Oracle), UKG, SharePoint, and REST APIs.
- Configure scheduled/event-driven loads: set up ADF for automated data ingestion.
- Transform and cleanse data: develop logic in ADF for Bronze-to-Silver layer transformations.
- Implement data quality checks: ensure accuracy and consistency.
Snowflake Data Warehousing:
- Design, develop, and optimize data models within Snowflake, including creating tables, views, and stored procedures for both the Silver and Gold layers.
- Implement ETL/ELT processes within Snowflake to transform curated Silver data into highly optimized analytical Gold structures.
- Performance tuning: optimize queries and data loads.
Data Lake Management:
- Implement Azure Data Lake Gen2 solutions: follow the medallion architecture (Bronze, Silver).
- Manage partitioning, security, and governance: ensure efficient and secure data storage.
Collaboration & Documentation: Partner with stakeholders to convert data needs into technical solutions, document pipelines and models, and uphold best practices through code reviews.
Monitoring & Support: Track pipeline performance, resolve issues, and deploy alerting/logging for proactive data integrity and issue detection.
Data visualization tools: Proficient in tools like Power BI, DAX, and Power Query for creating insightful reports. Skilled in Python for data processing and analysis to support data engineering tasks.

Required Skills & Qualifications:
- Over 5 years of experience in data engineering, data warehousing, or ETL development.
- Microsoft Azure proficiency: Azure Data Factory (ADF) - experience designing, developing, and deploying complex data pipelines; Azure Data Lake Storage Gen2 - hands-on experience with data ingestion, storage, and organization.
- Expertise in Snowflake Data Warehouse and ETL/ELT: understanding of Snowflake architecture; SQL proficiency for manipulation and querying; experience with Snowpipe, tasks, streams, and stored procedures (a sketch follows this posting).
- Strong understanding of data warehousing concepts and ETL/ELT principles.
- Data formats & integration: experience with various data formats (e.g., Parquet, CSV, JSON) and data integration patterns.
- Data visualization: experience with Power BI, DAX, Power Query.
- Scripting: Python for data processing and analysis.
- Soft skills: problem-solving, attention to detail, communication, and collaboration.

Nice-to-Have Skills: Version control (e.g., Git), Agile/Scrum methodologies, and data governance and security best practices.
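The streams-and-tasks requirement above boils down to Snowflake DDL, which is often provisioned from Python in automated deployments. This sketch wires a stream on a raw table to an hourly task that merges captured inserts into a Silver table; all object names are hypothetical.

```python
# Provision a Snowflake stream + task pair from Python. Names are placeholders.
import snowflake.connector

DDL = [
    # Change capture on the raw/Bronze table.
    "CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders",
    # Hourly task that only runs when the stream actually has data.
    """
    CREATE TASK IF NOT EXISTS merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '60 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO silver_orders
      SELECT order_id, amount, order_ts
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to activate the schedule.
    "ALTER TASK merge_orders_task RESUME",
]

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
try:
    for stmt in DDL:
        conn.cursor().execute(stmt)
finally:
    conn.close()
```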
Posted 1 month ago
5.0 years
0 Lacs
India
On-site
Years of experience: 5+ years

JD: High-Level Responsibilities
• Design and develop scalable data pipelines using Azure Data Factory, incorporating SSIS packages where applicable.
• Write and optimize T-SQL queries for data transformation, validation, and loading.
• Collaborate with the customer’s data architects to understand and modernize legacy data integration patterns.
• Perform relational database design and schema optimization for Azure SQL or Synapse targets.
• Support migration of on-premises or legacy ETL jobs into cloud-native Azure Integration Services.
• Conduct unit testing and troubleshoot data pipeline issues during sprint cycles.
• Provide support during UK overlap hours (up to 8 PM IST) to align with the customer team’s collaboration windows.

Mapped Skills
• Azure Data Factory development (SSIS helpful)
• T-SQL development
• Relational database design
• SQL Server Management Studio
• Azure Data Studio
• Azure Portal
• Visual Studio 2022
• Experience migrating existing integrations to AIS

Recommended Skills
• Azure Synapse Analytics (often paired with ADF in modern pipelines)
• Data flow transformations in ADF
• Data lake concepts and Azure Data Lake Gen2
• Monitoring & debugging ADF pipelines
• Integration Runtime setup and optimization
• Azure Key Vault integration in ADF
• Performance tuning in SQL Server and Azure SQL DB
• Knowledge of Delta Lake format if modern analytics is a goal
Posted 1 month ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Snowflake Developer/Data Engineer
Location: Chennai (Hybrid)
Experience: 6+ Years

About the Role
We are looking for a Snowflake Developer with 6+ years of hands-on experience in Snowflake, SnowSQL, Cortex, DBT, and data warehousing. The ideal candidate should have strong expertise in data modeling, transformation, and optimization, along with excellent communication skills to collaborate with business and technical teams.

Key Responsibilities
Develop and optimize Snowflake data models, schemas, and performance-tuned queries (see the tuning sketch after this posting).
Write and execute SnowSQL scripts for data transformation and automation.
Utilize Snowflake Cortex to integrate AI-driven analytics and insights.
Implement DBT (Data Build Tool) for data transformation, testing, and orchestration.
Design and maintain ADF data pipelines and ETL/ELT workflows.
Collaborate with cross-functional teams to understand data needs and provide solutions.
Ensure data security, governance, and best practices in Snowflake.
Troubleshoot performance issues and implement tuning strategies.

Required Skills & Qualifications
6+ years of hands-on experience with Snowflake and cloud data warehousing.
Strong expertise in SnowSQL and DBT; expertise in Cortex is a plus.
Experience in data modeling, performance tuning, and query optimization.
Hands-on experience with ETL/ELT processes and data pipelines.
Strong understanding of SQL, data warehousing concepts, and cloud architecture.
Experience integrating Snowflake with other BI/analytics tools.
Excellent problem-solving skills and attention to detail.
Strong communication skills to interact with business and technical stakeholders.
Knowledge of or hands-on experience with Power BI and Fabric is a plus.
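One common flavor of the Snowflake tuning work listed above is enabling partition pruning: cluster a large table on its usual filter column, then make sure queries filter on that column directly rather than through a function. A hedged sketch, with every object name invented:

```python
# Snowflake pruning sketch: add a clustering key, then run a pruning-friendly
# query. Table, column, and warehouse names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="dev", password="***",
    warehouse="BI_WH", database="ANALYTICS", schema="MART",
)
cur = conn.cursor()

# Cluster the wide fact table on its dominant filter column.
cur.execute("ALTER TABLE fact_sales CLUSTER BY (sale_date)")

# Filter on the clustering column directly; wrapping it in a function
# (e.g. WHERE TO_CHAR(sale_date, ...) = ...) would defeat pruning.
cur.execute(
    """
    SELECT sale_date, SUM(amount) AS revenue
    FROM fact_sales
    WHERE sale_date >= DATEADD(day, -30, CURRENT_DATE())
    GROUP BY sale_date
    """
)
for sale_date, revenue in cur.fetchall():
    print(sale_date, revenue)
conn.close()
```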
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job description
Job Name: Senior Data Engineer - DBT & Snowflake
Years of Experience: 5

Job Description: We are looking for a skilled and experienced DBT-Snowflake Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: DBT, Snowflake
Secondary Skills: ADF, Databricks, Python, Airflow, Fivetran, Glue

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform, and being in charge of or involved in architecting, building, and managing data flows/pipelines, constructing data storage (NoSQL, SQL), tools for working with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
Translate functional specifications and change requests into technical specifications
Translate business requirement documents, functional specifications, and technical specifications into related coding
Develop efficient code with unit testing and code documentation
Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
Set up the development environment and configure development tools
Communicate with all project stakeholders on project status
Manage, monitor, and ensure the security and privacy of data to satisfy business needs
Contribute to the automation of modules, wherever required
Be proficient in written, verbal and presentation communication (English)
Coordinate with the UAT team

Role Requirement:
Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
Knowledgeable in shell/PowerShell scripting
Knowledgeable in relational databases, non-relational databases, data streams, and file stores
Knowledgeable in performance tuning and optimization
Experience in data profiling and data validation
Experience in requirements gathering and documentation processes and performing unit testing
Understanding and implementing QA and various testing processes in the project
Knowledge of any BI tool is an added advantage
Sound aptitude, outstanding logical reasoning, and analytical skills
Willingness to learn and take initiative
Ability to adapt to a fast-paced Agile environment

Additional Requirements:
Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake; ensure effective transformation and loading of data from diverse sources into the data warehouse or data lake.
Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs.
Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance (see the sketch after this posting).
Establish DBT best practices to improve performance, scalability, and reliability.
Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures.
Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP).
Migrate legacy transformation code into modular DBT data models.
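The modular DBT workflow described above typically looks like this in automation: a model file defining an incremental transformation, then dbt run/test invocations. This is a hedged sketch assuming a working dbt installation, a configured profile, and a defined 'staging' source; the project layout and model name fct_orders are invented.

```python
# Write a DBT incremental model, then build and test it via the dbt CLI.
import pathlib
import subprocess

MODEL_SQL = """
{{ config(materialized='incremental', unique_key='order_id') }}

select order_id, customer_id, amount, order_ts
from {{ source('staging', 'raw_orders') }}
{% if is_incremental() %}
-- On incremental runs, only pick up rows newer than what's already loaded.
where order_ts > (select max(order_ts) from {{ this }})
{% endif %}
"""

model_path = pathlib.Path("models/marts/fct_orders.sql")
model_path.parent.mkdir(parents=True, exist_ok=True)
model_path.write_text(MODEL_SQL)

# Build the model, then run its schema/data tests; fail loudly on errors.
for args in (["dbt", "run", "--select", "fct_orders"],
             ["dbt", "test", "--select", "fct_orders"]):
    subprocess.run(args, check=True)
```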
Posted 1 month ago
0 years
0 Lacs
India
On-site
We are seeking highly motivated and skilled DevOps Support Engineers to join our team. The ideal candidates will have a strong background in modern DevOps tools and practices, with expertise in Kubernetes, Snowflake, Python, Azure, Azure Data Factory (ADF), and other relevant technologies. This role requires a blend of technical expertise, problem-solving skills, and a customer-focused mindset to ensure the smooth operation and scalability of our infrastructure.

Location: Off-Shore (India)
Positions: 4

Key Responsibilities:
1. Platform Support and Maintenance:
Provide day-to-day operational support for our systems, ensuring high availability, performance, and reliability.
Monitor, troubleshoot, and resolve issues related to Kubernetes clusters, Snowflake data pipelines, and Azure infrastructure (a pod-health sketch follows this posting).
Collaborate with cross-functional teams to address incidents and implement robust solutions.
2. Infrastructure Automation and Optimization:
Develop and maintain automation scripts and tools using Python to streamline deployment, monitoring, and scaling processes.
Optimize Kubernetes cluster configurations, including resource allocation and scaling strategies.
Implement best practices for cloud resource utilization on Azure to reduce costs and improve efficiency.
3. Data Pipeline Management:
Support and enhance data pipelines built on Snowflake and Azure Data Factory (ADF).
Monitor data flow, troubleshoot pipeline failures, and ensure data integrity and availability.
Collaborate with data engineering teams to implement new data workflows and improve existing pipelines.
4. Security and Compliance:
Ensure the platform adheres to security standards and compliance requirements.
Perform regular audits of infrastructure and implement security patches as needed.
Manage role-based access control (RBAC) and permissions in Kubernetes, Snowflake, and Azure environments.
5. Collaboration and Communication:
Work closely with development, QA, and product teams to ensure seamless integration and deployment of new features.
Participate in on-call rotations to provide 24/7 support for critical issues.
Document processes, configurations, and troubleshooting guides to improve knowledge sharing across the team.

Required Skills and Qualifications:
1. Technical Expertise:
Proficient in managing Kubernetes clusters, including deployment, scaling, and monitoring.
Hands-on experience with Snowflake, including data modeling, query optimization, and pipeline management.
Strong programming skills in Python for automation and scripting.
Solid understanding of Azure cloud services, including compute, storage, networking, and identity management.
Familiarity with Azure Data Factory (ADF) for building and managing ETL/ELT pipelines.
2. DevOps Practices:
Experience with CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions, Azure DevOps).
Knowledge of infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
Proficiency in monitoring tools like Prometheus, Grafana, or Azure Monitor.
3. Soft Skills:
Excellent problem-solving and analytical skills, with a proactive mindset.
Strong communication skills to work effectively with cross-functional teams.
Ability to prioritize tasks and manage multiple responsibilities in a fast-paced environment.
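A small example of the Kubernetes monitoring duty above, using the official kubernetes Python client: sweep a namespace for unhealthy pods. The namespace, restart threshold, and alert action are placeholders; in the real setup this would feed the alerting stack rather than print.

```python
# Pod-health sweep with the official kubernetes client.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="data-platform")  # hypothetical namespace
for pod in pods.items:
    phase = pod.status.phase
    # container_statuses can be None while a pod is pending.
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if phase != "Running" or restarts > 3:
        print(f"ALERT {pod.metadata.name}: phase={phase}, restarts={restarts}")
```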
Posted 1 month ago
10.0 years
0 Lacs
India
Remote
Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role:
We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran.
Translate complex business and data requirements into scalable, efficient data engineering solutions.
Architect multi-cluster Snowflake setups with an eye on performance and cost.
Design and implement robust CI/CD pipelines for data workflows (Git-based).
Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals.
Mentor and review the work of onshore/offshore data engineers.
Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices.
Handle real-time data processing scenarios where applicable.
Own end-to-end delivery and documentation for data engineering projects.

Must-Have Skills:
Fivetran: proven experience integrating and managing Fivetran connectors and sync strategies.
Snowflake expertise:
- Warehouse management, cost optimization, query tuning
- Internal vs. external stages, loading/unloading strategies (see the sketch after this posting)
- Schema design, security model, and user access
Python (advanced): modular, production-ready code for ETL/ELT, APIs, and orchestration.
DBT: strong command of DBT for transformation workflows and modular pipelines.
Azure: Azure Data Factory (ADF), Databricks; integration with Snowflake and other services.
SQL: expert-level SQL for transformations, validations, and optimizations.
Version control: Git, branching, pull requests, and peer code reviews.
CI/CD: DevOps/DataOps workflows for data pipelines.
Data modeling: star schema, Data Vault, normalization/denormalization techniques.
Strong documentation using Confluence, Word, Excel, etc.
Excellent communication skills - verbal and written.

Good to Have:
Experience with real-time data streaming tools (Event Hub, Kafka)
Exposure to monitoring/data observability tools
Experience with cost management strategies for cloud data platforms
Exposure to Agile/Scrum-based environments
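The internal-stage loading/unloading strategies listed above correspond to Snowflake's PUT and COPY INTO commands, shown here as plain SQL driven from Python. Stage, file, and table names are all hypothetical.

```python
# Internal-stage round trip: stage a local file, load it, then unload results.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="LANDING",
)
cur = conn.cursor()

cur.execute(
    "CREATE STAGE IF NOT EXISTS orders_stage "
    "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
)

# Load path: local file -> internal stage -> table.
cur.execute("PUT file:///tmp/orders.csv @orders_stage AUTO_COMPRESS = TRUE")
cur.execute(
    "COPY INTO raw_orders FROM @orders_stage "
    "PATTERN = '.*orders.*' ON_ERROR = 'ABORT_STATEMENT'"
)

# Unload path: query results back out to the stage (the reverse strategy).
cur.execute(
    "COPY INTO @orders_stage/export/ FROM (SELECT * FROM raw_orders) "
    "OVERWRITE = TRUE"
)
conn.close()
```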
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

This role is crucial for us on the Cloud Data Engineering team (Data Exchange) for all the cloud development, migration, and support work related to WellMed Data Services. The team maintains and supports the EDW and IS cloud modernization in WellMed, which involves cloud development of data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation.

Primary Responsibility
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Hands-on experience in Cloud Data Engineering
Contributions across cloud development, migration, and support work
Proven ability to independently maintain and support the EDW and cloud modernization, SQL development, Azure cloud development, and ETL using Azure Data Factory
Proven success implementing data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Engineer – Azure Data Platform
Location: Padi, Chennai
Job Type: Full-Time

Role Overview:
We are looking for an experienced Data Engineer to join our Azure Data Platform team. The ideal candidate will have a deep understanding of Azure’s data engineering and cloud technology stack. This role is pivotal in driving data-driven decision-making, operational analytics, and advanced manufacturing intelligence initiatives.

Key Responsibilities:
Lead the design and implementation of data architectures that support operational analytics and advanced manufacturing intelligence, ensuring scalability and flexibility to handle increasing data volumes.
Design, implement, and maintain scalable data and analytics platforms using Microsoft Azure services, such as Azure Data Factory (ADF), Azure Data Lake Storage Gen2, and Azure Synapse Analytics.
Develop and manage ETL processes, data pipelines, and batch jobs to ensure efficient data flow and transformation, optimizing pipeline runs and monitoring compute and storage usage.
Implement metadata management solutions to ensure data quality and governance, leading to consistent data quality and integrity.
Integrate data from key sources such as SAP, SQL Server, cloud databases, IoT, and other live-streaming feeds into centralized data structures to support analytics and decision-making.
Provide expertise on data ingestion (SAP, SQL), data transformation, and the automation of data pipelines in a manufacturing context.
Ensure the data platform supports dashboarding and advanced analytics, enabling business users to independently create and evolve dashboards.
Implement manufacturing-specific analytics solutions, including leadership and operational dashboards, and other analytics solutions across our value chain, leveraging Azure’s comprehensive toolset.
Define and monitor KPIs, ensuring data quality and the accuracy of insights delivered to business stakeholders.
Identify and manage project risks related to data security, system integration, and scalability.
Independently maintain the data platform, ensuring its reliability and performance, and implementing best practices for data security and compliance.
Advise the Data Platform project manager and leadership team on best practices for data management and scaling needs, providing guidance on integrating data from IoT and other SaaS platforms, as well as newer systems as they come into the digital landscape.
Work closely with data scientists to ensure data is available in the required format for their analyses, and collaborate with Power BI developers to support dashboarding and reporting needs.
Create data marts for business users to facilitate self-service analytics.
Mentor and train junior engineers, fostering their professional growth and development, and providing guidance and support on best practices and technical challenges.

Qualifications & Experience:
Education: Bachelor’s degree in Engineering, Computer Science, or a related field.
Experience: 8-10 years of experience, with a minimum of 5 years working on core data engineering responsibilities on a cloud platform. Project management experience is a big plus. Proven track record of implementing data-driven solutions in areas such as plant automation, operational analytics, quality control, and supply chain optimization.
Technical Proficiency: Expertise in cloud-based data platforms, particularly within the Azure ecosystem (Azure Data Factory, Synapse Analytics, Databricks). Familiarity with SAP as a data source. Proficiency in programming languages such as SQL, Python, and R for analytics and reporting.
Soft Skills: Strong analytical mindset with the ability to translate manufacturing challenges into data-driven insights and solutions. Excellent communication and organizational skills.

What We Offer:
The opportunity to work on transformative data analytics projects that drive innovation and operational excellence in manufacturing. A collaborative and dynamic work environment focused on professional growth and career development.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

Your Responsibilities
We are seeking an experienced and highly motivated Sr Data Engineer - Data Ingestion to join our dynamic team. The ideal candidate will have strong hands-on experience with Azure Data Factory (ADF), a deep understanding of relational and non-relational data ingestion techniques, and proficiency in Python programming. You will be responsible for designing and implementing scalable data ingestion solutions that interface with Azure Data Lake Storage Gen 2 (ADLS Gen 2), Databricks, and various other Azure ecosystem services. The Data Ingestion Engineer will work closely with stakeholders to gather data ingestion requirements, create modularized ingestion solutions, and define best practices to ensure efficient, robust, and scalable data pipelines. This role requires effective communication skills, ownership, and accountability for the delivery of high-quality data solutions.

Data Ingestion Strategy & Development:
Design, develop, and deploy scalable and efficient data pipelines in Azure Data Factory (ADF) to move data from multiple sources (relational, non-relational, files, APIs, etc.) into Azure Data Lake Storage Gen 2 (ADLS Gen 2), Azure SQL Database, and other target systems.
Implement ADF activities (copy, lookup, execute pipeline, etc.) to integrate data from on-premises and cloud-based systems.
Build parameterized and reusable pipeline templates in ADF to standardize the data ingestion process, ensuring maintainability and scalability of ingestion workflows.
Integrate custom data transformation activities within ADF pipelines, utilizing Python, Databricks, or Azure Functions when required.

ADF Data Flows Design & Development:
Leverage Azure Data Factory Data Flows for visually designing and orchestrating data transformation tasks, enabling complex ETL (Extract, Transform, Load) logic to process large datasets at scale.
Design data flow transformations such as filtering, aggregation, joins, lookups, and sorting to process and transform data before loading it into target systems like ADLS Gen 2 or Azure SQL Database.
Implement incremental loading strategies in Data Flows to ensure efficient and optimized data ingestion for large volumes of data while minimizing resource consumption.
Develop reusable data flow components to streamline transformation processes, ensuring consistency and reducing development time for new data ingestion pipelines.
Utilize debugging tools in Data Flows to troubleshoot, test, and optimize data transformations, ensuring accurate results and performance.

ADF Orchestration & Automation:
Use ADF triggers and scheduling to automate pipeline execution based on time or events, ensuring timely and efficient data ingestion (a sketch of triggering and monitoring a run follows this posting).
Configure ADF monitoring and alerting capabilities to proactively track pipeline performance, handle failures, and address issues in a timely manner.
Implement ADF version control practices using Git to manage code changes, collaborate effectively with other team members, and ensure code integrity.

Data Integration with Various Sources:
Ingest data from diverse sources such as on-premises SQL Servers, REST APIs, cloud databases (e.g., Azure SQL Database, Cosmos DB), file-based systems (CSV, Parquet, JSON), and third-party services using ADF.
Design and implement ADF linked services to securely connect to external data sources (databases, file systems, APIs, etc.).
Develop and configure ADF datasets and dataflows to efficiently transform, clean, and load data into Azure Data Lake or other destinations.

Pipeline Monitoring and Optimization:
Continuously monitor and optimize ADF pipelines to ensure they run with high performance and minimal cost. Apply techniques like data partitioning, parallel processing, and incremental loading where appropriate.
Implement data quality checks within the pipelines to ensure data integrity and handle data anomalies or errors in a systematic manner.
Review pipeline execution logs and performance metrics regularly, and apply tuning recommendations to improve execution times and reduce operational costs.

Collaboration and Communication:
Work closely with business and technical stakeholders to capture and translate data ingestion requirements into ADF pipeline designs.
Provide ADF-specific technical expertise to both internal and external teams, guiding them in the use of ADF for efficient and cost-effective data pipelines.
Document ADF pipeline designs, error handling strategies, and best practices to ensure the team can maintain and scale the solutions.
Conduct training sessions or knowledge transfer with junior engineers or other team members on ADF best practices and architecture.

Security and Compliance:
Ensure all data ingestion solutions built in ADF follow security and compliance guidelines, including encryption at rest and in transit, data masking, and identity and access management.
Implement role-based access control (RBAC) and managed identities within ADF to manage access securely and reduce the risk of unauthorized access to sensitive data.

Integration with Azure Ecosystem:
Leverage other Azure services, such as Azure Logic Apps, Azure Function Apps, and Azure Databricks, to augment the capabilities of ADF pipelines, enabling more advanced data processing, event-driven workflows, and custom transformations.
Incorporate Azure Key Vault to securely store and manage sensitive data (e.g., connection strings, credentials) used in ADF pipelines.
Integrate ADF with Azure Data Lake Analytics, Synapse Analytics, or other data warehousing solutions for advanced querying and analytics after ingestion.

Best Practices & Continuous Improvement:
Develop and enforce best practices for building and maintaining ADF pipelines and data flows, ensuring the solutions are modular, reusable, and follow coding standards.
Identify opportunities for pipeline automation to reduce manual intervention and improve operational efficiency.
Regularly review and suggest new tools or services within the Azure ecosystem to enhance ADF pipeline performance and increase the overall efficiency of data ingestion workflows.

Incident and Issue Management:
Actively monitor the health of the data pipelines, swiftly addressing any failures, data quality issues, or performance bottlenecks.
Troubleshoot ADF pipeline errors, including issues within Data Flows, and work with other teams to root-cause issues related to data availability, quality, or connectivity.
Participate in post-mortem analysis for any major incidents, documenting lessons learned and implementing preventative measures for the future.

Your Profile
Experience with Azure Data Services:
Strong experience with Azure Data Factory (ADF) for orchestrating data pipelines.
Hands-on experience with ADLS Gen 2, Databricks, and various data formats (e.g., Parquet, JSON, CSV).
Solid understanding of Azure SQL Database, Azure Logic Apps, Azure Function Apps, and Azure Container Apps.
Programming and Scripting:
Proficient in Python for data ingestion, automation, and transformation tasks.
Ability to write clean, reusable, and maintainable code.
Data Ingestion Techniques:
Solid understanding of relational and non-relational data models and their ingestion techniques.
Experience working with file-based data ingestion, API-based data ingestion, and integrating data from various third-party systems.
Problem Solving & Analytical Skills
Communication Skills

#IncludingYou
Diversity, equity, inclusion and belonging are cornerstones of ADM’s efforts to continue innovating, driving growth, and delivering outstanding performance. We are committed to attracting and retaining a diverse workforce and create welcoming, truly inclusive work environments — environments that enable every ADM colleague to feel comfortable on the job, make meaningful contributions to our success, and grow their career. We respect and value the unique backgrounds and experiences that each person can bring to ADM because we know that diversity of perspectives makes us better, together. For more information regarding our efforts to advance Diversity, Equity, Inclusion & Belonging, please visit our website here: Diversity, Equity and Inclusion | ADM.

About ADM
At ADM, we unlock the power of nature to provide access to nutrition worldwide. With industry-advancing innovations, a complete portfolio of ingredients and solutions to meet any taste, and a commitment to sustainability, we give customers an edge in solving the nutritional challenges of today and tomorrow. We’re a global leader in human and animal nutrition and the world’s premier agricultural origination and processing company. Our breadth, depth, insights, facilities and logistical expertise give us unparalleled capabilities to meet needs for food, beverages, health and wellness, and more. From the seed of the idea to the outcome of the solution, we enrich the quality of life the world over. Learn more at www.adm.com.

Req/Job ID: 97477BR
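The trigger-and-monitor loop described under "ADF Orchestration & Automation" above can also be driven programmatically. This is a hedged sketch using the azure-mgmt-datafactory SDK's documented create_run/get pattern; the subscription, resource group, factory, pipeline name, and the load_date parameter are all placeholders.

```python
# Trigger an ADF pipeline run and poll it to completion.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"   # placeholder
RG, FACTORY, PIPELINE = "rg-data", "adf-ingest", "pl_copy_sales"

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

run = adf.pipelines.create_run(
    RG, FACTORY, PIPELINE,
    parameters={"load_date": "2024-01-31"},  # hypothetical pipeline parameter
)

# Poll until the run leaves the in-progress states.
while True:
    status = adf.pipeline_runs.get(RG, FACTORY, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline {PIPELINE} finished with status: {status}")
```

In production the same loop usually lives behind ADF's own triggers and alerts; a script like this is mainly useful for ad-hoc reruns and CI smoke tests.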
Posted 1 month ago
2.0 years
0 Lacs
Hyderābād
On-site
Engineering Graduate / Post Graduate, preferably in Computer Science or MCA, having 2+ yrs of development experience in: Oracle and ADF based applications Knowledge of RDBMS and data modeling concepts Oracle database, knowledge of SQL and PL/SQL Client-side web development languages (JavaScript, HTML, DHTML, and CSS) Desirable: REST API implementation SOA (REST-based micro-services) Collaborative development (Gitflow, peer reviewing) Maven, SQL, Continuous Integration/Delivery (Jenkins, Docker) Diversity and Inclusion: An Oracle career can span industries, roles, countries and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications. Give Java developers what they need to be successful building cloud-native applications, leveraging deep integrations with familiar tools like Spring, Maven, Kubernetes, and IntelliJ to get started quickly. As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design, and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise. May have a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to functional area. 4 years of software engineering or related experience.
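As a brief illustration of the SQL and PL/SQL requirement above, here is a minimal sketch of calling a PL/SQL stored procedure from Python with the python-oracledb driver; the DSN, credentials, and procedure name (refresh_employee_summary) are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch of invoking a PL/SQL stored procedure from Python using the
# python-oracledb driver. DSN, credentials, and the procedure name are
# hypothetical placeholders.
import oracledb

conn = oracledb.connect(user="hr", password="***", dsn="localhost/FREEPDB1")
try:
    cur = conn.cursor()
    out_count = cur.var(int)  # bind variable for the procedure's OUT parameter
    cur.callproc("refresh_employee_summary", [out_count])  # hypothetical proc
    print("rows refreshed:", out_count.getvalue())
    conn.commit()
finally:
    conn.close()
```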
Posted 1 month ago
4.0 years
10 - 10 Lacs
Bengaluru
On-site
Location: Bengaluru, KA, IN Company: ExxonMobil About us At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society’s evolving needs. Learn more about our What and our Why and how we can work together. ExxonMobil’s affiliates in India ExxonMobil’s affiliates have offices in India in Bengaluru, Mumbai and the National Capital Region. ExxonMobil’s affiliates in India supporting the Product Solutions business engage in the marketing, sales and distribution of performance as well as specialty products across chemicals and lubricants businesses. The India planning teams are also embedded with global business units for business planning and analytics. ExxonMobil’s LNG affiliate in India supporting the upstream business provides consultant services for other ExxonMobil upstream affiliates and conducts LNG market-development activities. The Global Business Center - Technology Center provides a range of technical and business support services for ExxonMobil’s operations around the globe. ExxonMobil strives to make a positive contribution to the communities where we operate and its affiliates support a range of education, health and community-building programs in India. Read more about our Corporate Responsibility Framework. To know more about ExxonMobil in India, visit ExxonMobil India and the Energy Factor India. What role you will play in our team Design, build, and maintain data systems, architectures, and pipelines to extract insights and drive business decisions. Collaborate with stakeholders to ensure data quality, integrity, and availability. What you will do Design, develop, and maintain robust ETL pipelines using tools like Airflow, Azure Data Factory, Qlik Replicate, and Fivetran. Automate data extraction, transformation, and loading processes across cloud platforms (Azure, Snowflake). Build and optimize Snowflake data models in collaboration with system architects to support business needs. Develop and maintain CI/CD pipelines using GitHub and Azure DevOps (ADO). Create and manage data input and review screens in Sigma, including performance dashboards. Integrate third-party ETL tools for Cloud-to-Cloud (C2C) and On-Premises to Cloud (OP2C) data flows. Implement monitoring and alerting systems for pipeline health and data quality. Support data cleansing, enrichment, and curation to enable business use cases. Troubleshoot and resolve data issues, including missing or incorrect data, long-running queries, and Sigma screen problems. Collaborate with cross-functional teams to deliver data solutions for platforms like CEDAR. Manage Snowflake security, including roles, shares, and access controls. Optimize and tune SQL queries across Snowflake, MSSQL, Postgres, Oracle, and Azure SQL. Develop large-scale aggregate queries across multiple schemas and datasets.
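To illustrate the last responsibility above, here is a minimal sketch of running a cross-schema aggregate from Python with the snowflake-connector-python package; the account, credentials, and object names are hypothetical placeholders.

```python
# Minimal sketch of a cross-schema aggregate query against Snowflake using
# snowflake-connector-python. Account, credentials, and object names are
# hypothetical placeholders; pull real credentials from a secrets manager.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account locator
    user="ETL_SVC",
    password="***",          # never hard-code in production
    warehouse="ANALYTICS_WH",
    database="CEDAR_DB",
    role="DATA_ENGINEER",
)
try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT o.region, SUM(f.amount) AS total_amount
        FROM sales.fct_orders f
        JOIN reference.dim_org o ON o.org_id = f.org_id
        GROUP BY o.region
        """
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```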
About You Skills and Qualifications Core Technical Skills Languages: Proficient in Python, with experience in C#, C++, F#, or Java. Databases: Strong experience with SQL and NoSQL, including Snowflake, Azure SQL, PostgreSQL, MSSQL, Oracle. ETL Tools: Expertise in Airflow, Qlik Replicate, Fivetran, Azure Data Factory. Cloud Platforms: Deep knowledge of Azure services including Azure Data Explorer (ADX), ADF, Databricks. Data Modeling: Hands-on experience with Snowflake modeling, including stored procedures, UDFs, Snowpipe, streams, shares. Monitoring & Optimization: Skilled in query tuning, performance measurement, and pipeline monitoring. CI/CD: Experience managing pipelines using GitHub and Azure DevOps. Additional Tools & Technologies Sigma: Experience designing and managing Sigma dashboards and screens (or strong background in Power BI/Tableau with willingness to learn Sigma). Streamlit: Experience developing Streamlit apps using Python. DBT: Experience managing Snowflake with DBT scripting. Preferred Qualifications 4+ years of hands-on experience as a Data Engineer. Proficiency in Snowflake with data modeling. Experience in Change Management and working in Agile environments. Prior experience in the Energy industry is a plus. Bachelor’s or Master’s degree in Computer Science, IT, or related engineering disciplines with a minimum GPA of 7.0. Your benefits An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you: Competitive compensation Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits Retirement benefits Global networking & cross-functional opportunities Annual vacations & holidays Day care assistance program Training and development program Tuition assistance program Workplace flexibility policy Relocation program Transportation facility Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company’s eligibility guidelines. Stay connected with us Learn more about ExxonMobil in India, visit ExxonMobil India and Energy Factor India. Follow us on LinkedIn and Instagram Like us on Facebook Subscribe to our channel on YouTube EEO Statement ExxonMobil is an Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status. Business solicitation and recruiting scams ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil. Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups.
Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships. Competencies (B) Adapts (B) Applies Learning (B) Analytical (B) Collaborates (B) Communicates Effectively (B) Innovates Job Segment: Sustainability, SQL, Database, Oracle, Computer Science, Energy, Technology
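Given the posting's emphasis on Airflow-orchestrated ETL into Snowflake, a minimal DAG sketch follows; the DAG id, task names, and placeholder callables are illustrative assumptions, not ExxonMobil's actual pipelines.

```python
# Minimal sketch of an Airflow DAG wiring extract -> load tasks for a daily
# ELT run. DAG id, task names, and the callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull increments from the source system")  # placeholder logic

def load():
    print("load staged files into Snowflake")  # placeholder logic

with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```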
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Oracle Management Level Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing the Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary Managing business performance in today’s complex and rapidly changing business environment is crucial for any organization’s short-term and long-term success. However, ensuring a streamlined end-to-end (E2E) Oracle Fusion technical landscape that can seamlessly adapt to the changing business environment is crucial from a process and compliance perspective.
As part of the Technology Consulting - Business Applications - Oracle Practice team, we leverage opportunities around digital disruption, new-age operating models and best-in-class practices to deliver technology-enabled transformation to our clients. Responsibilities: Extensive experience in Oracle ERP/Fusion SaaS/PaaS project implementations as a technical developer Completed at least 2 full Oracle Cloud (Fusion) implementations Extensive knowledge of database structure for ERP/Oracle Cloud (Fusion) Extensively worked on BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC) Mandatory skill sets BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC) Preferred skill sets Database structure for ERP/Oracle Cloud (Fusion) Years of experience required Minimum 2+ years of Oracle Fusion experience Educational Qualification BE/BTech, MBA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering Degrees/Field Of Study Preferred: Certifications (if blank, certifications not specified) Required Skills Oracle Business Intelligence (BI) Publisher, Oracle Fusion Applications Optional Skills Accepting Feedback, Active Listening, Business Transformation, Communication, Design Automation, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Strategic Technology Planning, Teamwork, Well Being Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
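As a hedged illustration of the FBDI skill set above: FBDI (File-Based Data Import) loads are driven by zipped CSV files that are base64-encoded for submission to Oracle's ERP import web service. The sketch below prepares such a payload in Python; the file names and interface-file name are hypothetical, and the actual submission step (via Oracle's ERP Integration Service) is left as a comment.

```python
# Minimal sketch of preparing an FBDI payload: zip the interface CSV, then
# base64-encode the archive for submission to the ERP import web service.
# File names and the interface-file name are hypothetical placeholders.
import base64
import zipfile

def build_fbdi_payload(csv_path: str, zip_path: str) -> str:
    # Package the CSV under the name the import template expects (hypothetical).
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(csv_path, arcname="ApInvoicesInterface.csv")
    # Base64-encode the zip so it can be embedded in the web-service request.
    with open(zip_path, "rb") as fh:
        return base64.b64encode(fh.read()).decode("ascii")

payload = build_fbdi_payload("ap_invoices.csv", "ap_invoices.zip")
# Submission to Oracle's ERP Integration Service would follow here; the exact
# request shape depends on the Fusion release and is omitted from this sketch.
```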
Posted 1 month ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Oracle Management Level Senior Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing the Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary Managing business performance in today’s complex and rapidly changing business environment is crucial for any organization’s short-term and long-term success. However, ensuring a streamlined end-to-end (E2E) Oracle Fusion technical landscape that can seamlessly adapt to the changing business environment is crucial from a process and compliance perspective.
As part of the Technology Consulting - Business Applications - Oracle Practice team, we leverage opportunities around digital disruption, new-age operating models and best-in-class practices to deliver technology-enabled transformation to our clients. Responsibilities: Extensive experience in Oracle ERP/Fusion SaaS/PaaS project implementations as a technical developer Completed at least 2 full Oracle Cloud (Fusion) implementations Extensive knowledge of database structure for ERP/Oracle Cloud (Fusion) Extensively worked on BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC) Mandatory skill sets BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC) Preferred skill sets Database structure for ERP/Oracle Cloud (Fusion) Years of experience required Minimum 4 years of Oracle Fusion experience Educational Qualification BE/BTech, MBA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration Degrees/Field Of Study Preferred: Certifications (if blank, certifications not specified) Required Skills Oracle Integration Cloud (OIC) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Transformation, Communication, Creativity, Design Automation, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Self-Awareness, Strategic Technology Planning, Teamwork, Well Being Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
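As a hedged illustration of the Oracle Integration Cloud (OIC) requirement above, here is a minimal Python sketch of invoking an OIC integration exposed as a REST endpoint; the host, integration path, payload, and credentials are hypothetical, and production integrations would typically use OAuth 2.0 rather than basic auth.

```python
# Minimal sketch of invoking an Oracle Integration Cloud (OIC) integration
# exposed as a REST endpoint. Host, integration identifier, payload, and
# credentials are hypothetical placeholders.
import requests

OIC_HOST = "https://myoic.example.com"  # hypothetical OIC instance
ENDPOINT = f"{OIC_HOST}/ic/api/integration/v1/flows/rest/SYNC_SUPPLIERS/1.0/run"

resp = requests.post(
    ENDPOINT,
    json={"batchId": "2024-06-01"},    # hypothetical request payload
    auth=("integration_user", "***"),  # replace with OAuth 2.0 in practice
    timeout=30,
)
resp.raise_for_status()  # surface HTTP errors instead of silent failures
print(resp.json())
```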
Posted 1 month ago