7.0 - 10.0 years
6 - 8 Lacs
Hyderābād
Remote
At Accellor, we are a trusted consultant that uses best-of-breed Cloud technology to deliver superior customer engagement and business effectiveness for clients. We bring a deep understanding of the Financial, Retail, High Tech, and Healthcare verticals. We’ve created an atmosphere that encourages curiosity, constant learning, and persistence. We encourage our employees to grow and explore their interests. We cultivate an environment of collaboration, autonomy, and delegation – we know our people have a strong work ethic and a sense of pride and ownership over their work. They are passionate, eager, and motivated – focused on building the perfect solution but never losing sight of the bigger picture.

As a Lead Data Engineer specializing in Snowflake and Databricks, you will be responsible for designing, developing, and delivering data engineering solutions using modern cloud data platforms. The candidate should have strong expertise in the data lifecycle, including data ingestion, transformation, and modeling, as well as experience with distributed data processing, data security, and integration with internal and external data sources. Additionally, the candidate should be proficient in leveraging best practices in data architecture and performance optimization. The role also requires the ability to drive end-to-end project delivery aligned with business objectives and ensure the realization of data-driven value.

Responsibilities: Demonstrated ability to successfully deliver multiple complex technical projects and to create the high-level design and architecture of the solution, including class, sequence, and deployment/infrastructure diagrams. Take ownership of technical solutions from a design and architecture perspective for projects in the presales phase as well as ongoing projects. Experience with gathering end-user requirements and writing technical documentation. Suggest innovative solutions based on new technologies and the latest trends. Review the architectural/technological solutions for ongoing projects and ensure the right choice of solution. Work closely with client teams to understand their business, capture requirements, identify pain areas, and accordingly propose an ideal solution and win business.

Requirements: 7-10 years of experience working with Snowflake/Databricks in a data engineering or architecture role. Familiarity with programming languages such as Python, Java, or Scala for data processing and automation. Strong expertise in SQL, data modeling, and advanced query optimization techniques. Hands-on experience with cloud platforms (AWS, Azure, or GCP) and their integration with Snowflake. Proficiency in ETL/ELT tools such as ADF, Fabric, etc. Experience with data visualization tools like Tableau, Power BI, or Looker. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, dynamic environment. Certification in Databricks is an added advantage.

Benefits: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment – or even abroad in one of our global centers.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, stress management programs, professional certifications, and technical and soft skills training. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.

Disclaimer: Accellor is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other applicable legally protected characteristic.
Posted 2 weeks ago
5.0 years
4 - 5 Lacs
Hyderābād
On-site
Job Description: At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies such as ADB (Azure Databricks), ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on experience in Python, PySpark, or Spark SQL. Hands-on experience in Azure Analytics and DevOps. Taking part in Proofs of Concept (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor does it ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
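As a rough illustration of the PySpark and Delta Tables work this role describes, here is a minimal sketch; the file path, column names, and table name are assumptions made for the example, and a Delta-enabled Spark runtime such as Azure Databricks is assumed.

```python
# Minimal sketch: ingest a raw CSV and persist it as a Delta table.
# Paths, columns, and the table name are illustrative assumptions only;
# a Delta-enabled runtime (e.g. Azure Databricks) is assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = (
    spark.read.option("header", True)       # first row contains column names
    .option("inferSchema", True)            # let Spark derive column types
    .csv("/mnt/raw/orders/*.csv")           # hypothetical landing path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                        # basic de-duplication
    .withColumn("order_date", F.to_date("order_date"))      # normalize the date column
    .filter(F.col("amount") > 0)                            # drop obviously bad rows
)

# Write as a managed Delta table so downstream Synapse/Power BI layers can query it.
cleaned.write.format("delta").mode("overwrite").saveAsTable("orders_curated")
```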
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
Thiruvananthapuram
On-site
7 - 9 Years, 1 Opening, Trivandrum

Role description: We are seeking a highly experienced and motivated Azure Cloud Administrator with a strong background in Windows Server infrastructure, Azure IaaS/PaaS services, and cloud networking. The ideal candidate will have over 10 years of relevant experience and will be responsible for managing and optimizing our Azure environment while ensuring high availability, scalability, and security of our infrastructure.

Key Responsibilities: Administer and manage Azure Cloud infrastructure including both IaaS and PaaS services. Deploy, configure, and maintain Windows Servers (2016/2019/2022). Manage Azure resources such as Virtual Machines, Storage Accounts, SQL Managed Instances, Azure Functions, Logic Apps, App Services, Azure Monitor, Azure Key Vault, Azure Recovery Services, Databricks, ADF, Synapse, and more. Ensure security and network compliance through effective use of Azure Networking features including NSGs, Load Balancers, and VPN gateways. Monitor and troubleshoot infrastructure issues using tools such as Log Analytics, Application Insights, and Azure Metrics. Perform server health checks, patch management, upgrades, backup/restoration, and DR testing. Implement and maintain Group Policies, DNS, IIS, Active Directory, and Entra ID (formerly Azure AD). Collaborate with DevOps teams to support infrastructure automation using Terraform and Azure DevOps. Support ITIL-based processes including incident, change, and problem management. Deliver Root Cause Analysis (RCA) and post-incident reviews for high-severity issues. Provide after-hours support as required during outages or maintenance windows.

Required Technical Skills: Windows Server Administration – Deep expertise in Windows Server 2016/2019/2022. Azure Administration – Strong hands-on experience with Azure IaaS/PaaS services. Azure Networking – Solid understanding of cloud networking principles and security best practices. Azure Monitoring – Familiarity with Azure Monitor, Log Analytics, Application Insights. Infrastructure Tools – Experience with Microsoft IIS, DNS, AD, Group Policy, and Entra ID Connect. Cloud Automation – Good-to-have working knowledge of Terraform and Azure DevOps pipelines. Troubleshooting & RCA – Proven ability to analyze, resolve, and document complex technical issues.

Skills: Azure, Windows, Monitoring

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
Posted 2 weeks ago
2.0 - 3.0 years
0 Lacs
Cochin
On-site
Job Title: Data Engineer Sr. Analyst, ACS Song. Management Level: Level 10 - Sr. Analyst. Location: Kochi, Coimbatore, Trivandrum. Must-have skills: Python/Scala, PySpark/PyTorch. Good-to-have skills: Redshift.

Job Summary: You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help the business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions that ensure data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills: Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript. Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake Cloud Data Warehouse.

Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS - AWS Certified Data Analytics - Specialty; Azure - Microsoft Certified Azure Data Scientist Associate; Snowflake - SnowPro Core Data Engineer; Databricks Data Engineering.

About Our Company | Accenture

Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.
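One of the responsibilities above is sourcing structured and unstructured data and organizing it into an analyzable format. A minimal pandas sketch of that idea follows; the record shape and field names are invented for the example.

```python
# Minimal sketch: flatten semi-structured JSON records into an analyzable table.
# The record shape and field names below are illustrative assumptions only.
import pandas as pd

records = [
    {"order_id": 1, "customer": {"id": "C-10", "region": "South"}, "items": [{"sku": "A", "qty": 2}]},
    {"order_id": 2, "customer": {"id": "C-11", "region": "North"}, "items": [{"sku": "B", "qty": 1}]},
]

# json_normalize flattens nested dictionaries; record_path explodes the item lists.
items = pd.json_normalize(
    records,
    record_path="items",
    meta=["order_id", ["customer", "id"], ["customer", "region"]],
)
print(items)
# Resulting columns: sku, qty, order_id, customer.id, customer.region
```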
Posted 2 weeks ago
7.0 years
6 - 10 Lacs
Gurgaon
On-site
About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.

Job Responsibilities: Design and Develop Data Pipelines: Development and optimisation of scalable data pipelines within Microsoft Fabric, leveraging Fabric-based notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines using both batch and real-time processing techniques. Integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement. Microsoft Fabric Architecture: Work with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Align models with business needs, promoting performance, security, and cost-efficiency. Data Pipeline Optimisation: Continuously monitor, enhance, and optimise Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations in a Fabric-first ecosystem. Collaboration with Cross-functional Teams: Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality, consumable datasets. Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses. Documentation and Knowledge Sharing: Maintain clear, up-to-date documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members to support adoption across teams. Microsoft Fabric Platform Expertise: Use your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.

Required Skills and Qualifications: Experience in the Microsoft Fabric / Azure Ecosystem: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including the Lakehouse, OneLake, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric for building and orchestrating data pipelines. Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design. Ability to enforce data quality, testing, and monitoring standards in cloud platforms. Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent. Strong/In-depth SQL and Data Modelling: Expertise in SQL and data modelling (e.g., star/snowflake schemas) for data integration/ETL, reporting, and analytics use cases.
Collaboration and Communication: Proven ability to work across business and technical teams, translating business requirements into scalable data solutions. Cost Optimisation: Experience tuning pipelines and cloud resources (Fabric, Databricks, ADF) for cost-performance balance.

Preferred Skills: Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance. Familiarity with OneLake, Delta Lake, and Lakehouse architecture as part of a modern data platform strategy. Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting. Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks. Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools for lineage, metadata, and access control. Experience with DevOps practices for Fabric or Power BI, including version control, deployment pipelines, and workspace management. Knowledge of Azure Databricks: Familiarity with building and optimising Spark-based pipelines and Delta Lake models as part of a modern data platform is an added advantage.
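The pipeline work described above centres on Fabric notebooks and Lakehouse tables. The following is a hedged sketch of a notebook cell that lands raw Lakehouse files into a curated Delta table; the relative "Files/raw/sales/" path, column names, and table name are assumptions about a typical Lakehouse layout rather than details from the posting.

```python
# Illustrative notebook cell: read raw files attached to a Lakehouse and publish
# a curated Delta table. Paths and names are assumptions, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# A SparkSession is already provided in Fabric/Databricks notebooks; this is a no-op there.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.option("header", True).csv("Files/raw/sales/")  # hypothetical landing folder

curated = (
    raw.withColumn("sale_date", F.to_date("sale_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["sale_id"])
)

# saveAsTable registers the result as a managed Delta table (in Fabric, under the
# Lakehouse "Tables" area), where BI layers can consume it.
curated.write.format("delta").mode("overwrite").saveAsTable("sales_curated")
```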
Posted 2 weeks ago
16.0 years
2 - 6 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities: Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security. Build and maintain data pipelines using Azure Databricks and other Azure data integration tools. Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets. Write extensive queries in SQL and Snowflake. Implement security and access control measures and regularly audit the Azure platform and infrastructure to ensure compliance. Create, understand, and validate designs and estimated effort for a given module/task, and be able to justify them. Possess solid troubleshooting skills and perform troubleshooting of issues in different technologies and environments. Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery. Maintain code quality by writing clean, maintainable, and testable code. Monitor performance and optimize resources to ensure cost-effectiveness and high availability. Define and document best practices and strategies regarding application deployment and infrastructure maintenance. Provide technical support and consultation for infrastructure questions. Help develop, manage, and monitor continuous integration and delivery systems. Take accountability and ownership of features and teamwork. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: B.Tech or MCA (16+ years of formal education). Overall 7+ years of experience. 5+ years of experience in writing advanced-level SQL. 3+ years of experience in Azure (ADF), Databricks and DevOps. 3+ years of experience in architecting, designing, developing, and implementing cloud solutions on Azure. 2+ years of experience in writing, reading, and debugging Spark, Scala, and Python code. Proficiency in programming languages and scripting tools. Understanding of cloud data storage and database technologies such as SQL and NoSQL. Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform. Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts. Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks. Proven excellent communication, writing, and presentation skills. Experience in interacting with international customers to gather requirements and convert them into solutions using relevant skills.

Preferred Qualifications: Experience and skills with Snowflake. Knowledge of AI/ML or LLMs (GenAI). Knowledge of the US Healthcare domain and experience with healthcare data.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 2 weeks ago
6.0 years
6 - 10 Lacs
Noida
On-site
Country/Region: IN. Requisition ID: 27468. Location: INDIA - NOIDA - BIRLASOFT OFFICE. Title: Technical Specialist - Data Engineering.

Description / Area(s) of responsibility - Required Skills & Experience: 6 years of hands-on development experience in DBT, Aptitude, and Snowflake on the Azure platform - Dataflow, data ingestion, data storage and security. Expertise in the ETL tool DBT. Design data integration (ETL) projects using DBT. Strong hands-on experience in building custom data models / a semantic reporting layer in Snowflake to support customer reporting and current platform requirements. Good to have: experience in any other ETL tool. Participate in the entire project lifecycle, including design and development of ETL solutions. Design the data integration and conversion strategy, exception handling mechanism, and data retention and archival strategy. Ability to communicate platform features/development effectively to customer SMEs and technical teams. Drive UAT activities with business partners to ensure solutions meet business requirements. Experience working with Power BI reporting, Dataflow, Data Lake, Databricks, ADF pipelines, and security knowledge is an added advantage.
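The posting asks for building custom data models and a semantic reporting layer in Snowflake. Setting DBT's own project structure aside, a hedged Python sketch using the Snowflake connector conveys the general idea of publishing a reporting view; all connection parameters, object names, and columns below are placeholders, not real details.

```python
# Hedged sketch: publish a simple reporting view in Snowflake from Python.
# All connection parameters and object names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="REPORTING",
)

create_view_sql = """
CREATE OR REPLACE VIEW REPORTING.V_SALES_SUMMARY AS
SELECT region,
       DATE_TRUNC('month', order_date) AS order_month,
       SUM(amount)                     AS total_sales
FROM   RAW.ORDERS
GROUP  BY region, DATE_TRUNC('month', order_date)
"""

cur = conn.cursor()
cur.execute(create_view_sql)   # the view becomes the semantic/reporting-layer object
cur.close()
conn.close()
```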
Posted 2 weeks ago
0 years
9 - 10 Lacs
Calcutta
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Manager.

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Requirement gathering and analysis
· Design of data architecture and data model to ingest data
· Experience with different databases like Synapse, SQL DB, Snowflake, etc.
· Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
· Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
· Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
· Implement data security and governance measures
· Monitor and optimize data pipelines for performance and efficiency
· Troubleshoot and resolve data engineering issues
· Hands-on experience with Azure Functions and other components such as real-time streaming
· Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
· Provide optimized solutions for any problem related to data engineering
· Ability to work with a variety of sources like relational databases, APIs, file systems, real-time streams, CDC, etc.
· Strong knowledge of Databricks and Delta tables

Mandatory skill sets: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling. Preferred skill sets: PySpark, Databricks. Years of experience required: 7-10 yrs. Education qualification: B.Tech/MCA and MBA. Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering.

Required Skills: Structured Query Language (SQL). Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, and more.
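Among the responsibilities above is hands-on work with real-time streaming alongside batch pipelines. The following is an illustrative Structured Streaming sketch only, using Spark's built-in rate source as a stand-in for a real feed such as Event Hubs or Kafka; the checkpoint and output paths are assumptions, and a Delta-enabled runtime is assumed.

```python
# Illustrative Structured Streaming sketch: a continuous source written to Delta.
# The rate source stands in for a real feed; all paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream_demo").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows and is handy for testing.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

enriched = events.withColumn("ingest_date", F.to_date("timestamp"))

query = (
    enriched.writeStream
    .format("delta")                                  # requires a Delta-enabled runtime
    .outputMode("append")
    .option("checkpointLocation", "/tmp/chk/events")  # hypothetical checkpoint path
    .start("/tmp/delta/events")                       # hypothetical target path
)
query.awaitTermination(30)  # run briefly for the demo, then stop
query.stop()
```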
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a Data Engineer with strong expertise in SQL and ETL processes to support banking data pipelines, regulatory reporting, and data quality initiatives. The role involves building and optimizing data structures, implementing validation rules, and collaborating with governance and compliance teams. Experience in the banking domain and tools like Informatica and Azure Data Factory is essential.

Strong proficiency in SQL for writing complex queries, joins, data transformations, and aggregations. Proven experience in building tables, views, and data structures within enterprise Data Warehouses and Data Lakes. Strong understanding of data warehousing concepts, such as Slowly Changing Dimensions (SCDs), data normalization, and star/snowflake schemas. Practical experience in Azure Data Factory (ADF) for orchestrating data pipelines and managing ingestion workflows. Exposure to data cataloging, metadata management, and lineage tracking using Informatica EDC or Axon. Experience implementing data quality rules for banking use cases such as completeness, consistency, uniqueness, and validity. Familiarity with banking systems and data domains such as Flexcube, HRMS, CRM, Risk, Compliance, and IBG reporting. Understanding of regulatory and audit-readiness needs for the Central Bank and internal governance forums.

Write optimized SQL scripts to extract, transform, and load (ETL) data from multiple banking source systems. Design and implement staging and reporting layer structures, aligned to business requirements and regulatory frameworks. Apply data validation logic based on predefined business rules and data governance requirements. Collaborate with Data Governance, Risk, and Compliance teams to embed lineage, ownership, and metadata into datasets. Monitor scheduled jobs and resolve ETL failures to ensure SLA adherence for reporting and operational dashboards. Support production deployment, UAT sign-off, and issue resolution for data products across business units.

3 to 6 years in banking-focused data engineering roles with hands-on SQL, ETL, and DQ rule implementation. Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or related fields. Banking domain experience is mandatory, especially in areas related to regulatory reporting, compliance, and enterprise data governance.
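The role calls for implementing data quality rules such as completeness, consistency, uniqueness, and validity in SQL. A minimal, hedged sketch of how such rules might be expressed and evaluated is shown below; the table and column names are invented, and any DB-API compatible connection (pyodbc, the Snowflake connector, etc.) could stand in for `conn`.

```python
# Hedged sketch: express simple completeness/uniqueness/validity data-quality rules
# as SQL and evaluate them over a DB-API connection. Table and column names are invented.
from typing import List, Tuple

RULES: List[Tuple[str, str]] = [
    # (rule name, SQL returning a single violation count)
    ("customer_id_completeness",
     "SELECT COUNT(*) FROM staging.customers WHERE customer_id IS NULL"),
    ("account_number_uniqueness",
     "SELECT COUNT(*) FROM (SELECT account_no FROM staging.accounts "
     "GROUP BY account_no HAVING COUNT(*) > 1) d"),
    ("txn_amount_validity",
     "SELECT COUNT(*) FROM staging.transactions WHERE amount < 0"),
]

def run_dq_checks(conn) -> List[dict]:
    """Run each rule and report violation counts; `conn` is any DB-API connection."""
    results = []
    cur = conn.cursor()
    for name, sql in RULES:
        cur.execute(sql)
        violations = cur.fetchone()[0]
        results.append({"rule": name, "violations": violations, "passed": violations == 0})
    cur.close()
    return results
```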
Posted 2 weeks ago
5.0 years
20 - 25 Lacs
Chennai, Tamil Nadu, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), as well as AWS or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data integration, data analysis, data warehousing, Snowflake, ETL, data modeling, workflow management tools, Informatica, Power BI, Python, DBT, AWS, SQL, pipelines, Azure, banking domain, DWH, GCP, Fivetran
Posted 2 weeks ago
5.0 years
20 - 25 Lacs
Gurugram, Haryana, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), as well as AWS or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data integration, data analysis, data warehousing, Snowflake, ETL, data modeling, workflow management tools, Informatica, Power BI, Python, DBT, AWS, SQL, pipelines, Azure, banking domain, DWH, GCP, Fivetran
Posted 2 weeks ago
5.0 years
20 - 25 Lacs
Greater Kolkata Area
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), as well as AWS or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data integration, data analysis, data warehousing, Snowflake, ETL, data modeling, workflow management tools, Informatica, Power BI, Python, DBT, AWS, SQL, pipelines, Azure, banking domain, DWH, GCP, Fivetran
Posted 2 weeks ago
5.0 years
20 - 25 Lacs
Pune, Maharashtra, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), as well as AWS or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data integration, data analysis, data warehousing, Snowflake, ETL, data modeling, workflow management tools, Informatica, Power BI, Python, DBT, AWS, SQL, pipelines, Azure, banking domain, DWH, GCP, Fivetran
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Warehouse Engineer at Myridius, you will need solid SQL skills and a basic knowledge of data modeling. Your role will involve working with Snowflake in Azure and with CI/CD processes using any tooling. Additionally, familiarity with Azure ADF and ETL/ELT frameworks would be beneficial for this position. It would be advantageous to have experience in ER/Studio and a good understanding of the healthcare/life sciences industry. Knowledge of GxP processes will be a plus in this role.

For a Senior Data Warehouse Engineer position, you will be overseeing engineers while actively engaging in the same tasks. Your responsibilities will include conducting design reviews, code reviews, and deployment reviews with engineers. You should have expertise in solid data modeling, preferably using ER/Studio or an equivalent tool. Optimizing Snowflake SQL queries to enhance performance and familiarity with the medallion architecture will be key aspects of this role.

At Myridius, we are dedicated to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we drive a new vision to propel organizations through rapidly evolving technology and business landscapes. Our commitment to exceeding expectations ensures measurable impact and fosters sustainable innovation. Together with our clients, we co-create solutions that anticipate future trends and help businesses thrive in a world of continuous change. If you are passionate about driving significant growth and maintaining a competitive edge in the global market, join Myridius in crafting transformative outcomes and elevating businesses to new heights of innovation. Visit www.myridius.com to learn more about how we lead the change.
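The senior variant of the role mentions medallion architecture. Purely as an illustrative sketch (the paths, columns, and table contents are assumptions, and a Delta-capable Spark runtime is assumed), the bronze/silver/gold layering pattern looks roughly like this:

```python
# Illustrative medallion (bronze/silver/gold) layering sketch. All paths, columns,
# and table names are assumptions; a Delta-capable Spark runtime is assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: land the raw data as-is, keeping a lineage column.
bronze = (
    spark.read.json("/mnt/landing/claims/")        # hypothetical raw feed
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/bronze/claims")

# Silver: clean, de-duplicate, and conform types.
silver = (
    spark.read.format("delta").load("/mnt/bronze/claims")
    .dropDuplicates(["claim_id"])
    .withColumn("claim_date", F.to_date("claim_date"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/claims")

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("claim_date").agg(F.sum("claim_amount").alias("total_claims"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/claims_daily")
```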
Posted 2 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

Job Title: Power BI + Knime. Location: Bangalore, Pune, Chennai, Hyderabad. Work Mode: Hybrid. Experience: 7+ years (5 years relevant). Job Type: Contract to hire (C2H). Notice Period: Immediate joiners. Mandatory Skills: Power BI, Knime, Japanese language skills.

Additional Skills:
• Develop Power BI reports of medium and high complexity independently.
• Experience in developing ETL pipelines using Knime is required.
• Knowledge of Power BI to import data from various sources such as SQL Server, Excel, etc.
• Experience in Power Platform – Power BI, Power Automate and Power Apps – will be an added advantage.
• Should be familiar with Power BI Gen 2.
• Write DAX queries, implement row-level security and configure gateways in Power BI services.
• Experience in performance tuning of dashboards and refreshes is a must-have.
• Experience in modeling with Azure Analysis Services would be nice to have.
• Experience of working on the Azure platform is preferred.
• Power BI administration and configuration.
• Power BI maintenance (workspace and security, data models and measures in datasets, deployment pipelines, refresh schedules).
• Responsible for development and maintenance of the existing applications and development of any changes or fixes to the current design.
• Knowledgeable in building data models for reporting analytics solutions.
• Knowledgeable in integrations between back-end and front-end services (e.g., gateways).
• Familiarity with cloud technologies, primarily MS Azure – Databricks, ADF, SQL DB, Storage Accounts, Key Vault, Application Gateways, VNets, Azure Portal management – is a good-to-have skill.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings from Infosys BPM We are hiring for Sr. Analyst - SQL + ETL Developer with Power BI Experience for Bangalore or Pune Location. Job Role: Senior Analyst Experience: 5+ yrs Job Location: Bangalore/Pune Mandatory skills: SQL + ETL (SSIS) + Power BI Job Description: We are seeking an experienced and highly skilled SQL + ETL Developer with intermediate Power BI experience to join our Reporting COE team. The ideal candidate will have extensive hands-on experience with SQL database development, ETL processes, and data visualization, as well as proficiency in using Power BI to transform business requirements into actionable insights. Key Responsibilities: SQL Database Development: Design, develop, and maintain complex SQL queries, stored procedures, views, and functions. Optimize SQL queries for performance and efficiency across large datasets. Ensure data quality, accuracy, and consistency across SQL databases. ETL Development: Design, implement, and manage Extract, Transform, Load (ETL) processes to move and transform data from various source systems into target data warehouses or databases. Develop, monitor, and troubleshoot ETL workflows to ensure smooth and accurate data integration. Work with business and technical teams to understand data requirements and ensure seamless integration of systems. Power BI Reporting and Visualization: Develop interactive and insightful Power BI reports and dashboards. Transform business requirements into dynamic Power BI reports that meet business needs. Troubleshoot and optimize Power BI reports for performance and usability. Required Skills and Experience: SQL Development: 5-8 years of experience in writing complex SQL queries, stored procedures, and optimizing database performance. ETL Tools: 5-8 years of Strong experience with ETL tools (such as SSIS or ADF) for data extraction, transformation, and loading. Power BI Experience: At least 2-3 years of hands-on experience with Power BI (intermediate level), including creating reports, dashboards, and data models. Proficient in Power BI DAX expressions, Power Query, and integrating Power BI with various data sources (SQL, Excel, etc.) Thanks & Regards Infosys BPM
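The role centres on complex SQL, stored procedures, and SSIS/ADF-driven ETL. As a hedged illustration, calling a parameterized SQL Server stored procedure from Python might look like the sketch below; the connection details and procedure name are placeholders, and pyodbc with the SQL Server ODBC driver is assumed.

```python
# Hedged sketch: call a parameterized SQL Server stored procedure from Python.
# Connection details and the procedure name are placeholders, not real values.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>;DATABASE=<database>;UID=<user>;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=yes"
)

cur = conn.cursor()
# The '?' placeholder keeps the call parameterized (no string concatenation).
cur.execute("EXEC dbo.usp_refresh_sales_summary ?", "2024-01-01")
conn.commit()   # persist any changes made by the procedure
cur.close()
conn.close()
```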
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
Vishakhapatnam, Andhra Pradesh, India
On-site
About company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, and operates in over 60 countries, serving clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: PaaS Developer. Exp: 6-8 years. Location: Vizag / Vishakhapatnam. Salary: As per market. Notice Period: 0-15 days / serving. Mode of Hire: Contract.

JD: The candidate should have good experience in integrating and extending Oracle Fusion ERP applications via PaaS services. Hands-on experience in creating custom user interfaces, integrating with existing web services and building custom business objects (tables) within Oracle Fusion applications. Hands-on experience in design and development of PaaS extensions that can be used for data synchronization between Oracle Fusion and external systems. Hands-on experience in building custom Java applications using PaaS services like Oracle Java Cloud Service (JCS) and Oracle Database Cloud Service (DBCS). Hands-on experience in developing PaaS extensions for the creation of custom business objects (tables) in DBaaS or ATP. Hands-on experience in developing PaaS extensions using VBCS to create custom applications and UI elements that integrate with Oracle Fusion. Hands-on experience in customization of existing pages and objects using Application Composer and Page Composer. Possess knowledge and experience in PaaS extensions with appropriate security measures, such as using JWT UserToken, etc. Possess knowledge and experience in solution building in an environment where API Gateway, OIC, SOACS, and WebServices are deployed for data interfacing. Multiple implementation experience using SOA, Web Services, J2EE/JSF/Oracle ADF (Fusion Middleware) in EBS on-premise projects. Provide leadership to onshore and offshore resources in the project team, and perform customer-facing activities including conducting discovery sessions, solution workshops, presentations, and POCs. The resource should have 6-8 years of Oracle Cloud technical experience. Should have knowledge of reports using BI Publisher (RTF, XSL templates), OTBI analyses, and dashboards. Identifying and resolving technical issues related to Oracle Fusion applications and integrations. Performance tuning: monitoring and optimizing system performance. The resource should have basic Oracle SCM and Finance-related functional knowledge.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Senior Data Engineer at Veersa, you will play a crucial role in architecting and implementing scalable data integration and pipeline solutions using Azure cloud services. Your expertise in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations will be key in designing, developing, and maintaining efficient data processes. You will have the opportunity to work with a team of talented professionals across technologies such as SQL, Python, Airflow, and Bash scripting. Your responsibilities will include building and automating data workflows, collaborating with stakeholders to understand data requirements, and implementing best practices for data modeling, metadata management, and data governance.

In addition to your technical responsibilities, you will also be expected to mentor junior engineers, lead code reviews, and contribute to shaping the engineering culture and standards of the team. Your ability to stay current with emerging technologies and recommend tools or processes to enhance the team's effectiveness will be highly valued.

To qualify for this role, you must hold a B.Tech or B.E degree in Computer Science, Information Systems, or a related field, with a minimum of 3 years of experience in data engineering, focusing on Azure-based solutions. Proficiency in SQL and Python is essential, along with experience in developing and orchestrating pipelines using Airflow and writing automation scripts using Bash. Experience with Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks is preferred.

If you have a strong problem-solving mindset, excellent communication skills, and a passion for data engineering, we encourage you to apply for this position and be a part of our innovative and dynamic team at Veersa.
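The role involves developing and orchestrating pipelines using Airflow alongside Bash automation. A minimal, hedged Airflow DAG sketch follows; the DAG id, schedule, and task logic are placeholders rather than anything specific to Veersa's stack.

```python
# Minimal Airflow DAG sketch: one Python extract step followed by a Bash load step.
# DAG id, schedule, and task logic are placeholders, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder extract step; in practice this might call an API or query a DB.
    print("extracting orders...")


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # run once per day (Airflow 2.4+ parameter name)
    catchup=False,              # do not backfill historical runs
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = BashOperator(task_id="load_orders", bash_command="echo 'loading orders...'")

    extract >> load             # run extract before load
```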
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
NTT DATA is looking for a Sr. ETL Developer to join their team in Bangalore, Karnataka, India. As a Sr. ETL Developer, you should have strong hands-on experience in SQL and PL/SQL, including procedures and functions. Expertise in ETL flows and jobs, with ADF pipeline experience preferred, is essential for this role. Additionally, experience in MS SQL (preferred), Oracle DB, PostgreSQL, MySQL, data warehouses/data marts, data structures/models, integrity constraints, and performance tuning, as well as knowledge of the insurance domain, is required. The ideal candidate should have a total experience of 7-10 years.

NTT DATA is a trusted global innovator of business and technology services, with a commitment to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, they have diverse experts in more than 50 countries and a robust partner ecosystem. Their services range from business and technology consulting to data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure globally and is part of the NTT Group, which invests significantly in R&D to support organizations and society in transitioning confidently and sustainably into the digital future. For more information, visit us at us.nttdata.com.
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company, a leader in the convenience store and fuel space with over 16,700 stores in 31 countries, serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we’re seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. This position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers.

About The Role: The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You’ll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains.

Responsibilities: Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms. Lead the technical execution of non-domain-specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines). Architect data models and reusable layers consumed by multiple downstream pods. Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks. Mentor and coach the team. Partner with product and platform leaders to ensure engineering consistency and delivery excellence. Act as an L3 escalation point for operational data issues impacting foundational pipelines. Own engineering best practices, sprint planning, and quality across the Enablement pod. Contribute to platform discussions and architectural decisions across regions.

Job Requirements: Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Relevant Experience: 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark. Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse.

Knowledge And Preferred Skills: Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices. Solid grasp of data governance, metadata tagging, and role-based access control. Proven ability to mentor and grow engineers in a matrixed or global environment. Strong verbal and written communication skills, with the ability to operate cross-functionally. Certifications in Azure, Databricks, or Snowflake are a plus. Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management). Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and data quality tools. Strong experience in ETL/ELT development, QA, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus.
Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI
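To make the parameterization and auditability patterns above concrete, here is a minimal sketch in PySpark of how a reusable, audit-stamped dimension load might look. The table names, paths, parameters, and audit columns are hypothetical and do not represent Circle K's actual framework.

```python
# Minimal sketch of a parameterized, auditable enablement pipeline step (hypothetical names).
from datetime import datetime, timezone

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enablement-dim-load").getOrCreate()

def load_reusable_dimension(source_path: str, target_table: str, run_id: str):
    """Read a raw extract, standardize it, and write a reusable dimension with audit columns."""
    df = spark.read.parquet(source_path)

    standardized = (
        df.dropDuplicates(["store_id"])                      # hypothetical business key
          .withColumn("load_run_id", F.lit(run_id))          # ties every row to a pipeline run
          .withColumn("load_ts", F.lit(datetime.now(timezone.utc).isoformat()))
    )

    # Overwriting per run keeps reruns idempotent and traceable back to the triggering run.
    standardized.write.mode("overwrite").saveAsTable(target_table)
    return standardized.count()

# Parameters would typically arrive from ADF pipeline parameters or a config store.
rows = load_reusable_dimension("/mnt/raw/stores/", "enablement.dim_store", run_id="run-2024-06-01")
print(f"Loaded {rows} rows into enablement.dim_store")
```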
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.
About The Role
We are looking for a Senior Data Engineer with a collaborative, "can-do" attitude who is committed and motivated to making their team successful, and who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse, and will help drive Circle K's next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot, and support ETL pipelines and the cloud infrastructure involved in the process, and will support the visualization team.
Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
Determine solutions that are best suited to develop a pipeline for a particular data source
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
Be efficient in ETL/ELT development using Azure cloud services and Snowflake, including testing and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
Build a cross-platform data strategy to aggregate multiple sources and process development datasets
Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues and recommend resolutions
Job Requirements
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
5+ years of experience setting up and operating data pipelines using Python or SQL
5+ years of advanced SQL programming: PL/SQL, T-SQL
5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
5+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
5+ years of experience defining and enabling data quality standards for auditing and monitoring
Strong analytical abilities and strong intellectual curiosity
In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
Understanding of REST and good API design
Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
Strong collaboration and teamwork skills, and excellent written and verbal communication skills
Self-starter, motivated, and able to work in a fast-paced development environment
Agile experience highly desirable
Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools
Knowledge
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks (see the sketch after the technology list below)
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
ADF, Databricks, and Azure certifications are a plus
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
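As a rough illustration of delivering curated Databricks output into Snowflake (referenced above), the sketch below uses the Snowflake Spark connector. The connection values, table names, and credential handling are placeholders, not this team's actual configuration; in practice the secrets would come from a Databricks secret scope or Key Vault.

```python
# Minimal sketch: pushing a curated Databricks DataFrame into Snowflake via the Spark connector.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("to-snowflake").getOrCreate()

sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",   # placeholder account URL
    "sfUser": "SVC_DATA_ENG",                      # placeholder service user
    "sfPassword": "<from-secret-scope>",           # never hard-code in real pipelines
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "TRANSFORM_WH",
}

curated = spark.table("curated.daily_sales")       # hypothetical curated Databricks table

(curated.write
    .format("snowflake")                           # Snowflake Spark connector
    .options(**sf_options)
    .option("dbtable", "DAILY_SALES")              # target Snowflake table
    .mode("overwrite")
    .save())
```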
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.
About The Role
We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and motivated to making their team successful, and who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse, and will help drive Circle K's next phase in the digital journey by transforming data to achieve actionable business outcomes.
Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
Determine solutions that are best suited to develop a pipeline for a particular data source
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
Be efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
Build a cross-platform data strategy to aggregate multiple sources and process development datasets
Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues and recommend resolutions
Job Requirements
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
3+ years of experience setting up and operating data pipelines using Python or SQL
3+ years of advanced SQL programming: PL/SQL, T-SQL
3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
3+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
3+ years of experience defining and enabling data quality standards for auditing and monitoring (a reconciliation sketch follows the technology list below)
Strong analytical abilities and strong intellectual curiosity
In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
Understanding of REST and good API design
Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
Strong collaboration and teamwork skills, and excellent written and verbal communication skills
Self-starter, motivated, and able to work in a fast-paced development environment
Agile experience highly desirable
Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools
Preferred Skills
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
ADF, Databricks, and Azure certifications are a plus
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
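For the data quality and audit expectations referenced above, the sketch below shows one simple post-load reconciliation check that could run as the last step of a pipeline. The table and column names and the failure behavior are hypothetical, intended only to illustrate the pattern of comparing counts and checksums between a staging source and a target.

```python
# Minimal sketch of a post-load reconciliation/audit check (hypothetical table and column names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon-check").getOrCreate()

def reconcile(source_table: str, target_table: str, amount_col: str = "net_amount"):
    """Compare row counts and an amount checksum between source and target; raise on drift."""
    src = spark.table(source_table).agg(F.count("*").alias("rows"), F.sum(amount_col).alias("total")).first()
    tgt = spark.table(target_table).agg(F.count("*").alias("rows"), F.sum(amount_col).alias("total")).first()

    if src["rows"] != tgt["rows"] or src["total"] != tgt["total"]:
        raise ValueError(
            f"Reconciliation failed: source={src['rows']}/{src['total']} target={tgt['rows']}/{tgt['total']}"
        )
    return {"rows": tgt["rows"], "total": tgt["total"]}

# Typically wired in as the final activity of the load so a mismatch fails the run visibly.
print(reconcile("staging.sales", "dw.fact_sales"))
```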
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking a skilled and enthusiastic Oracle ERP Techno-Functional Consultant/Architect to join our expanding team in Bangalore, India. Your primary responsibility will be driving the implementation and enhancement of critical Oracle ERP solutions across the financials, supply chain, and HCM domains. This is an exciting opportunity to lead complex Oracle Cloud and EBS projects and contribute to the digital transformation of a dynamic global organization.
As an Oracle ERP Techno-Functional Consultant/Architect, you will oversee the complete technical execution of Oracle ERP implementations, spanning both Oracle Fusion Cloud and EBS R12. You will design and deliver scalable solutions across modules such as FIN, SCM, HCM, and Taleo; develop and manage integrations using tools like Oracle Integration Cloud (OIC), SOAP/REST web services, ADF, and VBCS (an illustrative REST sketch follows this posting); and deliver robust reporting solutions using Oracle BI, OTBI, and ADFDI. You will also handle technical upgrades, cloud-to-on-prem integrations, data migration, and workflow development, working closely with functional consultants, business stakeholders, and offshore teams to translate requirements into technical designs. In addition, you will manage customization, personalization, and performance tuning across multiple Oracle ERP modules.
The ideal candidate has at least 5 years of hands-on experience with Oracle ERP (EBS and/or Fusion Cloud) and demonstrates proficiency in Oracle Integration Cloud (OIC), PL/SQL, SQL tuning, BI/OTBI, ADF, VBCS, APEX, SOAP/REST APIs, and Oracle Workflow and personalization. You should also have a proven track record of leading multiple full-cycle ERP implementations, a strong technical architecture background, and excellent stakeholder management skills. Familiarity with tools like JDeveloper, TOAD, Postman, SOAP UI, and SVN/Git is an added advantage. Candidates with Oracle certifications (e.g., 1Z0-1042-22, 1Z0-1072-22), experience across industries like healthcare, manufacturing, or retail, and exposure to Taleo integrations and HCM Extracts will be considered favorably.
By joining our team, you will play a key role in enterprise-wide ERP transformation projects, collaborate with experienced ERP professionals and solution architects, lead global initiatives, mentor teams, and benefit from competitive salary packages, upskilling programs, and long-term growth prospects. If you are passionate about Oracle ERP solutions and eager to help build smarter, faster, and future-ready systems, we encourage you to apply now and be part of our exciting journey in Bangalore.
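As an illustrative example of the SOAP/REST integration work described above, the sketch below pulls records from a Fusion Cloud REST resource using Python's requests library. The host, resource path, query parameters, and credentials are placeholders only and are not a prescribed Oracle endpoint or this role's actual integration design.

```python
# Minimal sketch: reading records from an Oracle Fusion Cloud REST resource with basic auth.
# The base URL, resource path, and credentials below are illustrative placeholders.
import requests

BASE_URL = "https://example-fa.oraclecloud.com"          # placeholder Fusion pod URL
RESOURCE = "/fscmRestApi/resources/latest/invoices"      # illustrative resource path

def fetch_invoices(limit: int = 25):
    resp = requests.get(
        f"{BASE_URL}{RESOURCE}",
        params={"limit": limit, "onlyData": "true"},
        auth=("integration_user", "<password-from-vault>"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

for inv in fetch_invoices():
    print(inv.get("InvoiceNumber"), inv.get("InvoiceAmount"))
```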
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
The Data Quality Monitoring Lead plays a crucial role in ensuring the accuracy, reliability, and integrity of data across various systems and platforms. You will lead an offshore team, establish robust data quality monitoring frameworks, and collaborate with cross-functional stakeholders to address data-related challenges effectively.
Your responsibilities will include overseeing real-time monitoring of data pipelines, dashboards, and logs using tools like Log Analytics, KQL queries, and Azure Monitor to detect anomalies promptly (an illustrative monitoring sketch follows this posting). You will configure alerting mechanisms for timely notification of potential data discrepancies and collaborate with support teams to investigate and resolve system-related issues impacting data quality. Additionally, you will lead the team in identifying and categorizing data quality issues, perform root cause analysis to determine underlying causes, and work with system support teams and data stewards to implement corrective measures. Developing strategies for rectifying data quality issues, designing monitoring tools, and conducting cross-system data analysis will also be part of your role.
Moreover, you will evaluate existing data monitoring processes, refine monitoring tools, and promote best practices in data quality monitoring to ensure standardization across all data-related activities. You will also lead and mentor an offshore team, develop a centralized knowledge base, and serve as the primary liaison between the offshore team and the Lockton Data Quality Lead.
On the technical side, the role requires proficiency with data monitoring tools such as Log Analytics, KQL, Azure Monitor, and Power BI; a strong command of SQL; experience with automation scripting in Python; familiarity with Azure services; and an understanding of data flows involving the Mulesoft and Salesforce platforms. Experience with Azure DevOps for issue tracking and version control is preferred.
This role calls for a proactive, detail-oriented individual with strong leadership and communication skills and a solid technical background in data monitoring, analytics, database querying, automation scripting, and Azure services.
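To illustrate the kind of Log Analytics/KQL monitoring described above, here is a minimal sketch using the azure-monitor-query SDK. The workspace ID, custom log table, column names, and the 5% alert threshold are hypothetical; a real setup would likely route the alert through Azure Monitor alert rules or an action group rather than a print statement.

```python
# Minimal sketch: running a KQL health check against Log Analytics and flagging anomalies.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-id>"    # placeholder workspace ID
KQL = """
PipelineRuns_CL                                   // hypothetical custom log table
| where TimeGenerated > ago(1h)
| summarize failed = countif(Status_s == 'Failed'), total = count()
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for failed, total in table.rows:
            if total and failed / total > 0.05:   # placeholder 5% failure threshold
                print(f"ALERT: {failed}/{total} pipeline runs failed in the last hour")
```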
Posted 2 weeks ago