189 DBT Jobs - Page 2

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office

Naukri logo

Role & responsibilities: Senior Site Reliability Engineer - Data Platform

Role Summary: The primary responsibility of the Senior Site Reliability Engineer (SRE) is to ensure the reliability and performance of data systems while developing, automating, and testing data pipelines from extract through to population of the consumption layer for the GPN Lakehouse. The role combines data analytics, testing, and system architecture to deliver reliable data pipelines that enable business solutions. At a minimum, SRE engineers are expected to perform the following tasks: ETL process management, data modeling, data warehouse/lake architecture, ETL tool implementation, data pipeline development, system monitoring, incident response, data lineage tracking, and ETL unit testing.

Notice period: Immediate joiners only.
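For illustration only (not part of the posting): a minimal sketch of the kind of freshness and row-count check the monitoring and incident-response duties above imply. The table name ANALYTICS.CONSUMPTION.FCT_ORDERS, the loaded_at column, the MONITORING_WH warehouse, and the environment-variable credentials are all hypothetical.

```python
import os
import snowflake.connector

def check_table_freshness(table: str, max_lag_hours: int = 24) -> None:
    """Fail if the table is empty or has not been loaded within the allowed window."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="MONITORING_WH",  # hypothetical warehouse
    )
    try:
        cur = conn.cursor()
        cur.execute(
            f"SELECT COUNT(*), DATEDIFF('hour', MAX(loaded_at), CURRENT_TIMESTAMP()) FROM {table}"
        )
        row_count, lag_hours = cur.fetchone()
        assert row_count > 0, f"{table} is empty"
        assert lag_hours is not None and lag_hours <= max_lag_hours, (
            f"{table} is stale: last load was {lag_hours} hours ago"
        )
    finally:
        conn.close()

if __name__ == "__main__":
    check_table_freshness("ANALYTICS.CONSUMPTION.FCT_ORDERS")  # hypothetical table
```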

Posted 6 days ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

ETL testers with automation testing experience in DBT and Snowflake, plus experience with Talend.
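For illustration only (not part of the posting): one way an ETL automation tester might script a source-to-target row-count reconciliation in Snowflake. Connection details and the table names RAW.STAGE_ORDERS and MART.FCT_ORDERS are hypothetical.

```python
import snowflake.connector

def reconcile_counts(conn, source_table: str, target_table: str) -> None:
    """Compare row counts between a staged source table and its loaded target."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {source_table}")
    source_count = cur.fetchone()[0]
    cur.execute(f"SELECT COUNT(*) FROM {target_table}")
    target_count = cur.fetchone()[0]
    assert source_count == target_count, (
        f"Row-count mismatch: {source_table}={source_count}, {target_table}={target_count}"
    )

conn = snowflake.connector.connect(
    account="my_account", user="qa_tester", password="***",  # hypothetical credentials
    warehouse="QA_WH", database="ANALYTICS",
)
try:
    reconcile_counts(conn, "RAW.STAGE_ORDERS", "MART.FCT_ORDERS")  # hypothetical tables
finally:
    conn.close()
```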

Posted 6 days ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Navi Mumbai, Maharashtra, India

On-site

Foundit logo

Role Overview
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems.

Responsibilities: Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Required Education: Bachelor's Degree
Preferred Education: Master's Degree

Required Technical and Professional Expertise
Cloud & Data Architecture: AWS, Snowflake
ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
Big Data & Analytics: Athena, Presto, Hadoop
Database & Storage: SQL, SnowSQL
Security & Compliance: IAM, KMS, Data Masking

Preferred Technical and Professional Experience
Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
Data Transformation: DBT (Data Build Tool) for ELT pipeline management
Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)
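For illustration only (not part of the posting): a sketch of the "Data Masking" item under Security & Compliance, applying a Snowflake dynamic data-masking policy from Python. Dynamic masking is a Snowflake Enterprise Edition feature; the role, table, and column names here are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="data_architect", password="***",  # hypothetical credentials
    role="SECURITYADMIN", warehouse="ADMIN_WH", database="ANALYTICS", schema="CUSTOMER",
)
cur = conn.cursor()

# Show email addresses only to a privileged role; mask them for everyone else
cur.execute("""
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val ELSE '***MASKED***' END
""")

# Attach the policy to a column on a hypothetical customers table
cur.execute("ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask")
conn.close()
```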

Posted 6 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

IBM Consulting Overview
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in selecting the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle challenges related to database integration and work with complex, unstructured data sets.

Primary Responsibilities:
Implement and validate predictive models, as well as create and maintain statistical models with a focus on big data, incorporating various machine learning techniques.
Design and implement enterprise search applications such as Elasticsearch and Splunk for client requirements.
Work in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators across disciplines to apply analytical rigor to predictive modeling challenges.
Develop teams or write programs to cleanse and integrate data efficiently and develop predictive or prescriptive models.
Evaluate modeling results to ensure accuracy and effectiveness.

Required Education: Bachelor's Degree
Preferred Education: Master's Degree

Required Technical and Professional Expertise
Strong experience in SQL.
Strong experience in DBT (Data Build Tool).
Strong understanding of Data Warehousing concepts.
Strong experience in AWS or other cloud platforms.
Redshift experience is a plus.

Preferred Technical and Professional Experience
Ability to thrive in teamwork settings with excellent verbal and written communication skills.
Capability to communicate with internal and external clients to understand business needs and provide analytical solutions.
Ability to present results to both technical and non-technical audiences effectively.

Posted 6 days ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

As a Data Engineer specializing in DBT, you'll be joining one of IBM Consulting's Client Innovation Centers here in Hyderabad. In this role, you'll contribute your deep technical and industry expertise to a variety of public and private sector clients, driving innovation and adopting new technologies.

Your Responsibilities
Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability.
Collaborate with data analysts, engineers, and business teams to align data transformations with business needs.
Monitor and troubleshoot data pipelines to ensure accuracy and performance.
Work with Azure-based cloud technologies to support data storage, transformation, and processing.

Required Qualifications
Education: Bachelor's Degree
Technical & Professional Expertise:
Strong MS SQL and Azure Databricks experience.
Ability to implement and manage data models in DBT, focusing on data transformation and alignment with business requirements.
Experience ingesting raw, unstructured data into structured datasets within a cloud object store.
Proficiency in utilizing DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
Skilled in writing and optimizing SQL queries within DBT to enhance data transformation processes and improve overall performance.

Preferred Qualifications
Education: Master's Degree
Technical & Professional Expertise:
Ability to establish best DBT processes to improve performance, scalability, and reliability.
Experience in designing, developing, and maintaining scalable data models and transformations using DBT in conjunction with Databricks.
Proven interpersonal skills, contributing to team efforts and achieving results as required.
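For illustration only (not part of the posting): a sketch of a dbt Python model (supported by dbt-databricks with dbt-core 1.3+) that turns raw events into a structured staging dataset, in the spirit of the "convert raw, unstructured data into structured datasets" requirement. The source, column, and file names are hypothetical.

```python
# models/staging/stg_page_views.py  (hypothetical dbt Python model)
import pyspark.sql.functions as F

def model(dbt, session):
    # Materialize the result as a table in the lakehouse
    dbt.config(materialized="table")

    # On Databricks, dbt.source() returns a Spark DataFrame
    raw = dbt.source("raw_events", "page_views")  # hypothetical dbt source

    return (
        raw
        .withColumn("event_ts", F.to_timestamp("event_time"))
        .withColumn("user_id", F.col("payload.user_id"))
        .filter(F.col("user_id").isNotNull())
        .select("event_id", "user_id", "event_ts", "page_url")
    )
```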

Posted 6 days ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
Cloud & Data Architecture: AWS, Snowflake
ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
Big Data & Analytics: Athena, Presto, Hadoop
Database & Storage: SQL, SnowSQL
Security & Compliance: IAM, KMS, Data Masking

Preferred technical and professional experience
Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
Data Transformation: DBT (Data Build Tool) for ELT pipeline management
Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)

Posted 1 week ago

Apply

3.0 - 6.0 years

2 - 6 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability.
Collaborate with data analysts, engineers, and business teams to align data transformations with business needs.
Monitor and troubleshoot data pipelines to ensure accuracy and performance.
Work with Azure-based cloud technologies to support data storage, transformation, and processing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
Strong MS SQL and Azure Databricks experience.
Implement and manage data models in DBT, with data transformation and alignment to business requirements.
Ingest raw, unstructured data into structured datasets in a cloud object store.
Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.

Preferred technical and professional experience
Establish best DBT processes to improve performance, scalability, and reliability.
Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks.
Proven interpersonal skills while contributing to team effort by accomplishing related results as required.

Posted 1 week ago

Apply

1.0 - 4.0 years

1 - 4 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Your role and responsibilities
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
Strong MS SQL and Azure Databricks experience.
Implement and manage data models in DBT, with data transformation and alignment to business requirements.
Ingest raw, unstructured data into structured datasets in a cloud object store.
Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.

Preferred technical and professional experience
Establish best DBT processes to improve performance, scalability, and reliability.
Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks.
Proven interpersonal skills while contributing to team effort by accomplishing related results as required.

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Cochin / Kochi / Ernakulam, Kerala, India

On-site

Foundit logo

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
Develop triggers, functions, and stored procedures to support this effort.
Assist with impact analysis of changing upstream processes to Data Warehouse and Reporting systems.
Assist with design, testing, support, and debugging of new and existing ETL and reporting processes.
Perform data profiling and analysis using a variety of tools.
Troubleshoot and support production processes.
Create and maintain documentation.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 5+ years of experience.
Must have: Snowflake, AWS, complex SQL.
Experience with software architecture in cloud-based infrastructures.
Experience in ETL processes and data modelling techniques.
Experience in designing and managing large-scale data warehouses.

Preferred technical and professional experience
Good to have, in addition to the must-haves: DBT, Tableau, Python, JavaScript.
Develop complex SQL queries for data analytics and business intelligence.
Background working with data analytics, business intelligence, or related fields.
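For illustration only (not part of the posting): a quick data-profiling query of the kind the "perform data profiling and analysis" duty describes, run from Python against Snowflake. The table and connection details are hypothetical; fetch_pandas_all() needs the pandas extra of snowflake-connector-python installed.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="***",  # hypothetical credentials
    warehouse="ANALYTICS_WH", database="EDW", schema="SALES",
)
cur = conn.cursor()
cur.execute("""
    SELECT order_status,
           COUNT(*)                                              AS row_count,
           COUNT(DISTINCT customer_id)                           AS distinct_customers,
           SUM(CASE WHEN order_total IS NULL THEN 1 ELSE 0 END)  AS null_totals
    FROM fact_orders            -- hypothetical table
    GROUP BY order_status
    ORDER BY row_count DESC
""")
profile = cur.fetch_pandas_all()   # returns a pandas DataFrame
print(profile.to_string(index=False))
conn.close()
```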

Posted 1 week ago

Apply

6.0 - 11.0 years

40 Lacs

Chennai

Hybrid

Naukri logo

Act as a Data Architect/Engineer and implement data solutions across the Retail industry (SCM, Marketing, Sales, and Customer Service), using technologies such as DBT, Snowflake, and Azure/AWS/GCP.
Design and optimize data pipelines that integrate various data sources (1st party, 3rd party, operational) to support business intelligence and advanced analytics.
Develop data models and data flows that enable personalized customer experiences and support omnichannel marketing and customer engagement.
Lead efforts to ensure data governance, data quality, and data security, adhering to compliance with regulations such as GDPR and CCPA.
Implement and maintain data warehousing solutions in Snowflake to handle large-scale data processing and analytics needs.
Optimize workflows using DBT to streamline data transformation and modeling processes.
Leverage Azure for cloud infrastructure, data storage, and real-time data analytics, while ensuring the architecture supports scalability and performance.
Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to ensure data architectures meet business needs.
Support both real-time and batch data integration, ensuring data is accessible for actionable insights and decision-making.
Continuously assess and integrate new data technologies and methodologies to enhance the organization's data capabilities.

Qualifications:
6+ years of experience in Data Architecture or Data Engineering, with specific expertise in DBT, Snowflake, and Azure/AWS/GCP.
Strong understanding of data modeling, ETL/ELT processes, and modern data architecture frameworks.
Experience designing scalable data architectures for personalization and customer analytics across marketing, sales, and customer service domains.
Expertise with cloud data platforms (Azure preferred) and Big Data technologies for large-scale data processing.
Hands-on experience with Python for data engineering tasks and scripting.
Proven track record of building and managing data pipelines and data warehousing solutions using Snowflake.
Familiarity with Customer Data Platforms (CDP), Master Data Management (MDM), and Customer 360 architectures.
Strong problem-solving skills and ability to work with cross-functional teams to translate business requirements into scalable data solutions.

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

Role & responsibilities:
Complex SQL – joins, group by, window functions
Spark execution & tuning – tasks, joins/agg, partitionBy, AQE, pruning
Airflow – job scheduling, task dependencies, debugging
DBT + data modeling – staging, intermediate, mart layers, SQL tests
Cost/performance optimization – repartition, broadcast joins, caching
Tooling basics – Git (merge conflicts), Linux, basic Python/Java, Docker
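For illustration only (not part of the posting): the Spark tuning levers named above (AQE, broadcast joins, repartition, partitionBy) expressed in PySpark. Paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("tuning-demo")
    .config("spark.sql.adaptive.enabled", "true")                      # AQE
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)

orders = spark.read.parquet("s3://bucket/orders/")        # large fact table
customers = spark.read.parquet("s3://bucket/customers/")  # small dimension

# Broadcast the small side so the join avoids a full shuffle
enriched = orders.join(F.broadcast(customers), "customer_id")

daily = (
    enriched
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
    .repartition("order_date")   # control shuffle partitioning before the write
)

# Partition the output so downstream readers can prune by date
daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://bucket/marts/daily_revenue/")
```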

Posted 1 week ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

JD:
Complex SQL – joins, group by, window functions
Spark execution & tuning – tasks, joins/agg, partitionBy, AQE, pruning
Airflow – job scheduling, task dependencies, debugging
DBT + data modeling – staging, intermediate, mart layers, SQL tests
Cost/performance optimization – repartition, broadcast joins, caching
Tooling basics – Git (merge conflicts), Linux, basic Python/Java, Docker

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Senior Data Engineer - Enterprise Data Platform

Get to know Data Engineering
Okta's Business Operations team is on a mission to accelerate Okta's scale and growth. We bring world-class business acumen and technology expertise to every interaction. We also drive cross-functional collaboration and are focused on delivering measurable business outcomes. Business Operations strives to deliver amazing technology experiences for our employees and ensure that our offices have all the technology that is needed for the future of work. The Data Engineering team is focused on building platforms and capabilities that are utilized across the organization by sales, marketing, engineering, finance, product, and operations. The ideal candidate will have a strong engineering background with the ability to tie engineering initiatives to business impact. You will be part of a team doing detailed technical designs, development, and implementation of applications using cutting-edge technology stacks.

The Senior Data Engineer Opportunity
A Senior Data Engineer is responsible for designing, building, and maintaining scalable solutions. This role involves collaborating with data engineers, analysts, scientists, and other engineers to ensure data availability, integrity, and security. The ideal candidate will have a strong background in cloud platforms, data warehousing, infrastructure as code, and continuous integration/continuous deployment (CI/CD) practices.

What you'll be doing:
Design, develop, and maintain scalable data platforms using AWS, Snowflake, dbt, and Databricks.
Use Terraform to manage infrastructure as code, ensuring consistent and reproducible environments.
Develop and maintain CI/CD pipelines for data platform applications using GitHub and GitLab.
Troubleshoot and resolve issues related to data infrastructure and workflows.
Containerize applications and services using Docker to ensure portability and scalability.
Conduct vulnerability scans and apply necessary patches to ensure the security and integrity of the data platform.
Work with data engineers to design and implement Secure Development Lifecycle practices and security tooling (DAST, SAST, SCA, Secret Scanning) in automated CI/CD pipelines.
Ensure data security and compliance with industry standards and regulations.
Stay updated on the latest trends and technologies in data engineering and cloud platforms.

What we are looking for:
BS in Computer Science, Engineering, or another quantitative field of study.
5+ years in a data engineering role.
5+ years of experience working with SQL and ETL tools such as Airflow and dbt, with relational and columnar MPP databases like Snowflake or Redshift, and hands-on experience with AWS (e.g., S3, Lambda, EMR, EC2, EKS).
2+ years of experience managing CI/CD infrastructures, with strong proficiency in tools like GitHub Actions, Jenkins, ArgoCD, GitLab, or any CI/CD tool to streamline deployment pipelines and ensure efficient software delivery.
2+ years of experience with Java, Python, Go, or similar backend languages.
Experience with Terraform for infrastructure as code.
Experience with Docker and containerization technologies.
Experience working with lakehouse architectures such as Databricks and file formats like Iceberg and Delta.
Experience in designing, building, and managing complex deployment pipelines.

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 22 Lacs

Bengaluru

Hybrid

Naukri logo

We are looking for a candidate seasoned in handling Data Warehousing challenges: someone who enjoys learning new technologies, does not hesitate to bring their perspective to the table, is enthusiastic about working in a team, and can own and deliver long-term projects to completion.

Responsibilities:
• Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale.
• You will need a product-focused mindset. It is essential for you to understand business requirements and architect systems that will scale and extend to accommodate those needs.
• Diagnose and solve complex problems in distributed systems, develop and document technical solutions, and sequence work to make fast, iterative deliveries and improvements.
• Build and maintain high-performance, fault-tolerant, and scalable distributed systems that can handle our massive scale.
• Provide solid leadership within your own problem space through a data-driven approach, robust software designs, and effective delegation.
• Participate in, or spearhead, design reviews with peers and stakeholders to adopt what's best suited amongst available technologies.
• Review code developed by other developers and provide feedback to ensure best practices (e.g., checking code in, accuracy, testability, and efficiency).
• Automate cloud infrastructure, services, and observability.
• Develop CI/CD pipelines and testing automation (nice to have).
• Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools.
• Groom junior engineers through mentoring and delegation.
• Drive a culture of trust, respect, and inclusion within your team.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.
• Minimum 5 years of experience curating data and hands-on experience working with ETL/ELT tools.
• Strong overall programming skills, able to write modular, maintainable code, preferably in Python and SQL.
• Strong data warehousing concepts and SQL skills; understanding of SQL, dimensional modelling, and at least one relational database.
• Experience with AWS.
• Exposure to Snowflake and ingesting data into it, or exposure to similar tools.
• Humble, collaborative team player, willing to step up and support your colleagues.
• Effective communication, problem-solving, and interpersonal skills.
• Commitment to growing deeper in the knowledge and understanding of how to improve our existing applications.

Preferred Qualifications:
• Experience with the following tools: DBT, Fivetran, Airflow.
• Knowledge and experience in Spark, Hadoop 2.0, and its ecosystem.
• Experience with automation frameworks/tools like Git, Jenkins.

Primary Skills: Snowflake, Python, SQL, DBT
Secondary Skills: Fivetran, Airflow, Git, Jenkins, AWS, SQL DBM
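For illustration only (not part of the posting): a minimal Airflow DAG chaining dbt run and dbt test, reflecting the DBT/Airflow tooling listed above. The DAG id, schedule, and project path are hypothetical; a Fivetran sync trigger would typically sit upstream of these tasks.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="warehouse_daily_build",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 6 * * *",       # run daily after ingestion lands
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt && dbt test --target prod",
    )
    dbt_run >> dbt_test   # only test after the models have been built
```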

Posted 1 week ago

Apply

5.0 - 9.0 years

5 - 7 Lacs

Chennai

Work from Office

Naukri logo

Roles & Responsibilities:
Develop, document, and maintain detailed test cases, user scenarios, and test artifacts.
Develop and execute test plans, test cases, and test scripts for DBT data models and pipelines.
Validate data transformation logic, SQL models, and end-to-end data flows.
Work closely with data engineers and analysts to ensure data accuracy and consistency across platforms.
Perform data validation and reconciliation between source and target systems.
Collaborate with data governance teams to ensure Collibra metadata is correctly mapped, complete, and up to date.
Validate business glossaries, data lineage, and metadata assets within Collibra.
Identify, document, and track data quality issues, providing recommendations for remediation.
Participate in code reviews and DBT model validation processes.
Automate QA processes where applicable, using Python, SQL, or testing frameworks.
Support data compliance, privacy, and governance initiatives through rigorous QA practices.

Knowledge and Experience:
Minimum of 5 years' experience as a software tester, with proven experience in defining and leading QA cycles.
Strong experience with DBT (Data Build Tool) and writing/validating SQL models.
Hands-on experience with Collibra for metadata management and data governance validation.
Solid understanding of data warehousing concepts and ETL/ELT processes.
Proficiency in SQL for data validation and transformation testing.
Familiarity with version control tools like Git.
Understanding of data governance, metadata, and data quality principles.
Strong analytical and problem-solving skills with attention to detail.
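For illustration only (not part of the posting): one way to automate DBT model validation from a Python QA harness, using dbt-core's programmatic entry point (available in dbt-core 1.5+). The selector and project directory are hypothetical.

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Run the schema/data tests for a model and everything downstream of it
res: dbtRunnerResult = runner.invoke(
    ["test", "--select", "stg_orders+", "--project-dir", "/opt/analytics/dbt"]
)

if not res.success:
    raise SystemExit("dbt tests failed - inspect the logs for failing assertions")

# Print the status of each executed test node
for r in res.result:
    print(r.node.name, r.status)
```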

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Consultant - Sr. Data Engineer (DBT + Snowflake)!
In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities.
Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy.
Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through ChatBot agents.
Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows.
Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance.
Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis.
Develop and maintain data documentation, best practices, and data governance protocols.
Ensure data security, privacy, and compliance with organizational and regulatory guidelines.

Responsibilities:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
Experience in data engineering, including experience working with Snowflake.
Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI.
Strong proficiency in SQL, Python, and data modeling.
Experience with data integration tools (e.g., Matillion, Talend, Informatica).
Knowledge of cloud platforms such as AWS, Azure, or GCP.
Excellent problem-solving skills, with a focus on data quality and performance optimization.
Strong communication skills and the ability to work effectively in a cross-functional team.
Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations.
Understanding of data lineage and metadata management concepts, and the ability to track and document data transformations using DBT's lineage capabilities.
Understanding of software engineering best practices and the ability to apply these principles to DBT development, including version control, code reviews, and automated testing.
Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, SnowPipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
Should have good experience in implementing CDC or SCD Type 2.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in repository tools like GitHub/GitLab or Azure Repos.

Qualifications/Minimum qualifications
B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets.

Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts

Why join Genpact
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
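For illustration only (not part of the posting): one common way to express the SCD Type 2 requirement above as a Snowflake MERGE driven from Python. Table and column names are hypothetical; in practice this logic usually lives in dbt snapshots or Streams/Tasks, and the insert of the new current row for changed customers is omitted here to keep the sketch short.

```python
import snowflake.connector

# Close out the current dimension row when tracked attributes change,
# and insert brand-new customers as the current version.
SCD2_MERGE = """
MERGE INTO dim_customer AS tgt
USING stg_customer_changes AS src
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHEN MATCHED AND (tgt.email <> src.email OR tgt.segment <> src.segment) THEN
  UPDATE SET tgt.is_current = FALSE, tgt.valid_to = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, segment, valid_from, valid_to, is_current)
  VALUES (src.customer_id, src.email, src.segment, CURRENT_TIMESTAMP(), NULL, TRUE)
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # hypothetical credentials
    warehouse="ETL_WH", database="EDW", schema="CORE",
)
conn.cursor().execute(SCD2_MERGE)
conn.close()
```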

Posted 1 week ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Pune, Bengaluru

Hybrid

Naukri logo

Key Responsibilities:
Develop, maintain, and optimize complex SQL queries and DBT models for business analytics and reporting.
Analyze large datasets stored in Snowflake to extract actionable insights and support data-driven decision-making.
Design and implement robust data pipelines using Python, ensuring data quality, integrity, and availability.
Collaborate with cross-functional teams to gather business requirements and translate them into technical solutions.
Leverage tools like Fivetran and Airflow to orchestrate and automate data workflows.
Contribute to version control and CI/CD processes using Git and Jenkins.
Support data infrastructure hosted on AWS, ensuring scalability and security.
Document data models, processes, and best practices using tools such as SQL DBM.

Required Skills and Qualifications:
Primary Skills: Proficiency in Snowflake, Python, and SQL for data analysis and transformation; experience with DBT for building scalable and modular analytics workflows.
Secondary Skills: Familiarity with Fivetran for data ingestion and Airflow for workflow orchestration; knowledge of Git and Jenkins for version control and automation; experience with AWS cloud services for data storage and compute; understanding of SQL DBM or similar tools for data modeling and documentation.
Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Consultant - Sr. Data Engineer (DBT + Snowflake)!
In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities.
Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy.
Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through ChatBot agents.
Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows.
Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance.
Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis.
Develop and maintain data documentation, best practices, and data governance protocols.
Ensure data security, privacy, and compliance with organizational and regulatory guidelines.

Responsibilities:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
Experience in data engineering, including experience working with Snowflake.
Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI.
Strong proficiency in SQL, Python, and data modeling.
Experience with data integration tools (e.g., Matillion, Talend, Informatica).
Knowledge of cloud platforms such as AWS, Azure, or GCP.
Excellent problem-solving skills, with a focus on data quality and performance optimization.
Strong communication skills and the ability to work effectively in a cross-functional team.
Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations.
Understanding of data lineage and metadata management concepts, and the ability to track and document data transformations using DBT's lineage capabilities.
Understanding of software engineering best practices and the ability to apply these principles to DBT development, including version control, code reviews, and automated testing.
Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, SnowPipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
Should have good experience in implementing CDC or SCD Type 2.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in repository tools like GitHub/GitLab or Azure Repos.

Qualifications/Minimum qualifications
B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets.

Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts

Why join Genpact
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 week ago

Apply

9.0 - 14.0 years

22 - 35 Lacs

Gurugram, Bengaluru, Delhi / NCR

Work from Office

Naukri logo

Requirement: Data Architect & Business Intelligence
Experience: 9+ Years
Location: Gurgaon (Remote)
Preferred: Immediate Joiners

Job Summary:
We are looking for a Data Architect & Business Intelligence Expert who will be responsible for designing and implementing enterprise-level data architecture solutions. The ideal candidate will have extensive experience in data warehousing, data modeling, and BI frameworks, with a strong focus on Salesforce, Informatica, DBT, IICS, and Snowflake.

Key Responsibilities:
Design and implement scalable and efficient data architecture solutions for enterprise applications.
Develop and maintain robust data models that support business intelligence and analytics.
Build data warehouses to support structured and unstructured data storage needs.
Optimize data pipelines, ETL processes, and real-time data processing.
Work with business stakeholders to define data strategies that support analytics and reporting.
Ensure seamless integration of Salesforce, Informatica, DBT, IICS, and Snowflake into the data ecosystem.
Establish and enforce data governance, security policies, and best practices.
Conduct performance tuning and optimization for large-scale databases and data processing systems.
Provide technical leadership and mentorship to development teams.

Key Skills & Requirements:
Strong experience in data architecture, data warehousing, and data modeling.
Hands-on expertise with Salesforce, Informatica, DBT, IICS, and Snowflake.
Deep understanding of ETL pipelines, real-time data streaming, and cloud-based data solutions.
Experience in designing scalable, high-performance, and secure data environments.
Ability to work with big data frameworks and BI tools for reporting and visualization.
Strong analytical, problem-solving, and communication skills.

Posted 1 week ago

Apply

10.0 - 15.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Naukri logo

Requirement: Senior Business Analyst (Data Application & Integration)
Experience: 10+ Years
Location: Gurgaon (Hybrid)
Budget Max: 35 LPA
Preferred: Immediate Joiners

Job Summary:
We are seeking an experienced Senior Business Analyst (Data Application & Integration) to drive key data and integration initiatives. The ideal candidate will have a strong business analysis background and a deep understanding of data applications, API integrations, and cloud-based platforms like Salesforce, Informatica, DBT, IICS, and Snowflake.

Key Responsibilities:
Gather, document, and analyze business requirements for data application and integration projects.
Work closely with business stakeholders to translate business needs into technical solutions.
Design and oversee API integrations to ensure seamless data flow across platforms.
Collaborate with cross-functional teams including developers, data engineers, and architects.
Define and maintain data integration strategies, ensuring high availability and security.
Work on Salesforce, Informatica, and Snowflake to streamline data management and analytics.
Develop use cases, process flows, and documentation to support business and technical teams.
Ensure compliance with data governance and security best practices.
Act as a liaison between business teams and technical teams, providing insights and recommendations.

Key Skills & Requirements:
Strong expertise in business analysis methodologies and data-driven decision-making.
Hands-on experience with API integration and data application management.
Proficiency in Salesforce, Informatica, DBT, IICS, and Snowflake.
Strong analytical and problem-solving skills.
Ability to work in an Agile environment and collaborate with multi-functional teams.
Excellent communication and stakeholder management skills.

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Naukri logo

Data Engineer
Location: Bangalore - Onsite
Experience: 8 - 15 years
Type: Full-time

Role Overview
We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases.

Key Responsibilities
Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation.
Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads.
Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files.
Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking.
Collaborate with AI/ML teams to provision clean, ML-ready datasets for training and inference.
Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments.
Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows.
Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools.

Qualifications & Required Skills
Bachelor's or Master's in Computer Science, Engineering, or a related field.
6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines.
Strong programming skills in Python/Java and SQL.
Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing.
Hands-on with Airflow, dbt, or other orchestration tools.
Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC).
Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments.
Proficient in working with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake.
Knowledge of DevOps practices, Docker/Kubernetes, and Terraform or Ansible.
Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata).

Good to Have
Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data.
Prior experience in domains such as aviation, manufacturing, or logistics is a plus.
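For illustration only (not part of the posting): a Kafka-to-Delta streaming ingestion job of the kind this role describes. The broker, topic, schema, and paths are hypothetical, and running it requires the spark-sql-kafka and delta-spark packages on the cluster.

```python
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

# Hypothetical payload schema for IoT sensor readings
event_schema = T.StructType([
    T.StructField("sensor_id", T.StringType()),
    T.StructField("reading",   T.DoubleType()),
    T.StructField("event_ts",  T.TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "airport.sensors")             # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/data/checkpoints/sensors")
    .outputMode("append")
    .start("/data/lake/bronze/sensor_readings")
)
query.awaitTermination()
```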

Posted 1 week ago

Apply

10.0 - 14.0 years

35 - 45 Lacs

Hyderabad

Work from Office

Naukri logo

About the Team
At DAZN, the Analytics Engineering team is at the heart of turning hundreds of data points into meaningful insights that power strategic decisions across the business. From content strategy to product engagement, marketing optimization to revenue intelligence, we enable scalable, accurate, and accessible data for every team.

The Role
We're looking for a Lead Analytics Engineer to take ownership of our analytics data pipeline and play a pivotal role in designing and scaling our modern data stack. This is a hands-on technical leadership role where you'll shape the data models in dbt/Snowflake, orchestrate pipelines using Airflow, and enable high-quality, trusted data for reporting.

Key Responsibilities
Lead the development and governance of DAZN's semantic data models to support consistent, reusable reporting metrics.
Architect efficient, scalable data transformations on Snowflake using SQL/DBT and best practices in data warehousing.
Manage and enhance pipeline orchestration with Airflow, ensuring timely and reliable data delivery.
Collaborate with stakeholders across Product, Finance, Marketing, and Technology to translate requirements into robust data models.
Define and drive best practices in version control, testing, and CI/CD for analytics workflows.
Mentor and support junior engineers, fostering a culture of technical excellence and continuous improvement.
Champion data quality, documentation, and observability across the analytics layer.

You'll Need to Have
10+ years of experience in data/analytics engineering, with 2+ years leading or mentoring engineers.
Deep expertise in SQL, cloud data warehouses (preferably Snowflake), and cloud services (AWS/GCP/Azure).
Proven experience with dbt for data modeling and transformation.
Hands-on experience with Airflow (or similar orchestrators like Prefect, Luigi).
Strong understanding of dimensional modeling, ELT best practices, and data governance principles.
Ability to balance hands-on development with leadership and stakeholder management.
Clear communication skills: you can explain technical concepts to both technical and non-technical teams.

Nice to Have
Experience in the media, OTT, or sports tech domain.
Familiarity with BI tools like Looker or Power BI.
Exposure to testing frameworks like dbt tests or Great Expectations.

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad

Remote

Naukri logo

Position: Lead Data Engineer
Experience: 7+ Years
Location: Hyderabad | Chennai | Remote

Summary
We are seeking a Lead Data Engineer with 7+ years of experience to lead the development of ETL pipelines, data warehouse solutions, and analytics infrastructure. The ideal candidate will have strong experience in Snowflake, Azure Data Factory, dbt, and Fivetran, with a background in managing data for analytics and reporting, particularly within the healthcare domain.

Responsibilities
Design and develop ETL pipelines using Fivetran, dbt, and Azure Data Factory for internal and client projects involving platforms such as Azure, Salesforce, and AWS.
Monitor and manage production ETL workflows and resolve operational issues proactively.
Document data lineage and maintain architecture artifacts for both existing and new systems.
Collaborate with QA and UAT teams to produce clear, testable mapping and design documentation.
Assess and recommend data integration tools and transformation approaches.
Identify opportunities for process optimization and deduplication in data workflows.
Implement data quality checks in collaboration with Data Quality Analysts.
Contribute to the design and development of large-scale Data Warehouses, MDM solutions, Data Lakes, and Data Vaults.

Required Skills & Qualifications
Bachelor's Degree in Computer Science, Software Engineering, Mathematics, or a related field.
6+ years of experience in data engineering, software development, or business analytics.
5+ years of strong hands-on SQL development experience.
Proven expertise in Snowflake, Azure Data Factory (ADF), and ETL tools such as Informatica, Talend, dbt, or similar.
Experience in the healthcare industry, with an understanding of PHI/PII requirements.
Strong analytical and critical thinking skills.
Excellent communication and interpersonal abilities.
Proficiency in scripting or programming languages such as Python, Perl, Java, or Shell scripting in Linux/Unix environments.
Familiarity with BI/reporting tools like Power BI, Tableau, or Cognos.
Experience with Big Data technologies such as Snowflake (Snowpark), Apache Spark, Hadoop, MapReduce, Sqoop, Hive, Pig, HBase, and Flume.

Posted 1 week ago

Apply

10.0 - 18.0 years

22 - 27 Lacs

Hyderabad

Remote

Naukri logo

Role: Data Architect / Data Modeler - ETL, Snowflake, DBT
Location: Remote
Duration: 14+ Months
Timings: 5:30 pm IST to 1:30 am IST
Note: Looking for Immediate Joiners

Job Summary:
We are seeking a seasoned Data Architect / Modeler with deep expertise in Snowflake, DBT, and modern data architectures, including Data Lake, Lakehouse, and Databricks platforms. The ideal candidate will be responsible for designing scalable, performant, and reliable data models and architectures that support analytics, reporting, and machine learning needs across the organization.

Key Responsibilities:
Architect and design data solutions using Snowflake, Databricks, and cloud-native lakehouse principles.
Lead the implementation of data modeling best practices (star/snowflake schemas, dimensional models) using DBT.
Build and maintain robust ETL/ELT pipelines supporting both batch and real-time data processing.
Develop data governance and metadata management strategies to ensure high data quality and compliance.
Define data architecture frameworks, standards, and principles for enterprise-wide adoption.
Work closely with business stakeholders, data engineers, analysts, and platform teams to translate business needs into scalable data solutions.
Provide guidance on data lake and data warehouse integration, helping bridge structured and unstructured data needs.
Establish data lineage and documentation, and maintain architecture diagrams and data dictionaries.
Stay up to date with industry trends and emerging technologies in cloud data platforms and recommend improvements.

Required Skills & Qualifications:
10+ years of experience in data architecture, data engineering, or data modeling roles.
Strong experience with Snowflake, including performance tuning, security, and architecture.
Hands-on experience with DBT (Data Build Tool) for building and maintaining data transformation workflows.
Deep understanding of Lakehouse Architecture, Data Lake implementations, and Databricks.
Solid grasp of dimensional modeling, normalization/denormalization strategies, and data warehouse design principles.
Experience with cloud platforms (e.g., AWS, Azure, or GCP).
Proficiency in SQL and scripting languages (e.g., Python).
Familiarity with data governance frameworks, data catalogs, and metadata management tools.

Posted 1 week ago

Apply

4.0 - 9.0 years

8 - 11 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

Naukri logo

Job Title: DBT Admin
Location: Pan India

Job Summary:
DBT Administrator with a minimum of 8+ years of experience in IT and relevant experience of 4+ years in DBT. The ideal candidate will be responsible for managing and optimizing the DBT environment and deployment activities, ensuring efficient data transformation processes, and supporting data analytics initiatives.

Required Skills:
Experience: DBT Administrator with a minimum of 8+ years of experience in IT and relevant experience of 4+ years in DBT.
Technical Skills: Proficiency in SQL, DBT Cloud, DBT Core, Jinja templating, CI/CD tools, and Git.
Analytical Skills: Strong analytical and problem-solving skills with the ability to interpret complex data sets.
Communication: Excellent communication skills, both written and verbal, with the ability to collaborate effectively with cross-functional teams.
Attention to Detail: High level of accuracy and attention to detail in managing data and processes.
Certifications: Relevant certifications in DBT, SQL, or cloud platforms.

Responsibilities:
DBT Cloud and Core Environment Management: Install, configure, and maintain DBT environments, ensuring optimal performance and reliability at an enterprise level.
Data Transformation: Develop, test, and deploy DBT models to transform raw data into actionable insights.
Performance Tuning: Monitor and optimize DBT processes to improve performance and reduce execution time.
CI/CD Pipeline Management: Configure and manage CI/CD pipelines for seamless code deployment, particularly for DBT models and related scripts.
Version Control: Enforce Git best practices, including branching strategies, version control, and merge request processes.
Upgrades and Rollouts: Manage upgrades and rollouts to the DBT platform, ensuring minimal disruption and seamless integration of new features.
Collaboration: Work closely with data engineers, analysts, and other stakeholders to understand data requirements and deliver solutions.
Documentation: Maintain comprehensive documentation of core DBT concepts, models, configurations, and processes.
Troubleshooting: Identify and resolve issues related to DBT processes and data transformations.
Security: Implement and manage security protocols to protect data integrity and privacy.
Training: Provide training and support to team members on DBT best practices and usage.
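For illustration only (not part of the posting): a CI step a DBT administrator might wire into a GitHub/GitLab pipeline, building and testing only the models changed in a merge request via dbt's state-based selection. Paths, the target name, and the location of the production manifest are hypothetical.

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run a shell command and fail the CI job on a non-zero exit code."""
    print("+", " ".join(cmd))
    if subprocess.run(cmd, check=False).returncode != 0:
        sys.exit(1)

run(["dbt", "deps", "--project-dir", "dbt"])
run([
    "dbt", "build",
    "--project-dir", "dbt",
    "--target", "ci",
    "--select", "state:modified+",      # changed models plus downstream dependents
    "--state", "prod-artifacts/",       # manifest.json from the last production run
])
```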

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies