
189 Dbt Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

2 - 11 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Data Warehouse Solution Design & Development: Lead the design and implementation of batch and real-time ingestion architectures for data warehouses. Ensure that solutions are scalable, reliable, and optimized for performance.
Team Leadership & Mentoring: Lead and mentor a team of data engineers, fostering a collaborative environment to encourage knowledge sharing and continuous improvement. Ensure that the team meets high standards of quality and performance.
Hands-on Technical Delivery: Actively engage in hands-on development and ensure seamless delivery of data solutions. Provide technical direction and hands-on support for complex issues.
Issue Resolution & Troubleshooting: Capable of troubleshooting issues that arise during runtime and providing quick resolutions to minimize disruptions and maintain system stability.
API Management: Oversee the integration and management of APIs using APIM for seamless communication between internal and external systems. Implement and maintain API gateways and monitor API performance.
Client Communication: Interact directly with clients, ensuring clear and convincing communication of technical ideas and project progress. Translate customer requirements into technical solutions and drive the implementation process.
Cloud & DevOps: Ensure that data solutions are designed with cloud-native technologies such as Azure, Snowflake, and DBT. Use Azure DevOps for continuous integration and deployment pipelines.
Mentoring & Best Practices: Guide the team on best practices for data engineering, code reviews, and performance optimization. Ensure the adoption of modern tools and techniques to improve delivery efficiency.
Mandatory Skills:
- Python for data engineering
- Snowflake and Postgres development experience
- Proficient in API Management (APIM) and DBT
- Strong experience with Azure DevOps for CI/CD
- Proven experience in data warehouse solutions design, development, and implementation
Desired Skills:
- Experience with Apache Kafka, Azure Event Hub, Apache Airflow, Apache Flink
- Familiarity with Grafana, Prometheus, Terraform, Kubernetes
- Power BI for reporting and data visualization
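For context, a minimal sketch of the kind of Python ingestion step this posting describes, pulling rows from Postgres and loading a Snowflake staging table. This is not code from the employer; the connection parameters, table names, and columns are hypothetical placeholders, and it assumes the psycopg2 and snowflake-connector-python packages.

```python
# Hedged sketch: extract recent orders from Postgres and load a Snowflake stage table.
# All hosts, credentials, and table/column names below are hypothetical.
import psycopg2
import snowflake.connector

# Extract from the operational Postgres database
pg = psycopg2.connect(host="pg.example.internal", dbname="orders", user="etl", password="...")
with pg, pg.cursor() as cur:
    cur.execute(
        "SELECT order_id, customer_id, amount, updated_at FROM orders WHERE updated_at >= %s",
        ("2024-01-01",),
    )
    rows = cur.fetchall()

# Load into a Snowflake staging table
sf = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="...",
    warehouse="LOAD_WH", database="RAW", schema="ORDERS",
)
with sf.cursor() as cur:
    cur.executemany(
        "INSERT INTO orders_stage (order_id, customer_id, amount, updated_at) VALUES (%s, %s, %s, %s)",
        rows,
    )
sf.close()
```

In practice a role like this would likely stage files and use COPY INTO for volume loads; the row-by-row insert above is only to keep the sketch self-contained.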

Posted 16 hours ago

Apply

5.0 - 10.0 years

3 - 14 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Key Responsibilities:
- Design & Implement Data Architecture: Design, implement, and maintain the overall data platform architecture, ensuring the scalability, security, and performance of the platform.
- Data Technologies Integration: Select, integrate, and configure data technologies (cloud platforms like AWS, Azure, GCP; data lakes; data warehouses; streaming platforms like Kafka; containerization technologies).
- Infrastructure Management: Set up and manage the infrastructure for data pipelines, data storage, and data processing across platforms like Kubernetes and Airflow.
- Develop Frameworks & Tools: Develop internal frameworks to improve the efficiency and usability of the platform for other teams such as Data Engineers and Data Scientists.
- Data Platform Monitoring & Observability: Implement and manage monitoring and observability for the data platform, ensuring high availability and fault tolerance.
- Collaboration: Work closely with software engineering teams to integrate the data platform with other business systems and applications.
- Capacity & Cost Optimization: Be involved in capacity planning and cost optimization for data infrastructure, ensuring efficient utilization of resources.
Tech Stack Requirements:
- Apache Iceberg (version 0.13.2): Experience in managing table formats for scalable data storage.
- Apache Spark (version 3.4 and above): Expertise in building and maintaining batch and streaming data processing capabilities.
- Apache Kafka (version 3.9 and above): Proficiency in managing messaging platforms for real-time data streaming.
- Role-Based Access Control (RBAC): Experience with Apache Ranger (version 2.6.0) for implementing and administering security and access controls.
- RDBMS: Experience working with near real-time data storage solutions, specifically Oracle (version 19c).
- Great Expectations (version 1.3.4): Familiarity with implementing Data Quality (DQ) frameworks to ensure data integrity and consistency.
- Data Lineage & Cataloging: Experience with OpenLineage and DataHub (version 0.15.0) for managing data lineage and catalog solutions.
- Trino (version 4.7.0): Proficiency with query engines for batch processing.
- Container Platforms: Hands-on experience in managing container platforms such as SKE (version 1.29 on AKS).
- Airflow (version 2.10.4): Experience using workflow and scheduling tools for orchestrating and managing data pipelines.
- DBT (Data Build Tool): Proficiency in using ETL/ELT frameworks like DBT for data transformation and automation.
- Data Tokenization: Experience with data tokenization technologies like Protegrity (version 9.2) for ensuring data security.
Desired Skills:
- Domain Expertise: Familiarity with the Banking domain is a plus, including working with financial data and regulatory requirements.
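As an illustration of the ingestion layer this posting describes, here is a minimal PySpark Structured Streaming sketch that reads events from Kafka and lands them in object storage. The broker, topic, schema, and paths are hypothetical, and the posting's Iceberg table format is swapped for plain Parquet to keep the example self-contained (it also assumes the Spark Kafka connector package is on the classpath).

```python
# Hedged sketch: stream click events from Kafka into a bronze zone as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical event schema
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
       .option("subscribe", "click-events")                 # placeholder topic
       .option("startingOffsets", "latest")
       .load())

events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://data-lake/bronze/click_events/")
         .option("checkpointLocation", "s3a://data-lake/_checkpoints/click_events/")
         .outputMode("append")
         .start())
query.awaitTermination()
```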

Posted 17 hours ago

Apply

7.0 - 10.0 years

15 - 30 Lacs

Pune

Hybrid

Source: Naukri

Looking for 7–10 years of experience (4+ in data modeling, 2–3 in Data Vault 2.0). Must know DBT, Dagster/Airflow, and GCP (BigQuery, CloudSQL); hands-on Data Vault 2.0 experience is a must. Docker is a plus.

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

About Zeta
Zeta is a next-gen banking tech company that empowers banks and fintechs to launch banking products for the future. Founded by Bhavin Turakhia and Ramki Gaddipati, its flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform, and Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700+ employees across the US, EMEA, and Asia, with 70%+ of roles in R&D. Backed by SoftBank, Mastercard, and other investors, we raised $330M at a $2B valuation in 2025.
About the Role
In this role, you'll design robust data models using SQL, dbt, and Redshift, while driving best practices across development, deployment, and monitoring. You'll also collaborate closely with product and engineering to ensure data quality and impactful delivery.
Responsibilities
- Create optimized data models with SQL, DBT, and Redshift
- Write functional and column-level tests for models
- Build reports from the data models
- Collaborate with product to clarify requirements and create design documents
- Get designs reviewed by an Architect/Principal/Lead Engineer
- Contribute to code reviews
- Set up and monitor Airflow DAGs
- Set up and use CI/CD pipelines
- Leverage Kubernetes operators for deployment automation
- Ensure data quality
- Drive best practices in data model development, deployment, and monitoring
Skills
- Bachelor's/Master's degree in engineering
- Strong expertise in SQL for complex data querying and optimization
- Hands-on experience with Apache Airflow for orchestration and scheduling
- Good understanding of data modeling and data warehousing concepts
- Experience with dbt (Data Build Tool) for data transformation and modeling
- Exposure to Amazon Redshift or other cloud data warehouses
- Familiarity with CI/CD tools such as Jenkins
- Experience using Bitbucket for version control
- Working knowledge of JIRA for agile project tracking
- Ability to work with cross-functional and dependent teams and own delivery end to end
- Excellent problem-solving skills and ability to work independently or as part of a team
- Strong communication and interpersonal skills to collaborate effectively with cross-functional teams
Experience and Qualifications
- Bachelor's/Master's degree in engineering (computer science, information systems)
- At least 1-2 years of experience working with data, especially on reporting and data analysis
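Since the role involves setting up and monitoring Airflow DAGs around dbt models, here is a minimal, hedged sketch of what such a DAG can look like using Airflow 2.x's BashOperator. The project directory, target name, and schedule are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch: an Airflow DAG that builds dbt models, then runs dbt tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics_project"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # nightly at 02:00
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    # Only test models after they have been built
    dbt_run >> dbt_test
```

Teams that deploy on Kubernetes, as this posting mentions, often swap BashOperator for the KubernetesPodOperator so each dbt step runs in its own container; the dependency structure stays the same.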

Posted 3 days ago

Apply

9.0 - 14.0 years

9 - 14 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Source: Foundit

Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of what differentiates us. As part of Aladdin Studio, the Aladdin Data Cloud (ADC) Engineering team is responsible for building and maintaining a data-as-a-service solution for all data management and transformation needs. We engineer high-performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data surface capabilities.
As a Data Engineer in the ADC Engineering team, you will:
- Work alongside our engineers to help design and build scalable data pipelines while evolving the data surface.
- Help prove out and deliver cloud-native infrastructure and tooling to support a scalable data cloud.
- Have fun as part of an amazing team.
Specific Responsibilities:
- Lead and work as part of a multi-disciplinary squad to establish our next generation of data pipelines and tools.
- Be involved from the inception of projects: understanding requirements, designing and developing solutions, and incorporating them into the designs of our platforms.
- Mentor team members on technology and standard processes.
- Maintain excellent knowledge of the technical landscape for data and cloud tooling.
- Assist in solving issues and support the operation of production software.
- Design solutions and document them.
Desirable Skills
- 8+ years of industry experience in data engineering.
- Passion for engineering and optimizing data sets, data pipelines, and architecture.
- Ability to build processes that support data transformation, workload management, data structures, lineage, and metadata.
- Knowledge of SQL and performance tuning. Experience with Snowflake is preferred.
- Good understanding of languages such as Python/Java.
- Understanding of software deployment and orchestration technologies such as Airflow.
- Experience with dbt is helpful.
- Working knowledge of building and deploying distributed systems.
- Experience in creating and evolving CI/CD pipelines with GitLab or Azure DevOps.
- Experience in handling a multi-disciplinary team and mentoring them.

Posted 4 days ago

Apply

2.0 - 5.0 years

12 - 15 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Responsibilities
- Actively participate in chapter ceremony meetings and contribute to project planning and estimation.
- Coordinate work with product managers, data owners, platform teams, and other stakeholders throughout the SDLC.
- Use Airflow, Python, Snowflake, dbt, and related technologies to enhance and maintain EDP acquisition, ingestion, processing, orchestration, and data quality (DQ) frameworks.
- Adopt new tools and technologies to enhance framework capabilities.
- Build and conduct end-to-end tests to ensure production operations run successfully after every release cycle.
- Document and present accomplishments and challenges to internal and external stakeholders.
- Demonstrate deep understanding of modern data engineering tools and best practices.
- Design and build solutions that are performant, consistent, and scalable.
- Contribute to design decisions for complex systems.
- Provide L2/L3 support for technical and/or operational issues.
Qualifications
- At least 5+ years of experience as a data engineer
- Expertise with SQL, stored procedures, and UDFs
- Advanced-level Python programming or advanced core Java programming
- Experience with Snowflake or similar cloud-native databases
- Experience with orchestration tools, especially Airflow
- Experience with declarative transformation tools like dbt
- Experience with Azure services, especially ADLS (or equivalent)
- Exposure to real-time streaming platforms and message brokers (e.g., Snowpipe Streaming, Kafka)
- Experience with Agile development concepts and related tools (ADO, Aha)
- Experience conducting root cause analysis and resolving issues
- Experience with performance tuning
- Excellent written and verbal communication skills
- Ability to operate in a matrixed organization and fast-paced environment
- Strong interpersonal skills with a can-do attitude under challenging circumstances
- Bachelor's degree in computer science is strongly preferred
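By way of illustration, a minimal sketch of the kind of data-quality check a DQ framework like the one described here might run against Snowflake after a load. The account details, table names, and thresholds are hypothetical placeholders and assume the snowflake-connector-python package.

```python
# Hedged sketch: simple post-load data-quality checks against Snowflake.
import snowflake.connector

# Hypothetical checks: total row count and null rate on a key column
CHECKS = {
    "row_count": "SELECT COUNT(*) FROM analytics.orders",
    "null_customer_ids": "SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL",
}

conn = snowflake.connector.connect(
    account="xy12345", user="dq_user", password="...",
    warehouse="DQ_WH", database="EDP", schema="ANALYTICS",
)
try:
    with conn.cursor() as cur:
        results = {name: cur.execute(sql).fetchone()[0] for name, sql in CHECKS.items()}
finally:
    conn.close()

if results["row_count"] == 0 or results["null_customer_ids"] > 0:
    # Raising makes the orchestrating Airflow task fail, so the issue surfaces
    raise ValueError(f"Data quality checks failed: {results}")
print(f"Data quality checks passed: {results}")
```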

Posted 4 days ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Responsibilities
- Actively participate in chapter ceremony meetings and contribute to project planning and estimation.
- Coordinate work with product managers, data owners, platform teams, and other stakeholders throughout the SDLC.
- Use Airflow, Python, Snowflake, dbt, and related technologies to enhance and maintain EDP acquisition, ingestion, processing, orchestration, and data quality (DQ) frameworks.
- Adopt new tools and technologies to enhance framework capabilities.
- Build and conduct end-to-end tests to ensure production operations run successfully after every release cycle.
- Document and present accomplishments and challenges to internal and external stakeholders.
- Demonstrate deep understanding of modern data engineering tools and best practices.
- Design and build solutions that are performant, consistent, and scalable.
- Contribute to design decisions for complex systems.
- Provide L2/L3 support for technical and/or operational issues.
Qualifications
- At least 5+ years of experience as a data engineer
- Expertise with SQL, stored procedures, and UDFs
- Advanced-level Python programming or advanced core Java programming
- Experience with Snowflake or similar cloud-native databases
- Experience with orchestration tools, especially Airflow
- Experience with declarative transformation tools like dbt
- Experience with Azure services, especially ADLS (or equivalent)
- Exposure to real-time streaming platforms and message brokers (e.g., Snowpipe Streaming, Kafka)
- Experience with Agile development concepts and related tools (ADO, Aha)
- Experience conducting root cause analysis and resolving issues
- Experience with performance tuning
- Excellent written and verbal communication skills
- Ability to operate in a matrixed organization and fast-paced environment
- Strong interpersonal skills with a can-do attitude under challenging circumstances
- Bachelor's degree in computer science is strongly preferred

Posted 4 days ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: Foundit

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)!
In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
- Experience in the IT industry.
- Working experience building productionized data ingestion and processing pipelines in Snowflake.
- Strong understanding of Snowflake architecture.
- Fully well-versed with data warehousing concepts.
- Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools.
- Able to create data pipelines for ETL/ELT.
- Good to have DBT experience.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualisation tools and methodologies.
- Work independently on business problems and generate meaningful insights.
- Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
- Should have experience implementing Snowflake best practices.
- Snowflake SnowPro Core Certification will be an added advantage.
Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures, UDFs, and Snowsight.
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark.
- Should have some experience with Snowflake RBAC and data security.
- Should have good experience implementing CDC or SCD Type 2.
- Should have good experience implementing Snowflake best practices.
- In-depth understanding of data warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Should have experience building data ingestion pipelines.
- Optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.
Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Senior Snowflake Data Engineer.
Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, Data Modeling & Data Warehousing concepts.
Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
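Since the posting calls out implementing CDC and SCD Type 2 in Snowflake, here is a minimal, hedged sketch of one common two-step approach (close out changed rows, then insert new versions) driven from Python. The dimension table, staging table, and columns are hypothetical placeholders, not anything specified by the employer.

```python
# Hedged sketch: SCD Type 2 load of a customer dimension in Snowflake.
import snowflake.connector

# Step 1: end-date current rows whose tracked attributes changed in the staging data
CLOSE_CHANGED_ROWS = """
UPDATE dim_customer
SET effective_to = CURRENT_TIMESTAMP(), is_current = FALSE
FROM stg_customer s
WHERE dim_customer.customer_id = s.customer_id
  AND dim_customer.is_current = TRUE
  AND (dim_customer.email <> s.email OR dim_customer.segment <> s.segment)
"""

# Step 2: insert a fresh current row for every customer that now lacks one
# (both brand-new customers and the ones just closed in step 1)
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, email, segment, effective_from, effective_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="...",
    warehouse="LOAD_WH", database="DW", schema="CORE",
)
try:
    with conn.cursor() as cur:
        cur.execute(CLOSE_CHANGED_ROWS)
        cur.execute(INSERT_NEW_VERSIONS)
finally:
    conn.close()
```

In a dbt-based stack the same pattern is usually expressed declaratively with snapshots rather than hand-written statements.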

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Source: Foundit

Data Engineer - Azure
This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.
You Will:
- Design and build resilient and efficient data pipelines for batch and real-time streaming.
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable and data-driven platforms and tools.
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns, and implementation approaches.
- Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Address aspects such as data privacy and security, data ingestion and processing, data storage and compute, analytical and operational consumption, data modeling, data virtualization, self-service data preparation and analytics, AI enablement, and API integrations.
- Execute projects with an Agile mindset.
- Build software frameworks to solve data problems at scale.
Technical Requirements:
- 3+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse. Prior experience using DBT and Power BI is a plus.
- Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services such as firewall, storage, and key vault is required.
- Strong programming/scripting experience using SQL, Python, and Spark.
- Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket.
- Experience with Agile development methods in data-oriented projects.
Other Requirements:
- Highly motivated self-starter and team player with demonstrated success in prior roles.
- Track record of success working through technical challenges within enterprise organizations.
- Ability to prioritize deals, training, and initiatives through highly effective time management.
- Excellent problem-solving, analytical, presentation, and whiteboarding skills.
- Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems.
- Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations.
- Certifications in Azure Data Engineering and related technologies.

Posted 5 days ago

Apply

13.0 - 20.0 years

40 - 45 Lacs

Bengaluru

Work from Office

Source: Naukri

Principal Architect - Platform & Application Architect
Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software and data platform architecture and technology strategy, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field
Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.
Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components).
- Define and document data blueprints, data domain models, and architectural standards.
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte.
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster).
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi).
- Develop efficient data strategies for both OLAP and OLTP workloads.
- Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage tracking frameworks.
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units.
5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines.
- Ensure observability, reliability, and cost efficiency of the platform.
- Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals.
- Mentor teams on architecture principles, technology choices, and operational excellence.
Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles.
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js.
- Strong hands-on experience building scalable data platforms in on-premise, hybrid, or cloud environments.
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg.
- Familiarity with data mesh, data fabric, and lakehouse paradigms.
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles.
- Demonstrated success leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms.
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels.
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer
Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.
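To ground the PySpark-on-AWS pipeline work this role describes, here is a minimal batch sketch that reads raw data from S3, keeps the latest record per key, and writes a curated, partitioned dataset. The bucket paths and columns are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch: PySpark batch job that curates raw order data from S3.
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import col, row_number, to_date

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Raw zone: append-only dumps with possible duplicates per order_id
raw = spark.read.parquet("s3a://raw-zone/orders/")

# Keep only the most recent record for each order
latest = Window.partitionBy("order_id").orderBy(col("updated_at").desc())
curated = (raw.withColumn("rn", row_number().over(latest))
              .filter(col("rn") == 1)
              .drop("rn")
              .withColumn("order_date", to_date(col("updated_at"))))

# Curated zone: partitioned by day for efficient downstream queries
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://curated-zone/orders/"))
```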

Posted 5 days ago

Apply

9.0 - 10.0 years

9 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Qualification: Total 9 years of experience, with a minimum of 5 years working as a DBT administrator.
- DBT Core / Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud. Administer and manage DBT Cloud environments, including users, permissions, job scheduling, and Git integration. Handle onboarding and enablement of DBT users on the DBT Cloud platform. Work closely with users to support DBT adoption and usage.
- SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks.
- Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management.
- Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling.
- Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing.
- Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging.
- Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access.
- Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams.
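For the automation and monitoring side of dbt administration, here is a minimal, hedged sketch of driving a dbt Core run from Python and inspecting the run_results.json artifact it produces to flag failed models. The project path and model selector are hypothetical placeholders.

```python
# Hedged sketch: run dbt and surface failed models from its artifacts.
import json
import subprocess
from pathlib import Path

PROJECT_DIR = Path("/srv/dbt/finance_marts")  # hypothetical dbt project

# Build only the staging models; --select and --project-dir are standard dbt CLI flags
subprocess.run(
    ["dbt", "run", "--select", "staging", "--project-dir", str(PROJECT_DIR)],
    check=False,  # inspect the artifacts below instead of raising immediately
)

# dbt writes run metadata to target/run_results.json after each invocation
run_results = json.loads((PROJECT_DIR / "target" / "run_results.json").read_text())
failed = [r["unique_id"] for r in run_results["results"] if r["status"] != "success"]

if failed:
    raise RuntimeError(f"dbt models failed: {failed}")
print("All selected dbt models built successfully")
```

The same artifact parsing is what tools like dbt-artifacts and Datadog integrations typically build on for job monitoring.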

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake.
- Implement data transformation workflows using DBT (Data Build Tool).
- Write efficient, reusable, and reliable code in Python.
- Optimize and tune data solutions for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in data engineering.
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in AWS and Snowflake.
- Expertise in DBT and Python programming.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake.
- Implement data transformation workflows using DBT (Data Build Tool).
- Write efficient, reusable, and reliable code in Python.
- Optimize and tune data solutions for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in data engineering.
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in AWS and Snowflake.
- Expertise in DBT and Python programming.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer
Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer
Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 5 days ago

Apply

4.0 - 8.0 years

3 - 13 Lacs

Pune, Maharashtra, India

On-site

Source: Foundit

What Your Responsibilities Will Be
- Design, develop, and maintain efficient ETL pipelines using DBT and Airflow to move and transform data from multiple sources into a data warehouse.
- Lead the development and optimization of data models (e.g., star, snowflake schemas) and data structures to support reporting.
- Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to manage and scale data storage, processing, and transformation processes.
- Work with business teams, marketing, and sales departments to understand data requirements and translate them into actionable insights and efficient data structures.
- Use advanced SQL and Python skills to query, manipulate, and transform data for multiple use cases and reporting needs.
- Implement data quality checks and ensure that the data adheres to governance best practices, maintaining consistency and integrity across datasets.
- Use Git for version control and collaborate on data engineering projects.
What You'll Need to Be Successful
- Bachelor's degree with 6+ years of experience in Data Engineering.
- ETL/ELT Expertise: experience in building and improving ETL/ELT processes.
- Data Modeling: experience designing and implementing data models such as star and snowflake schemas, and working with denormalized tables to optimize reporting performance.
- Experience with cloud-based data platforms (AWS, Azure, Google Cloud).
- SQL and Python Proficiency: advanced SQL skills for querying large datasets and Python for automation, data processing, and integration tasks.
- DBT Experience: hands-on experience with DBT (Data Build Tool) for transforming and managing data models.
Good-to-Have Skills:
- Familiarity with AI concepts such as machine learning (ML), natural language processing (NLP), and generative AI; working with AI-driven tools and models for data analysis, reporting, and automation.
- Overseeing and implementing DBT models to improve the data transformation process.
- Experience in the marketing and sales domain, with lead management, marketing analytics, and sales data integration.
- Familiarity with business intelligence reporting tools such as Power BI for building data models and generating insights.

Posted 5 days ago

Apply

8.0 - 13.0 years

2 - 11 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

What Your Responsibilities Will Be
Avalara is looking for a data analytics engineer who can solve and scale real-world big data challenges, with end-to-end analytics experience and the ability to tell a complex data story with data models and reliable, applicable metrics.
- Build and deploy data science models using complex SQL, Python, DBT data modelling, and reusable visualization components (Power BI/Tableau/Hex/R Shiny, etc.).
- Expert-level experience in Power BI, SQL, and Snowflake.
- Solve needs on a large scale by applying your software engineering and complex data skills.
- Lead and help develop a roadmap for the area and the team.
- Analyze fault tolerance and high availability issues, performance, and scale challenges, and solve them.
- Lead programs and collaborate with engineers, product managers, and technical program managers across teams.
- Understand the trade-offs between consistency, durability, and costs to build solutions that can meet the demands of growing services.
- Ensure the operational readiness of the services and meet the commitments to our customers regarding availability and performance.
- Manage end-to-end project plans and ensure on-time delivery.
- Communicate the status and big picture to the project team and management.
- Work with business and engineering teams to identify scope, constraints, dependencies, and risks.
- Identify risks and opportunities across the business and guide solutions.
What You'll Need to Be Successful
- Bachelor's Engineering degree in Computer Science or a related field.
- 8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics and engineering projects.
- Expert-level experience in Power BI, SQL, and Snowflake.
- Experience with data visualization, Python, data modeling, and data storytelling.
- Experience architecting complex data marts applying DBT.
- Ability to architect and build data solutions that use data quality and anomaly detection best practices.
- Experience building production analytics using the Snowflake data platform.
- Experience with AWS and Snowflake tools and services.
Good to have:
- A Snowflake certification is a plus.
- Relevant certifications in data warehousing or cloud platforms.
- Experience architecting complex data marts applying DBT and Airflow.

Posted 5 days ago

Apply

8.0 - 13.0 years

3 - 11 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Source: Foundit

What Your Responsibilities Will Be
Avalara is looking for a data analytics engineer who can solve and scale real-world big data challenges, with end-to-end analytics experience and the ability to tell a complex data story with data models and reliable, applicable metrics.
- Build and deploy data science models using complex SQL, Python, DBT data modelling, and reusable visualization components (Power BI/Tableau/Hex/R Shiny, etc.).
- Expert-level experience in Power BI, SQL, and Snowflake.
- Solve needs on a large scale by applying your software engineering and complex data skills.
- Lead and help develop a roadmap for the area and the team.
- Analyze fault tolerance and high availability issues, performance, and scale challenges, and solve them.
- Lead programs and collaborate with engineers, product managers, and technical program managers across teams.
- Understand the trade-offs between consistency, durability, and costs to build solutions that can meet the demands of growing services.
- Ensure the operational readiness of the services and meet the commitments to our customers regarding availability and performance.
- Manage end-to-end project plans and ensure on-time delivery.
- Communicate the status and big picture to the project team and management.
- Work with business and engineering teams to identify scope, constraints, dependencies, and risks.
- Identify risks and opportunities across the business and guide solutions.
What You'll Need to Be Successful
- Bachelor's Engineering degree in Computer Science or a related field.
- 8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics and engineering projects.
- Expert-level experience in Power BI, SQL, and Snowflake.
- Experience with data visualization, Python, data modeling, and data storytelling.
- Experience architecting complex data marts applying DBT.
- Ability to architect and build data solutions that use data quality and anomaly detection best practices.
- Experience building production analytics using the Snowflake data platform.
- Experience with AWS and Snowflake tools and services.
Good to have:
- A Snowflake certification is a plus.
- Relevant certifications in data warehousing or cloud platforms.
- Experience architecting complex data marts applying DBT and Airflow.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer
Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.
Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.
Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (plus).
Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 5 days ago

Apply

7.0 - 8.0 years

1 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

As a Technical Lead - Azure Snowflake DBT, you will be part of an Agile team building healthcare applications and implementing new features while adhering to the best coding and development standards.
Responsibilities:
- Design, develop, and maintain data processing systems using Azure Snowflake.
- Design and develop robust data integration solutions using Data Build Tool (DBT) and other data pipeline tools.
- Work with complex SQL functions and transform large data sets to meet business requirements.
- Drive the creation and maintenance of data models that support analytics use cases and business objectives.
- Collaborate with various stakeholders, including technical teams, functional SMEs, and business users, to understand and address data needs.
- Create low-level design documents and unit test strategies and plans in adherence to defined processes and guidelines.
- Perform code reviews and unit test plan reviews to ensure high quality of code and deliverables.
- Ensure data quality and integrity through validation, cleansing, and enrichment processes.
- Support end-to-end testing and validation, including UAT and product testing.
- Take ownership of problems, demonstrate a proactive approach to problem solving, and lead solutions to completion.
Educational Qualifications:
- Engineering degree: BE/ME/BTech/MTech/BSc/MSc.
- Technical certification in multiple technologies is desirable.
Mandatory Technical Skills:
- Over 5 years of experience in cloud data architecture and analytics.
- Proficient in Azure, Snowflake, SQL, and DBT.
- Extensive experience in designing and developing data integration solutions using DBT and other data pipeline tools.
- Excellent communication and teamwork skills.
- Self-initiated problem solver with a strong sense of ownership.
Good-to-Have Skills:
- Experience with other data processing tools and technologies.
- Familiarity with agile development methodologies.
- Strong analytical and problem-solving skills.
- Experience in the healthcare domain.

Posted 5 days ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Source: Naukri

- SQL, Snowflake, Tableau
- SQL, Snowflake, DBT, data warehousing
- SQL, Snowflake, Python, DBT, data warehousing
- SQL, Snowflake, data warehousing, any ETL tool (preferred: Matillion)
- SQL, Snowflake, Tableau

Posted 6 days ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bengaluru, Karnataka (IN-KA), India.
Data Engineer Lead
- Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory).
- Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink.
- Experienced with software support for applications written in Python and SQL.
- Administration, configuration, and maintenance of Snowflake and DBT.
- Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub.
- Debugging issues, root cause analysis, and applying fixes.
- Management and maintenance of ETL processes (bug fixing and batch job monitoring).
Training & Certification
- Apache Kafka Administration
- Snowflake Fundamentals/Advanced Training
Experience
- 8 years of experience in a technical role working with AWS.
- At least 2 years in a leadership or management role.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
NTT DATA endeavors to make its website accessible to all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us; this contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
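For the Kafka support and debugging duties this posting mentions, here is a minimal, hedged sketch of a Python consumer used to spot-check messages on a topic while investigating a streaming issue. It assumes the kafka-python client; the broker address, topic, and group id are hypothetical placeholders.

```python
# Hedged sketch: sample a handful of messages from a Kafka topic for debugging.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments.events",                       # hypothetical topic
    bootstrap_servers=["broker1:9092"],      # hypothetical broker
    group_id="support-spot-check",
    auto_offset_reset="earliest",
    enable_auto_commit=False,                # read-only inspection, don't move offsets
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for i, message in enumerate(consumer):
    print(message.topic, message.partition, message.offset, message.value)
    if i >= 9:   # sample the first 10 messages, then stop
        break

consumer.close()
```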

Posted 6 days ago

Apply