
189 dbt Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

20 - 35 Lacs

Chennai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

As part of the team, you will be responsible for building and running the data pipelines and services required to support business functions, reports, and dashboards. We rely heavily on BigQuery/Snowflake, Airflow, Stitch/Fivetran, dbt, and Tableau/Looker for our business intelligence, and we embrace AWS with some GCP.

As a Data Engineer you'll be:
- Developing end-to-end ETL/ELT pipelines, working with the Data Analysts of each business function.
- Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis in a Data Mesh architecture.
- Mentoring other junior engineers in the team and acting as a go-to expert for data technologies and solutions.
- Providing on-the-ground troubleshooting and diagnosis for architecture and design challenges, and resolving technical issues as they arise.
- Looking for ways to improve both what and how data pipelines are delivered by the department.
- Translating business requirements into technical requirements, such as entities that need to be modelled, dbt models that need to be built, timings, tests, and reports.
- Owning the delivery of data models and reports end to end.
- Performing exploratory data analysis to identify data quality issues early in the process and implementing tests to prevent them in the future.
- Working with Data Analysts to ensure that all data feeds are optimised and available at the required times; this can include change capture, change data control, and other delta-loading approaches.
- Discovering, transforming, testing, deploying, and documenting data sources.
- Applying, helping define, and championing data warehouse governance: data quality, testing, coding best practices, and peer review.
- Building Looker dashboards for use cases if required.

What makes you a great fit:
- 3+ years of extensive development experience using Snowflake or a similar data warehouse technology.
- Working experience with dbt and other technologies of the modern data stack, such as Snowflake, Apache Airflow, Fivetran, AWS, Git, and Looker.
- Experience in agile processes, such as Scrum.
- Extensive experience writing advanced SQL statements and performance-tuning them.
- Experience in data ingestion techniques using custom or SaaS tools like Fivetran.
- Experience in data modelling and the ability to optimise existing and new data models.
- Experience in data mining, data warehouse solutions, ETL, and using databases in a business environment with large-scale, complex datasets.

Additional information: The maximum official notice period acceptable for this role is 30 days. This is a remote opportunity. Looker/Power BI, dbt, SQL, and Snowflake are mandatory for this role. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
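This posting pairs dbt with Airflow for orchestrating ELT pipelines. As a minimal sketch only (assuming Airflow 2.4+ and a dbt project available on the worker; the DAG id, schedule, and project path are hypothetical and not taken from the listing), a daily dbt build can be scheduled with a BashOperator:

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical project location; adjust to the actual dbt project layout.
DBT_PROJECT_DIR = "/opt/airflow/dbt/analytics"

with DAG(
    dag_id="dbt_daily_build",           # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",               # daily at 02:00 ('schedule' argument, Airflow 2.4+)
    catchup=False,
) as dag:
    dbt_deps = BashOperator(
        task_id="dbt_deps",
        bash_command=f"dbt deps --project-dir {DBT_PROJECT_DIR}",
    )
    # 'dbt build' runs models, tests, seeds and snapshots in dependency order.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command=f"dbt build --project-dir {DBT_PROJECT_DIR}",
    )
    dbt_deps >> dbt_build

In practice the dbt command, profiles location, and schedule would follow the team's own deployment conventions.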

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt, and able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.

Key Responsibilities:
1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
2. Build ingestion pipelines from various sources, including relational databases, APIs, cloud storage, and flat files, into Snowflake.
3. Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a medallion architecture) to enable reliable and reusable data assets.
4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
5. Apply dbt best practices: modular SQL development, testing, documentation, and version control.
6. Perform performance optimizations in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design.
7. Apply CI/CD and Git-based workflows for version-controlled deployments.
8. Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
9. Collaborate with multiple stakeholders, such as data analysts, data scientists, and data architects, to understand requirements and deliver clean, validated datasets.
10. Write well-documented, maintainable code using Git for version control and CI/CD processes.
11. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications:
- 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
- Experience building and deploying dbt models in a production environment.
- Expert-level SQL and a strong understanding of ELT principles.
- Strong understanding of ELT patterns and data modelling (Kimball/dimensional preferred).
- Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.
- Experience with Git, CI/CD, and deployment workflows in a team setting.
- Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies:
- Data Engineering and ELT Development: building robust and modular data pipelines using dbt; writing efficient SQL for data transformation and performance tuning in Snowflake; managing environments, sources, and deployment pipelines in dbt.
- Cloud Data Platform Expertise: strong proficiency with Snowflake (warehouse sizing, query profiling, data loading, and performance optimization); experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.

Technical Toolset:
- Languages & Frameworks: Python for data transformation, notebook development, and automation; SQL with a strong grasp of querying and performance tuning.

Best Practices and Standards:
- Knowledge of modern data architecture concepts, including layered architecture (e.g., staging, intermediate, and mart layers, or medallion architecture).
- Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).
Security & Governance:
- Access and Permissions: understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling; familiarity with data privacy policies (GDPR basics) and encryption at rest and in transit.

Deployment & Monitoring:
- DevOps and Automation: version control using Git and experience with CI/CD practices in a data context; monitoring and logging of pipeline executions, with alerting on failures.

Soft Skills:
- Communication & Collaboration: ability to present solutions and handle client demos and discussions; work closely with onshore and offshore teams of analysts, data scientists, and architects; ability to document pipelines and transformations clearly; basic Agile/Scrum familiarity (working in sprints and logging tasks); comfort with ambiguity, competing priorities, and a fast-changing client environment.

Education:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.
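Responsibilities 5 and 7 above revolve around running dbt builds and tests from version-controlled CI/CD. A rough sketch of invoking dbt programmatically from a CI step, assuming dbt-core 1.5+ is installed alongside an existing project (the selector and directory are hypothetical):

from dbt.cli.main import dbtRunner, dbtRunnerResult

# Hypothetical selector: build the staging layer and everything downstream of it.
args = ["build", "--select", "staging+", "--project-dir", "."]

res: dbtRunnerResult = dbtRunner().invoke(args)

# Fail the CI step if any model, test, seed or snapshot failed.
if not res.success:
    raise SystemExit("dbt build failed")

# Each result carries the node name and its status (pass, fail, error, skipped).
for r in res.result:
    print(f"{r.node.name}: {r.status}")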

Posted 2 weeks ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Naukri logo

Key Responsibilities:
- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake.
- Design layered data models (e.g., staging, intermediate, and mart layers / medallion architecture) aligned with dbt best practices.
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
- Apply advanced dbt capabilities, including macros, packages, custom tests, sources, exposures, and documentation using dbt docs.
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.

Required Qualifications:
- 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Technical Skills:
- Cloud Data Warehouse & Transformation Stack: expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management; experience in dbt development covering modular model design, macros, tests, documentation, and version control using Git.
- Orchestration and Integration: proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory; comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs.
- Data Modelling and Architecture: dimensional modelling (star/snowflake schemas) and slowly changing dimensions; knowledge of modern data warehousing principles; experience implementing Medallion Architecture (Bronze/Silver/Gold layers); experience working with Parquet, JSON, CSV, or other data formats.
- Programming Languages: Python for data transformation, notebook development, and automation; SQL with a strong grasp of querying and performance tuning; Jinja (nice to have) for advanced dbt development.
- Data Engineering & Analytical Skills: ETL/ELT pipeline design and optimization; exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have); exposure to data quality and validation frameworks.
- Security & Governance: experience implementing data quality checks using dbt tests; data encryption, secure key management, and security best practices for Snowflake and dbt.

Soft Skills & Leadership:
- Ability to thrive in client-facing roles with competing/changing priorities and fast-paced delivery cycles.
- Stakeholder communication: collaborate with business stakeholders to understand objectives and convert them into actionable data engineering designs.
- Project ownership: end-to-end delivery, including design, implementation, and monitoring.
- Mentorship: guide junior engineers, establish best practices, and build new skills in the team.
- Agile practices: work in sprints, participate in scrum ceremonies, and contribute to story estimation.

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Certifications such as Snowflake SnowPro Advanced or dbt Certified Developer are a plus.
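The ingestion work described above (cloud storage into Snowflake) is commonly implemented with an external stage and COPY INTO. A minimal sketch using the Snowflake Python connector; the stage, table, warehouse, and credential names are hypothetical and not taken from the posting:

import os
import snowflake.connector

# Connection details read from environment variables (hypothetical names).
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="RAW",
    schema="SALES",
)

try:
    cur = conn.cursor()
    # Load Parquet files landed in an already-created external stage pointing at S3.
    cur.execute("""
        COPY INTO RAW.SALES.ORDERS
        FROM @RAW.SALES.S3_ORDERS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        ON_ERROR = ABORT_STATEMENT
    """)
    print(cur.fetchall())  # per-file load results returned by COPY
finally:
    conn.close()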

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 20 Lacs

Hyderabad

Work from Office

Naukri logo

We are hiring a Senior Data Management Specialist (Level 3) for a US-based IT company located in Hyderabad. Candidates with experience in Data Management and Snowflake can apply.

Job Title: Senior Data Management Specialist - Level 3
Location: Hyderabad
Experience: 7+ Years
CTC: 18 LPA - 20 LPA
Working shift: Day shift

Description: We are looking for an experienced and highly skilled Data Management Specialist (Level 3) to contribute to enterprise-level data solutions, with an emphasis on cloud data platforms and modern data engineering tools. The ideal candidate will possess hands-on expertise with Snowflake, combined with a solid foundation in data integration, modeling, and cloud-based database technologies. This role is a key part of a high-impact data team dedicated to ensuring the quality, availability, and governance of enterprise data assets. As a Level 3 specialist, the individual will be expected to lead and execute complex data management tasks while collaborating closely with data architects, analysts, and business stakeholders.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and integrations using Snowflake and other cloud data technologies.
- Handle structured and unstructured data to support analytics, reporting, and operational workloads.
- Develop and optimize complex SQL queries and data transformation logic.
- Collaborate with data stewards and governance teams to uphold data quality, consistency, and compliance.
- Perform data profiling, cleansing, and validation across multiple source systems.
- Support ETL/ELT development and data migration initiatives using tools like Informatica, Talend, or dbt.
- Design and maintain data models, including star and snowflake schemas.
- Ensure performance tuning, monitoring, and troubleshooting of Snowflake environments.
- Document data processes, data lineage, and metadata within the data governance framework.
- Act as a technical SME, offering guidance and support to junior team members.

Required Skills & Qualifications:
- Minimum 5 years of experience in data engineering, data management, or similar roles.
- Strong hands-on experience with Snowflake (development, administration, performance optimization).
- Proficiency in SQL, data modeling, and cloud-native data architectures.
- Experience working on cloud platforms such as AWS, Azure, or Google Cloud (with Snowflake).
- Familiarity with ETL tools like Informatica, Talend, or dbt.
- Solid understanding of data governance, metadata management, and data quality best practices.
- Experience with Python or shell scripting for automation and data operations.
- Strong analytical and problem-solving abilities.
- Excellent communication and documentation skills.

For further assistance, contact/WhatsApp: 9354909512 or write to pankhuri@gist.org.in
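The profiling and validation responsibilities above can be prototyped with a short script. A sketch using the Snowflake Python connector; the table, columns, and warehouse are hypothetical and only illustrate the pattern:

import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
)
cur = conn.cursor()

# Profile a hypothetical customer table: volume, null rate, and key uniqueness.
cur.execute("""
    SELECT
        COUNT(*)                            AS row_count,
        COUNT_IF(EMAIL IS NULL) / COUNT(*)  AS email_null_rate,
        COUNT(DISTINCT CUSTOMER_ID)         AS distinct_customers
    FROM RAW.CRM.CUSTOMERS
""")
row_count, email_null_rate, distinct_customers = cur.fetchone()
print(f"rows={row_count}, email null rate={email_null_rate:.2%}, distinct ids={distinct_customers}")

# Simple validation rule: CUSTOMER_ID should be unique in the source extract.
if distinct_customers != row_count:
    raise ValueError("Duplicate CUSTOMER_ID values found during profiling")

conn.close()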

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Delhi / NCR

Hybrid

Naukri logo

- 8+ years of experience in data engineering or a related field.
- Strong expertise in Snowflake, including schema design, performance tuning, and security.
- Proficiency in Python for data manipulation and automation.
- Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.).
- Experience with DBT for data transformation and documentation.
- Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect).
- Strong SQL skills and experience with large-scale data sets.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

14 - 20 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Naukri logo

Role: ETL Testing
Notice Period: Immediate joiners
Work Mode: Remote
Interested candidates can share their CV at Devika_P@trigent.com

Responsibilities:
- Develop and execute comprehensive test plans and test cases for data solutions, including data pipelines, ETL processes, and data warehouses.
- Perform data validation and verification to ensure data accuracy, completeness, and consistency.
- Identify, document, and track defects and issues, and work with development teams to resolve them.
- Collaborate with data engineers, data scientists, and other stakeholders to understand data requirements and ensure that testing covers all necessary scenarios.
- Automate data testing processes using appropriate tools and frameworks.
- Conduct performance testing to ensure data solutions can handle expected workloads.
- Participate in code reviews and provide feedback on data quality and testing practices.
- Continuously improve testing processes and methodologies to enhance the efficiency and effectiveness of data testing.

Requirements and Experience:
- Proven experience in data testing and quality engineering.
- Strong understanding of data engineering practices, including ETL processes, data pipelines, and data warehousing.
- Knowledge of SSIS and SSAS.
- Proficiency in SQL and experience with database management systems (e.g., MS SQL Server).
- Experience with data testing tools and frameworks (e.g., pytest, dbt).
- Familiarity with cloud data platforms (e.g., Snowflake, Azure Data Factory).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
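The posting names pytest as a data testing framework. A minimal sketch of reconciliation-style tests; the connection details, table names, and rules are hypothetical and only illustrate the pattern:

import os
import pytest
import snowflake.connector  # any DB-API driver works the same way


@pytest.fixture(scope="module")
def conn():
    # Hypothetical connection; in practice credentials come from a secrets store.
    connection = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TEST_WH",
    )
    yield connection
    connection.close()


def scalar(connection, sql):
    cur = connection.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]


def test_row_counts_match(conn):
    # The target fact table should hold exactly one row per distinct source order.
    src = scalar(conn, "SELECT COUNT(DISTINCT ORDER_ID) FROM RAW.SALES.ORDERS")
    tgt = scalar(conn, "SELECT COUNT(*) FROM MART.SALES.FCT_ORDERS")
    assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"


def test_no_null_business_keys(conn):
    nulls = scalar(conn, "SELECT COUNT(*) FROM MART.SALES.FCT_ORDERS WHERE ORDER_ID IS NULL")
    assert nulls == 0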

Posted 2 weeks ago

Apply

5.0 - 10.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

Hands-on experience with Snowflake and Python is a must. Hands-on experience with Apache Spark is a must. Hands-on experience with DBT is preferred. Experience with performance tuning of SQL queries, Spark jobs, and stored procedures. An understanding of E-R data models (conceptual, logical, and physical).

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities: Minimum 3 years of experience in developing application programs to implement ETL workflows by creating ETL jobs and data models in data marts using Snowflake, DBT, Unix, and SQL technologies. Redesign Control-M batch processing for the ETL job build to run efficiently in production. Study existing systems to evaluate effectiveness and develop new systems to improve efficiency and workflow.

Responsibilities: Perform requirements identification, conduct business program analysis, testing, and system enhancements while providing production support. The developer should have a good understanding of working in an Agile environment and of JIRA and SharePoint tools. Good written and verbal communication skills are a must, as the candidate is expected to work directly with the client counterpart.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: An intuitive individual with an ability to manage change and proven time management; proven interpersonal skills while contributing to team effort by accomplishing related results as needed; up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Preferred technical and professional experience: Develop triggers, functions, and stored procedures to support this effort. Assist with impact analysis of changing upstream processes to Data Warehouse and Reporting systems. Assist with design, testing, support, and debugging of new and existing ETL and reporting processes. Perform data profiling and analysis using a variety of tools. Troubleshoot and support production processes. Create and maintain documentation.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

50 - 60 Lacs

Bengaluru

Work from Office

Naukri logo

Staff Data Engineer
Experience: 3-5 years
Salary: INR 50-60 Lacs per annum
Preferred Notice Period: Within 30 days
Shift: 4:00 PM to 1:00 AM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: ClickHouse, DuckDB, AWS, Python, SQL
Good-to-have skills: DBT, Iceberg, Kestra, Parquet, SQLGlot

Rill Data (one of Uplers' clients) is looking for a Staff Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview: Rill is the world's fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that's easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting.

Job Summary: Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale.

Key Responsibilities:
- Collaborate with enterprise customers to optimize data models for performance and cost efficiency.
- Work with the platform engineering team to enhance and refine our incremental ingestion architectures.
- Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements.

Qualifications (required):
- Fluency in SQL and competency in Python.
- Bachelor's degree in a STEM discipline or equivalent industry experience.
- 3+ years of experience in a data engineering or related role.
- Familiarity with major cloud environments (AWS, Google Cloud, Azure).

Benefits: Competitive salary, health insurance, flexible vacation policy.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Rill is an operational BI tool that provides fast dashboards that your team will actually use. Data teams build fewer, more flexible dashboards for business users, while business users make faster decisions and perform root cause analysis with fewer ad hoc requests. Rill's unique architecture combines a last-mile ETL service, an in-memory database, and operational dashboards in a single solution. Our customers are leading media and advertising platforms, including Comcast's FreeWheel, tvScientific, AT&T's DishTV, and more.
About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
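Rill's stack is built around in-process engines such as DuckDB, which the duckdb Python package exposes directly. A small sketch (the Parquet path and table name are hypothetical) of querying files in place and materializing an aggregate that a dashboard layer could read:

import duckdb

con = duckdb.connect("rill_demo.duckdb")  # hypothetical local database file

# DuckDB queries Parquet files in place; no separate load step is required.
con.execute("""
    CREATE OR REPLACE TABLE daily_events AS
    SELECT
        CAST(event_time AS DATE) AS event_date,
        count(*)                 AS events,
        count(DISTINCT user_id)  AS users
    FROM read_parquet('data/events/*.parquet')
    GROUP BY 1
""")

# Inspect the aggregate that a dashboard layer would read.
print(con.sql("SELECT * FROM daily_events ORDER BY event_date DESC LIMIT 7"))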

Posted 2 weeks ago

Apply

5.0 - 10.0 years

3 - 11 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-have skills: Data Build Tool (DBT)
Good-to-have skills: NA
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements, and you will play a crucial role in ensuring the successful implementation of applications. Your typical day will involve collaborating with stakeholders to gather requirements, designing application solutions, and ensuring that the applications meet the desired functionality and performance standards.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and develop application solutions based on business requirements.
- Collaborate with stakeholders to gather and analyze requirements.
- Create technical specifications and design documents.
- Ensure that the applications meet the desired functionality and performance standards.

Professional & Technical Skills:
- Must-have skills: proficiency in Microsoft Azure Data Services and Snowflake Data Warehouse.
- Strong understanding of cloud-based data warehousing concepts and architecture.
- Experience in designing and implementing data solutions using Microsoft Azure Data Services.
- Hands-on experience with Snowflake Data Warehouse, the StreamSets tool, and the DBT tool; of Azure, Snowflake, StreamSets, and DBT, the candidate should have strong development skills in at least three areas.
- Solid grasp of technical architecture and database design principles.
- Experience with ETL (Extract, Transform, Load) processes and tools.

Additional Information: The candidate should have a minimum of 5 years of experience with the Data Build Tool (DBT). This position is based in Pune. 15 years of full-time education is required.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Role Responsibilities:
- Design and deploy cloud-integrated application solutions
- Build and manage cloud infrastructure environments
- Conduct performance and security testing
- Collaborate with cross-functional teams for delivery

Job Requirements:
- Minimum 3 years of DBT experience
- Strong knowledge of cloud platforms and data tools
- Skilled in architecture best practices
- Bachelor's degree with 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Role Responsibilities:
- Design and deploy cloud-integrated application solutions
- Build and manage cloud infrastructure environments
- Conduct performance and security testing
- Collaborate with cross-functional teams for delivery

Job Requirements:
- Minimum 3 years of DBT experience
- Strong knowledge of cloud platforms and data tools
- Skilled in architecture best practices
- Bachelor's degree with 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Chennai, Tamil Nadu, India

On-site

Foundit logo

Role Responsibilities:
- Design and deploy cloud-integrated application solutions
- Build and manage cloud infrastructure environments
- Conduct performance and security testing
- Collaborate with cross-functional teams for delivery

Job Requirements:
- Minimum 3 years of DBT experience
- Strong knowledge of cloud platforms and data tools
- Skilled in architecture best practices
- Bachelor's degree with 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Navi Mumbai, Maharashtra, India

On-site

Foundit logo

Role Responsibilities:
- Design and deploy cloud-integrated application solutions
- Build and manage cloud infrastructure environments
- Conduct performance and security testing
- Collaborate with cross-functional teams for delivery

Job Requirements:
- Minimum 3 years of DBT experience
- Strong knowledge of cloud platforms and data tools
- Skilled in architecture best practices
- Bachelor's degree with 15 years of full-time education

Posted 2 weeks ago

Apply

11.0 - 21.0 years

11 - 21 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Oversees and designs the information architecture for the data warehouse, including all information structures (staging area, data warehouse, data marts, operational data stores); oversees standardization of data definitions and the development of physical and logical modelling.

- Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modelling, star and snowflake schema design, reference DW architectures, ETL architecture, ETL (Extract, Transform, Load), data analysis, data conversion and transformation, database design, data warehouse optimization, data mart development, and enterprise data warehouse maintenance and support.
- Significant experience working as a Data Architect, with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical, and dimensional models).
- Maintains in-depth and current knowledge of cloud architecture, data lakes, data warehouses, BI platforms, analytics model platforms, and ETL tools; cloud knowledge, especially AWS, is necessary.
- Well-versed in best practices around Data Governance, Data Stewardship, and overall Data Quality initiatives.
- Inventories existing data designs, including data flows and systems; designs data models that integrate new and existing environments; conducts architecture meetings with team leads and solution architects to communicate complex ideas, issues, and system processes, along with architecture discussions on their current projects.
- Designs the data warehouse and provides guidance to the team during implementation using Snowflake SnowSQL.
- Good hands-on experience converting Source Independent Load, Post Load Process, Stored Procedure, and SQL workloads to Snowflake.
- Strong understanding of ELT/ETL and integration concepts and design best practices.
- Experience in performance tuning of Snowflake pipelines and the ability to troubleshoot issues quickly.
- 1 year of work experience in DBT.
- Data modeling and data integration; advanced SQL skills for analysis and standardizing queries.

Mandatory skillset: Snowflake, DBT, and Data Architecture design experience in a Data Warehouse.

Posted 2 weeks ago

Apply

11.0 - 21.0 years

11 - 21 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Oversees and designs the information architecture for the data warehouse, including all information structures (staging area, data warehouse, data marts, operational data stores); oversees standardization of data definitions and the development of physical and logical modelling.

- Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modelling, star and snowflake schema design, reference DW architectures, ETL architecture, ETL (Extract, Transform, Load), data analysis, data conversion and transformation, database design, data warehouse optimization, data mart development, and enterprise data warehouse maintenance and support.
- Significant experience working as a Data Architect, with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical, and dimensional models).
- Maintains in-depth and current knowledge of cloud architecture, data lakes, data warehouses, BI platforms, analytics model platforms, and ETL tools; cloud knowledge, especially AWS, is necessary.
- Well-versed in best practices around Data Governance, Data Stewardship, and overall Data Quality initiatives.
- Inventories existing data designs, including data flows and systems; designs data models that integrate new and existing environments; conducts architecture meetings with team leads and solution architects to communicate complex ideas, issues, and system processes, along with architecture discussions on their current projects.
- Designs the data warehouse and provides guidance to the team during implementation using Snowflake SnowSQL.
- Good hands-on experience converting Source Independent Load, Post Load Process, Stored Procedure, and SQL workloads to Snowflake.
- Strong understanding of ELT/ETL and integration concepts and design best practices.
- Experience in performance tuning of Snowflake pipelines and the ability to troubleshoot issues quickly.
- 1 year of work experience in DBT.
- Data modeling and data integration; advanced SQL skills for analysis and standardizing queries.

Mandatory skillset: Snowflake, DBT, and Data Architecture design experience in a Data Warehouse.

Posted 2 weeks ago

Apply

11.0 - 21.0 years

25 - 37 Lacs

Noida, Pune, Bengaluru

Hybrid

Naukri logo

Oversees and designs the information architecture for the data warehouse, including all information structures (staging area, data warehouse, data marts, operational data stores); oversees standardization of data definitions and the development of physical and logical modelling.

- Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modelling, star and snowflake schema design, reference DW architectures, ETL architecture, ETL (Extract, Transform, Load), data analysis, data conversion and transformation, database design, data warehouse optimization, data mart development, and enterprise data warehouse maintenance and support.
- Significant experience working as a Data Architect, with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical, and dimensional models).
- Maintains in-depth and current knowledge of cloud architecture, data lakes, data warehouses, BI platforms, analytics model platforms, and ETL tools; cloud knowledge, especially AWS, is necessary.
- Well-versed in best practices around Data Governance, Data Stewardship, and overall Data Quality initiatives.
- Inventories existing data designs, including data flows and systems; designs data models that integrate new and existing environments; conducts architecture meetings with team leads and solution architects to communicate complex ideas, issues, and system processes, along with architecture discussions on their current projects.
- Designs the data warehouse and provides guidance to the team during implementation using Snowflake SnowSQL.
- Good hands-on experience converting Source Independent Load, Post Load Process, Stored Procedure, and SQL workloads to Snowflake.
- Strong understanding of ELT/ETL and integration concepts and design best practices.
- Experience in performance tuning of Snowflake pipelines and the ability to troubleshoot issues quickly.
- 1 year of work experience in DBT.
- Data modeling and data integration; advanced SQL skills for analysis and standardizing queries.

Mandatory skillset: Snowflake, DBT, and Data Architecture design experience in a Data Warehouse.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

12 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Naukri logo

The requirement is as follows:
- Design and implement scalable data storage solutions using Snowflake.
- Write SQL queries against Snowflake and develop scripts to extract, load, and transform data.
- Write, optimize, and troubleshoot complex SQL queries within Snowflake.
- Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Cloning, the Optimizer, Metadata Manager, data sharing, stored procedures, and UDFs.
- Develop and maintain ETL processes using Informatica PowerCenter.
- Integrate Snowflake with various data sources and third-party applications.
- Experience in data lineage analysis, data profiling, ETL design and development, unit testing, production batch support, and UAT support.
- SQL performance tuning, root-causing failures, bifurcating them into distinct technical issues, and resolving them.
- In-depth understanding of Data Warehouse and ETL concepts and Data Modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Good working knowledge of an ETL tool (preferably Informatica PowerCenter or DBT).
- Proficiency in SQL.
- Experience in client-facing projects.
- Experience with Snowflake best practices.
- Experience working with Unix shell scripting.
- Working experience in Python is good to have.

Expected skillset:
- 6-8 years of IT experience.
- Minimum 4+ years of experience designing and implementing a fully operational solution on Snowflake Data Warehouse.
- Proven experience as a Snowflake and Informatica developer.
- Strong expertise in Snowflake architecture, design, and implementation.
- Proficient in SQL, ETL tools, and data modelling concepts.
- Excellent leadership, communication, and problem-solving skills.
- Certifications in Snowflake and relevant cloud platforms are desirable.
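The Snowflake utilities called out above (Streams, Tasks) are typically combined for incremental loads. A rough sketch of that pattern issued through the Snowflake Python connector; all object names, the schedule, and the warehouse are hypothetical:

import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# 1. A stream captures row-level changes (CDC) on the raw table.
cur.execute("CREATE STREAM IF NOT EXISTS ORDERS_STREAM ON TABLE RAW.SALES.ORDERS")

# 2. A task periodically merges those changes into the curated table,
#    but only when the stream actually has new data.
cur.execute("""
    CREATE OR REPLACE TASK MERGE_ORDERS_TASK
        WAREHOUSE = ETL_WH
        SCHEDULE  = '15 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
        MERGE INTO ANALYTICS.STAGING.ORDERS AS t
        USING ORDERS_STREAM AS s
            ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS, t.UPDATED_AT = s.UPDATED_AT
        WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, UPDATED_AT)
            VALUES (s.ORDER_ID, s.STATUS, s.UPDATED_AT)
""")

# 3. Tasks are created in a suspended state and must be resumed explicitly.
cur.execute("ALTER TASK MERGE_ORDERS_TASK RESUME")
conn.close()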

Posted 2 weeks ago

Apply

6.0 - 8.0 years

15 - 25 Lacs

Noida, Greater Noida, Delhi / NCR

Hybrid

Naukri logo

Role & Responsibilities

Core skills: SQL, dbt, Python, Snowflake, Data Quality, and Data Modelling. Good to have: Snowpipe, Fivetran.
Mandatory skills: Data Engineering, Data Modeling, SQL, dbt, Python, Data Quality, Snowflake.
Desirable skills: Data Engineering, Data Modeling, SQL, dbt, Python, Data Quality, Snowflake, Snowpipe, Fivetran.

Responsibilities:
1. Ensure reliable and scalable data pipelines to support healthcare operations.
2. Maintain data availability with proactive exception handling and recovery mechanisms.
3. Perform data quality checks to ensure accuracy, completeness, and consistency.
4. Detect and handle alerts early to prevent data discrepancies and processing failures.
5. Develop and optimize data models for efficient storage, retrieval, and analytics.
6. Prepare and structure data for reporting, compliance, and decision-making.
7. Work with Snowflake to manage data warehousing and performance tuning.
8. Implement and optimize DBT workflows for data transformation and governance.
9. Leverage SQL and Python for data processing, automation, and validation.
10. Experience with Snowpipe and Fivetran is a plus for automating data ingestion.

Skills to be evaluated on: Data Engineering, Data Modeling, Python, Data Quality, SQL, dbt, Snowflake.
Years of experience: 6 to 10 years.
Education/Qualification: B.Tech.
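Responsibilities 2 and 4 above emphasize proactive exception handling and early alerting. A generic, tool-agnostic sketch of a retry-and-alert wrapper for a pipeline step; the alert hook and the step name are placeholders:

import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def send_alert(message: str) -> None:
    # Placeholder: in practice this would post to Slack, PagerDuty, email, etc.
    log.error("ALERT: %s", message)


def run_with_retries(step: Callable[[], None], name: str, retries: int = 3, backoff_s: float = 30.0) -> None:
    """Run a pipeline step, retrying transient failures and alerting if it still fails."""
    for attempt in range(1, retries + 1):
        try:
            step()
            log.info("step %s succeeded on attempt %d", name, attempt)
            return
        except Exception as exc:
            log.warning("step %s failed on attempt %d: %s", name, attempt, exc)
            if attempt == retries:
                send_alert(f"{name} failed after {retries} attempts: {exc}")
                raise
            time.sleep(backoff_s * attempt)


if __name__ == "__main__":
    # Hypothetical step: replace the lambda with a real load/transform call.
    run_with_retries(lambda: None, name="load_claims_data")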

Posted 2 weeks ago

Apply

12.0 - 20.0 years

22 - 37 Lacs

Bengaluru

Hybrid

Naukri logo

12+ years of experience in Data Architecture. Strong in Azure Data Services and Databricks, including Delta Lake and Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, Data Lakes, NoSQL, and OLTP. Notice period: Immediate. Contact: sachin@assertivebs.com

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and .

Inviting applications for the role of Consultant - Sr. Data Engineer (DBT + Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
- Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities.
- Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy.
- Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents.
- Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows.
- Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance.
- Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis.
- Develop and maintain data documentation, best practices, and data governance protocols.
- Ensure data security, privacy, and compliance with organizational and regulatory guidelines.

Responsibilities:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Experience in data engineering, with experience working with Snowflake.
- Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI.
- Strong proficiency in SQL, Python, and data modeling.
- Experience with data integration tools (e.g., Matillion, Talend, Informatica).
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving skills, with a focus on data quality and performance optimization.
- Strong communication skills and the ability to work effectively in a cross-functional team.
- Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations.
- Understanding of data lineage and metadata management concepts, and the ability to track and document data transformations using DBT's lineage capabilities.
- Understanding of software engineering best practices and the ability to apply these principles to DBT development, including version control, code reviews, and automated testing.
- Should have experience building data ingestion pipelines.
- Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, the Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
- Should have good experience implementing CDC or SCD Type 2.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have experience with repository tools like GitHub/GitLab or Azure Repos.

Qualifications/Minimum qualifications: B.E. or Master's in Computer Science, Information Technology, Computer Engineering, or any equivalent degree with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets.

Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts.

Why join Genpact:
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
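The posting asks for hands-on SCD Type 2 experience. One common two-statement pattern in Snowflake SQL, shown here via the Python connector with hypothetical dimension and staging tables: the MERGE expires current rows whose tracked attributes changed, and the INSERT then adds the new current versions (covering both changed and brand-new keys):

import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="DIM",
)
cur = conn.cursor()

# Step 1: expire the current dimension row when the incoming attributes differ.
cur.execute("""
    MERGE INTO DIM.CUSTOMER_DIM AS d
    USING STG.CUSTOMER_UPDATES AS s
        ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHEN MATCHED AND (d.ADDRESS <> s.ADDRESS OR d.SEGMENT <> s.SEGMENT) THEN
        UPDATE SET d.IS_CURRENT = FALSE, d.VALID_TO = CURRENT_TIMESTAMP()
""")

# Step 2: insert a new current version for customers with no current row
# (brand-new customers, plus those just expired in step 1).
cur.execute("""
    INSERT INTO DIM.CUSTOMER_DIM
        (CUSTOMER_ID, ADDRESS, SEGMENT, VALID_FROM, VALID_TO, IS_CURRENT)
    SELECT s.CUSTOMER_ID, s.ADDRESS, s.SEGMENT, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM STG.CUSTOMER_UPDATES s
    LEFT JOIN DIM.CUSTOMER_DIM d
        ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHERE d.CUSTOMER_ID IS NULL
""")
conn.close()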

Posted 2 weeks ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Naukri logo

Greetings from tsworks Technologies India Pvt. We are hiring a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV with mohan.kumar@tsworks.io.

Position: Senior Data Engineer
Experience: 9+ Years
Location: Bengaluru, India (Hybrid)

Mandatory Required Qualifications:
- Strong proficiency in AWS data services such as S3 buckets, Glue and Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
- Strong hands-on experience building data lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, and data exchange.
- Hands-on experience using scheduling tools such as Apache Airflow, DBT, and AWS Step Functions, and data governance products such as Collibra.
- Expertise in DevOps and CI/CD implementation.
- Excellent communication skills.

In this role, you will:
- Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using PySpark (AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools.
- Work hands-on with data lake solutions such as Apache Hudi, Delta Lake, or Iceberg.
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.

Skills & Knowledge:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 9+ years of experience in Information Technology, designing, developing, and executing solutions.
- 4+ years of hands-on experience designing and executing data solutions on AWS and Snowflake cloud platforms as a Data Engineer.
- Strong proficiency in AWS services such as Glue, EMR, Athena, and Databricks, with file formats such as Parquet and Avro.
- Hands-on experience in data modelling and in batch and real-time pipelines using Python, Java, or JavaScript, and experience working with RESTful APIs.
- Hands-on experience handling real-time data streams from Kafka or Kinesis.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Knowledge of data quality, governance, and security best practices.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Able to articulate, create, and maintain technical and non-technical documentation.
- AWS and Snowflake certifications are preferred.
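This role leans on PySpark (Glue/EMR/Databricks) for transformations over S3 data. A minimal batch-job sketch with hypothetical bucket paths: it filters raw Parquet, derives a few columns, and writes a date-partitioned output that Athena or Snowflake external tables can prune:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Hypothetical input/output locations.
RAW_PATH = "s3://example-raw-bucket/orders/"
CURATED_PATH = "s3://example-curated-bucket/orders_daily/"

orders = spark.read.parquet(RAW_PATH)

curated = (
    orders
    .filter(F.col("status") != "CANCELLED")
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Partition by date so downstream engines can prune partitions at query time.
(
    curated
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet(CURATED_PATH)
)

spark.stop()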

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Naukri logo

Position: Experienced Data Engineer

We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.

Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
- Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

, India

On-site

Foundit logo

At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted, and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop, and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.

The Position: In Roche Informatics, we build on Roche's 125-year history as one of the world's largest biotech companies, globally recognized for providing transformative innovative solutions across major disease areas. We combine human capabilities with cutting-edge technological innovations to do now what our patients need next. Our commitment to our patients' needs motivates us to deliver technology that evolves the practice of medicine. Be part of our inclusive team at Roche Informatics, where we're driven by a shared passion for technological novelties and optimal IT solutions.

About the position: We are looking for a Data Engineer who will work closely with multi-disciplinary and multi-cultural teams to build structured, high-quality data solutions, and who may lead technical squads. These solutions will be leveraged across Enterprise, Pharma, and Diagnostics solutions to help our teams fulfill our mission: to do now what patients need next. This position requires hands-on expertise in ETL pipeline development and data engineering. You should also be able to provide direction and guidance to developers, oversee development and unit testing, and document the developed solution. Building strong customer relationships for ongoing business is also a key aspect of this role. To succeed in this position, you should have experience with cloud-based data solution architectures, the software development life cycle (including both Agile and waterfall methodologies), data engineering and ETL tools/platforms, and data modeling practices.

Your key responsibilities:
- Building and optimizing ETL data pipelines to support data analytics.
- Developing and implementing data integrations with other systems and platforms.
- Maintaining documentation for data pipelines and related processes.
- Logical and physical modeling of datasets and applications.
- Making Roche data assets accessible and findable across the organization.
- Exploring new ways of building, processing, and analyzing data in order to deliver insights to our business partners.
- Continuously refining data quality with testing, tooling, and performance evaluation.
- Working with business and functional stakeholders to understand data requirements and downstream analytics needs.
- Partnering with the business to ensure appropriate integration of functions to meet goals, and identifying and defining necessary system enhancements to deploy new products and process improvements.
- Fostering a data-driven culture throughout the team and leading data engineering projects that will have an impact throughout the organization.
- Working with data and analytics experts to strive for greater functionality in our data systems and products, and helping to grow our data team with exceptional engineers.

Your qualifications and experience:
- Education in related fields (Computer Science, Computer Engineering, Mathematical Engineering, Information Systems) or job experience, preferably with multiple data engineering technologies.
- 4+ years of experience with ETL development, data engineering, and data quality assurance.
- Good experience with Snowflake and its features; hands-on experience as a Data Engineer in cloud data solutions using Snowflake.
- Experience working with cloud platform services (AWS/Azure/GCP).
- Experience with ETL/ELT technologies like Talend, DBT, or other ETL platforms.
- Experience preparing and reviewing new data flow patterns.
- Excellent Python skills.
- Strong RDBMS concepts and SQL development skills.
- Strong focus on data pipeline automation.
- Exposure to quality assurance and data quality activities is an added advantage.
- DevOps/DataOps experience (data operations especially preferred).
- Readiness to work with multiple tech domains and streams.
- Passionate about new technologies and experimentation.
- Experience with Immuta and Monte Carlo is a plus.

What you get:
- A good and stable working environment with an attractive compensation and rewards package (according to local regulations).
- Annual bonus payment based on performance.
- Access to various internal and external training platforms (e.g., LinkedIn Learning).
- Experienced and professional colleagues and a workplace that supports innovation.
- Multiple savings plans with employer match.
- The company's emphasis on employees' wellness and work-life balance (e.g., generous vacation days and OneRoche Wellness Days).
- Workplace flexibility policy.
- State-of-the-art working environment and facilities.
- And many more that the Talent Acquisition Partner will be happy to talk about!

Who we are: A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future, together. Roche is an Equal Opportunity Employer.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Role & responsibilities:
- Design, develop, and optimize scalable data pipelines for ETL/ELT processes.
- Develop and maintain Python-based data processing scripts and automation tools.
- Write and optimize complex SQL queries (preferably in Snowflake) for data transformation and analytics.
- Experience with Jenkins or other CI/CD tools.
- Experience developing with Snowflake as the data platform.
- Experience with ETL/ELT tools (preferably Fivetran, dbt).
- Implement version control best practices using Git or other tools to manage code changes.
- Collaborate with cross-functional teams (analysts, product managers, and engineers) to understand business needs and translate them into technical data solutions.
- Ensure data integrity, security, and governance across multiple data sources.
- Optimize query performance and database architecture for efficiency and scalability.
- Lead troubleshooting and debugging efforts for data-related issues.
- Document data workflows, architectures, and best practices to ensure maintainability and knowledge sharing.

Preferred candidate profile:
- 5+ years of experience in Data Engineering, Software Engineering, or a related field.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related discipline.
- High proficiency in SQL (preferably Snowflake) for data modeling, performance tuning, and optimization.
- Strong expertise in Python for data processing and automation.
- Experience with Git or other version control tools in a collaborative development environment.
- Strong communication skills and the ability to collaborate with cross-functional teams for requirements gathering and solution design.
- Experience working with large-scale, distributed data systems and cloud data warehouses.

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies