
405 Dbt Jobs - Page 10

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 6.0 years

2 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts using Unix, Python, etc. for Extract, Transform, and Load (ETL) processes. In-depth understanding of Data Warehouse/ODS concepts, ETL modelling principles, and experience in Data Warehousing - OLTP, OLAP, Dimensions, Facts, and Data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML Shred and installation. Familiarity with any ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to effectively collaborate in a cross-functional team environment. Good to have exposure to the AWS/Azure data ecosystem.

Posted 1 month ago

Apply

5.0 - 6.0 years

3 - 10 Lacs

Hyderabad, Telangana, India

On-site

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts using Unix, Python, etc. for Extract, Transform, and Load (ETL) processes. In-depth understanding of Data Warehouse/ODS concepts, ETL modelling principles, and experience in Data Warehousing - OLTP, OLAP, Dimensions, Facts, and Data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML Shred and installation. Familiarity with any ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to effectively collaborate in a cross-functional team environment. Good to have exposure to the AWS/Azure data ecosystem.

Posted 1 month ago

Apply

5.0 - 6.0 years

3 - 13 Lacs

Delhi, India

On-site

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts using Unix, Python, etc. for Extract, Transform, and Load (ETL) processes. In-depth understanding of Data Warehouse/ODS concepts, ETL modelling principles, and experience in Data Warehousing - OLTP, OLAP, Dimensions, Facts, and Data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML Shred and installation. Familiarity with any ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to effectively collaborate in a cross-functional team environment. Good to have exposure to the AWS/Azure data ecosystem.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking a highly skilled Senior Data Engineer to join our dynamic team in Bangalore. You will be responsible for designing, developing, and maintaining scalable data ingestion frameworks and ELT pipelines. The ideal candidate will have deep technical expertise in cloud platforms (especially AWS), data architecture, and orchestration tools like DBT, Apache Airflow, and Prefect.

Key Responsibilities: Design, develop, and maintain scalable data ingestion frameworks. Build and manage ELT pipelines using tools such as DBT, Apache Airflow, and Prefect. Work with modern cloud data warehouses like Snowflake, Redshift, or Databricks. Integrate data pipelines with AWS services like S3, Lambda, Step Functions, and Glue. Utilize strong SQL and scripting skills for data manipulation and automation. Implement CI/CD practices for data pipelines. Ensure data integrity, quality, and performance. Collaborate with cross-functional teams to understand data requirements.

Required Skills: Expertise in ELT pipelines and data ingestion frameworks. Strong knowledge of DBT, Apache Airflow, and/or Prefect. Deep technical expertise in AWS services. Experience with cloud data warehouses (e.g., Snowflake, Redshift, Databricks). Proficiency in SQL and scripting. Experience with CI/CD practices. Knowledge of data systems in the manufacturing industry is a plus. Strong problem-solving and communication skills.
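Listings like this one revolve around orchestrating dbt runs from Airflow alongside AWS ingestion. As a rough, generic sketch (not tied to any particular employer's stack), the following Airflow 2.x DAG stages a file from S3 and then runs dbt; the bucket names, object keys, and dbt project path are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: stage a raw file from S3, then run dbt models.
# Assumptions: Airflow 2.x, boto3 and the dbt CLI available on the worker;
# all bucket/key/path names below are hypothetical placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def stage_raw_file(**_context):
    """Copy a raw extract from a landing bucket to a staging prefix."""
    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket="example-staging-bucket",  # hypothetical
        CopySource={"Bucket": "example-landing-bucket", "Key": "orders/latest.json"},
        Key="orders/staged/latest.json",
    )


with DAG(
    dag_id="elt_orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    stage = PythonOperator(task_id="stage_raw_file", python_callable=stage_raw_file)

    # Run dbt transformations once the raw data has landed.
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="cd /opt/dbt/project && dbt run --select staging+",
    )

    stage >> run_dbt
```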

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a Snowflake Developer with deep expertise in Snowflake and DBT or SQL to help us build and scale our modern data platform.

Key Responsibilities: Design and build scalable ELT pipelines in Snowflake using DBT/SQL. Develop efficient, well-tested DBT models (staging, intermediate, and marts layers). Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy. Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency. Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications: 5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake. Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management. Strong understanding of ELT patterns and modern data stack principles. Advanced SQL skills and experience with performance tuning in Snowflake.
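For context on the dbt modelling work such roles describe, here is a minimal sketch of a dbt model. The listing refers to SQL/Jinja models; to keep every example on this page in one language, this sketch uses dbt's Python model flavour (supported on Snowflake via Snowpark in dbt 1.3+). The upstream model name stg_orders and the column names are hypothetical.

```python
# Sketch of a dbt Python model (dbt 1.3+ on Snowflake/Snowpark).
# "stg_orders" and the column names are hypothetical placeholders.

def model(dbt, session):
    # Materialize this model as a table in the configured warehouse.
    dbt.config(materialized="table")

    # ref() resolves the dependency graph, so dbt builds stg_orders first.
    orders = dbt.ref("stg_orders")

    # Simple transformation: keep completed orders and add a load timestamp.
    import snowflake.snowpark.functions as F

    return (
        orders.filter(F.col("STATUS") == "completed")
              .with_column("LOADED_AT", F.current_timestamp())
    )
```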

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 45 Lacs

Pune

Hybrid

What You'll Do

The Global Analytics & Insights (GAI) team is seeking a Data & Analytics Engineering Manager to lead our team in designing, developing, and maintaining data pipelines and analytics infrastructure. As a Data & Analytics Engineering Manager, you will play a pivotal role in empowering a team of engineers to build and enhance analytics applications and a modern data platform using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will become an expert in Avalara's financial, marketing, sales, and operations data. The ideal candidate will have deep SQL experience, an understanding of modern data stacks and technology, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. This position will report to a Senior Manager.

Responsibilities: Mentor a team of data engineers, providing guidance and support to ensure a high level of quality and career growth. Lead a team of data engineers in the development and maintenance of data pipelines, data modelling, code reviews, and data products. Collaborate with cross-functional teams to understand requirements and translate them into scalable data solutions. Drive innovation and continuous improvement within the data engineering team. Build maintainable and scalable processes and playbooks to ensure consistent delivery and quality across projects. Drive adoption of best practices in data engineering and data modelling. Be the visible lead of the team: coordinate communication, releases, and status for various stakeholders.

Must Have: Bachelor's degree in Computer Science, Engineering, or a related field. 10+ years of experience in the data engineering field, with deep SQL knowledge. 2+ years of management experience, including direct technical reports. 5+ years of experience with data warehousing concepts and technologies. 4+ years of working with Git, and demonstrated experience using these tools to facilitate the growth of engineers. 4+ years working with Snowflake. 3+ years working with dbt (dbt Core preferred).

Good to have: Snowflake, dbt, or AWS certifications. 3+ years working with Infrastructure as Code, preferably Terraform. 2+ years working with CI/CD, and demonstrated ability to build and operate pipelines. Experience and understanding of Snowflake administration and security principles. Demonstrated experience with Airflow.

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Indore, Pune, Bengaluru

Hybrid

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues.

Required Candidate Profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow. Good understanding of the investment domain. Experience with dbt, cloud platforms (AWS, Azure), and DevOps.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Associate - Sr. Data Engineer (DBT + Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description: Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities. Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy. Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents. Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows. Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance. Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis. Develop and maintain data documentation, best practices, and data governance protocols. Ensure data security, privacy, and compliance with organizational and regulatory guidelines.

Responsibilities: Bachelor's degree in Computer Science, Data Engineering, or a related field. Experience in data engineering, including experience working with Snowflake. Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI. Strong proficiency in SQL, Python, and data modeling. Experience with data integration tools (e.g., Matillion, Talend, Informatica). Knowledge of cloud platforms such as AWS, Azure, or GCP. Excellent problem-solving skills, with a focus on data quality and performance optimization. Strong communication skills and the ability to work effectively in a cross-functional team. Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations. Understanding of data lineage and metadata management concepts, and the ability to track and document data transformations using DBT's lineage capabilities. Understanding of software engineering best practices and the ability to apply these principles to DBT development, including version control, code reviews, and automated testing. Should have experience building data ingestion pipelines. Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Should have good experience implementing CDC or SCD Type 2. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience with repository tools like GitHub/GitLab or Azure Repos.

Qualifications/Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or an equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skill sets.

Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts.

Why join Genpact: Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
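The CDC/SCD Type 2 requirement mentioned above can be illustrated with a small sketch against Snowflake using snowflake-connector-python. All table, column, and connection names are hypothetical, and dbt snapshots are a common alternative to hand-written statements like these.

```python
# Rough sketch of an SCD Type 2 load against Snowflake using the official
# snowflake-connector-python package. DIM_CUSTOMER, STG_CUSTOMER, and the
# connection parameters are hypothetical placeholders.
import snowflake.connector

# Step 1: close out the currently-active dimension rows whose attributes changed.
EXPIRE_CHANGED_ROWS = """
    MERGE INTO DIM_CUSTOMER d
    USING STG_CUSTOMER s
      ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHEN MATCHED AND (d.EMAIL <> s.EMAIL OR d.SEGMENT <> s.SEGMENT) THEN
      UPDATE SET d.IS_CURRENT = FALSE, d.VALID_TO = CURRENT_TIMESTAMP()
"""

# Step 2: insert new current versions. Must run after step 1, so that changed
# customers no longer have a current row and fall into the NULL branch below.
INSERT_NEW_VERSIONS = """
    INSERT INTO DIM_CUSTOMER (CUSTOMER_ID, EMAIL, SEGMENT, VALID_FROM, VALID_TO, IS_CURRENT)
    SELECT s.CUSTOMER_ID, s.EMAIL, s.SEGMENT, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM STG_CUSTOMER s
    LEFT JOIN DIM_CUSTOMER d
      ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHERE d.CUSTOMER_ID IS NULL
"""

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="MARTS",
)
try:
    cur = conn.cursor()
    cur.execute(EXPIRE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
finally:
    conn.close()
```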

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Lead Consultant - Sr. Data Engineer (DBT + Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities. Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy. Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents. Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows. Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance. Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis. Develop and maintain data documentation, best practices, and data governance protocols. Ensure data security, privacy, and compliance with organizational and regulatory guidelines.

Responsibilities: Bachelor's degree in Computer Science, Data Engineering, or a related field. Experience in data engineering, including experience working with Snowflake. Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI. Strong proficiency in SQL, Python, and data modeling. Experience with data integration tools (e.g., Matillion, Talend, Informatica). Knowledge of cloud platforms such as AWS, Azure, or GCP. Excellent problem-solving skills, with a focus on data quality and performance optimization. Strong communication skills and the ability to work effectively in a cross-functional team. Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations. Understanding of data lineage and metadata management concepts, and the ability to track and document data transformations using DBT's lineage capabilities. Understanding of software engineering best practices and the ability to apply these principles to DBT development, including version control, code reviews, and automated testing. Should have experience building data ingestion pipelines. Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Should have good experience implementing CDC or SCD Type 2. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience with repository tools like GitHub/GitLab or Azure Repos.

Qualifications/Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or an equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skill sets.

Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts.

Why join Genpact: Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 15 Lacs

Pune

Work from Office

Role & responsibilities: Design, build, and maintain scalable data pipelines using DBT and Airflow. Develop and optimize SQL queries and data models in Snowflake. Implement ETL/ELT workflows, ensuring data quality, performance, and reliability. Work with Python for data processing, automation, and integration tasks. Handle JSON data structures for data ingestion, transformation, and APIs. Leverage AWS services (e.g., S3, Lambda, Glue, Redshift) for cloud-based data solutions. Collaborate with data analysts, engineers, and business teams to deliver high-quality data products.

Preferred candidate profile: Strong expertise in SQL, Snowflake, and DBT for data modeling and transformation. Proficiency in Python and Airflow for workflow automation. Experience working with AWS cloud services. Ability to handle JSON data formats and integrate APIs. Strong problem-solving skills and experience in optimizing data pipelines.
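As a small illustration of the JSON-handling skills this listing asks for, the sketch below queries a semi-structured VARIANT column in Snowflake and flattens a nested array with LATERAL FLATTEN. The RAW_EVENTS table, its PAYLOAD column, and the connection details are hypothetical.

```python
# Sketch: reading semi-structured JSON stored in a Snowflake VARIANT column
# and flattening a nested array with LATERAL FLATTEN. RAW_EVENTS, PAYLOAD,
# and the connection parameters are hypothetical placeholders.
import snowflake.connector

FLATTEN_ITEMS = """
    SELECT
        e.PAYLOAD:order_id::STRING  AS order_id,
        i.value:sku::STRING         AS sku,
        i.value:quantity::NUMBER    AS quantity
    FROM RAW_EVENTS e,
         LATERAL FLATTEN(input => e.PAYLOAD:items) i
"""

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="RAW",
)
try:
    # Each JSON array element in PAYLOAD:items becomes its own output row.
    for order_id, sku, quantity in conn.cursor().execute(FLATTEN_ITEMS):
        print(order_id, sku, quantity)
finally:
    conn.close()
```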

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 13 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting. The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience working with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must. Additionally, experience with visualization tools such as Google Looker Studio, LookML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.

Location: Remote; Hyderabad, Ahmedabad, Pune, Chennai, Kolkata.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Senior Data Engineer to join our dynamic team in Bangalore. You will design, develop, and maintain scalable data ingestion frameworks and ELT pipelines using tools such as DBT, Apache Airflow, and Prefect. The ideal candidate will have deep technical expertise in cloud platforms (especially AWS), data architecture, and orchestration tools. You will work with modern cloud data warehouses like Snowflake, Redshift, or Databricks and integrate pipelines with AWS services such as S3, Lambda, Step Functions, and Glue. A strong background in SQL, scripting, and CI/CD practices is essential. Experience with data systems in manufacturing is a plus.

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.

Key Responsibilities: Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks. Develop and manage data models and data warehouse architecture within Snowflake. Create and maintain DBT models for transformation, lineage tracking, and documentation. Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation. Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements. Ensure data integrity, consistency, and governance across all stages of the data lifecycle. Monitor pipeline performance and implement optimization strategies for queries and storage. Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration.

Required Skills and Qualifications: 8+ years of experience in Data Engineering or related roles. Deep expertise in Snowflake: schema design, performance tuning, security, and access controls. Proficiency in Python, particularly for scripting, data transformation, and workflow automation. Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization). Proven experience with DBT for building modular, tested, and documented data pipelines. Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect. Advanced SQL skills with experience handling large and complex data sets. Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.

Preferred Qualifications: Experience implementing data quality checks and governance frameworks. Understanding of the modern data stack and CI/CD pipelines for data workflows. Contributions to data engineering best practices, open-source projects, or thought leadership.

Posted 1 month ago

Apply

7.0 - 20.0 years

10 - 40 Lacs

Hyderabad, Pune, Delhi / NCR

Work from Office

Roles and Responsibilities: Lead the development of data warehousing solutions using Snowflake, ensuring timely delivery and high-quality results. Collaborate with cross-functional teams to design, develop, test, and deploy ETL processes for large-scale data migration projects. Provide technical guidance and mentorship to junior team members on best practices in data modeling, query optimization, and performance tuning. Ensure compliance with industry standards and company policies regarding data security, privacy, and governance.

Job Requirements: 7-20 years of experience in the Data Warehousing/Business Intelligence domain with expertise in Snowflake technology. Strong understanding of building complex ETL processes using tools such as Informatica PowerCenter or similar technologies. Experience working with large datasets (terabytes) and the ability to optimize queries for improved performance. Proven track record of leading teams or managing multiple projects simultaneously.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Req ID: 325282

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Snowflake DBT Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Snowflake and Data Vault 2 (optional) Consultant: Extensive expertise in DBT, including macros, modeling, and automation techniques. Proficiency in SQL, Python, or other scripting languages for automation. Experience leveraging Snowflake for scalable data solutions. Familiarity with Data Vault 2.0 methodologies is an advantage. Strong capability in optimizing database performance and managing large datasets. Excellent problem-solving and analytical skills. Minimum of 3+ years of relevant experience, with a total of 5+ years of overall experience.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Snowflake professionals in the following areas:

Senior Snowflake Developer

Job description: Responsible for designing and implementing data pipelines, ETL processes, and data modeling in Snowflake. Responsible for translating business requirements into ELT pipelines using data replication tools and data transformation tools (such as DBT) or advanced SQL scripting (views, Snowflake stored procedures, UDFs). Deep understanding of Snowflake architecture and processing. Experience with performance tuning of the Snowflake data warehouse. Experience working with Snowflake functions, with hands-on experience in Snowflake utilities, stage and file upload features, Time Travel, and Fail-safe. Responsible for development, deployment, code reviews, and production support. Maintain and implement best practices for Snowflake infrastructure. Hands-on with complex SQL and parsing complex data sets.

Primary Skills: Must have 4 to 6 years in IT, 3+ years working as a Snowflake Developer, and 5+ years in Data Warehouse, ETL, and BI projects. Must have experience in at least one complex implementation of a Snowflake Data Warehouse and hands-on DBT experience. Expertise in Snowflake data modeling, ELT using Snowflake SQL or modern data replication tools, Snowflake stored procedures/UDFs/advanced SQL scripting, and standard Data Lake/Data Warehouse concepts. Expertise in Snowflake advanced concepts like setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and Time Travel. Expertise in deploying Snowflake features such as data sharing, events, and lakehouse patterns. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modeling techniques. Deep understanding of relational data stores, methods, and approaches (star and snowflake, dimensional modeling). Hands-on experience with DBT Core or DBT Cloud, including dev and prod deployment using CI/CD (Bitbucket), is a plus. Should be able to develop and maintain documentation of the data architecture, data flow, and data models of the data warehouse. Good communication skills. Python and API experience is a plus.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
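To make the Snowflake administration topics in this listing concrete (resource monitors, warehouse sizing, zero-copy clone), here is an illustrative sketch that issues the corresponding SQL through snowflake-connector-python. All object names are hypothetical and the statements assume a suitably privileged role.

```python
# Illustrative Snowflake administration sketch: resource monitor, warehouse
# sizing, and a zero-copy clone. Object and account names are hypothetical,
# and the statements assume a role with the required privileges.
import snowflake.connector

ADMIN_STATEMENTS = [
    # Cap monthly credit usage and suspend the warehouse when the quota is hit.
    """CREATE RESOURCE MONITOR IF NOT EXISTS transform_monitor
         WITH CREDIT_QUOTA = 100 FREQUENCY = MONTHLY
         START_TIMESTAMP = IMMEDIATELY
         TRIGGERS ON 100 PERCENT DO SUSPEND""",
    "ALTER WAREHOUSE TRANSFORM_WH SET RESOURCE_MONITOR = transform_monitor",
    # Right-size the warehouse and let it pause when idle.
    "ALTER WAREHOUSE TRANSFORM_WH SET WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 60",
    # Zero-copy clone of a database for a safe development sandbox.
    "CREATE DATABASE IF NOT EXISTS ANALYTICS_DEV CLONE ANALYTICS",
]

conn = snowflake.connector.connect(
    account="example_account", user="example_admin", password="...",
    role="ACCOUNTADMIN",
)
try:
    cur = conn.cursor()
    for stmt in ADMIN_STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```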

Posted 1 month ago

Apply

7.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Chennai

Work from Office

Proficient in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift) using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational/NoSQL) and data modeling techniques.

Required Candidate Profile: Looking for candidates with strong experience in data architecture. Potential companies: Tiger Analytics, Tredence, Quantiphi, the Data Engineering Group within Infosys/TCS/Cognizant, Deloitte Consulting.

Perks and benefits: 5 working days - Onsite.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 25 Lacs

Ahmedabad

Hybrid

Experience: 7+ years. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Hybrid (Ahmedabad). Must-have skills: Snowflake, dbt, Airflow.

Inferenz is looking for: Position: Senior Data Engineer (Snowflake + dbt + Airflow). Location: Pune, Ahmedabad. Required Experience: 5+ years. Preferred: Immediate joiners.

Job Overview: We are looking for a highly skilled Senior Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, this role is for you.

Key Responsibilities: Design, implement, and optimize data pipelines and workflows using Apache Airflow. Develop incremental and full-load strategies with monitoring, retries, and logging. Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage. Develop and maintain data warehouses in Snowflake. Ensure data quality, integrity, and reliability through validation frameworks and automated testing. Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization. Monitor job performance and resolve data pipeline issues proactively. Build and maintain data quality frameworks (null checks, type checks, threshold alerts). Partner with data analysts, scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications.

Qualifications: Snowflake (data modeling, performance tuning, access control, external tables, streams and tasks). Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling). dbt (Data Build Tool: modular SQL development, Jinja templating, testing, documentation). Proficiency in SQL, Spark, and Python. Experience building data pipelines on cloud platforms like AWS, GCP, or Azure. Strong knowledge of data warehousing concepts and ELT best practices. Familiarity with version control systems (e.g., Git) and CI/CD practices. Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments. Excellent problem-solving skills and the ability to work independently. Ability to work collaboratively in a team environment.

Skills: Snowflake, dbt, Airflow.
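One of the items above, incremental versus full-load strategies, can be sketched as a simple watermark-based load. Table names, the watermark column, and connection details below are hypothetical; retries and alerting would normally live in the orchestrator (Airflow) rather than in this script.

```python
# Sketch of a watermark-based incremental load into Snowflake, in contrast
# to a full reload. FCT_ORDERS, RAW.ORDERS, UPDATED_AT, and the connection
# parameters are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS",
)
try:
    cur = conn.cursor()

    # 1. Read the high-water mark already present in the target table.
    cur.execute(
        "SELECT COALESCE(MAX(UPDATED_AT), '1970-01-01'::TIMESTAMP) FROM MARTS.FCT_ORDERS"
    )
    watermark = cur.fetchone()[0]

    # 2. Load only rows newer than the watermark from the raw layer.
    cur.execute(
        """
        INSERT INTO MARTS.FCT_ORDERS
        SELECT * FROM RAW.ORDERS
        WHERE UPDATED_AT > %s
        """,
        (watermark,),
    )
    print(f"Loaded {cur.rowcount} new rows past watermark {watermark}")
finally:
    conn.close()
```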

Posted 1 month ago

Apply

5.0 - 10.0 years

19 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities: Snowflake, DBT, SQL, Python, PySpark, AWS.

Posted 1 month ago

Apply

6.0 - 8.0 years

12 - 19 Lacs

Pune, Chennai, Bengaluru

Hybrid

Experience: 6+ Years as Data Engineer Location: Anywhere in India (Flexible to work in Hybrid Mode) Job Description: • Extensive expertise in DBT, including macros, modeling, and automation techniques. • Proficiency in SQL, Python, or other scripting languages for automation. • Experience leveraging Snowflake for scalable data solutions. • Familiarity with Data Vault 2.0 methodologies is an advantage. • Strong capability in optimizing database performance and managing large datasets. • Excellent problem-solving and analytical skills. • Minimum of 3+ years of relevant experience, with a total of 6+ years of overall experience

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Lucknow

Remote

Job Title: Data Engineer (DBT & Airflow) Type: Contract (8 hrs/day) Experience: 6+ years Location: Remote/WFH Duration: 3 - 6 months ( Possibility of extension) Job Summary: We are seeking an experienced Data Engineer with strong expertise in DBT and Apache Airflow to join our team on a contract basis. The ideal candidate will have a proven track record of building scalable data pipelines, transforming raw datasets into analytics-ready models, and orchestrating workflows in a modern data stack. You will play a key role in designing, developing, and maintaining data infrastructure that supports business intelligence, analytics, and machine learning initiatives. Key Responsibilities: Design, build, and maintain robust data pipelines and workflows using Apache Airflow Develop and manage modular, testable, and well-documented SQL models using DBT Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements Implement and monitor data quality checks, alerts, and lineage tracking Work with cloud-based data warehouses such as Snowflake, BigQuery, or Redshift Optimize ETL/ELT processes for performance and scalability Participate in code reviews, documentation, and process improvement initiatives Required Qualifications: 6+ years of professional experience in data engineering or ETL development Strong hands-on experience with DBT (Data Build Tool) for data transformation Proven experience designing and managing DAGs using Apache Airflow Advanced proficiency in SQL and working with cloud data warehouses (Snowflake, BigQuery, Redshift, etc.) Solid programming skills in Python Experience with data modeling, data warehousing, and performance tuning Familiarity with version control systems (e.g., Git) and CI/CD practices Strong problem-solving skills and attention to detail
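A minimal sketch of the data quality checks and alerts this role mentions: run a few SQL assertions (null and duplicate counts) and fail loudly if any threshold is breached. Table and column names, thresholds, and the alerting mechanism (a raised exception here) are hypothetical.

```python
# Minimal data-quality check sketch: null and duplicate counts with simple
# thresholds. All table/column names and thresholds are hypothetical; a real
# pipeline would route failures to an alerting system instead of just raising.
import snowflake.connector


def run_checks(conn) -> list[str]:
    failures = []
    checks = {
        # check name: (SQL returning a single number, max allowed value)
        "null order ids": (
            "SELECT COUNT(*) FROM MARTS.FCT_ORDERS WHERE ORDER_ID IS NULL", 0),
        "duplicate order ids": (
            "SELECT COUNT(*) - COUNT(DISTINCT ORDER_ID) FROM MARTS.FCT_ORDERS", 0),
    }
    cur = conn.cursor()
    for name, (sql, max_allowed) in checks.items():
        value = cur.execute(sql).fetchone()[0]
        if value > max_allowed:
            failures.append(f"{name}: got {value}, allowed {max_allowed}")
    return failures


conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...", database="ANALYTICS",
)
try:
    problems = run_checks(conn)
    if problems:
        # Failing the task is the minimum; the orchestrator can then alert/retry.
        raise RuntimeError("Data quality checks failed: " + "; ".join(problems))
finally:
    conn.close()
```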

Posted 1 month ago

Apply

5.0 - 8.0 years

22 - 25 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory , Snowflake , and DBT . Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment. Must-Have Skills: Strong experience with Azure Cloud Platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets. Good-to-Have Skills: Experience with Azure Data Lake , Azure Synapse , or Azure Functions . Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance , metadata management , or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus. Qualifications: Bachelors or Masters degree in Computer Science, Data Engineering, Information Systems, or a related field. 5+ years of experience in data engineering roles using Azure and Snowflake. Strong problem-solving, communication, and collaboration skills.
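For the Azure Data Factory orchestration this listing centres on, here is a hedged sketch that triggers an ADF pipeline run from Python (azure-mgmt-datafactory) ahead of downstream Snowflake/dbt steps. The subscription ID, resource group, factory, pipeline name, and parameters are hypothetical placeholders.

```python
# Hedged sketch: trigger an Azure Data Factory pipeline run from Python.
# Subscription ID, resource group, factory, and pipeline names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(
    credential, "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
)

run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",       # hypothetical
    factory_name="adf-analytics",                  # hypothetical
    pipeline_name="pl_copy_sources_to_snowflake",  # hypothetical
    parameters={"load_date": "2024-01-01"},
)
print("Triggered ADF pipeline run:", run.run_id)
```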

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Hiring Requirement: We are looking for a candidate with strong hands-on experience in Snowflake, DBT, and Python. The ideal candidate should have in-depth expertise across all these technologies and must be comfortable working on complex data pipelines and transformations. Additionally, the role requires the candidate to be open to working from our Bangalore office for 3 days a week (hybrid model). Key Must-Haves: Proficiency in Snowflake (data warehousing, performance tuning, SQL scripting) Expertise in DBT (Data Build Tool developing, testing, and deploying models) Strong command over Python (for data engineering, scripting, and automation) Willingness to work from Bangalore office (3 days/week) Only candidates who are confident in all these skill sets should apply. If you are interested please share your updated CV on saumya.m.singh@kipi.ai

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Udaipur

Work from Office

5 to 7 years of experience in data engineering Architect and maintain scalable, secure, and reliable data platforms and pipelines Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and DBT Ensure data governance, lineage, quality, and observability

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 18 Lacs

Mumbai, Hyderabad, Pune

Hybrid

Role & responsibilities: Design, implement, and manage cloud-based solutions on AWS and Snowflake. Work with stakeholders to gather requirements and design solutions that meet their needs. Develop and execute test plans for new solutions. Oversee and design the information architecture for the data warehouse, including all information structures such as the staging area, data warehouse, data marts, and operational data stores. Ability to optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency. Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star and Snowflake schema design, Reference DW Architectures, ETL architecture, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support. Significant experience working as a Data Architect with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical, and dimensional models). Maintain documentation: develop and maintain detailed documentation for data solutions and processes. Provide training: offer training and leadership to share expertise and best practices with the team. Collaborate with the team and provide leadership to the data engineering team, ensuring that data solutions are developed according to best practices.

Preferred candidate profile: Snowflake, DBT, and Data Architecture design experience in Data Warehouse. Good to have Informatica or other ETL knowledge or hands-on experience. Good to have an understanding of Databricks. 5+ years of IT experience with 3+ years of Data Architecture experience in Data Warehouse and 4+ years in Snowflake.

Posted 1 month ago

Apply