Home
Jobs
Companies
Resume

189 dbt Jobs - Page 3

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

Role & responsibilities: 3+ years of experience with Snowflake (Snowpipe, Streams, Tasks). Strong proficiency in SQL for high-performance data transformations. Hands-on experience building ELT pipelines using cloud-native tools. Proficiency in dbt for data modeling and workflow automation. Python skills (Pandas, PySpark, SQLAlchemy) for data processing. Experience with orchestration tools like Airflow or Prefect.

Preferred candidate profile: Hands-on with Python, including libraries like Pandas, PySpark, or SQLAlchemy. Experience with data cataloging, metadata management, and column-level lineage. Exposure to BI tools like Tableau or Power BI. Certifications: Snowflake SnowPro Core Certification preferred.

Contact details: Sindhu@iflowonline.com or 9154984810
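
For readers unfamiliar with the Python-plus-Snowflake ingestion stack this posting asks for, here is a minimal illustrative sketch (not from the employer) that loads a cleaned pandas DataFrame into a Snowflake staging table with the Snowflake Python connector. The account, credentials, table and column names are placeholder assumptions.

```python
# Hypothetical ELT ingestion step: pandas transformation followed by a bulk load
# into Snowflake. All connection parameters and object names are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def load_orders(csv_path: str) -> None:
    # Light cleanup in pandas before landing the data in a raw/staging table.
    df = pd.read_csv(csv_path, parse_dates=["order_date"])
    df["amount"] = df["amount"].fillna(0.0)

    conn = snowflake.connector.connect(
        account="my_account",      # placeholder
        user="etl_user",           # placeholder
        password="***",            # use a secrets manager in practice
        warehouse="LOAD_WH",
        database="RAW",
        schema="SALES",
    )
    try:
        # write_pandas bulk-loads the DataFrame via internal staging + COPY INTO.
        success, _, nrows, _ = write_pandas(conn, df, table_name="ORDERS_STG")
        print(f"Loaded {nrows} rows, success={success}")
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders("orders.csv")
```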

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

Primary Skills - Snowflake, DBT, AWS; Good-to-have Skills - Fivetran (HVR), Python.

Responsibilities: Design, develop, and maintain data pipelines using Snowflake, DBT, and AWS. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Optimize and troubleshoot existing data workflows to ensure efficiency and reliability. Implement best practices for data management and governance. Stay updated with the latest industry trends and technologies to continuously improve our data infrastructure.

Required Skills: Proficiency in Snowflake, DBT, and AWS. Experience with data modeling, ETL processes, and data warehousing. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities.

Preferred Skills: Knowledge of Fivetran (HVR) and Python. Familiarity with data integration tools and techniques. Ability to work in a fast-paced and agile environment.

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
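
Roles like this typically schedule dbt transformations with an orchestrator. The sketch below is an illustrative Airflow DAG (not from the employer) that runs dbt models and tests daily; the DAG id, schedule, project path and target name are assumptions, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Hypothetical orchestration sketch: a daily Airflow DAG that runs dbt models and
# then dbt tests against the warehouse. Paths and ids are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_dbt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run at 02:00 every day (Airflow 2.4+ keyword)
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    # Tests only execute if the models built successfully.
    dbt_run >> dbt_test
```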

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Foundit logo

Job Summary: We are looking for a highly skilled Data Engineer with hands-on experience in Snowflake, Python, DBT, and modern data architecture. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data warehouse solutions that support analytics and business intelligence initiatives.

Key Responsibilities: Design and implement scalable data pipelines using ETL/ELT frameworks. Develop and maintain data models and data warehouse architecture using Snowflake. Build and manage DBT (Data Build Tool) models for data transformation and lineage tracking. Write efficient and reusable Python scripts for data ingestion, transformation, and automation. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Ensure data quality, integrity, and governance across all data platforms. Monitor and optimize performance of data pipelines and queries. Implement best practices for data engineering, including version control, testing, and CI/CD.

Required Skills and Qualifications: 8+ years of experience in data engineering or a related field. Strong expertise in Snowflake including schema design, performance tuning, and security. Proficiency in Python for data manipulation and automation. Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.). Experience with DBT for data transformation and documentation. Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect). Strong SQL skills and experience with large-scale data sets. Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
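
The posting mentions CI/CD and testing around dbt. As an illustrative aside (not from the employer), one common pattern is to invoke dbt from a pipeline script and gate the deployment on dbt's `run_results.json` artifact. The project path and layout below are assumptions.

```python
# Hypothetical CI sketch: run `dbt build` and fail the pipeline if any model or test
# did not succeed, by inspecting dbt's run_results.json artifact.
import json
import subprocess
import sys
from pathlib import Path

PROJECT_DIR = Path("/opt/dbt/analytics")   # placeholder dbt project location

def run_dbt(*args: str) -> None:
    # `dbt build` runs models, tests, seeds and snapshots in dependency order.
    subprocess.run(["dbt", *args, "--project-dir", str(PROJECT_DIR)], check=False)

def failed_nodes() -> list[str]:
    results = json.loads((PROJECT_DIR / "target" / "run_results.json").read_text())
    # Anything other than success/pass/skipped (e.g. error, fail, warn) is flagged.
    return [
        r["unique_id"]
        for r in results["results"]
        if r["status"] not in ("success", "pass", "skipped")
    ]

if __name__ == "__main__":
    run_dbt("build")
    failures = failed_nodes()
    if failures:
        print("dbt reported failures:", *failures, sep="\n  ")
        sys.exit(1)
    print("dbt build completed cleanly")
```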

Posted 1 week ago

Apply

8.0 - 13.0 years

8 - 13 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Foundit logo

8+ years of experience in data engineering or a related field. Strong expertise in Snowflake including schema design, performance tuning, and security. Proficiency in Python for data manipulation and automation. Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.). Experience with DBT for data transformation and documentation. Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect). Strong SQL skills and experience with large-scale data sets. Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 22 Lacs

Pune, Maharashtra, India

Remote

Foundit logo

Data Engineer - DBT, Snowflake, Looker. Location: Remote. Experience: 7-10 years.

About the Role: We are looking for an experienced Data Engineer to design and build scalable data pipelines and enable powerful business insights. You'll work with modern data stack tools like DBT, Snowflake, and Looker to empower data-driven decisions.

Key Responsibilities: Design & maintain scalable data pipelines (DBT, Snowflake). Perform data transformation, cleansing & enrichment. Integrate data from multiple sources into the data warehouse/data lake. Support reporting, analytics & BI with Looker or similar tools. Optimize performance & troubleshoot data workflows. Document processes & ensure data quality.

Skills Required: DBT, Snowflake, Looker (or similar tools). Strong SQL, Python (or similar scripting). Data modeling, schema design, database optimization. Problem-solving & business requirement translation. Excellent communication & cross-functional collaboration.
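
The "ensure data quality" responsibility above is often implemented as standalone checks against the warehouse. Below is an illustrative sketch (not from the employer) of a simple null-rate check run against Snowflake; connection details, the `fct_orders` table, and the threshold are placeholder assumptions.

```python
# Hypothetical data-quality check: fail when a key column in a Snowflake table
# contains too many NULLs. All names and credentials are placeholders.
import snowflake.connector

NULL_RATE_THRESHOLD = 0.01   # allow at most 1% missing customer_id values

def check_null_rate() -> float:
    conn = snowflake.connector.connect(
        account="my_account", user="dq_user", password="***",
        warehouse="REPORTING_WH", database="ANALYTICS", schema="MARTS",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            """
            SELECT COUNT_IF(customer_id IS NULL) / NULLIF(COUNT(*), 0)
            FROM fct_orders
            """
        )
        (null_rate,) = cur.fetchone()
        return float(null_rate or 0)
    finally:
        conn.close()

if __name__ == "__main__":
    rate = check_null_rate()
    assert rate <= NULL_RATE_THRESHOLD, f"customer_id null rate too high: {rate:.2%}"
    print(f"null rate OK: {rate:.2%}")
```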

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 22 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote

Foundit logo

Data Engineer - DBT, Snowflake, Looker. Location: Remote. Experience: 7-10 years.

About the Role: We are looking for an experienced Data Engineer to design and build scalable data pipelines and enable powerful business insights. You'll work with modern data stack tools like DBT, Snowflake, and Looker to empower data-driven decisions.

Key Responsibilities: Design & maintain scalable data pipelines (DBT, Snowflake). Perform data transformation, cleansing & enrichment. Integrate data from multiple sources into the data warehouse/data lake. Support reporting, analytics & BI with Looker or similar tools. Optimize performance & troubleshoot data workflows. Document processes & ensure data quality.

Skills Required: DBT, Snowflake, Looker (or similar tools). Strong SQL, Python (or similar scripting). Data modeling, schema design, database optimization. Problem-solving & business requirement translation. Excellent communication & cross-functional collaboration.

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 22 Lacs

Chennai, Tamil Nadu, India

Remote

Foundit logo

Data Engineer - DBT, Snowflake, Looker. Location: Remote. Experience: 7-10 years.

About the Role: We are looking for an experienced Data Engineer to design and build scalable data pipelines and enable powerful business insights. You'll work with modern data stack tools like DBT, Snowflake, and Looker to empower data-driven decisions.

Key Responsibilities: Design & maintain scalable data pipelines (DBT, Snowflake). Perform data transformation, cleansing & enrichment. Integrate data from multiple sources into the data warehouse/data lake. Support reporting, analytics & BI with Looker or similar tools. Optimize performance & troubleshoot data workflows. Document processes & ensure data quality.

Skills Required: DBT, Snowflake, Looker (or similar tools). Strong SQL, Python (or similar scripting). Data modeling, schema design, database optimization. Problem-solving & business requirement translation. Excellent communication & cross-functional collaboration.

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Remote

Naukri logo

Required Skills: Experience: Minimum of 5 years in business intelligence development, including data modeling, reporting, and dashboard creation. Power BI Expertise: Strong experience with Power BI, including advanced DAX calculations, data modeling, and creating visually engaging, actionable dashboards. dbt Labs Cloud IDE: At least 1 year of hands-on experience with dbt Labs Cloud IDE is required. Technical Skills: Proficiency in SQL and modern cloud-based data warehousing concepts, with experience in Snowflake, SQL Server, or Redshift. Cloud and ERP/CRM Proficiency: Familiarity with platforms such as NetSuite, Salesforce, Fivetran, and API integrations; experience with SaaS systems like Zuora Billing, Churn Zero, Marketo, and Qualtrics is a plus.

Posted 1 week ago

Apply

7.0 - 12.0 years

18 - 33 Lacs

Navi Mumbai

Work from Office

Naukri logo

About Us: Celebal Technologies is a leading solution services company providing services in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities: Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming. Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers. Implement efficient ingestion using Databricks Autoloader for high-throughput data loads. Work with large volumes of structured and unstructured data, ensuring high availability and performance. Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization. Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions. Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills: Strong expertise in Azure Databricks and Spark Structured Streaming, including processing and output modes (append, update, complete), checkpointing, and state management. Experience with Kafka integration for real-time data pipelines. Deep understanding of the Medallion Architecture. Proficiency with Databricks Autoloader and schema evolution. Deep understanding of Unity Catalog and foreign catalogs. Strong knowledge of Spark SQL, Delta Lake, and DataFrames. Expertise in performance tuning (query optimization, cluster configuration, caching strategies). Must have data management strategies; excellent with governance and access management. Strong in data modelling, data warehousing concepts, and Databricks as a platform. Solid understanding of window functions. Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios. Industry expertise in any one of Retail/Telecom/Energy. Real-time use case execution and data modelling.
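
To make the Kafka-to-Bronze piece of the Medallion pattern concrete, here is an illustrative PySpark Structured Streaming sketch (not from the employer). It assumes a Databricks-style environment with the Kafka connector available; the broker, topic, checkpoint path and target table are placeholders.

```python
# Hypothetical sketch: ingest Kafka events into a Bronze Delta table with Spark
# Structured Streaming. Names, paths, and topics are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp

spark = SparkSession.builder.appName("kafka_bronze_ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
    .option("subscribe", "orders_events")                # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Keep the payload as-is in Bronze; parsing and validation happen in the Silver layer.
bronze = raw.select(
    col("key").cast("string").alias("event_key"),
    col("value").cast("string").alias("payload"),
    col("topic"),
    col("timestamp").alias("kafka_ts"),
    current_timestamp().alias("ingested_at"),
)

query = (
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_bronze")  # placeholder
    .outputMode("append")
    .toTable("bronze.orders_events")    # placeholder catalog table
)
query.awaitTermination()
```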

Posted 1 week ago

Apply

9.0 - 14.0 years

12 - 22 Lacs

Bengaluru

Remote

Naukri logo

We are looking for Data Engineers for multiple locations. Total experience: 9-15 years. Locations: Chennai, Mumbai, Pune, Noida, Hyderabad & Bangalore.

Skill combinations: 1. AWS + Airflow + Python; 2. AWS + Airflow + DBT. Please share your details at Saudaminic@hexaware.com or apply at: https://forms.office.com/Pages/ResponsePage.aspx?id=9TYMfIOvJEyIRJli4BY3GdCP897qAcpBoCWuDZyUhuZUN0s4OUNGOTk4UFIzOFVMM1A4UEkzV0JQRi4u

JD - must have: 10+ years of experience; great communicator / client-facing; individual contributor; 100% hands-on in the mentioned skills.

DBT Proficiency - Model development: experience in creating complex DBT models, including incremental models, snapshots and documentation; ability to write and maintain DBT macros for reusable code. Testing and documentation: proficiency in implementing DBT tests for data validation and quality checks; familiarity with generating and maintaining documentation using DBT's built-in features. Version control: experience in managing DBT projects using Git, including implementing CI/CD processes from scratch.

AWS Expertise - Data storage solutions: in-depth understanding of AWS S3 for data storage, including best practices for organization and security; experience with AWS Redshift for data warehousing and performance optimization. Data integration: familiarity with AWS Glue for ETL processes and orchestration. Nice to have: experience with AWS Lambda for serverless data processing tasks.

Thanks, Saudamini Chauhan, Saudaminic@hexaware.com
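
For the "AWS Lambda for serverless data processing" item, here is an illustrative handler sketch (not from the employer) that reacts to S3 object-created events and stages validated files for downstream Glue/Redshift loads. The bucket layout, prefixes, and validation rule are placeholder assumptions.

```python
# Hypothetical AWS Lambda handler: triggered by S3 put events, it skips empty
# objects and copies the rest into a "processed/" prefix for downstream loading.
import urllib.parse
import boto3

s3 = boto3.client("s3")
PROCESSED_PREFIX = "processed/"   # placeholder prefix

def lambda_handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        head = s3.head_object(Bucket=bucket, Key=key)
        if head["ContentLength"] == 0:
            print(f"Skipping empty object: s3://{bucket}/{key}")
            continue

        s3.copy_object(
            Bucket=bucket,
            Key=PROCESSED_PREFIX + key.split("/")[-1],
            CopySource={"Bucket": bucket, "Key": key},
        )
        print(f"Staged s3://{bucket}/{key} for downstream Glue/Redshift loads")

    return {"status": "ok"}
```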

Posted 1 week ago

Apply

5.0 - 9.0 years

1 - 1 Lacs

Bengaluru

Remote

Naukri logo

Hi,

Synergy Technologies is a leader in technology services and consulting. We enable clients across the world to create and execute strategies. We help our clients find the right problems to solve, and to solve them effectively. We bring our expertise and innovation to every project we undertake.

Position: Business Intelligence Developer. Duration: Contract to full time. Location: Remote work. Note: Consultants located in Bangalore are given first preference, followed by other states and cities. Note: Strong experience with DBT, Snowflake & Azure is needed.

JD - Required qualifications include:

Business Intelligence Developer Opportunity. Our mission is clear: to enhance the safety and well-being of workers across the globe. As a trailblazer in software solutions, we empower businesses and their suppliers with a platform that champions safety, sustainability, and risk management within supply chains. Join our close-knit team of Data Systems and Internal Business Intelligence experts, where you can live out our core values daily and contribute to impactful projects that further the company's vision.

About the Role: As a Business Intelligence Developer, you will play a critical role in developing impactful business intelligence solutions that empower internal teams with data-driven insights for strategic decision-making. Working closely with business analysts, data engineers, and stakeholders, you'll design and build data models, interactive reports, and dashboards to transform complex data into clear, actionable insights. Your efforts will ensure data quality, accuracy, and governance while enhancing accessibility for business users.

Key Responsibilities: Develop BI Solutions: Design, develop, and implement data models, dashboards, and reports using Power BI to support data-driven initiatives. Data Modeling & Integration: Collaborate with data engineers and analysts to create optimized data models that aggregate data from multiple sources, ensuring scalability and alignment with business needs. Enhance Data Accuracy: Continuously improve data accuracy, standardize key metrics, and refine reporting processes to drive operational efficiency. Ensure Data Governance: Adhere to the company's data governance policies, ensuring that all BI solutions comply with data security standards, especially for sensitive information. Optimize BI Performance: Monitor BI solutions to ensure performance and reliable data access, implementing enhancements as needed. Documentation & User Support: Maintain comprehensive documentation of dashboards, data models, and processes; provide end-user training to maximize tool effectiveness. Adapt and Innovate: Stay informed on BI best practices and emerging technologies to proactively enhance BI capabilities.

Qualifications: Education: Bachelor's Degree in Data Science, Business Analytics, Computer Science, or a related field. Experience: Minimum of 5 years in business intelligence development, including data modeling, reporting, and dashboard creation. Power BI Expertise: Strong experience with Power BI, including advanced DAX calculations, data modeling, and creating visually engaging, actionable dashboards. dbt Labs Cloud IDE: At least 1 year of hands-on experience with dbt Labs Cloud IDE is required. Technical Skills: Proficiency in SQL and modern cloud-based data warehousing concepts, with experience in Snowflake, SQL Server, or Redshift. Cloud and ERP/CRM Proficiency: Familiarity with platforms such as NetSuite, Salesforce, Fivetran, and API integrations; experience with SaaS systems like Zuora Billing, Churn Zero, Marketo, and Qualtrics is a plus. Communication Skills: Ability to translate technical insights into business-friendly language.

Preferred Skills: Certifications: Power BI, Snowflake, or similar BI tools. Portfolio: Ability to provide redacted samples of Power BI dashboards. SaaS Experience: Background in SaaS organizations is beneficial.

Posted 1 week ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Coimbatore

Remote

Naukri logo

Role & responsibilities - Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.

Duties and Responsibilities: Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce and AWS technologies. Monitor active ETL jobs in production. Build out data lineage artifacts to ensure all current and future systems are properly documented. Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies. Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.

Required Skills: This job has no supervisory responsibilities. Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling or data engineering work. 5+ years of experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory). Experience working in the healthcare industry with PHI/PII. Creative, lateral, and critical thinker. Excellent communicator. Well-developed interpersonal skills. Good at prioritizing tasks and time management. Ability to describe, create and implement new solutions. Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef). Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Remote

Naukri logo

Lead Data Engineer - Health Care Domain

Role & responsibilities - Position: Lead Data Engineer. Experience: 7+ years. Location: Hyderabad | Chennai | Remote.

Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.

Duties and Responsibilities: Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Monitor active ETL jobs in production. Build out data lineage artifacts to ensure all current and future systems are properly documented. Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies. Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.

Required Skills: This job has no supervisory responsibilities. Need strong experience with Snowflake and Azure Data Factory (ADF). Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling or data engineering work. 5+ years of experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory). Experience working in the healthcare industry with PHI/PII. Creative, lateral, and critical thinker. Excellent communicator. Well-developed interpersonal skills. Good at prioritizing tasks and time management. Ability to describe, create and implement new solutions. Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef). Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau). Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).

Posted 1 week ago

Apply

6.0 - 11.0 years

17 - 30 Lacs

Kolkata, Hyderabad/Secunderabad, Bangalore/Bengaluru

Hybrid

Naukri logo

Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description: Experience in the IT industry. Working experience with building productionized data ingestion and processing pipelines in Snowflake. Strong understanding of Snowflake architecture. Fully well-versed with data warehousing concepts. Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools. Able to create data pipelines for ETL/ELT. Excellent presentation and communication skills, both written and verbal. Ability to problem-solve and architect in an environment with unclear requirements. Able to create high-level and low-level design documents based on requirements. Hands-on experience in configuration, troubleshooting, testing and managing data platforms, on premises or in the cloud. Awareness of data visualisation tools and methodologies. Work independently on business problems and generate meaningful insights. Good to have some experience/knowledge of Snowpark, Streamlit or GenAI, but not mandatory. Should have experience implementing Snowflake best practices. Snowflake SnowPro Core Certification will be an added advantage.

Roles and Responsibilities: Requirement gathering, creating design documents, providing solutions to customers, working with the offshore team, etc. Writing SQL queries against Snowflake and developing scripts to do Extract, Load, and Transform data. Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit. Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems. Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF). Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage. Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts. Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark. Should have some experience with Snowflake RBAC and data security. Should have good experience in implementing CDC or SCD Type 2. In-depth understanding of Data Warehouse and ETL concepts and data modelling. Experience in requirement gathering, analysis, design, development, and deployment. Should have experience building data ingestion pipelines. Optimize and tune data pipelines for performance and scalability. Able to communicate with clients and lead a team. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.

Qualifications we seek in you! Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Snowflake Data Engineer.

Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, & Data Warehousing concepts
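
The posting calls out CDC/SCD Type 2 experience. As an illustrative aside (not from the employer), one common two-statement variant of SCD Type 2 in Snowflake, driven from Python, is to expire changed dimension rows and then insert the new versions from a staging table; a single MERGE is another option. Every table and column name below is a placeholder assumption.

```python
# Hypothetical SCD Type 2 sketch executed from Python against Snowflake:
# 1) close out current rows whose attributes changed, 2) insert new versions.
import snowflake.connector

EXPIRE_CHANGED_ROWS = """
UPDATE dim_customer
SET is_current = FALSE,
    valid_to   = CURRENT_TIMESTAMP()
WHERE is_current
  AND EXISTS (
      SELECT 1
      FROM stg_customer s
      WHERE s.customer_id = dim_customer.customer_id
        AND (s.email <> dim_customer.email OR s.segment <> dim_customer.segment)
  )
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL                              -- brand-new customers
   OR s.email <> d.email OR s.segment <> d.segment       -- changed attributes
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",   # placeholders
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="DW",
)
try:
    cur = conn.cursor()
    cur.execute("BEGIN")
    cur.execute(EXPIRE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
    cur.execute("COMMIT")
finally:
    conn.close()
```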

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Hyderabad, Bengaluru

Hybrid

Naukri logo

Looking for a Snowflake developer for a US client. The candidate should be strong with Snowflake & DBT and should be able to do impact analysis on the current ETLs (Informatica/DataStage) and provide solutions based on the analysis. Experience: 7-12 years.

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Naukri logo

We are looking for a Snowflake Data Engineer with deep expertise in Snowflake and DBT to help us build and scale our modern data platform.

Key Responsibilities: Design and build scalable ELT pipelines in Snowflake using DBT. Develop efficient, well-tested DBT models (staging, intermediate, and marts layers). Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy. Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency. Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications: 5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake. Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management. Strong understanding of ELT patterns and modern data stack principles. Advanced SQL skills and experience with performance tuning in Snowflake.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the below details: Candidate's name, Email and alternate email ID, Contact and alternate contact no., Total experience, Relevant experience, Current organization, Notice period, CCTC, ECTC, Current location, Preferred location, PAN card no.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.

Role Overview: We are looking for a Data Scientist to join and build intelligent, data-driven solutions for our client that enable impactful decisions. This role requires contributions across the data science lifecycle, from data wrangling and exploratory analysis to building and deploying machine learning models. Whether you're just getting started or have years of experience, we're looking for individuals who are curious, analytical, and driven to make a difference with data.

Responsibilities: Design, develop, and deploy machine learning models and analytical solutions. Conduct exploratory data analysis and feature engineering. Own or contribute to the end-to-end data science pipeline: data cleaning, modeling, validation, and deployment. Collaborate with cross-functional teams (engineering, product, business) to define problems and deliver measurable impact. Translate business challenges into data science problems and communicate findings clearly. Implement A/B tests, statistical tests, and experimentation strategies. Support model monitoring, versioning, and continuous improvement in production environments. Evaluate new tools, frameworks, and best practices to improve model accuracy and scalability.

Required Skills: Strong programming skills in Python, including libraries such as pandas, NumPy, scikit-learn, matplotlib, and seaborn. Proficient in SQL, comfortable querying large, complex datasets. Sound understanding of statistics, machine learning algorithms, and data modeling. Experience building end-to-end ML pipelines. Exposure to or hands-on experience with model deployment tools like FastAPI, Flask, MLflow. Experience with data visualization and insight communication. Familiarity with version control tools (e.g., Git) and collaborative workflows. Ability to write clean, modular code and document processes clearly.

Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch. Familiarity with data engineering tools like Apache Spark, Kafka, Airflow, dbt. Exposure to MLOps practices and managing models in production environments. Working knowledge of cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, BigQuery, Vertex AI). Experience designing and interpreting A/B tests or causal inference models. Prior experience in high-growth startups or cross-functional leadership roles.

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Engineering, or a related field.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
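
For the "end-to-end ML pipelines" skill this posting asks about, here is a small illustrative scikit-learn sketch (not from the employer): a train/test split, a preprocessing-plus-model pipeline, and an evaluation report. The dataset, feature names, and target are placeholder assumptions.

```python
# Hypothetical modeling sketch: scikit-learn Pipeline with scaling and logistic
# regression, evaluated on a held-out test set. All data and columns are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("customers.csv")                 # placeholder dataset
features = ["tenure_months", "monthly_spend", "support_tickets"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42, stratify=df["churned"]
)

model = Pipeline(
    steps=[
        ("scale", StandardScaler()),              # standardize numeric features
        ("clf", LogisticRegression(max_iter=1000)),
    ]
)
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out set.
print(classification_report(y_test, model.predict(X_test)))
```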

Posted 2 weeks ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Naukri logo

Key Responsibilities: Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake. Design layered data models (e.g., staging, intermediate, mart layers / medallion architecture) aligned with dbt best practices. Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake. Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring. Apply advanced dbt capabilities including macros, packages, custom tests, sources, exposures, and documentation using dbt docs. Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines. Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies. Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets. Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards. Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.

Required Qualifications: 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Technical Skills - Cloud Data Warehouse & Transformation Stack: Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management. Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git. Orchestration and Integration: Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory. Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs. Data Modelling and Architecture: Dimensional modelling (star/snowflake schemas) and slowly changing dimensions. Knowledge of modern data warehousing principles. Experience implementing the Medallion Architecture (Bronze/Silver/Gold layers). Experience working with Parquet, JSON, CSV, or other data formats. Programming Languages: Python for data transformation, notebook development, and automation; strong grasp of SQL for querying and performance tuning; Jinja (nice to have) for advanced dbt development. Data Engineering & Analytical Skills: ETL/ELT pipeline design and optimization. Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have). Exposure to data quality and validation frameworks. Security & Governance: Experience implementing data quality checks using dbt tests. Data encryption, secure key management and security best practices for Snowflake and dbt.

Soft Skills & Leadership: Ability to thrive in client-facing roles with competing/changing priorities and fast-paced delivery cycles. Stakeholder Communication: Collaborate with business stakeholders to understand objectives and convert them into actionable data engineering designs. Project Ownership: End-to-end delivery including design, implementation, and monitoring. Mentorship: Guide junior engineers and establish best practices; build new skills in the team. Agile Practices: Work in sprints, participate in scrum ceremonies and story estimation.

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Certifications such as Snowflake SnowPro Advanced or dbt Certified Developer are a plus.
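
The "credit monitoring" responsibility above is often scripted against Snowflake's ACCOUNT_USAGE share. Here is an illustrative sketch (not from the employer) that summarises warehouse credit consumption for the last seven days; connection details are placeholders, and querying ACCOUNT_USAGE assumes a suitably privileged role.

```python
# Hypothetical cost-monitoring sketch: daily credit usage per warehouse, last 7 days.
import snowflake.connector

QUERY = """
SELECT warehouse_name,
       DATE_TRUNC('day', start_time) AS usage_day,
       SUM(credits_used)             AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY usage_day, credits DESC
"""

conn = snowflake.connector.connect(
    account="my_account", user="finops_user", password="***",   # placeholders
    role="ACCOUNTADMIN", warehouse="MONITOR_WH",
)
try:
    for wh, day, credits in conn.cursor().execute(QUERY):
        print(f"{day:%Y-%m-%d}  {wh:<20} {credits:.2f} credits")
finally:
    conn.close()
```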

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 33 Lacs

Navi Mumbai

Work from Office

Naukri logo

About Us: Celebal Technologies is a leading solution services company providing services in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities: Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming. Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers. Implement efficient ingestion using Databricks Autoloader for high-throughput data loads. Work with large volumes of structured and unstructured data, ensuring high availability and performance. Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization. Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions. Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills: Strong expertise in Azure Databricks and Spark Structured Streaming, including processing and output modes (append, update, complete), checkpointing, and state management. Experience with Kafka integration for real-time data pipelines. Deep understanding of the Medallion Architecture. Proficiency with Databricks Autoloader and schema evolution. Deep understanding of Unity Catalog and foreign catalogs. Strong knowledge of Spark SQL, Delta Lake, and DataFrames. Expertise in performance tuning (query optimization, cluster configuration, caching strategies). Must have data management strategies; excellent with governance and access management. Strong in data modelling, data warehousing concepts, and Databricks as a platform. Solid understanding of window functions. Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios. Industry expertise in any one of Retail/Telecom/Energy. Real-time use case execution and data modelling.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt and be able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.

Key Responsibilities: 1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices. 2. Build ingestion pipelines from various sources including relational databases, APIs, cloud storage and flat files into Snowflake. 3. Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers or medallion architecture) to enable reliable and reusable data assets. 4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows. 5. Apply dbt best practices: modular SQL development, testing, documentation, and version control. 6. Perform performance optimizations in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design. 7. Apply CI/CD and Git-based workflows for version-controlled deployments. 8. Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks. 9. Collaborate with multiple stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets. 10. Write well-documented, maintainable code using Git for version control and CI/CD processes. 11. Participate in Agile ceremonies including sprint planning, stand-ups, and retrospectives. 12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications: 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and DBT. Experience building and deploying DBT models in a production environment. Expert-level SQL and a strong understanding of ELT principles. Strong understanding of ELT patterns and data modelling (Kimball/dimensional preferred). Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc. Experience with Git, CI/CD, and deployment workflows in a team setting. Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies - Data Engineering and ELT Development: Building robust and modular data pipelines using dbt. Writing efficient SQL for data transformation and performance tuning in Snowflake. Managing environments, sources, and deployment pipelines in dbt. Cloud Data Platform Expertise: Strong proficiency with Snowflake: warehouse sizing, query profiling, data loading, and performance optimization. Experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.

Technical Toolset - Languages & Frameworks: Python for data transformation, notebook development, and automation; strong grasp of SQL for querying and performance tuning. Best Practices and Standards: Knowledge of modern data architecture concepts including layered architecture (e.g., staging, intermediate, marts, medallion architecture). Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).

Security & Governance - Access and Permissions: Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling. Familiar with data privacy policies (GDPR basics) and encryption at rest/in transit. Deployment & Monitoring - DevOps and Automation: Version control using Git, experience with CI/CD practices in a data context. Monitoring and logging of pipeline executions, with alerting on failures.

Soft Skills - Communication & Collaboration: Ability to present solutions and handle client demos/discussions. Work closely with an onshore and offshore team of analysts, data scientists, and architects. Ability to document pipelines and transformations clearly. Basic Agile/Scrum familiarity, working in sprints and logging tasks. Comfort with ambiguity, competing priorities and a fast-changing client environment.

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Certifications such as Snowflake SnowPro or dbt Certified Developer (Data Engineering) are a plus.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 - 0 Lacs

Bengaluru

Remote

Naukri logo

Hi,

Synergy Technologies is a leader in technology services and consulting. We enable clients across the world to create and execute strategies. We help our clients find the right problems to solve, and to solve them effectively. We bring our expertise and innovation to every project we undertake.

Position: Business Intelligence Developer. Duration: Contract to full time. Location: Remote work. Required: DBT (DBT Labs Cloud IDE).

JD - Required qualifications include:

Business Intelligence Developer Opportunity. Our mission is clear: to enhance the safety and well-being of workers across the globe. As a trailblazer in software solutions, we empower businesses and their suppliers with a platform that champions safety, sustainability, and risk management within supply chains. Join our close-knit team of Data Systems and Internal Business Intelligence experts, where you can live out our core values daily and contribute to impactful projects that further the company's vision.

About the Role: As a Business Intelligence Developer, you will play a critical role in developing impactful business intelligence solutions that empower internal teams with data-driven insights for strategic decision-making. Working closely with business analysts, data engineers, and stakeholders, you'll design and build data models, interactive reports, and dashboards to transform complex data into clear, actionable insights. Your efforts will ensure data quality, accuracy, and governance while enhancing accessibility for business users.

Key Responsibilities: Develop BI Solutions: Design, develop, and implement data models, dashboards, and reports using Power BI to support data-driven initiatives. Data Modeling & Integration: Collaborate with data engineers and analysts to create optimized data models that aggregate data from multiple sources, ensuring scalability and alignment with business needs. Enhance Data Accuracy: Continuously improve data accuracy, standardize key metrics, and refine reporting processes to drive operational efficiency. Ensure Data Governance: Adhere to the company's data governance policies, ensuring that all BI solutions comply with data security standards, especially for sensitive information. Optimize BI Performance: Monitor BI solutions to ensure performance and reliable data access, implementing enhancements as needed. Documentation & User Support: Maintain comprehensive documentation of dashboards, data models, and processes; provide end-user training to maximize tool effectiveness. Adapt and Innovate: Stay informed on BI best practices and emerging technologies to proactively enhance BI capabilities.

Qualifications: Education: Bachelor's Degree in Data Science, Business Analytics, Computer Science, or a related field. Experience: Minimum of 5 years in business intelligence development, including data modeling, reporting, and dashboard creation. Power BI Expertise: Strong experience with Power BI, including advanced DAX calculations, data modeling, and creating visually engaging, actionable dashboards. dbt Labs Cloud IDE: At least 1 year of hands-on experience with dbt Labs Cloud IDE is required. Technical Skills: Proficiency in SQL and modern cloud-based data warehousing concepts, with experience in Snowflake, SQL Server, or Redshift. Cloud and ERP/CRM Proficiency: Familiarity with platforms such as NetSuite, Salesforce, Fivetran, and API integrations; experience with SaaS systems like Zuora Billing, Churn Zero, Marketo, and Qualtrics is a plus. Communication Skills: Ability to translate technical insights into business-friendly language.

Preferred Skills: Certifications: Power BI, Snowflake, or similar BI tools. Portfolio: Ability to provide redacted samples of Power BI dashboards. SaaS Experience: Background in SaaS organizations is beneficial.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

12 - 16 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Naukri logo

Role & responsibilities:
1. Mastery of SQL, especially within cloud-based data warehouses like Snowflake. Experience on Snowflake with data architecture, design, analytics and development.
2. Detailed knowledge and hands-on working experience in Snowpipe / SnowProc / SnowSQL.
3. Technical lead with a strong development background, having 2-3 years of rich hands-on development experience in Snowflake.
4. Experience designing highly scalable ETL/ELT processes with complex data transformations and data formats, including error handling and monitoring. Good working knowledge of ETL/ELT tools; for transformation, DBT experience is good to have.
5. Analysis, design, and development of traditional data warehouse and business intelligence solutions. Work with customers to understand and execute their requirements.
6. Working knowledge of software engineering best practices. Should be willing to work in implementation & support projects. Flexible for onsite & offshore travel.
7. Collaborate with other team members to ensure the proper delivery of the requirement. Ability to think strategically about the broader market and influence company direction.
8. Should have good communication skills, be a team player, and have good analytical skills. Snowflake certification is preferable.

Soniya, soniya05.mississippiconsultants@gmail.com. We are a recruitment firm based in Pune, having various clients globally.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities: Minimum 3 years of experience in developing application programs to implement the ETL workflow by creating ETL jobs and data models in data marts using Snowflake, DBT, Unix, and SQL technologies. Redesign Control-M batch processing for the ETL job build to run efficiently in production. Study existing systems to evaluate effectiveness and develop new systems to improve efficiency and workflow.

Responsibilities: Perform requirements identification, conduct business program analysis, testing, and system enhancements while providing production support. The developer should have a good understanding of working in an Agile environment and a good understanding of JIRA and SharePoint tools. Good written and verbal communication skills are a must, as the candidate is expected to work directly with the client counterpart.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge by attending educational workshops and reviewing publications.

Preferred technical and professional experience: Responsible for developing triggers, functions, and stored procedures to support this effort. Assist with impact analysis of changing upstream processes to Data Warehouse and Reporting systems. Assist with design, testing, support, and debugging of new and existing ETL and reporting processes. Perform data profiling and analysis using a variety of tools. Troubleshoot and support production processes. Create and maintain documentation.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Foundit logo

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities - Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans. Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices. Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security. Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information.

Responsibilities: Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise - Cloud & Data Architecture: AWS, Snowflake. ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions. Big Data & Analytics: Athena, Presto, Hadoop. Database & Storage: SQL, SnowSQL. Security & Compliance: IAM, KMS, Data Masking.

Preferred technical and professional experience - Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization). Data Transformation: DBT (Data Build Tool) for ELT pipeline management. Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance).

Posted 2 weeks ago

Apply

6.0 - 9.0 years

15 - 17 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Job description (shared by client): 6+ years of experience in data engineering. Strong knowledge of SQL. Expertise in Snowflake, DBT and Python. Minimum 3+ years of SnapLogic or Fivetran tool knowledge is an added advantage. Must automate manual work using SnapLogic. Good communication and interpersonal skills are a must, as the role needs to collaborate with the data team and business analysts.

Primary skills (non-negotiable): Snowflake, DBT, Python & SQL.

Location: Flexible locations - yes.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies