
1714 Snowflake Jobs - Page 44

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5 - 8 years

18 - 25 Lacs

Pune

Work from Office


We are seeking an experienced Modern Microservice Developer to join our team and contribute to the design, development, and optimization of scalable microservices and data processing workflows. The ideal candidate will have expertise in Python, containerization, and orchestration tools, along with strong skills in SQL and data integration. Key Responsibilities: Develop and optimize data processing workflows and large-scale data transformations using Python. Write and maintain complex SQL queries in Snowflake to support efficient data extraction, manipulation, and aggregation. Integrate diverse data sources and perform validation testing to ensure data accuracy and integrity. Design and deploy containerized applications using Docker, ensuring scalability and reliability. Build and maintain RESTful APIs to support microservices architecture. Implement CI/CD pipelines and manage orchestration tools such as Kubernetes or ECS for automated deployments. Monitor and log application performance, ensuring high availability and quick issue resolution. Requirements Mandatory: Bachelor's degree in Computer Science, Engineering, or a related field. 5-8 years of experience in Python development, with a focus on data processing and automation. Proficiency in SQL, with hands-on experience in Snowflake. Strong experience with Docker and containerized application development. Solid understanding of RESTful APIs and microservices architecture. Familiarity with CI/CD pipelines and orchestration tools like Kubernetes or ECS. Knowledge of logging and monitoring tools to ensure system health and performance. Preferred Skills: Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
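
For readers unfamiliar with how these pieces fit together, the following is a minimal illustrative sketch (not the employer's actual code) of a containerizable Python REST endpoint that runs a query in Snowflake. It assumes the Flask web framework and the snowflake-connector-python package; the warehouse, database, and table names are hypothetical placeholders.

```python
# Minimal sketch: a REST endpoint that serves an aggregate from Snowflake.
# Assumes the flask and snowflake-connector-python packages; all connection
# parameters and table names below are illustrative placeholders.
import os

from flask import Flask, jsonify
import snowflake.connector

app = Flask(__name__)

def get_connection():
    """Open a Snowflake connection from environment variables."""
    return snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",   # hypothetical warehouse name
        database="SALES_DB",        # hypothetical database name
        schema="PUBLIC",
    )

@app.route("/orders/daily-count")
def daily_order_count():
    """Return order counts per day, aggregated inside Snowflake."""
    query = """
        SELECT order_date, COUNT(*) AS order_count
        FROM orders
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 30
    """
    conn = get_connection()
    try:
        rows = conn.cursor().execute(query).fetchall()
    finally:
        conn.close()
    return jsonify([{"order_date": str(d), "order_count": c} for d, c in rows])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In a setup like the one described above, an image built from such a service would typically be deployed through a CI/CD pipeline to Kubernetes or ECS.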

Posted 2 months ago

Apply

3 - 4 years

4 - 8 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer Location: Pune, India (Work from Office) Walk-in Interviews: Bangalore Employment Type: Full-Time Job Overview: We are seeking an experienced Data Engineer to manage ongoing and upcoming data infrastructure projects. The ideal candidate will be responsible for expanding and optimizing data pipelines, enhancing data flow and collection mechanisms , and collaborating with cross-functional teams. This role requires hands-on experience in Python scripting, SQL, AWS, and Snowflake to ensure efficient data handling and insights generation. Key Responsibilities: Develop and Maintain Data Pipelines : Design, build, and maintain optimal data pipeline architecture to support functional and non-functional business requirements using Python and SQL on AWS/Snowflake . Process Automation : Identify, design, and implement internal process improvements through automation, optimizing data delivery, and enhancing system scalability. Data Extraction and Transformation : Develop ETL solutions for extracting, transforming, and loading data from a wide variety of sources. Analytics and Insights : Build analytics tools that utilize the data pipeline to provide actionable insights for customer acquisition, operational efficiency, and key business metrics . Stakeholder Collaboration : Work closely with Executives, Product Managers, Data Scientists, and Design Teams to support data-driven decision-making. Data Security and Compliance : Ensure data security across multiple data centers and AWS regions, adhering to national and global compliance standards. System Enhancement : Work alongside data and analytics experts to improve data architecture and performance . Qualifications and Skills: Technical Skills: Advanced SQL knowledge with hands-on experience in relational databases and query authoring . Expertise in Python scripting for data processing, automation, and analytics . Experience in data wrangling and transformation with Pandas, NumPy, and similar libraries. Strong analytical skills for working with unstructured datasets and performing root-cause analysis. Hands-on experience with ETL pipeline development, metadata management, and workload management . Experience with relational SQL and NoSQL databases (PostgreSQL, Cassandra, etc.). Proficiency in cloud services including AWS, Azure, Snowflake, and Google Cloud . Experience: 3+ years of experience in a Data Engineering role with a Bachelors degree in Computer Science or a related field. Experience with data integration, API development, web scraping, and automation . Knowledge of SQL Server activity monitoring and Snowflake . Ability to debug and troubleshoot Python scripts and SQL queries . Strong project management and organizational skills with the ability to work in dynamic, cross-functional environments .
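
As a rough illustration of the pandas-based wrangling and Snowflake loading this posting describes, here is a minimal ETL sketch. It assumes the pandas package and snowflake-connector-python (with its pandas extras); the S3 path, table name, and credentials are hypothetical placeholders, not details from the posting.

```python
# Minimal ETL sketch: clean a CSV extract with pandas and load it into Snowflake.
# Assumes pandas and snowflake-connector-python; the bucket path, table name,
# and credentials are illustrative placeholders only.
import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def extract(path: str) -> pd.DataFrame:
    """Read the raw extract (pandas can read s3:// paths when s3fs is installed)."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic wrangling: normalise column names, drop duplicates, fix types."""
    df = df.rename(columns=str.upper).drop_duplicates()
    df["ORDER_DATE"] = pd.to_datetime(df["ORDER_DATE"]).dt.date
    df["AMOUNT"] = pd.to_numeric(df["AMOUNT"], errors="coerce").fillna(0.0)
    return df

def load(df: pd.DataFrame, table: str) -> int:
    """Bulk-load the frame into an existing Snowflake table."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",      # hypothetical
        database="ANALYTICS",     # hypothetical
        schema="STAGING",
    )
    try:
        # Assumes the target table already exists; newer connector versions
        # can create it automatically via auto_create_table=True.
        write_pandas(conn, df, table)
        return len(df)
    finally:
        conn.close()

if __name__ == "__main__":
    frame = transform(extract("s3://example-bucket/raw/orders.csv"))
    print(f"Loaded {load(frame, 'ORDERS_STG')} rows")
```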

Posted 2 months ago

Apply

3 - 5 years

5 - 8 Lacs

Bengaluru

Work from Office


About your team: The GPS Datawarehouse & Reporting group is a team of around 100 people whose role is to develop and maintain the data warehouse and reporting platforms that we use to administer the pensions and investments of our workplace and retail customers across the world. In doing this we are critical to the delivery of our core product and value proposition to these clients, today and in the future.
About your role: The role will focus on functional testing, data test automation, and collaboration with both developers and business analysts to identify functional and technical testing areas and to address key capability testing gaps. The role will involve using and enhancing existing automation frameworks, or developing new ones, to improve the speed of change. The key outcomes include (but are not limited to) creating and maintaining testing-related services and frameworks and a functional automation suite, while keeping the DevOps principles of speed and quality in mind.
About you: Experience working with Snowflake and ETL tools such as Snaplogic or Informatica. Experience in Python. Experience designing database validation test cases for cloud platforms. Expertise in writing complex queries for database testing and data mining. Good understanding of database concepts such as data models, schemas, tables, queries, views, stored procedures, triggers, indexes, constraints, and transactions. Expertise in data warehouse ETL testing and concepts. Good knowledge of cloud terminology, tools, and usage, with experience in at least one cloud-based database platform tool. Experience designing test automation frameworks for the service/API layer, with relevant experience in application development tools and frameworks such as Rest Assured and Cucumber. Experience using source code management tools, e.g. GitHub. Familiarity with development methodologies such as Scrum, Agile, and Kanban. Rich experience in engineering skills, CI/CD, and build/deployment automation tools. An appreciation of the business principles involved in a project.
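
To illustrate the kind of database validation test case this role describes, here is a minimal pytest sketch against Snowflake. It assumes pytest and snowflake-connector-python; the STAGING.ORDERS and MART.FACT_ORDERS tables and the credentials are hypothetical placeholders.

```python
# Minimal sketch of a data-validation test: compare a staging table with its
# warehouse target in Snowflake. Assumes pytest and snowflake-connector-python;
# table names and credentials are illustrative placeholders only.
import os

import pytest
import snowflake.connector

@pytest.fixture(scope="module")
def snowflake_conn():
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TEST_WH",   # hypothetical
        database="DW_TEST",    # hypothetical
    )
    yield conn
    conn.close()

def scalar(conn, sql: str):
    """Run a query and return the single value in the first row."""
    return conn.cursor().execute(sql).fetchone()[0]

def test_row_counts_match(snowflake_conn):
    """The ETL load should move every staged row into the target table."""
    staged = scalar(snowflake_conn, "SELECT COUNT(*) FROM STAGING.ORDERS")
    loaded = scalar(snowflake_conn, "SELECT COUNT(*) FROM MART.FACT_ORDERS")
    assert staged == loaded

def test_no_null_business_keys(snowflake_conn):
    """Business keys must be populated after transformation."""
    nulls = scalar(
        snowflake_conn,
        "SELECT COUNT(*) FROM MART.FACT_ORDERS WHERE ORDER_ID IS NULL",
    )
    assert nulls == 0
```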

Posted 2 months ago

Apply

8 - 13 years

35 - 40 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office


Design, develop, and maintain data pipelines in Snowflake. Perform data transformations, mappings, and scheduling of ETL processes. Set up and manage dbt models to ensure data quality and consistency. Monitor and troubleshoot data jobs to ensure seamless operation. Collaborate with data analysts and engineers to optimize data workflows. Implement best practices for data storage, retrieval, and security. Tech Stack - AWS Big Data stack: expertise in ETL, SQL, Python, and AWS tools such as Redshift, S3, Glue, Data Pipeline, Scala, Spark, and Lambda is a must. Knowledge of Glue Workflows, Step Functions, QuickSight, Athena, Terraform, and Docker is good to have. Responsibilities - Assist in the analysis, design, and development of a roadmap, design patterns, and implementation based on a current vs. future state from an architecture viewpoint. Participate in data-related technical and business discussions relating to the future serverless architecture. Work with our enterprise customers to migrate data into the cloud and set up scalable ETL processes to move data into the cloud warehouse. A deep understanding of data warehousing, dimensional modelling, ETL architecture, data conversion/transformation, database design, data warehouse optimization, data mart development, etc. is required, along with ETL, SSIS, SSAS, and T-SQL skills. Locations: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 2 months ago

Apply

6 - 8 years

15 - 20 Lacs

Pune

Work from Office


We are conducting Virtual drive on Saturday 29th March and Sunday 30th March 25 . Job Description for SQE/ETL Testers with Databricks / Big Data / Snowflake Testing Exposure Position Title: Software Quality Engineer (SQE) / ETL Tester Location: Pune Hinjewadi Phase 1 Job Type: Full-Time Experience Level: Mid-Senior Level NP : Immediate to 15 Days About the Role: We are seeking a dynamic and detail-oriented Software Quality Engineer (SQE) / ETL Tester to join our team. The ideal candidate will have hands-on experience in ETL testing , along with expertise in Big Data environments , and experience working with tools like Databricks , Snowflake , and other cloud-based data platforms. You will be responsible for ensuring the quality and performance of data pipelines, data transformation processes, and large-scale data environments. As a part of the team, you will test and validate end-to-end data processes, ensuring that ETL processes from source to target systems are functioning correctly and efficiently in complex, high-volume data environments. Key Responsibilities: ETL Testing: Perform testing on ETL processes to validate data extraction, transformation, and loading between various systems. Create and execute comprehensive test cases for ETL processes ensuring data accuracy and integrity. Design and execute manual and automated test scripts to ensure the quality of ETL processes and data transformations. Big Data Testing: Work with Big Data platforms like Databricks and Snowflake to validate the integrity, consistency, and scalability of data processing. Ensure that data pipelines and data storage systems handle large-scale data processing effectively and efficiently. Data Validation & Quality Assurance: Validate the quality of data at various stages, including extraction, transformation, and loading. Work closely with data engineers and business analysts to ensure data accuracy, performance, and compliance with business requirements. Automation: Develop and maintain test automation scripts for ETL testing using popular frameworks. Focus on automating data validation and ensuring scalability of tests across large datasets and Big Data platforms. Reporting & Issue Tracking: Document and track defects found during testing and collaborate with development teams for timely resolution. Provide regular status reports and communicate test progress and issues to stakeholders. Collaboration & Communication: Work in Agile teams with Data Engineers, Data Analysts, and other stakeholders to ensure high-quality deliverables. Actively participate in daily stand-ups, sprint planning, and retrospective meetings. Required Skills and Qualifications: Experience in ETL Testing: Hands-on experience testing ETL processes, including data extraction, transformation, and loading using various ETL tools. Big Data Testing Exposure: Experience with Databricks , Snowflake , Hadoop , or other Big Data technologies. Knowledge of cloud-based data platforms (AWS, Azure, Google Cloud) is a plus. Data Validation: Strong ability to work with large data sets, verify data accuracy, and ensure data integrity. SQL & Scripting Skills: Advanced proficiency in SQL and experience with data querying, stored procedures, and scripts. Automation Testing: Proficiency with test automation tools and frameworks (e.g., Selenium, Python, or other ETL automation tools). Experience with Agile: Comfortable working in an Agile development environment and collaborating with cross-functional teams. 
Testing Tools & Techniques: Familiarity with data testing tools such as Apache JMeter, Talend, or Informatica. Preferred Qualifications: Experience with Databricks/Snowflake: Hands-on experience with Databricks, Snowflake, or similar cloud-based Big Data platforms. Programming Languages: Familiarity with programming languages like Python, Java, or Scala. Cloud Architecture: Experience working with cloud architectures, including Azure, AWS, or Google Cloud. Automation Frameworks: Experience developing and maintaining automation frameworks for data and ETL testing. Interested candidates, please share your CV at ashwini.dabir@orcapod.work

Posted 2 months ago

Apply

5 - 7 years

7 - 11 Lacs

Bengaluru

Work from Office


The Data Engineer will be part of a global team that designs, develops, and delivers BI and Analytics solutions leveraging the latest BI and Analytics technologies for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. Koch Global Services India (KGSI) is being developed in India to extend its IT operations and to act as a hub for innovation in the IT function. As KGSI rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out KGSI over the next several years. Working closely with global colleagues will provide significant global exposure. This role is part of the INVISTA team within KGSI. Our Team: We are seeking a Data Engineer who will be responsible for developing and implementing a future-state data analytics platform, covering both the back-end data processing and the front-end data visualization components for the Data and Analytics teams. This is a hands-on role to build ingestion pipelines and the data warehouse. What You Will Do: Design, develop, enhance, and debug SQL code in existing and new data pipelines. Implement best practices for high data availability, computational efficiency, cost-effectiveness, and data quality within Snowflake and AWS environments. Build and enhance environments, processes, functionalities, and tools to streamline all stages of data lake implementation and analytics solution development, including proof of concepts, prototypes, and production systems. Drive automation initiatives throughout the data lifecycle based on configuration-driven approaches, ensuring repeatability and scalability of processes. Stay updated with relevant technology trends and product updates, especially within AWS services, and incorporate them into the data engineering ecosystem. Collaborate with cross-functional teams following agile methodologies to ensure alignment with business objectives and best practices for data governance. Who You Are (Basic Qualifications): 5-7 years of professional experience in data engineering or big data and data warehousing. Strong hands-on SQL and data modelling skills (must) and intermediate Python. Proficient in handling high volumes of data and developing SQL queries that are scalable and performant. Proven experience in ETL/ELT concepts and methodologies. Proficiency in data analytics tasks such as data wrangling, mining, integration, analysis, visualization, data modelling, and reporting using BI tools. Expertise in primary skills including data warehousing and SQL. Excellent communication skills with the ability to effectively communicate complex technical concepts and drive initiatives within the team and across departments. Proven track record of a contribution-driven work ethic, striving for excellence in every aspect of data engineering and software development. What Will Put You Ahead: Have delivered BI projects using modern BI tools such as Power BI, Tableau, or Qlik Sense. Knowledge of Qlik Replicate and Denodo for the data virtualization layer.

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Bengaluru

Work from Office


About your team: The Data Platform team manages the products and technical infrastructure that underpin the use of data at Fidelity: databases (Oracle, SQL Server, PostgreSQL), data streaming (Kafka, Snaplogic), data security, data lake (Snowflake), data analytics (Power BI, Oracle Analytics Cloud), data management, and more. We provide both cloud-based and on-premises solutions as well as automations and self-service tools. The company predominantly uses AWS, but we also deploy on Azure and Oracle Cloud in certain situations. About your role: Your role will be to use your skills (and the many skills you will acquire on the job) to develop our cloud solutions for the products and services we look after. We offer a complete service, so this includes support work as well as technical development work. Our goal is to provide the best possible service to our customers, so you will also be involved in technical work to streamline our processes and provide self-service options where possible. We work in a highly regulated environment, so all solutions must be secure, highly available, and compliant with policies. About you: You will be a motivated, curious, and technically savvy person who is always collaborative and keeps the customer in mind in the work you perform. Required skills are: practical experience of implementing simple and effective cloud and/or database solutions; practical working knowledge of fundamental AWS concepts such as IAM, networking, security, compute (Lambda, EC2), S3, SQS/SNS, and scheduling tools; Python, Boto, and SQL programming (JavaScript a bonus); experience of delivering change through CI/CD (and ideally Terraform); the ability to work as a team player, share knowledge, and deal effectively with people from other company departments; and excellent verbal and written communication in English.

Posted 2 months ago

Apply

7 - 10 years

12 - 16 Lacs

Bengaluru

Work from Office


About us: Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. . About the Role: As a Lead Data Engineer , you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions . You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability . Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving . You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus. Key Responsibilities: Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure) . Design and implement real-time and batch data processing solutions , ensuring data is efficiently processed and made available for analytical and operational use. Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making. Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency. Drive data governance, security, and compliance best practices. Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions . Lead the design, implementation, and lifecycle management of data services and solutions. Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development . Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design . About You: 7+ years of experience in data engineering, software development, or distributed systems. Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks. Strong programming skills in Scala and/or Java (Python is a plus). Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystem (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.). Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data. Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.). Strong knowledge of data modeling, ETL pipeline design, and performance optimization . Understanding of data governance, security, and compliance in large-scale data environments. Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus. Strong problem-solving skills and ability to work in complex, unstructured environments . Excellent communication and collaboration skills, with experience working in cross-functional teams . Why Join Us? Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment. Influence and shape the future of data architecture and real-time data services at Target. Solve high-impact business problems using scalable, low-latency data solutions . 
Be part of a culture that values innovation, learning, and growth .

Posted 2 months ago

Apply

1 - 3 years

2 - 4 Lacs

Hyderabad

Work from Office


Description: As a Business Intelligence Data Analyst in the Digital Supply Chain team, you will be part of an Agile project team focused on driving data analytics initiatives in the automation of a digital supply chain using visualizations and stories. You will be responsible for creating and delivering data reporting solutions and visualizations that are consumed by a wide audience including business leaders, business analysts and planners, suppliers, and customers. You will work with business partners to define system requirements (such as requirements for interface design to support a global user base) and coordinate with the Information Technology (IT) department as needed. As a team, we craft and build specialized business intelligence solutions that answer key business questions in the semiconductor industry. We strive to deliver value to the organization through the visibility of data via metrics, dashboards, analytics, and reporting, and we seek to deliver high-quality data, support, and availability of our information assets. Successful applicants will: analyze and define functional requirements of user stories and acceptance criteria; design, develop, and deliver reporting and analytics solutions using Microsoft SQL Server Business Intelligence tools, Snowflake, SAP HANA/BusinessObjects, and Power BI or Tableau; complete unit testing, participate in code reviews, document solution delivery, and coordinate deployment; support operational reporting requirements coming from the Singapore and US teams and deliver them quickly; support the development team with functional requirements and understanding of user stories; collaborate with the global user base and BI analysts to ensure that the solution meets business needs; drive effective agile project management through active engagement in the Scrum development process; provide hands-on demonstrations of the solution; provide functional support to business team members during the testing and deployment phases; and work with teams to deliver effective, high-value reporting solutions using an established delivery methodology. Qualified candidates must possess: outstanding SQL and Python skills (Java experience is nice to have); intermediate knowledge of data visualization and reporting software like Tableau or Power BI; nice-to-have knowledge of UiPath/Blue Prism for automating manual work by developing standardized bots for business purposes; and a Bachelor's degree or equivalent experience in Management Information Systems (MIS), Computer Information Systems, Computer Science, Statistics, Engineering, or a related field of study.

Posted 2 months ago

Apply

2 - 4 years

3 - 7 Lacs

Mumbai, Gurgaon

Work from Office


We are seeking a Data Scientist to contribute to advanced analytics and predictive modelling initiatives. The ideal candidate will combine strong statistical knowledge with practical business understanding to help develop. Required candidate profile: 2-4 years' experience in applied data science, preferably in marketing/retail; experience in developing and implementing machine learning models; strong understanding of statistical concepts.

Posted 2 months ago

Apply

10 - 17 years

11 - 21 Lacs

Pune, Bengaluru, Hyderabad

Work from Office


Company: US-based MNC. Job Title: Lead Engineer. Skills: Team Lead, SSIS, SSRS, SQL Server, Agile methodology, SDLC principles, data modelling, data warehousing. Experience: 10+ years. Location: Hyderabad, Bangalore, Pune, Mumbai. Work mode: Hybrid. Notice period: 0 (immediate).

Posted 2 months ago

Apply

7 - 9 years

35 - 37 Lacs

Pune, Mumbai, Kolkata

Work from Office


Dear Candidate, We are seeking a skilled BI & Data Visualization Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining business intelligence solutions, dashboards, and reports to help drive data-driven decision-making. Role & Responsibilities: Develop, maintain, and optimize interactive dashboards and reports . Design and implement data visualization solutions that provide business insights. Extract, transform, and load (ETL) data from multiple sources into BI platforms. Ensure data accuracy, consistency, and reliability across reports and dashboards. Work closely with business stakeholders to understand reporting needs and translate them into technical solutions. Optimize query performance for large datasets. Ensure security and governance of BI data and reports. Stay updated with the latest BI tools and trends . Required Skills & Qualifications: Proficiency in BI tools such as Power BI, Tableau, or Looker. Strong knowledge of SQL and data modeling . Experience with ETL processes and data integration. Familiarity with cloud data platforms (AWS, Azure, GCP). Understanding of data warehousing concepts and best practices. Strong problem-solving and analytical skills . Experience with Python or R for data analysis is a plus. Ability to work collaboratively with cross-functional teams . Excellent communication and presentation skills . Soft Skills: Strong problem-solving and analytical skills. Excellent communication skills to work with cross-functional teams. Ability to work independently and as part of a team. Detail-oriented with a focus on delivering high-quality solutions Note: If you are interested, please share your updated resume and suggest the best number & time to connect with you. If your resume is shortlisted, one of the HR from my team will contact you as soon as possible. Srinivasa Reddy Kandi Delivery Manager Integra Technologies

Posted 2 months ago

Apply

7 - 12 years

30 - 40 Lacs

Bengaluru

Hybrid


Senior Data Engineer. Location: Bangalore, India.
Primary tech skills:
- Advanced web-crawling and scraping methods and tools
- Building end-to-end data engineering pipelines for semi-structured and unstructured data (text, all kinds of simple/complex table structures, images, video, and audio data)
- Python, PySpark, SQL, RDBMS
- Data transformation (ETL/ELT) activities
- Working knowledge of a SQL data warehouse (e.g. Snowflake), preferably including administration
Secondary tech skills:
- Databricks
- Familiarity with AWS services: S3, Glue, EMR, EC2, RDS, monitoring, and IAM
- Kafka, Spark, and Kafka Streaming
- Workflow automation (e.g. using GitHub Actions)
- Performing RCA
Responsibilities:
- Develop, maintain, and optimize data pipelines, workflows, and a Feature Store to ensure seamless data ingestion and transformation as a scalable data solution.
- Design, develop, implement, and architect data engineering pipelines, considering performance and scalability, including data storage and processing.
- Implement advanced data transformations and quality checks to ensure data accuracy, completeness, security, and consistency.
- Seamlessly integrate data from diverse sources for ingestion, transformation, and storage, leveraging AWS S3 storage and possibly Snowflake as a SQL data warehouse.
- Create and implement advanced data models and schemas, and ensure data governance and data management best practices.
Qualifications and desired experience:
- 7+ years of data analysis and engineering experience
- Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Working knowledge of API- or stream-based data extraction processes, such as the Salesforce API and Bulk API, and hands-on experience in web crawling
Personal skills:
- Ability to collaborate cross-functionally and build sound working relationships at all levels of the organization
- Ability to handle sensitive information with keen attention to detail and accuracy; a passion for data-handling ethics
- Effective time management skills and the ability to solve complex technical problems with creative solutions while anticipating stakeholder needs and helping meet or exceed expectations
- Comfortable with the ambiguity and uncertainty of change when assessing stakeholder needs
- Self-motivated and innovative; confident when working independently, but an excellent team player with a growth-oriented personality
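
As a small illustration of the web-crawling and pipeline-feeding work described above, here is a hedged sketch using requests, BeautifulSoup, and pandas. The URL and CSS selectors are hypothetical; a production crawler would also need politeness controls (robots.txt, rate limiting) and error handling.

```python
# Minimal scraping sketch: fetch one page and turn its rows into a DataFrame
# ready for downstream ETL/ELT. Assumes the requests, beautifulsoup4, and
# pandas packages; the URL and CSS selectors are hypothetical placeholders.
import pandas as pd
import requests
from bs4 import BeautifulSoup

def scrape_listings(url: str) -> pd.DataFrame:
    """Fetch one listings page and return its rows as a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for card in soup.select("div.listing-card"):  # hypothetical selector
        rows.append({
            "title": card.select_one("h2").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    df = scrape_listings("https://example.com/listings?page=1")
    print(df.head())
```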

Posted 2 months ago

Apply

6 - 10 years

10 - 18 Lacs

Chennai

Work from Office


Proven experience as a Snowflake Lead/Senior Developer or in a similar role. Strong proficiency in SQL/Python. Strong proficiency in dbt data modelling. Solid understanding of data modelling principles and best practices. Preferred Qualifications: Experience with cloud platforms (GCP preferred). Please send your CV to gopi@nithminds.com

Posted 2 months ago

Apply

7 - 12 years

16 - 27 Lacs

Hyderabad

Work from Office


Job Description Data Engineer We are seeking a highly skilled Data Engineer with extensive experience in Snowflake, Data Build Tool (dbt), Snaplogic, SQL Server, PostgreSQL, Azure Data Factory, and other ETL tools. The ideal candidate will have a strong ability to optimize SQL queries and a good working knowledge of Python. A positive attitude and excellent teamwork skills are essential. Role & responsibilities Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Snowflake, DBT, Snaplogic, and ETL tools. SQL Optimization: Write and optimize complex SQL queries to ensure high performance and efficiency. Data Integration: Integrate data from various sources, ensuring consistency, accuracy, and reliability. Database Management: Manage and maintain SQL Server and PostgreSQL databases. ETL Processes: Develop and manage ETL processes to support data warehousing and analytics. Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions. Documentation: Maintain comprehensive documentation of data models, data flows, and ETL processes. Troubleshooting: Identify and resolve data-related issues and discrepancies. Python Scripting: Utilize Python for data manipulation, automation, and integration tasks. Preferred candidate profile Proficiency in Snowflake, DBT, Snaplogic, SQL Server, PostgreSQL, and Azure Data Factory. Strong SQL skills with the ability to write and optimize complex queries. Knowledge of Python for data manipulation and automation. Knowledge of data governance frameworks and best practices Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Positive attitude and ability to work well in a team environment. Certifications: Relevant certifications (e.g., Snowflake, Azure) are a plus. Please forward your updated profiles to the below mentioned Email Address: divyateja.s@prudentconsulting.com

Posted 2 months ago

Apply

8 - 13 years

30 - 35 Lacs

Hyderabad

Work from Office


Preferred candidate profile Senior Snowflake Database Engineer who excels in developing complex queries and stored procedures . The ideal candidate should have a deep understanding of Snowflake architecture and performance tuning techniques. He / She will work closely with application engineers to integrate database solutions seamlessly into applications, ensuring optimal performance and reliability. Strong expertise in Snowflake , including data modeling, query optimization, and performance tuning. Proficiency in writing complex SQL queries, stored procedures, and functions. Experience with database performance tuning techniques, including indexing and query profiling. Familiarity with integrating database solutions into application code and workflows. Knowledge of data governance and data quality best practices is a plus. Strong analytical and problem-solving skills along with excellent communication skills to collaborate effectively

Posted 2 months ago

Apply

7 - 9 years

10 - 15 Lacs

Mumbai

Work from Office


We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes. Key Responsibilities: Designing and developing scalable data solutions on the Snowflake platform to support business needs and analytics requirements. Leading the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes. Writing efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake. Implementing automation scripts and tools using Python to streamline data workflows and improve efficiency. Collaborating with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions. Performance tuning and optimization of Snowflake databases and queries to ensure optimal performance and scalability. Implementing best practices for data governance, security, and compliance within Snowflake environments. Mentoring junior team members and providing technical guidance and support as needed. Qualifications: Bachelor's degree in Computer Science, Engineering, or related field. 7+ years of experience working with Snowflake data warehouse. Strong proficiency in SQL with the ability to write complex queries and optimize performance. Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend. Strong Python coding experience needed minimum 2 yrs Solid understanding of data warehousing concepts, data modeling, and schema design. Experience working with cloud platforms such as AWS, Azure, or GCP. Excellent problem-solving and analytical skills with a keen attention to detail. Strong communication and collaboration skills with the ability to work effectively in a team environment. Any relevant certifications in Snowflake or related technologies would be a plus.
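
To illustrate how Python, an ETL orchestrator, and Snowflake typically come together in a role like this, here is a minimal sketch of a daily Airflow DAG that pushes an aggregation down to Snowflake. It assumes Apache Airflow 2.x and snowflake-connector-python; the DAG, warehouse, and table names are hypothetical placeholders, not the employer's pipeline.

```python
# Minimal orchestration sketch (not the employer's actual pipeline): a daily
# Airflow DAG that runs a Snowflake transformation step from Python. Assumes
# Apache Airflow 2.x and snowflake-connector-python; names are placeholders.
import os
from datetime import datetime

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator

TRANSFORM_SQL = """
    INSERT INTO MART.DAILY_REVENUE (ORDER_DATE, REVENUE)
    SELECT ORDER_DATE, SUM(AMOUNT)
    FROM STAGING.ORDERS
    WHERE ORDER_DATE = CURRENT_DATE - 1
    GROUP BY ORDER_DATE
"""

def run_transform():
    """Execute the aggregation inside Snowflake (push-down, no data pulled out)."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",   # hypothetical
        database="ANALYTICS",       # hypothetical
    )
    try:
        conn.cursor().execute(TRANSFORM_SQL)
    finally:
        conn.close()

with DAG(
    dag_id="daily_revenue_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_transform", python_callable=run_transform)
```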

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bangalore Rural

Hybrid


Note: We prefer candidates from product organizations and premium engineering institutes.

Data Platform Engineer: You will assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing, and you will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
  • Work on Data Lakehouse system architecture, data ingestion and pipelining, and tools to automate and orchestrate delivery of performance, reliability, and operational efficiency
  • Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends
  • Build CI/CD pipelines and manage configuration management
  • Build tools and services that run on Kubernetes (k8s) as part of our data ecosystem
  • Routinely write efficient, legible, and well-commented Python
  • Communicate clearly on complex technical topics
  • Help scale our data warehouse (we use Snowflake) for clean, analysis-ready data delivery
  • Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users
  • Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis
What we're looking for:
  • 5+ years of professional experience developing with Python and/or Java
  • 3+ years of professional scripting experience (Unix, bash, Python)
  • AWS certification or equivalent experience
  • Terraform or other IaC tools (Terraform preferred)
  • Experience with streaming data: Apache Beam, Flink, Spark, and Kafka
  • Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark
  • Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI)
  • Knowledge of and experience with Kubernetes, Docker, and Helm
  • Experience with automation and orchestration tools
  • Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience

Data Engineer: What You'll Do
  • Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems
  • Provide leadership to cross-functional initiatives and projects; influence architecture design and decisions
  • Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on them
  • Improve engineering processes and cross-team collaboration; ruthlessly prioritize work to align with company priorities
  • Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products, staying up to date with industry trends, emerging technologies, and best practices in data engineering
What we're looking for:
  • 3-12 years of experience in BI and data warehousing
  • Minimum 3 years of experience leading data teams in a high-volume environment
  • Minimum 4 years of experience with dbt, Airflow, and Snowflake
  • Experience with Apache Iceberg tables
  • Experience and knowledge of building data lakes in AWS (e.g. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling
  • Experience mentoring data professionals from junior to senior levels
  • Demonstrated success leading cross-functional initiatives
  • Passionate about data quality, code quality, SLAs, and continuous improvement
  • Deep understanding of data system architecture and of ETL/ELT patterns
  • Development experience in at least one object-oriented language (Python, R, Scala, etc.)
  • Comfortable with SQL and related tooling

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bengaluru

Hybrid


Note: We prefer candidates from product organizations and premium engineering institutes.

Data Platform Engineer: You will assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing, and you will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
  • Work on Data Lakehouse system architecture, data ingestion and pipelining, and tools to automate and orchestrate delivery of performance, reliability, and operational efficiency
  • Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends
  • Build CI/CD pipelines and manage configuration management
  • Build tools and services that run on Kubernetes (k8s) as part of our data ecosystem
  • Routinely write efficient, legible, and well-commented Python
  • Communicate clearly on complex technical topics
  • Help scale our data warehouse (we use Snowflake) for clean, analysis-ready data delivery
  • Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users
  • Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis
What we're looking for:
  • 5+ years of professional experience developing with Python and/or Java
  • 3+ years of professional scripting experience (Unix, bash, Python)
  • AWS certification or equivalent experience
  • Terraform or other IaC tools (Terraform preferred)
  • Experience with streaming data: Apache Beam, Flink, Spark, and Kafka
  • Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark
  • Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI)
  • Knowledge of and experience with Kubernetes, Docker, and Helm
  • Experience with automation and orchestration tools
  • Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience

Data Engineer: What You'll Do
  • Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems
  • Provide leadership to cross-functional initiatives and projects; influence architecture design and decisions
  • Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on them
  • Improve engineering processes and cross-team collaboration; ruthlessly prioritize work to align with company priorities
  • Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products, staying up to date with industry trends, emerging technologies, and best practices in data engineering
What we're looking for:
  • 3-12 years of experience in BI and data warehousing
  • Minimum 3 years of experience leading data teams in a high-volume environment
  • Minimum 4 years of experience with dbt, Airflow, and Snowflake
  • Experience with Apache Iceberg tables
  • Experience and knowledge of building data lakes in AWS (e.g. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling
  • Experience mentoring data professionals from junior to senior levels
  • Demonstrated success leading cross-functional initiatives
  • Passionate about data quality, code quality, SLAs, and continuous improvement
  • Deep understanding of data system architecture and of ETL/ELT patterns
  • Development experience in at least one object-oriented language (Python, R, Scala, etc.)
  • Comfortable with SQL and related tooling

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bangalore Rural

Hybrid


We prefer candidates from product organisations and premium engineering institutes. We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, the Internet of Things, endpoint security, big data analytics, and software product engineering services.

Data Platform Engineer: You will assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing, and you will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
  • Work on Data Lakehouse system architecture, data ingestion and pipelining, and tools to automate and orchestrate delivery of performance, reliability, and operational efficiency
  • Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends
  • Build CI/CD pipelines and manage configuration management
  • Build tools and services that run on Kubernetes (k8s) as part of our data ecosystem
  • Routinely write efficient, legible, and well-commented Python
  • Communicate clearly on complex technical topics
  • Help scale our data warehouse (we use Snowflake) for clean, analysis-ready data delivery
  • Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users
  • Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis
What we're looking for:
  • 5+ years of professional experience developing with Python and/or Java
  • 3+ years of professional scripting experience (Unix, bash, Python)
  • AWS certification or equivalent experience
  • Terraform or other IaC tools (Terraform preferred)
  • Experience with streaming data: Apache Beam, Flink, Spark, and Kafka
  • Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark
  • Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI)
  • Knowledge of and experience with Kubernetes, Docker, and Helm
  • Experience with automation and orchestration tools
  • Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience

Data Engineer: What You'll Do
  • Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems
  • Provide leadership to cross-functional initiatives and projects; influence architecture design and decisions
  • Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on them
  • Improve engineering processes and cross-team collaboration; ruthlessly prioritize work to align with company priorities
  • Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products, staying up to date with industry trends, emerging technologies, and best practices in data engineering
What we're looking for:
  • 3-12 years of experience in BI and data warehousing
  • Minimum 3 years of experience leading data teams in a high-volume environment
  • Minimum 4 years of experience with dbt, Airflow, and Snowflake
  • Experience with Apache Iceberg tables
  • Experience and knowledge of building data lakes in AWS (e.g. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling
  • Experience mentoring data professionals from junior to senior levels
  • Demonstrated success leading cross-functional initiatives
  • Passionate about data quality, code quality, SLAs, and continuous improvement
  • Deep understanding of data system architecture and of ETL/ELT patterns
  • Development experience in at least one object-oriented language (Python, R, Scala, etc.)
  • Comfortable with SQL and related tooling

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bengaluru

Hybrid


We prefer candidates from product organisations and premium engineering institutes. We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, the Internet of Things, endpoint security, big data analytics, and software product engineering services.

Data Platform Engineer: You will assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing, and you will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
  • Work on Data Lakehouse system architecture, data ingestion and pipelining, and tools to automate and orchestrate delivery of performance, reliability, and operational efficiency
  • Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends
  • Build CI/CD pipelines and manage configuration management
  • Build tools and services that run on Kubernetes (k8s) as part of our data ecosystem
  • Routinely write efficient, legible, and well-commented Python
  • Communicate clearly on complex technical topics
  • Help scale our data warehouse (we use Snowflake) for clean, analysis-ready data delivery
  • Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users
  • Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis
What we're looking for:
  • 5+ years of professional experience developing with Python and/or Java
  • 3+ years of professional scripting experience (Unix, bash, Python)
  • AWS certification or equivalent experience
  • Terraform or other IaC tools (Terraform preferred)
  • Experience with streaming data: Apache Beam, Flink, Spark, and Kafka
  • Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark
  • Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI)
  • Knowledge of and experience with Kubernetes, Docker, and Helm
  • Experience with automation and orchestration tools
  • Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience

Data Engineer: What You'll Do
  • Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems
  • Provide leadership to cross-functional initiatives and projects; influence architecture design and decisions
  • Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on them
  • Improve engineering processes and cross-team collaboration; ruthlessly prioritize work to align with company priorities
  • Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products, staying up to date with industry trends, emerging technologies, and best practices in data engineering
What we're looking for:
  • 3-12 years of experience in BI and data warehousing
  • Minimum 3 years of experience leading data teams in a high-volume environment
  • Minimum 4 years of experience with dbt, Airflow, and Snowflake
  • Experience with Apache Iceberg tables
  • Experience and knowledge of building data lakes in AWS (e.g. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling
  • Experience mentoring data professionals from junior to senior levels
  • Demonstrated success leading cross-functional initiatives
  • Passionate about data quality, code quality, SLAs, and continuous improvement
  • Deep understanding of data system architecture and of ETL/ELT patterns
  • Development experience in at least one object-oriented language (Python, R, Scala, etc.)
  • Comfortable with SQL and related tooling

Posted 2 months ago

Apply

3 - 6 years

15 - 20 Lacs

Vijayawada

Work from Office


Job Description:
- Experience in architecting with AWS or Azure cloud data platforms
- Successfully implemented large-scale data warehouse / data lake solutions in Snowflake or AWS Redshift
- Proficient in data modelling and data architecture design; experienced in reviewing 3rd Normal Form and dimensional models
- Experience in implementing master data management, including process design and implementation
- Experience in implementing data quality solutions, including processes
- Experience in IoT design using AWS or Azure cloud platforms
- Experience designing and implementing machine learning solutions as part of high-volume data ingestion and transformation
- Experience working with structured and unstructured data, including geo-spatial data
- Experience in technologies like Python, SQL, NoSQL, Kafka, and Elasticsearch
- Hands-on experience using Snowflake, Informatica, Azure Logic Apps, Azure Functions, Azure Storage, Azure Data Lake, and Azure Search

Posted 2 months ago

Apply

3 - 6 years

15 - 20 Lacs

Patna

Work from Office


Job Description:
- Experience in architecting with AWS or Azure cloud data platforms
- Successfully implemented large-scale data warehouse / data lake solutions in Snowflake or AWS Redshift
- Proficient in data modelling and data architecture design; experienced in reviewing 3rd Normal Form and dimensional models
- Experience in implementing master data management, including process design and implementation
- Experience in implementing data quality solutions, including processes
- Experience in IoT design using AWS or Azure cloud platforms
- Experience designing and implementing machine learning solutions as part of high-volume data ingestion and transformation
- Experience working with structured and unstructured data, including geo-spatial data
- Experience in technologies like Python, SQL, NoSQL, Kafka, and Elasticsearch
- Hands-on experience using Snowflake, Informatica, Azure Logic Apps, Azure Functions, Azure Storage, Azure Data Lake, and Azure Search

Posted 2 months ago

Apply

6 - 11 years

13 - 22 Lacs

Pune

Work from Office


vConstruct, a Pune-based Construction Technology company is seeking a Senior Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency. Responsibilities Act as the SME for data warehousing architecture, overseeing the design patterns, data transformation processes, and operational functions of the data warehouse. Provide hands-on support for performance tuning, monitoring, and alerting, specifically within the Snowflake environment. Manage all facets of Snowflake administration, including role-based access control, environment monitoring, and performance optimization. Analyze complex data patterns to design and implement scalable, efficient data storage solutions in Snowflake. Define and document best practices for creating data models (source and dimensional), ensuring consistency across the organization. Mentor team members in data modeling techniques and work with business users to capture and implement data requirements. Architect and implement scalable, efficient data pipelines using Snowflake and DBT to support data processing and transformation. Build and optimize data models, warehouses, and data marts to drive business intelligence and analytics initiatives. Write clean, efficient, and reusable SQL queries within DBT to manage data transformations and ensure high-quality results. Establish and enforce data quality checks, validation processes, and continuous monitoring using DBT to maintain data integrity. Design, develop, and maintain robust, scalable data pipelines and ETL/ELT processes to efficiently ingest, transform, and store data from diverse sources. Collaborate with cross-functional teams to design, implement, and sustain data-driven solutions that optimize data flow and system integration. Organize and lead discussions with business and operational data stakeholders to understand requirements and deliver solutions. Collaborate with data analysts, developers, and business users to ensure data solutions are accurate, scalable, and efficient. Qualifications 7 to 10 years of experience in data engineering, with a focus on building data solutions at scale. 5+ years of experience in data warehousing and data modeling techniques (both relational and dimensional). 5+ years of hands-on experience in writing complex, highly optimized SQL queries across large data sets. 4+ years of hands-on experience working with Snowflake. 4+ years of experience in scripting languages like Python etc. 2+ years of experience using DBT (Data Build Tool) for data transformation. Expertise in SQL with a strong focus on database optimization and performance tuning. Proven experience in data warehousing technologies such as Snowflake including administration, performance tuning, and implementation of best practices. Extensive hands-on experience with DBT (Data Build Tool) for data transformation, including developing and maintaining modular, reusable, and efficient DBT models. Strong ability to write and optimize DBT SQL models for transformation layers and data pipelines. 
Hands-on experience with data integration tools like Azure Data Factory, Fivetran, or Matillion, with a preference for Fivetran. Proven experience with API integrations and working with diverse data sources. Ability to understand, consume, and use APIs, JSON, and web services for data pipelines. Experience in designing and implementing data pipelines using cloud platforms such as AWS, GCP, or Azure. Proficient in Python for data transformation and automation. Experience with CI/CD processes and automation in data engineering. Knowledge of Power BI or similar data visualization tools is a plus. Education: Bachelor's or Master's degree in Computer Science/Information Technology or a related field. About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and Construction Technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. vConstruct has 100+ team members working on Software Development, Data Analytics, Data Engineering, and AI/ML. We have a mature Data Science practice and are growing at an accelerated pace. For more information, please visit www.vconstruct.com. About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things (great teams, great buildings, great relationships), DPR is a truly great company. For more information, please visit www.dpr.com.

Posted 2 months ago

Apply

10 - 15 years

20 - 35 Lacs

Noida

Hybrid


Required Qualifications: B.Tech or a Master's degree (MCA/M.Tech). Experience in API development. Experience in Python. Experience in solution design. Experience with Azure Databricks. Experience with SQL. Experience working with Snowflake. Experience working on LLM, Gen AI, or OpenAI projects. Experience working with multiple stakeholders. Experience working on more than one or two AI projects. Good communication skills.

Posted 2 months ago

Apply

Exploring Snowflake Jobs in India

Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

These cities are known for their thriving tech industries and have a high demand for Snowflake professionals.

Average Salary Range

The average salary range for Snowflake professionals in India varies by experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

A typical Snowflake career path may include roles such as:

  • Junior Snowflake Developer
  • Snowflake Developer
  • Senior Snowflake Developer
  • Snowflake Architect
  • Snowflake Consultant
  • Snowflake Administrator

Related Skills

In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:

  • SQL
  • Data warehousing concepts
  • ETL tools
  • Cloud platforms (AWS, Azure, GCP)
  • Database management

Interview Questions

  • What is Snowflake and how does it differ from traditional data warehousing solutions? (basic)
  • Explain how Snowflake handles data storage and compute resources in the cloud. (medium)
  • How do you optimize query performance in Snowflake? (medium)
  • Can you explain how data sharing works in Snowflake? (medium)
  • What are the different stages in the Snowflake architecture? (advanced)
  • How do you handle data encryption in Snowflake? (medium)
  • Describe a challenging project you worked on using Snowflake and how you overcame obstacles. (advanced)
  • How does Snowflake ensure data security and compliance? (medium)
  • What are the benefits of using Snowflake over traditional data warehouses? (basic)
  • Explain the concept of virtual warehouses in Snowflake. (medium)
  • How do you monitor and troubleshoot performance issues in Snowflake? (medium)
  • Can you discuss your experience with Snowflake's semi-structured data handling capabilities? (advanced)
  • What are Snowflake's data loading options and best practices? (medium)
  • How do you manage access control and permissions in Snowflake? (medium)
  • Describe a scenario where you had to optimize a Snowflake data pipeline for efficiency. (advanced)
  • How do you handle versioning and change management in Snowflake? (medium)
  • What are the limitations of Snowflake and how would you work around them? (advanced)
  • Explain how Snowflake supports semi-structured data formats like JSON and XML. (medium)
  • What are the considerations for scaling Snowflake for large datasets and high concurrency? (advanced)
  • How do you approach data modeling in Snowflake compared to traditional databases? (medium)
  • Discuss your experience with Snowflake's time travel and data retention features. (medium)
  • How would you migrate an on-premise data warehouse to Snowflake in a production environment? (advanced)
  • What are the best practices for data governance and metadata management in Snowflake? (medium)
  • How do you ensure data quality and integrity in Snowflake pipelines? (medium)
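
Several of the questions above concern Snowflake's semi-structured data handling. As a minimal, hedged illustration (not an official reference), the sketch below queries JSON stored in a VARIANT column from Python using Snowflake's path syntax and LATERAL FLATTEN; the table, column, and credential names are hypothetical.

```python
# A small illustration related to the semi-structured data questions above:
# querying JSON stored in a Snowflake VARIANT column from Python. Assumes
# snowflake-connector-python; the table and column names are hypothetical.
import os

import snowflake.connector

QUERY = """
    SELECT
        payload:customer.id::STRING   AS customer_id,
        item.value:sku::STRING        AS sku,
        item.value:qty::NUMBER        AS quantity
    FROM raw_events,
         LATERAL FLATTEN(input => payload:items) item
    WHERE payload:event_type::STRING = 'order_placed'
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ADHOC_WH",   # hypothetical
    database="EVENTS_DB",   # hypothetical
)
try:
    # The Snowflake cursor is iterable; each row is a tuple of the selected columns.
    for customer_id, sku, quantity in conn.cursor().execute(QUERY):
        print(customer_id, sku, quantity)
finally:
    conn.close()
```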

Closing Remark

As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!
