
24 Snowpark Jobs

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a skilled Snowflake Developer with over 7 years of experience, you will be responsible for designing, developing, and optimizing Snowflake data solutions. Your expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration will be crucial in building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key responsibilities:
- Designing and developing Snowflake databases, schemas, tables, and views following best practices.
- Writing complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimizing query performance using clustering, partitioning, and materialized views.
- Implementing Snowflake features such as Time Travel, Zero-Copy Cloning, and Streams & Tasks.
- Building and maintaining ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrating Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Developing CDC (Change Data Capture) and real-time data processing solutions.
- Designing star schema, snowflake schema, and data vault models in Snowflake.
- Implementing data sharing, secure views, and dynamic data masking.
- Ensuring data quality, consistency, and governance across Snowflake environments.
- Monitoring and optimizing Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshooting data pipeline failures, latency issues, and query bottlenecks.
- Collaborating with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Documenting data flows, architecture, and technical specifications.
- Mentoring junior developers on Snowflake best practices.

Required skills and qualifications:
- 7+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred skills:
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS, and IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).
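For context on the kind of Snowpark work this listing describes, below is a minimal, hedged sketch of an ELT-style transformation: read a raw table, filter and aggregate it, and persist a curated table. The connection parameters, table names, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal Snowpark sketch: read a raw table, transform it, and persist a curated table.
# All connection parameters, table names, and columns are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "role": "TRANSFORMER",
    "warehouse": "ETL_WH",
    "database": "ANALYTICS",
    "schema": "RAW",
}

session = Session.builder.configs(connection_parameters).create()

orders = session.table("RAW.ORDERS")

daily_revenue = (
    orders
    .filter(col("STATUS") == "COMPLETED")                    # keep only completed orders
    .group_by(col("ORDER_DATE"))                             # aggregate per day
    .agg(sum_(col("AMOUNT")).alias("TOTAL_REVENUE"))
)

# Persist the curated result; overwrite keeps the sketch idempotent for reruns.
daily_revenue.write.mode("overwrite").save_as_table("ANALYTICS.CURATED.DAILY_REVENUE")

session.close()
```

The DataFrame operations are compiled into Snowflake SQL and pushed down to the warehouse, so the transformation runs inside Snowflake rather than on the client.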

Posted 2 days ago

Apply

3.0 - 8.0 years

0 Lacs

Delhi

On-site

As a Snowflake Solution Architect, you will own and drive the development of Snowflake solutions and products as part of the COE. You will work with and guide the team to build solutions using the latest innovations and features launched by Snowflake, conduct sessions on the latest and upcoming launches of the Snowflake ecosystem, and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will also publish articles and architectures that solve real business problems, and build accelerators that demonstrate how Snowflake solutions and tools integrate with and compare to platforms such as AWS, Azure Fabric, and Databricks.

In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will triage and resolve advanced, long-running customer issues while ensuring timely and clear communication, develop and maintain robust internal documentation, knowledge bases, and training materials to scale support efficiency, and support enterprise-scale RFPs focused on Snowflake.

To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should have experience implementing and operating Snowflake-centric solutions, and proficiency in implementing data security measures, access controls, and design within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential. Strong skills in databases, data warehouses, and data processing, along with extensive hands-on expertise in SQL and SQL analytics, are required; familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, the Snowflake Marketplace for datasets and applications, and DevOps and orchestration tools like Airflow, dbt, and Jenkins is necessary. Snowflake certifications are good to have.

Strong communication and presentation skills are essential, as you will engage with both technical and executive audiences, and you should be comfortable working collaboratively across engineering, product, and customer success teams. This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)

Posted 6 days ago

Apply

6.0 - 11.0 years

22 - 27 Lacs

Pune, Bengaluru

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers on platforms such as Azure, Salesforce, and AWS. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: strong proficiency in SQL query writing and development; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII data.
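As a rough illustration of the "quality checks" this profile mentions, here is a small, hedged Snowpark sketch that validates a loaded table before downstream use. The table and column names are hypothetical, and the checks are illustrative only.

```python
# Hedged sketch of a post-load quality check: verify the target table is non-empty
# and that a key column has no NULLs. Table and column names are hypothetical.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col


def check_orders_quality(session: Session) -> None:
    orders = session.table("ANALYTICS.CURATED.ORDERS")

    row_count = orders.count()
    null_keys = orders.filter(col("ORDER_ID").is_null()).count()

    # Fail loudly so an orchestrator (e.g. Airflow) marks the run as failed.
    if row_count == 0:
        raise ValueError("Quality check failed: ORDERS is empty after load")
    if null_keys > 0:
        raise ValueError(f"Quality check failed: {null_keys} NULL ORDER_ID values")
```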

Posted 1 week ago

Apply

7.0 - 12.0 years

22 - 27 Lacs

Hyderabad, Pune, Mumbai (All Areas)

Work from Office

Job Description - Snowflake Developer
Experience: 7+ years | Location: India, Hybrid | Employment Type: Full-time

Job Summary: We are looking for a Snowflake Developer with 7+ years of experience to design, develop, and maintain our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake SQL, data modeling, and ETL/ELT processes to build efficient and scalable data solutions.

Key Responsibilities:
1. Snowflake Development & Implementation: Design and develop Snowflake databases, schemas, tables, and views; write and optimize complex SQL queries, stored procedures, and UDFs; implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks); manage virtual warehouses, resource monitors, and cost optimization.
2. Data Pipeline & Integration: Build and maintain ETL/ELT pipelines using Snowflake and tools like Snowpark, Python, or Spark; integrate Snowflake with cloud storage (S3, Blob Storage) and data sources (APIs); develop batch and real-time data ingestion processes using Snowpipe.
3. Performance Tuning & Optimization: Optimize query performance through clustering, partitioning, and indexing; monitor and troubleshoot data pipelines and warehouse performance; implement caching strategies and materialized views for faster analytics.
4. Data Modeling & Governance: Design star schema, snowflake schema, and normalized data models; implement data security (RBAC, dynamic data masking, row-level security); ensure data quality, documentation, and metadata management.
5. Collaboration & Support: Work with analysts, BI teams, and business users to deliver data solutions; document technical specifications and data flows; provide support and troubleshooting for Snowflake-related issues.

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL
- 3+ years of hands-on Snowflake development experience
- Strong SQL and scripting (Python, Bash) skills
- Experience with Snowflake utilities (SnowSQL, Snowsight)
- Knowledge of cloud platforms (AWS, Azure) and data integration tools
- SnowPro Core Certification (preferred but not required)
- Experience with Coalesce, DBT, Airflow, or other data orchestration tools
- Familiarity with CI/CD pipelines and DevOps practices
- Knowledge of data visualization tools (Power BI, Tableau)
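The CDC and Streams & Tasks responsibilities above are often implemented with a stream on the source table plus a scheduled task that merges changes forward. The sketch below shows one such pattern via Snowpark's session.sql(); all object names, the warehouse, and the schedule are hypothetical, and deletes are deliberately ignored for brevity.

```python
# Hedged sketch of the Streams & Tasks / CDC pattern referenced above.
# Table, stream, task, and warehouse names are hypothetical placeholders.
from snowflake.snowpark import Session


def register_cdc_pipeline(session: Session) -> None:
    # A stream records inserts/updates/deletes on the source table since the last read.
    session.sql("""
        CREATE OR REPLACE STREAM RAW.ORDERS_STREAM
        ON TABLE RAW.ORDERS
    """).collect()

    # A task periodically drains the stream and merges changes into the curated table.
    # Simplified: handles inserts/updates only, ignoring delete records in the stream.
    session.sql("""
        CREATE OR REPLACE TASK RAW.APPLY_ORDER_CHANGES
        WAREHOUSE = ETL_WH
        SCHEDULE = '5 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('RAW.ORDERS_STREAM')
        AS
        MERGE INTO CURATED.ORDERS AS t
        USING RAW.ORDERS_STREAM AS s
        ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET STATUS = s.STATUS, AMOUNT = s.AMOUNT
        WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, AMOUNT)
        VALUES (s.ORDER_ID, s.STATUS, s.AMOUNT)
    """).collect()

    # Tasks are created suspended; they must be resumed explicitly to start running.
    session.sql("ALTER TASK RAW.APPLY_ORDER_CHANGES RESUME").collect()
```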

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Data Engineer at Ethoca, a Mastercard Company, in Pune, India, you will play a crucial role in driving data enablement and exploring big data solutions within our technology landscape. Your responsibilities will include designing, developing, and optimizing batch and real-time data pipelines using tools such as Snowflake, Snowpark, Python, and PySpark. You will also build data transformation workflows, implement CI/CD pipelines, and administer the Snowflake platform to ensure performance tuning, access management, and platform scalability.

Collaboration with stakeholders to understand data requirements and deliver reliable data solutions will be a key part of your role. Your expertise in cloud-based database infrastructure, SQL development, and building scalable data models using tools like Power BI will be essential in supporting business analytics and dashboarding. Additionally, you will be responsible for real-time data streaming pipelines and data observability practices, and for planning and executing deployments, migrations, and upgrades across data platforms while minimizing service impacts.

To be successful in this role, you should have a strong background in computer science or software engineering, along with deep hands-on experience with Snowflake, Snowpark, Python, PySpark, and CI/CD tooling. Familiarity with Schema Change, the Java JDK, the Spring and Spring Boot frameworks, Databricks, and real-time data processing is desirable. You should also possess excellent problem-solving and analytical skills, as well as effective written and verbal communication abilities for collaborating across technical and non-technical teams.

You will be part of a high-performing team committed to making systems resilient and easily maintainable on the cloud. If you are looking for a challenging role that lets you leverage cutting-edge software and development skills while working with massive data volumes, this position at Ethoca may be the right fit for you.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and implement scalable data models using Snowflake to support business intelligence and analytics solutions.
- Implement ETL/ELT solutions that involve complex business transformations.
- Handle end-to-end data warehousing solutions and migrate data from legacy systems to Snowflake.
- Write complex SQL queries for extracting, transforming, and loading data, ensuring high performance and accuracy.
- Optimize SnowSQL queries for better processing speeds.
- Integrate Snowflake with third-party applications, using any ETL/ELT technology.
- Implement data security policies, including user access control and data masking, to maintain compliance with organizational standards.
- Document solutions and data flows.

Skills & Qualifications:
- Experience: 2+ years of experience in data engineering, with a focus on Snowflake.
- Proficient in SQL and Snowflake-specific SQL functions.
- Experience with ETL/ELT tools and cloud data integrations.

Technical Skills:
- Strong understanding of Snowflake architecture, features, and best practices.
- Experience using Snowpark, Snowpipe, and Streamlit; experience with Dynamic Tables is good to have.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and other cloud-based data technologies.
- Experience with data modeling concepts like star schema, snowflake schema, and data partitioning.
- Experience with Snowflake's Time Travel, Streams, and Tasks features.
- Experience in data pipeline orchestration.
- Knowledge of Python or Java for scripting and automation; knowledge of Snowflake pipelines is good to have.
- Knowledge of data governance practices, including security, compliance, and data lineage.
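As an illustration of the data masking responsibility listed above, here is a hedged sketch that creates a masking policy and attaches it to a column using Snowpark. The role, schema, table, and column names are hypothetical.

```python
# Hedged sketch of dynamic data masking: a policy that reveals email addresses only
# to an analyst role. All object and role names are hypothetical placeholders.
from snowflake.snowpark import Session


def apply_email_masking(session: Session) -> None:
    # The policy returns the raw value for authorized roles and a masked token otherwise.
    session.sql("""
        CREATE OR REPLACE MASKING POLICY GOVERNANCE.EMAIL_MASK
        AS (val STRING) RETURNS STRING ->
        CASE
            WHEN CURRENT_ROLE() IN ('ANALYST_ROLE') THEN val
            ELSE '***MASKED***'
        END
    """).collect()

    # Attach the policy to the column; queries from other roles now see masked values.
    session.sql("""
        ALTER TABLE CURATED.CUSTOMERS
        MODIFY COLUMN EMAIL
        SET MASKING POLICY GOVERNANCE.EMAIL_MASK
    """).collect()
```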

Posted 2 weeks ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Pune, Bengaluru

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers on platforms such as Azure, Salesforce, and AWS. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: strong proficiency in SQL query writing and development; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII data.

Posted 1 month ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Chennai, Mumbai (All Areas)

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers on platforms such as Azure, Salesforce, and AWS. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: strong proficiency in SQL query writing and development; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII data.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Snowflake, AWS, Cortex AI & Horizon Catalog.

Role Summary: We are seeking an experienced Senior Data Engineer with deep expertise in modernizing Data & Analytics platforms on Snowflake, leveraging AWS services, Cortex AI, and Horizon Catalog for high-performance, AI-driven data management. The role involves designing scalable data architectures, integrating AI-powered automation, and optimizing data governance, lineage, and analytics frameworks.

Key Responsibilities:
- Architect and modernize enterprise Data & Analytics platforms on Snowflake, utilizing AWS, Cortex AI, and Horizon Catalog.
- Design and optimize Snowflake-based Lakehouse architectures, integrating AWS services (S3, Redshift, Glue, Lambda, EMR, etc.).
- Leverage Cortex AI for AI-driven data automation, predictive analytics, and workflow orchestration.
- Implement Horizon Catalog for enhanced data lineage, governance, metadata management, and security.
- Develop high-performance ETL/ELT pipelines, integrating Snowflake with AWS and AI-powered automation frameworks.
- Utilize Snowflake's native capabilities such as Snowpark, Streams, Tasks, and Dynamic Tables for real-time data processing.
- Establish data quality automation, lineage tracking, and AI-enhanced data governance strategies.
- Collaborate with data scientists, ML engineers, and business stakeholders to drive AI-led data initiatives.
- Continuously evaluate emerging AI and cloud-based data engineering technologies to improve efficiency and innovation.

Qualifications we seek in you - Minimum Qualifications:
- Experience in Data Engineering, AI-powered automation, and cloud-based analytics.
- Expertise in Snowflake (Warehousing, Snowpark, Streams, Tasks, Dynamic Tables).
- Strong experience with AWS services (S3, Redshift, Glue, Lambda, EMR).
- Deep understanding of Cortex AI for AI-driven data engineering automation.
- Proficiency in Horizon Catalog for metadata management, lineage tracking, and data governance.
- Advanced knowledge of SQL, Python, and Scala for large-scale data processing.
- Experience in modernizing Data & Analytics platforms and migrating on-premises solutions to Snowflake.
- Strong expertise in Data Quality, AI-driven Observability, and ModelOps for data workflows.
- Familiarity with Vector Databases and Retrieval-Augmented Generation (RAG) architectures for AI-powered analytics.
- Excellent leadership, problem-solving, and stakeholder collaboration skills.

Preferred Skills:
- Experience with Knowledge Graphs (Neo4j, TigerGraph) for structured enterprise data systems.
- Exposure to Kubernetes, Terraform, and CI/CD pipelines for scalable cloud deployments.
- Background in streaming technologies (Kafka, Kinesis, AWS MSK, Snowflake Snowpipe).

Why Join Us:
- Lead Data & AI platform modernization initiatives using Snowflake, AWS, Cortex AI, and Horizon Catalog.
- Work on cutting-edge AI-driven automation for cloud-native data architectures.
- Competitive salary, career progression, and an opportunity to shape next-gen AI-powered data solutions.

Why join Genpact:
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
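Since the listing calls out Dynamic Tables for real-time processing, here is a minimal, hedged sketch of one created through Snowpark. The table names, target lag, and warehouse are illustrative assumptions, not details from the posting.

```python
# Hedged sketch of a Snowflake Dynamic Table for near-real-time transformation.
# Object names, TARGET_LAG, and warehouse are hypothetical placeholders.
from snowflake.snowpark import Session


def create_daily_revenue_dynamic_table(session: Session) -> None:
    # A dynamic table keeps its result refreshed automatically within TARGET_LAG,
    # replacing a hand-built stream + task pipeline for simple transformations.
    session.sql("""
        CREATE OR REPLACE DYNAMIC TABLE ANALYTICS.DAILY_REVENUE
        TARGET_LAG = '5 minutes'
        WAREHOUSE = TRANSFORM_WH
        AS
        SELECT ORDER_DATE, SUM(AMOUNT) AS TOTAL_REVENUE
        FROM RAW.ORDERS
        WHERE STATUS = 'COMPLETED'
        GROUP BY ORDER_DATE
    """).collect()
```

Compared with an explicit stream-and-task pipeline, the dynamic table declares the desired result and lets Snowflake manage incremental refreshes within the stated lag.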

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, and HBase
- Strong knowledge of ETL/ELT pipeline design, distributed computing principles, and Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams
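For context on the Spark side of this role, below is a hedged PySpark sketch of a simple batch job: read a Hive table, aggregate, and write partitioned Parquet. The database, table, column names, and output path are hypothetical.

```python
# Hedged PySpark sketch of a batch job: read from a Hive table, transform, and
# write partitioned Parquet. Names and the output path are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_batch_job")
    .enableHiveSupport()          # lets the job read managed Hive tables
    .getOrCreate()
)

orders = spark.table("raw_db.orders")

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Partitioning by date keeps downstream reads and incremental reprocessing cheap.
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///warehouse/curated/daily_totals")
)

spark.stop()
```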

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Delhi, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, and HBase
- Strong knowledge of ETL/ELT pipeline design, distributed computing principles, and Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, and HBase
- Strong knowledge of ETL/ELT pipeline design, distributed computing principles, and Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, and Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Delhi, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, and Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, and Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning

Posted 1 month ago

Apply

5.0 - 15.0 years

22 - 24 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

This role is for one of Weekday's clients. Salary range: Rs 22,00,000 - Rs 24,00,000 (i.e. INR 22-24 LPA). Min Experience: 5 years. Location: Bengaluru, Chennai, Gurgaon. Job Type: full-time.

We are looking for an experienced Snowflake Developer to join our Data Engineering team. The ideal candidate will possess a deep understanding of Data Warehousing, SQL, ETL tools like Informatica, and visualization platforms such as Power BI. This role involves building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to deliver impactful data solutions.

Key Responsibilities:
- Data Engineering & Warehousing: Leverage over 5 years of hands-on experience in Data Engineering with a focus on Data Warehousing and Business Intelligence.
- Pipeline Development: Design and maintain ELT pipelines using Snowflake, Fivetran, and DBT to ingest and transform data from multiple sources.
- SQL Development: Write and optimize complex SQL queries and stored procedures to support robust data transformations and analytics.
- Data Modeling & ELT: Implement advanced data modeling practices including SCD Type-2, and build high-performance ELT workflows using DBT.
- Requirement Analysis: Partner with business stakeholders to capture data needs and convert them into scalable technical solutions.
- Data Quality & Troubleshooting: Conduct root cause analysis on data issues, maintain high data integrity, and ensure reliability across systems.
- Collaboration & Documentation: Collaborate with engineering and business teams; develop and maintain thorough documentation for pipelines, data models, and processes.

Skills & Qualifications:
- Expertise in Snowflake for large-scale data warehousing and ELT operations.
- Strong SQL skills with the ability to create and manage complex queries and procedures.
- Proven experience with Informatica PowerCenter for ETL development.
- Proficiency with Power BI for data visualization and reporting.
- Hands-on experience with Fivetran for automated data integration.
- Familiarity with DBT, Sigma Computing, Tableau, and Oracle.
- Solid understanding of data analysis, requirement gathering, and source-to-target mapping.
- Knowledge of cloud ecosystems such as Azure (including ADF and Databricks); experience with AWS or GCP is a plus.
- Experience with workflow orchestration tools like Airflow, Azkaban, or Luigi.
- Proficiency in Python for scripting and data processing (Java or Scala is a plus).
- Bachelor's or graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field.

Key Tools & Technologies: Snowflake, SnowSQL, Snowpark SQL, Informatica, Power BI, DBT, Python, Fivetran, Sigma Computing, Tableau, Airflow, Azkaban, Azure, Databricks, ADF
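The listing mentions SCD Type-2 modeling; in practice this is often handled by dbt snapshots, but a deliberately simplified two-step version is sketched below using Snowpark. Table and column names are hypothetical, and only a single tracked attribute (ADDRESS) is shown.

```python
# Deliberately simplified SCD Type-2 sketch: expire changed current rows, then insert
# new current versions. All table and column names are hypothetical placeholders.
from snowflake.snowpark import Session


def apply_scd2_customers(session: Session) -> None:
    # Step 1: close out current rows whose tracked attribute changed in staging.
    session.sql("""
        MERGE INTO DWH.DIM_CUSTOMER AS d
        USING STAGING.CUSTOMERS AS s
        ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
        WHEN MATCHED AND d.ADDRESS <> s.ADDRESS THEN UPDATE
            SET IS_CURRENT = FALSE, VALID_TO = CURRENT_TIMESTAMP()
    """).collect()

    # Step 2: insert a new current version for changed or brand-new customers
    # (changed customers no longer have a current row after step 1).
    session.sql("""
        INSERT INTO DWH.DIM_CUSTOMER
            (CUSTOMER_ID, ADDRESS, VALID_FROM, VALID_TO, IS_CURRENT)
        SELECT s.CUSTOMER_ID, s.ADDRESS, CURRENT_TIMESTAMP(), NULL, TRUE
        FROM STAGING.CUSTOMERS AS s
        LEFT JOIN DWH.DIM_CUSTOMER AS d
          ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
        WHERE d.CUSTOMER_ID IS NULL
    """).collect()
```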

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 15 Lacs

Ahmedabad, Mumbai (All Areas)

Work from Office

6 years of experience with AWS, Snowflake, Microsoft SQL Server, SSMS, Visual Studio, and data warehouse ETL processes. 4 years of programming experience with Python, C#, VB.NET, and T-SQL. Minimum of 3 years of experience building end-to-end pipelines within the AWS stack.

Required candidate profile: strong collaborative, team-oriented style; impeccable customer service skills; experience with healthcare information systems and healthcare practice processes; experience with SaaS applications; good communication.

Posted 2 months ago

Apply

3.0 - 6.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Snowflake Developer
Job Location: Hyderabad

Description: We are seeking a talented Snowflake ETL/ELT Engineer to join our growing Data Engineering team. The ideal candidate will have extensive experience designing, building, and maintaining scalable data integration solutions in Snowflake.

Responsibilities:
- Design, develop, and implement data integration solutions using Snowflake's ELT features
- Load and transform large data volumes from a variety of sources into Snowflake
- Optimize data integration processes for performance and efficiency
- Collaborate with other teams, such as Data Analytics and Business Intelligence, to ensure the integration of data into the data warehouse meets their needs
- Create and maintain technical documentation for ETL/ELT processes and data structures
- Stay current with emerging trends and technologies related to Snowflake, ETL, and ELT

Requirements:
- 3-6 years of experience in data integration and ETL/ELT development
- Extensive experience with Snowflake, including its ELT features
- Experience with advanced Snowflake features, including AI/ML
- Strong proficiency in SQL, Python, and data transformation techniques
- Experience with cloud-based data warehousing and data integration tools
- Knowledge of data warehousing design principles and best practices
- Excellent communication and collaboration skills

If you have a passion for data engineering and a proven track record of success in Snowflake, ETL, and ELT, we want to hear from you! Please share the following details: CTC, ECTC, notice period, relevant experience in Snowflake development, current location, and willingness to work from the Hyderabad office (Y/N).
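As an illustration of the bulk-loading responsibility above, here is a hedged sketch of a COPY INTO load from a named stage, issued through Snowpark. The stage, table, and file-format options are hypothetical placeholders.

```python
# Hedged sketch of a bulk load into Snowflake from a cloud stage.
# Stage, table, and file-format settings are hypothetical placeholders.
from snowflake.snowpark import Session


def load_raw_orders(session: Session) -> None:
    # COPY INTO performs a parallel bulk load from staged files; files that were
    # already loaded are skipped on reruns, which keeps the load idempotent.
    session.sql("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
        ON_ERROR = 'ABORT_STATEMENT'
    """).collect()
```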

Posted 2 months ago

Apply

4.0 - 9.0 years

15 - 27 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Location: Kolkata, Hyderabad, Bangalore
Experience: 4 to 17 years
Band: 4B, 4C, 4D
Skill set: Snowflake, Horizon, Snowpark, Kafka for ETL

Posted 2 months ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Pune, Chennai, Bengaluru

Work from Office

Consultant Data Engineer
Tools & Technology: Snowflake, SnowSQL, AWS, DBT, Snowpark, Airflow, DWH, Unix, SQL, Shell Scripting, PySpark, Git, Visual Studio, ServiceNow.

Duties and Responsibilities:
- Act as Consultant Data Engineer: understand business requirements and design, develop, and maintain scalable, automated data pipelines and ETL processes to ensure efficient data processing and storage.
- Create a robust, extensible architecture to meet client/business requirements using Snowflake objects integrated with AWS services and DBT.
- Work on different types of data ingestion pipelines as per requirements, and develop in DBT (Data Build Tool) for data transformation.
- Work on integrating multiple AWS services with Snowflake, and on integrating structured and semi-structured data sets.
- Work on performance tuning and cost optimization, and implement CDC or SCD Type 2.
- Design and build solutions for near-real-time stream as well as batch processing.
- Implement best practices for data management, data quality, and data governance.
- Handle data collection, data cleaning, and pre-processing using Snowflake and DBT.
- Investigate production issues and fine-tune data pipelines.
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery.
- Coordinate with and support software developers, database architects, data analysts, and data scientists on data initiatives.
- Orchestrate pipelines using Airflow (see the sketch after this listing).
- Suggest improvements to processes, products, and services.
- Interact with users, management, and technical personnel to clarify business issues, identify problems, and suggest changes/solutions to business and developers.
- Create technical documentation on Confluence to support knowledge sharing.

Associate Data Engineer
Tools & Technology: Snowflake, DBT, AWS, Airflow, ETL, Data Warehouse, Shell Scripting, SQL, Git, Confluence, Python

Duties and Responsibilities:
- Act as offshore data engineer for enhancement and testing.
- Design and build solutions for near-real-time stream processing as well as batch processing.
- Develop Snowflake objects using their unique features, and implement data integration and transformation workflows using DBT with AWS service integration.
- Participate in implementation planning and respond to production issues.
- Handle data collection, data cleaning, and pre-processing.
- Develop UDFs, Snowflake procedures, Streams, and Tasks.
- Troubleshoot customer data issues: manual loads for any missed data, data duplication checks, and handling with root cause analysis (RCA).
- Investigate production job failures through to root cause.
- Develop ETL processes and data integration solutions.
- Understand the business needs of the client and provide technical solutions.
- Monitor the overall functioning of processes, identify areas for improvement, and implement them with the help of scripting.
- Handle major outages effectively, with clear communication to business, users, and development partners.
- Define and create run book entries and knowledge articles based on incidents experienced in production.

Associate Engineer
Tools and Technology: Unix, Oracle, Shell Scripting, ETL, Hadoop, Spark, Sqoop, Hive, Control-M, Techtia, SQL, Jira, HDFS, Snowflake, DBT, AWS

Duties and Responsibilities:
- Worked as a senior production/application support engineer, supporting the loading, processing, and reporting of files and generating reports.
- Monitored multiple batches, jobs, and processes; analyzed job failures and handled FTP failures and connectivity issues behind batch/job failures.
- Performed data analysis on files, generated files, and sent them to destination servers depending on job functionality.
- Created shell scripts to automate daily tasks or service-owner requests.
- Tuned jobs to improve performance and performed daily checks.
- Coordinated with Middleware, DWH, CRM, and other teams in case of any issue or CRQ.
- Monitored the overall functioning of processes, identified improvement areas, and implemented them with the help of scripting.
- Raised PBIs after approval from the service owner, and worked on performance-improvement and automation activities to decrease manual workload.
- Ingested data from RDBMS systems to HDFS/Hive through Sqoop.
- Understood customer problems and provided appropriate technical solutions.
- Handled major outages effectively, with clear communication to business, users, and development partners.
- Coordinated with clients and on-site teams, joining bridge calls for any issues, and handled daily issues based on application and job performance.
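As referenced in the Airflow orchestration duty above, here is a minimal, hedged sketch of a daily DAG that runs an ingestion step followed by a dbt run. It assumes a recent Airflow 2.x installation; the DAG id, schedule, script paths, and commands are illustrative assumptions rather than the team's actual pipeline.

```python
# Hedged sketch of a daily Airflow DAG: ingest raw files, then run dbt models.
# DAG id, schedule, and shell commands are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",        # run once a day at 02:00 (recent Airflow 2.x assumed)
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_files",
        bash_command="python /opt/pipelines/load_raw_orders.py",
    )

    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    ingest >> transform   # ingestion must finish before transformations start
```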

Posted 2 months ago

Apply

4.0 - 8.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Job Location: Bangalore
Experience: 4+ Years
Job Type: FTE
Note: Looking only for immediate to 1-week joiners. Must be comfortable with a video discussion.

Key skills required (JD):
- Option 1: Big Data - Hadoop + Hive + HDFS; Python or Scala as the language
- Option 2: Snowflake with Big Data knowledge (Snowpark preferred); Python or Scala as the language

Contact Person - Amrita
Please share your updated profile to amrita.anandita@htcinc.com with the below mentioned details:
Full Name (as per Aadhaar card) -
Total Exp. -
Rel. Exp. (Bigdata Hadoop) -
Rel. Exp. (Python) -
Rel. Exp. (Scala) -
Rel. Exp. (Hive) -
Rel. Exp. (HDFS) -
OR
Rel. Exp. (Snowflake) -
Rel. Exp. (Snowpark) -
Highest Education (if B.Tech/B.E., please specify) -
Notice Period -
If serving notice or not working, mention your last working day as per your relieving letter -
CCTC -
ECTC -
Current Location -
Preferred Location -

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Big Data Engineer (Remote, Contract, 6 Months+)

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and big data solutions.
- Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop.
- Process large data volumes from diverse sources using Hadoop ecosystem tools.
- Build end-to-end data workflows for batch and streaming pipelines.
- Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases.
- Collaborate with data scientists and business stakeholders to design robust data infrastructure.
- Ensure data integrity, consistency, and security in line with organizational policies.
- Troubleshoot and tune performance for distributed systems and applications.

Must-Have Skills:
- Data engineering / big data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Data ingestion and ETL, data pipeline design, distributed computing
- Strong understanding of big data architectures and performance tuning
- Hands-on experience with large-scale data storage and query optimization

Nice-to-Have:
- Apache Airflow / Oozie experience
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala
- CI/CD and DevOps exposure

Contract Details:
- Role: Senior Big Data Engineer
- Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
- Duration: 6+ Months (Contract)
- Apply via Email: navaneeta@suzva.com | Contact: 9032956160

How to Apply: Send your updated resume with the subject "Application for Remote Big Data Engineer Contract Role" and include your updated resume, current CTC, expected CTC, current location, and notice period / availability.

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Opening: Big Data Engineer (Remote, Contract, 6 Months+)
Location: Remote | Contract Duration: 6+ Months | Domain: Big Data Stack

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and big data solutions.
- Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop.
- Process large data volumes from diverse sources using Hadoop ecosystem tools.
- Build end-to-end data workflows for batch and streaming pipelines.
- Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases.
- Collaborate with data scientists and business stakeholders to design robust data infrastructure.
- Ensure data integrity, consistency, and security in line with organizational policies.
- Troubleshoot and tune performance for distributed systems and applications.

Must-Have Skills:
- Data engineering / big data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Data ingestion and ETL, data pipeline design, distributed computing
- Strong understanding of big data architectures and performance tuning
- Hands-on experience with large-scale data storage and query optimization

Nice-to-Have:
- Apache Airflow / Oozie experience
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala
- CI/CD and DevOps exposure

Contract Details:
- Role: Senior Big Data Engineer
- Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
- Duration: 6+ Months (Contract)
- Apply via Email: navaneeta@suzva.com | Contact: 9032956160

How to Apply: Send your updated resume with the subject "Application for Remote Big Data Engineer Contract Role" and include your updated resume, current CTC, expected CTC, current location, and notice period / availability.

Posted 2 months ago

Apply

3 - 5 years

0 - 2 Lacs

Bengaluru

Hybrid

Demand 1 - Mandatory skills: 3.5-7 years (Big Data - Adobe, Scala, Python, Linux)
Demand 2 - Mandatory skills: 3.5-7 years (Big Data - Snowflake (Snowpark), Scala, Python, Linux)

Specialist Software Engineer - Big Data

Missions: We are seeking an experienced Big Data senior developer to lead our data engineering efforts. In this role, you will design, develop, and maintain large-scale data processing systems. You will work with cutting-edge technologies to deliver high-quality solutions for data ingestion, storage, processing, and analytics. Your expertise will be critical in driving our data strategy and ensuring the reliability and scalability of our big data infrastructure.

Profile:
- 3 to 8 years of experience in application development with Spark/Scala.
- Good hands-on experience working with the Hadoop ecosystem (HDFS, Hive, Spark) and a good understanding of the Hadoop file formats.
- Good expertise in Hive/HDFS, PySpark, Spark, Jupyter Notebook, ELT Talend, Control-M, Unix/scripting, Python, CI/CD, Git/Jira, Hadoop, TOM, Oozie, and Snowflake.
- Expertise in the implementation of data quality controls.
- Ability to interpret the Spark UI, identify bottlenecks in Spark processes, and provide the optimal solution.

Tools:
- Ability to learn and work with various tools such as IntelliJ, Git, Control-M, and SonarQube, and to onboard new frameworks into the project.
- Should be able to independently handle projects.

Agile:
- Good to have exposure to CI/CD processes.
- Exposure to Agile methodology and processes.

Others:
- Ability to understand complex business rules and translate them into technical specifications/design.
- Write highly efficient and optimized code that is easily scalable.
- Adherence to coding, quality, and security standards.
- Effective verbal and written communication to work closely with all stakeholders.
- Should be able to convince stakeholders of the proposed solutions.

Posted 2 months ago

Apply