5.0 - 15.0 years
22 - 24 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 22,00,000 - Rs 24,00,000 (i.e., INR 22-24 LPA) | Min Experience: 5 years | Location: Bengaluru, Chennai, Gurgaon | Job Type: Full-time

We are looking for an experienced Snowflake Developer to join our Data Engineering team. The ideal candidate will possess a deep understanding of Data Warehousing, SQL, ETL tools such as Informatica, and visualization platforms such as Power BI. This role involves building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to deliver impactful data solutions.

Key Responsibilities
- Data Engineering & Warehousing: Leverage over 5 years of hands-on experience in Data Engineering with a focus on Data Warehousing and Business Intelligence.
- Pipeline Development: Design and maintain ELT pipelines using Snowflake, Fivetran, and DBT to ingest and transform data from multiple sources.
- SQL Development: Write and optimize complex SQL queries and stored procedures to support robust data transformations and analytics.
- Data Modeling & ELT: Implement advanced data modeling practices, including SCD Type-2, and build high-performance ELT workflows using DBT.
- Requirement Analysis: Partner with business stakeholders to capture data needs and convert them into scalable technical solutions.
- Data Quality & Troubleshooting: Conduct root cause analysis on data issues, maintain high data integrity, and ensure reliability across systems.
- Collaboration & Documentation: Collaborate with engineering and business teams; develop and maintain thorough documentation for pipelines, data models, and processes.

Skills & Qualifications
- Expertise in Snowflake for large-scale data warehousing and ELT operations.
- Strong SQL skills with the ability to create and manage complex queries and procedures.
- Proven experience with Informatica PowerCenter for ETL development.
- Proficiency with Power BI for data visualization and reporting.
- Hands-on experience with Fivetran for automated data integration.
- Familiarity with DBT, Sigma Computing, Tableau, and Oracle.
- Solid understanding of data analysis, requirement gathering, and source-to-target mapping.
- Knowledge of cloud ecosystems such as Azure (including ADF and Databricks); experience with AWS or GCP is a plus.
- Experience with workflow orchestration tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for scripting and data processing (Java or Scala is a plus).
- Bachelor's or graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field.

Key Tools & Technologies
- Snowflake, SnowSQL, Snowpark, SQL, Informatica, Power BI, DBT
- Python, Fivetran, Sigma Computing, Tableau
- Airflow, Azkaban, Azure, Databricks, ADF
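As an illustration of the SCD Type-2 pattern this posting asks for, here is a minimal, hedged sketch using the Snowflake Python connector. It is not the client's actual code: the staging and dimension tables, columns, and connection parameters are hypothetical.

```python
# Hedged sketch: SCD Type-2 upsert into a customer dimension in Snowflake.
# Table/column names (stg_customers, dim_customers, customer_id, email, ...)
# and connection details are illustrative assumptions.
import snowflake.connector

EXPIRE_CHANGED = """
UPDATE dim_customers d
   SET valid_to = CURRENT_TIMESTAMP(), is_current = FALSE
 WHERE d.is_current
   AND EXISTS (SELECT 1 FROM stg_customers s
                WHERE s.customer_id = d.customer_id
                  AND s.email <> d.email)
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_customers (customer_id, email, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, CURRENT_TIMESTAMP(), NULL, TRUE
  FROM stg_customers s
  LEFT JOIN dim_customers d
    ON d.customer_id = s.customer_id AND d.is_current
 WHERE d.customer_id IS NULL   -- new keys, plus changed keys just expired above
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="DIM",
)
try:
    cur = conn.cursor()
    cur.execute(EXPIRE_CHANGED)        # close out rows whose tracked attribute changed
    cur.execute(INSERT_NEW_VERSIONS)   # open a fresh current row per new/changed key
finally:
    conn.close()
```

The order matters: changed rows are expired first, so the insert only has to look for keys without a current row. In a DBT-centric stack the same effect is usually achieved declaratively with a snapshot.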
Posted 1 week ago
6.0 - 11.0 years
5 - 15 Lacs
Ahmedabad, Mumbai (All Areas)
Work from Office
6 years of experience with AWS, Snowflake, Microsoft SQL Server, SSMS, Visual Studio, and Data Warehouse ETL processes. 4 years of programming experience with Python, C#, VB.NET, and T-SQL. Minimum of 3 years of experience building end-to-end pipelines within the AWS stack.

Required Candidate Profile
- Strong, collaborative, team-oriented style
- Impeccable customer service skills
- Experience with healthcare information systems and healthcare practice processes
- Experience with SaaS applications
- Good communication
Posted 2 weeks ago
3.0 - 6.0 years
0 - 0 Lacs
Hyderabad
Work from Office
Snowflake Developer
Job Location: Hyderabad

Description: We are seeking a talented Snowflake ETL/ELT Engineer to join our growing Data Engineering team. The ideal candidate will have extensive experience designing, building, and maintaining scalable data integration solutions in Snowflake.

Responsibilities:
- Design, develop, and implement data integration solutions using Snowflake's ELT features
- Load and transform large data volumes from a variety of sources into Snowflake
- Optimize data integration processes for performance and efficiency
- Collaborate with other teams, such as Data Analytics and Business Intelligence, to ensure the integration of data into the data warehouse meets their needs
- Create and maintain technical documentation for ETL/ELT processes and data structures
- Stay current with emerging trends and technologies related to Snowflake, ETL, and ELT

Requirements:
- 3-6 years of experience in data integration and ETL/ELT development
- Extensive experience with Snowflake, including its ELT features
- Experience with advanced Snowflake features, including AI/ML capabilities
- Strong proficiency in SQL, Python, and data transformation techniques
- Experience with cloud-based data warehousing and data integration tools
- Knowledge of data warehousing design principles and best practices
- Excellent communication and collaboration skills

If you have a passion for data engineering and a proven track record of success in Snowflake, ETL, and ELT, we want to hear from you!

Please share the below details:
- CTC
- ECTC
- Notice period
- Relevant experience in Snowflake development
- Current location
- Willing to work from the Hyderabad office (Y/N)
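For context on the "load and transform" ELT step this posting describes, here is a hedged sketch of a basic Snowflake ELT job run through the Python connector: bulk-load raw JSON from a named stage with COPY INTO, then flatten it into a typed table. Stage, table, and column names are assumptions, not the employer's setup.

```python
# Hedged sketch of a minimal Snowflake ELT step via the Python connector.
# Assumes raw_events has a single VARIANT column named "payload".
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="EVENTS",
)
cur = conn.cursor()

# Load: named stage -> variant-typed landing table
cur.execute("""
    COPY INTO raw_events
    FROM @events_stage/2024/
    FILE_FORMAT = (TYPE = JSON)
    ON_ERROR = 'CONTINUE'
""")

# Transform: flatten the semi-structured payload into analytics-ready columns
cur.execute("""
    INSERT INTO events_clean (event_id, user_id, event_ts, event_type)
    SELECT payload:id::STRING,
           payload:user_id::NUMBER,
           payload:ts::TIMESTAMP_NTZ,
           payload:type::STRING
    FROM raw_events
""")
conn.close()
```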
Posted 2 weeks ago
4.0 - 9.0 years
15 - 27 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Location: Kolkata, Hyderabad, Bangalore
Experience: 4 to 17 years
Band: 4B, 4C, 4D
Skill set: Snowflake, Horizon, Snowpark, Kafka for ETL
Posted 2 weeks ago
8.0 - 13.0 years
15 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
Consultant Data Engineer

Tools & Technology: Snowflake, SnowSQL, AWS, DBT, Snowpark, Airflow, DWH, Unix, SQL, Shell Scripting, PySpark, Git, Visual Studio, ServiceNow

Duties and Responsibilities
- Act as Consultant Data Engineer: understand business requirements and design, develop, and maintain scalable automated data pipelines and ETL processes to ensure efficient data processing and storage.
- Create a robust, extensible architecture to meet client/business requirements using Snowflake objects integrated with AWS services and DBT.
- Work on different types of data ingestion pipelines as per requirements.
- Develop in DBT (Data Build Tool) for data transformation as per requirements.
- Work on integrating multiple AWS services with Snowflake.
- Work with integration of structured and semi-structured data sets.
- Work on performance tuning and cost optimization.
- Implement CDC or SCD Type-2.
- Design and build solutions for near-real-time stream as well as batch processing.
- Implement best practices for data management, data quality, and data governance.
- Take responsibility for data collection, data cleaning, and pre-processing using Snowflake and DBT.
- Investigate production issues and fine-tune data pipelines.
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery.
- Coordinate with and support software developers, database architects, data analysts, and data scientists on data initiatives.
- Orchestrate the pipeline using Airflow (see the sketch after this listing).
- Suggest improvements to processes, products, and services.
- Interact with users, management, and technical personnel to clarify business issues, identify problems, and suggest changes/solutions to business and developers.
- Create technical documentation on Confluence to aid knowledge sharing.

Associate Data Engineer

Tools & Technology: Snowflake, DBT, AWS, Airflow, ETL, Data Warehouse, Shell Scripting, SQL, Git, Confluence, Python

Duties and Responsibilities
- Act as offshore Data Engineer for enhancement and testing.
- Design and build solutions for near-real-time stream processing as well as batch processing.
- Develop Snowflake objects using their distinctive features.
- Implement data integration and transformation workflows using DBT.
- Integrate AWS services with Snowflake.
- Participate in implementation planning and respond to production issues.
- Take responsibility for data collection, data cleaning, and pre-processing.
- Develop UDFs, Snowflake procedures, Streams, and Tasks.
- Troubleshoot customer data issues: manual loads for any missed data, data duplication checks, and resolution with RCA.
- Investigate production job failures through to root cause analysis.
- Develop ETL processes and data integration solutions.
- Understand the business needs of the client and provide technical solutions.
- Monitor the overall functioning of processes, identify improvement areas, and implement them with the help of scripting.
- Handle major outages effectively, with clear communication to business, users, and development partners.
- Define and create Run Book entries and knowledge articles based on incidents experienced in production.

Associate Engineer

Tools and Technology: Unix, Oracle, Shell Scripting, ETL, Hadoop, Spark, Sqoop, Hive, Control-M, Techtia, SQL, Jira, HDFS, Snowflake, DBT, AWS

Duties and Responsibilities
- Worked as a Senior Production/Application Support Engineer.
- Worked as a production support member for loading, processing, and reporting of files and generating reports.
- Monitored multiple batches, jobs, and processes; analyzed issues behind job failures; and handled FTP failures and connectivity issues causing batch/job failures.
- Performed data analysis on files, generated files, and sent them to the destination server depending on the job's functionality.
- Created shell scripts to automate daily tasks or tasks requested by the Service Owner.
- Involved in tuning jobs to improve performance and performing daily checks.
- Coordinated with Middleware, DWH, CRM, and other teams in case of any issue or CRQ.
- Monitored the overall functioning of processes, identified improvement areas, and implemented them with the help of scripting.
- Involved in tuning jobs to improve performance and raising PBIs after approval from the Service Owner.
- Involved in performance-improvement automation activities to decrease manual workload.
- Handled data ingestion from RDBMS systems to HDFS/Hive through Sqoop.
- Understood customer problems and provided appropriate technical solutions.
- Handled major outages effectively, with clear communication to business, users, and development partners.
- Coordinated with the client and on-site personnel, and joined bridge calls for any issues.
- Handled daily issues based on application and job performance.
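The consultant role above mentions orchestrating the Snowflake + DBT pipeline with Airflow. The sketch below shows one plausible shape of such a DAG; the schedule, project paths, and the ingestion command are hedged assumptions rather than this employer's actual workflow.

```python
# Hedged sketch: an Airflow 2.x DAG that chains ingestion, DBT models, and DBT tests.
# Paths, schedule, and task names are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_dbt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",      # nightly batch window (assumed); Airflow 2.4+ syntax
    catchup=False,
) as dag:
    # Ingestion is a placeholder; in practice this might be Fivetran, Snowpipe,
    # or a custom loader script.
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="python /opt/pipelines/ingest_to_snowflake.py",
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    ingest >> dbt_run >> dbt_test   # transform only after ingestion, validate last
```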
Posted 2 weeks ago
4.0 - 8.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Job Location: Bangalore
Experience: 4+ years
Job Type: FTE
Note: Looking only for immediate to 1-week joiners. Must be comfortable with a video discussion.

JD - Key skills required:
Option 1:
- Big Data: Hadoop + Hive + HDFS
- Python or Scala (language)
OR
Option 2:
- Snowflake with Big Data knowledge; Snowpark is preferred
- Python / Scala (language)

Contact person: Amrita
Please share your updated profile to amrita.anandita@htcinc.com with the below-mentioned details:
- Full Name (as per Aadhaar card)
- Total experience
- Relevant experience (Big Data Hadoop)
- Relevant experience (Python)
- Relevant experience (Scala)
- Relevant experience (Hive)
- Relevant experience (HDFS)
OR
- Relevant experience (Snowflake)
- Relevant experience (Snowpark)
- Highest education (if B.Tech/B.E., please specify)
- Notice period (if serving notice or not working, mention your last working day as per your relieving letter)
- CCTC
- ECTC
- Current location
- Preferred location
Posted 2 weeks ago
8.0 - 10.0 years
10 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Big Data Engineer (Remote, Contract 6 Months+)

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

#KeyResponsibilities
- Design, develop, and maintain scalable data pipelines and big data solutions.
- Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop.
- Process large data volumes from diverse sources using Hadoop ecosystem tools.
- Build end-to-end data workflows for batch and streaming pipelines.
- Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases.
- Collaborate with data scientists and business stakeholders to design robust data infrastructure.
- Ensure data integrity, consistency, and security in line with organizational policies.
- Troubleshoot and tune performance for distributed systems and applications.

#MustHaveSkills
- Data Engineering / Big Data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Data ingestion & ETL, data pipeline design, distributed computing
- Strong understanding of Big Data architectures & performance tuning
- Hands-on experience with large-scale data storage and query optimization

#NiceToHave
- Apache Airflow / Oozie experience
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala
- CI/CD and DevOps exposure

#ContractDetails
Role: Senior Big Data Engineer
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Duration: 6+ Months (Contract)
Apply via email: navaneeta@suzva.com
Contact: 9032956160

#HowToApply
Send your updated resume with the subject: "Application for Remote Big Data Engineer Contract Role"
Include in your email: updated resume, current CTC, expected CTC, current location, notice period / availability.
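Since this role emphasizes Snowpark-based transformations, here is a hedged sketch of a Snowpark (Python) job of the kind described: read a fact table, aggregate it inside Snowflake, and persist the result. Connection parameters and table names are illustrative assumptions.

```python
# Hedged sketch of a Snowpark DataFrame transformation; the computation is
# pushed down to Snowflake rather than pulled into local memory.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "my_account", "user": "etl_user", "password": "***",
    "warehouse": "BATCH_WH", "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

orders = session.table("raw_orders")            # lazy reference, evaluated in Snowflake
daily_revenue = (
    orders
    .filter(col("status") == "COMPLETED")
    .group_by(col("order_date"), col("region"))
    .agg(sum_(col("amount")).alias("revenue"))
)

# Materialize the aggregate for downstream reporting
daily_revenue.write.mode("overwrite").save_as_table("daily_revenue")
session.close()
```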
Posted 2 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Big Data Engineer (Remote, Contract 6 Months+)
Location: Remote | Contract Duration: 6+ Months | Domain: Big Data Stack

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

#KeyResponsibilities
- Design, develop, and maintain scalable data pipelines and big data solutions.
- Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop.
- Process large data volumes from diverse sources using Hadoop ecosystem tools.
- Build end-to-end data workflows for batch and streaming pipelines.
- Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases.
- Collaborate with data scientists and business stakeholders to design robust data infrastructure.
- Ensure data integrity, consistency, and security in line with organizational policies.
- Troubleshoot and tune performance for distributed systems and applications.

#MustHaveSkills
- Data Engineering / Big Data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Data ingestion & ETL, data pipeline design, distributed computing
- Strong understanding of Big Data architectures & performance tuning
- Hands-on experience with large-scale data storage and query optimization

#NiceToHave
- Apache Airflow / Oozie experience
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala
- CI/CD and DevOps exposure

#ContractDetails
Role: Senior Big Data Engineer
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Duration: 6+ Months (Contract)
Apply via email: navaneeta@suzva.com
Contact: 9032956160

#HowToApply
Send your updated resume with the subject: "Application for Remote Big Data Engineer Contract Role"
Include in your email: updated resume, current CTC, expected CTC, current location, notice period / availability.
Posted 3 weeks ago
3 - 5 years
0 - 2 Lacs
Bengaluru
Hybrid
Demand 1 - Mandatory skill: 3.5-7 years (Big Data - Adobe & Scala, Python, Linux)
Demand 2 - Mandatory skill: 3.5-7 years (Big Data - Snowflake (Snowpark) & Scala, Python, Linux)

Specialist Software Engineer - Big Data

Missions
We are seeking an experienced Big Data Senior Developer to lead our data engineering efforts. In this role, you will design, develop, and maintain large-scale data processing systems. You will work with cutting-edge technologies to deliver high-quality solutions for data ingestion, storage, processing, and analytics. Your expertise will be critical in driving our data strategy and ensuring the reliability and scalability of our big data infrastructure.

Profile
- 3 to 8 years of experience in application development with Spark/Scala
- Good hands-on experience working on the Hadoop ecosystem (HDFS, Hive, Spark)
- Good understanding of the Hadoop file formats
- Good expertise in Hive/HDFS, PySpark, Spark, Jupyter Notebook, ELT Talend, Control-M, Unix/shell scripting, Python, CI/CD, Git/Jira, Hadoop, TOM, Oozie, and Snowflake
- Expertise in the implementation of data quality controls
- Ability to interpret the Spark UI, identify bottlenecks in the Spark process, and provide the optimal solution

Tools
- Ability to learn and work with tools such as IntelliJ, Git, Control-M, and SonarQube, and to onboard new frameworks into the project
- Should be able to independently handle projects

Agile
- Good to have exposure to CI/CD processes
- Exposure to Agile methodology and processes

Others
- Ability to understand complex business rules and translate them into technical specifications/design
- Write highly efficient and optimized code that is easily scalable
- Adherence to coding, quality, and security standards
- Effective verbal and written communication to work closely with all stakeholders
- Should be able to convince stakeholders of the proposed solutions
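To ground the "data quality controls" and Spark items in this profile, here is a hedged PySpark sketch: read a Hive table, split rows on a simple validity rule, report the rejected count, and persist the clean partitioned output. Table, column, and path names are assumptions for illustration only.

```python
# Hedged sketch of a Spark job with a basic data-quality control.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_quality_check")
    .enableHiveSupport()          # required to read managed Hive tables
    .getOrCreate()
)

orders = spark.table("sales_db.orders")

# Data-quality rule: primary key present and amount non-negative
valid = orders.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
invalid = orders.filter(F.col("order_id").isNull() | (F.col("amount") < 0))

# In a real pipeline this count would feed a monitoring/alerting control
print(f"rejected rows: {invalid.count()}")

# Persist clean data partitioned by date to limit downstream scan cost
valid.write.mode("overwrite").partitionBy("order_date").parquet("/data/clean/orders")
spark.stop()
```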
Posted 1 month ago
7 - 12 years
15 - 25 Lacs
Delhi NCR, Bengaluru, Hyderabad
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)!

In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
- Experience in the IT industry
- Working experience building productionized data ingestion and processing pipelines in Snowflake
- Strong understanding of Snowflake architecture
- Fully well-versed in data warehousing concepts
- Expertise and excellent understanding of Snowflake features and of integrating Snowflake with other data processing tools
- Able to create data pipelines for ETL/ELT
- Good to have DBT experience
- Excellent presentation and communication skills, both written and verbal
- Ability to problem-solve and architect in an environment with unclear requirements
- Able to create high-level and low-level design documents based on requirements
- Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud
- Awareness of data visualisation tools and methodologies
- Works independently on business problems and generates meaningful insights
- Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI (not mandatory)
- Should have experience implementing Snowflake best practices
- Snowflake SnowPro Core Certification will be an added advantage

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to customers, working with the offshore team, etc.
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark.
- Should have some experience with Snowflake RBAC and data security.
- Should have good experience implementing CDC or SCD Type-2.
- Should have good experience implementing Snowflake best practices.
- In-depth understanding of Data Warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Should have experience building data ingestion pipelines.
- Optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have experience in deployment using CI/CD tools and experience with repositories such as Azure Repos, GitHub, etc.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Senior Snowflake Data Engineer.

Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, Data Modeling & Data Warehousing concepts

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
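The responsibilities above mention Snowpipe and S3 integration. Below is a hedged sketch of the Snowflake-side setup for continuous S3 ingestion (storage integration, external stage, auto-ingest pipe), executed through the Python connector. The IAM role ARN, bucket, and object names are placeholders, not Genpact's environment.

```python
# Hedged sketch: storage integration -> external stage -> Snowpipe (auto-ingest).
# Creating a storage integration typically requires an account-admin-level role.
import snowflake.connector

STATEMENTS = [
    """CREATE STORAGE INTEGRATION IF NOT EXISTS s3_int
         TYPE = EXTERNAL_STAGE
         STORAGE_PROVIDER = 'S3'
         ENABLED = TRUE
         STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_loader'
         STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/landing/')""",
    """CREATE STAGE IF NOT EXISTS landing_stage
         URL = 's3://my-bucket/landing/'
         STORAGE_INTEGRATION = s3_int
         FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)""",
    # AUTO_INGEST relies on S3 event notifications pointed at the pipe's queue
    """CREATE PIPE IF NOT EXISTS orders_pipe AUTO_INGEST = TRUE AS
         COPY INTO raw_orders FROM @landing_stage""",
]

conn = snowflake.connector.connect(
    account="my_account", user="platform_admin", password="***",
    role="ACCOUNTADMIN", warehouse="ADMIN_WH", database="RAW", schema="INGEST",
)
cur = conn.cursor()
for stmt in STATEMENTS:
    cur.execute(stmt)   # one DDL statement per execute() call
conn.close()
```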
Posted 2 months ago
8 - 13 years
30 - 35 Lacs
Hyderabad
Remote
Key Responsibilities:
- Snowflake Architecture & Setup: Design and implement Snowflake environments, ensuring best practices in RBAC, network security policies, and external access integrations.
- Iceberg Catalog Implementation: Configure and manage Apache Iceberg catalogs within Snowflake and integrate with Azure ADLS Gen2 for external storage.
- External Storage & Access: Set up external tables, storage integrations, and access policies for ADLS Gen2, AWS S3, and GCS.
- Data Ingestion & Streaming: Implement Snowpipe, Dynamic Tables, and batch/streaming ETL pipelines for real-time and scheduled data processing.
- CI/CD & Automation: Develop CI/CD pipelines for Snowflake schema changes, security updates, and data workflows using Terraform, dbt, GitHub Actions, or Azure DevOps.
- Snowflake Notebooks & Snowpark: Utilize Snowflake Notebooks for analytics and data exploration, and develop Snowpark applications for machine learning and complex data transformations using Python, Java, or Scala.
- Security & Compliance: Implement RBAC, Okta SSO authentication, OAuth, network security policies, and governance frameworks for Snowflake environments.
- Notification & Monitoring Integration: Set up event-driven notifications and alerting using Azure Event Grid, SNS, or cloud-native services.
- Performance & Cost Optimization: Continuously monitor query performance, warehouse utilization, cost estimates, and optimizations to improve efficiency.
- Documentation & Best Practices: Define best practices for Snowflake architecture, automation, security, and performance tuning.

Required Skills & Experience:
- 7+ years of experience in data architecture and engineering, specializing in Snowflake.
- Expertise in SQL, Python, and Snowpark APIs.
- Hands-on experience with Iceberg catalogs, Snowflake Notebooks, and external storage (Azure ADLS Gen2, S3, GCS).
- Strong understanding of CI/CD for Snowflake, including automation with Terraform, dbt, and DevOps tools.
- Experience with Snowpipe, Dynamic Tables, and real-time/batch ingestion pipelines.
- Proven ability to analyze and optimize Snowflake performance, storage costs, and compute efficiency.
- Knowledge of Okta SSO, OAuth, federated authentication, and network security in Snowflake.
- Cloud experience in Azure, AWS, or GCP, including cloud networking and security configurations.

Additional Details: This is a contractual position for a duration of 6-12 months. It is a completely remote opportunity.
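Two of the responsibilities above, Dynamic Tables and RBAC, lend themselves to a short illustration. The sketch below uses a Snowpark session to create an incrementally refreshed Dynamic Table and a minimal read-only role; warehouse, role, and object names are hedged placeholders, and role administration typically requires SECURITYADMIN or USERADMIN rights.

```python
# Hedged sketch: a Dynamic Table for near-real-time transformation plus a
# minimal read-only RBAC grant, issued through Snowpark's sql() interface.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "my_account", "user": "platform_admin", "password": "***",
    "warehouse": "ADMIN_WH", "database": "ANALYTICS", "schema": "CORE",
}).create()

# Declarative, incrementally refreshed transformation (no external scheduler needed)
session.sql("""
    CREATE OR REPLACE DYNAMIC TABLE order_totals
      TARGET_LAG = '5 minutes'
      WAREHOUSE = TRANSFORM_WH
      AS
      SELECT region,
             DATE_TRUNC('day', order_ts) AS order_date,
             SUM(amount) AS revenue
      FROM raw_orders
      GROUP BY region, DATE_TRUNC('day', order_ts)
""").collect()

# Minimal RBAC: a read-only role scoped to the reporting schema
for stmt in [
    "CREATE ROLE IF NOT EXISTS REPORTING_RO",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE REPORTING_RO",
    "GRANT USAGE ON SCHEMA ANALYTICS.CORE TO ROLE REPORTING_RO",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.CORE TO ROLE REPORTING_RO",
]:
    session.sql(stmt).collect()
session.close()
```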
Posted 2 months ago
5 - 7 years
8 - 10 Lacs
Bengaluru
Hybrid
Contract Duration: 12 months
Experience Level: 5+ years of experience in designing, developing, documenting, testing, and debugging new and existing software systems.
- Snowflake SQL: writing SQL queries against Snowflake
- Developing scripts in Unix and Python

Required Candidate Profile
- Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures
- In-depth understanding of Data Warehouse/ODS, ETL concepts, and modeling structure principles
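As a brief illustration of the Time Travel item in this profile, here is a hedged sketch using the Snowflake Python connector: inspect a table as it stood an hour ago and re-insert rows deleted since then. The table name and offset are assumptions for illustration.

```python
# Hedged sketch of Snowflake Time Travel: point-in-time query and a simple
# recovery of rows deleted in the last hour. Names and offsets are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="support_user", password="***",
    warehouse="OPS_WH", database="ANALYTICS", schema="CORE",
)
cur = conn.cursor()

# Query the table as it existed one hour ago (offset in seconds)
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print("row count one hour ago:", cur.fetchone()[0])

# Re-insert rows present an hour ago but missing now (simplified recovery pattern)
cur.execute("""
    INSERT INTO orders
    SELECT * FROM orders AT(OFFSET => -3600)
    EXCEPT
    SELECT * FROM orders
""")
conn.close()
```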
Posted 3 months ago