5.0 - 9.0 years
15 - 19 Lacs
Chennai
Work from Office
Senior Data Engineer - Azure
Years of Experience: 5
Job location: Chennai

Job Description: We are looking for a skilled and experienced Senior Azure Developer to join the team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description: This data engineering role involves creating and managing the technological infrastructure of a data platform: architecting, building, and managing data flows/pipelines; constructing data storage (NoSQL, SQL); working with big data tools (Hadoop, Kafka); and using integration tools to connect sources and other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tool is an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
- Create and execute efficient, scalable, and dependable data pipelines using Azure Data Factory
- Utilize Azure Databricks for data transformation and processing
- Effectively oversee and enhance data storage solutions, with an emphasis on Azure Data Lake and other Azure storage services
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
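For illustration only, a minimal PySpark sketch of the kind of ADF-triggered Databricks transformation this role describes; the storage paths, column names, and schema are hypothetical and not taken from the employer's actual pipelines.

    from pyspark.sql import SparkSession, functions as F

    # On a Databricks cluster the session already exists; this keeps the sketch self-contained.
    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    # Read raw CSV files landed in an ADLS container (hypothetical path).
    raw = (spark.read
           .option("header", "true")
           .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

    # Basic cleansing: typed columns and de-duplication on the business key.
    cleaned = (raw
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("amount", F.col("amount").cast("double"))
               .dropDuplicates(["order_id"]))

    # Write to the curated zone as Delta (available on Databricks runtimes).
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))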
Posted 1 month ago
5.0 - 20.0 years
10 - 35 Lacs
Hyderabad, Pune, Delhi / NCR
Work from Office
Mandatory Skill - Snowflake, Matillion
Posted 1 month ago
12.0 - 22.0 years
25 - 40 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities - Snowflake JD
- Must have 12-22 years of experience in Data Warehouse, ETL, and BI projects
- Must have at least 5+ years of experience in Snowflake and 3+ years in DBT
- Expertise in Snowflake architecture is a must
- Must have at least 3+ years of experience and a strong hold on Python/PySpark
- Must have experience implementing complex stored procedures and standard DWH and ETL concepts
- Proficient in Oracle database, complex PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Good to have experience with AWS services and creating DevOps templates for various AWS services
- Experience in using GitHub and Jenkins
- Good communication and analytical skills
- Snowflake certification is desirable
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Jaipur
Remote
Lead Databricks to Snowflake migration. Expertise in PySpark, Snowflake, DBT, Airflow, CI/CD, SQL optimization, and orchestration. Ensure scalable, high-performance pipelines with strong DevOps and monitoring practices.
Posted 1 month ago
6.0 - 10.0 years
12 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Role & responsibilities (6+ years of experience required)

Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product teams to deliver value with innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team.

Opportunity: Technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python.

Responsibilities:
- Build our data pipelines for our data warehouse in Python, working with APIs to source data
- Build Power BI reports and dashboards associated with this process
- Contribute to our strategy for new data pipelines and data engineering approaches
- Maintain a medallion-based architecture for data analysis with Kimball modeling
- Participate in daily scrum calls and follow the agile SDLC
- Create meaningful documentation of your work
- Follow organizational best practices for dbt and write maintainable code

Qualifications:
- 5+ years of professional experience as a Data Engineer
- Strong dbt experience (3+ years) and knowledge of the modern data stack
- Strong experience with Snowflake (3+ years)
- Experience using Dagster and running complex pipelines (1+ year)
- Some Python experience; experience with Git and Azure DevOps
- Experience with data modeling in Kimball and medallion-based structures
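As a rough sketch of the dbt/Dagster/Snowflake pattern this posting names - Python assets sourcing data from an API before warehouse models take over - the endpoint, asset names, and fields below are hypothetical, not the team's actual pipeline.

    import requests
    from dagster import asset, Definitions

    @asset
    def raw_invoices():
        # Pull source records from a REST API (hypothetical endpoint).
        resp = requests.get("https://api.example.com/v1/invoices", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @asset
    def invoice_summary(raw_invoices):
        # Light shaping before loading into the warehouse / handing off to dbt models.
        return [{"invoice_id": r["id"], "amount": float(r["amount"])} for r in raw_invoices]

    defs = Definitions(assets=[raw_invoices, invoice_summary])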
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: Experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 1 month ago
7.0 - 12.0 years
30 - 40 Lacs
Hyderabad
Work from Office
Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues.

Required candidate profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow. Good understanding of the investment domain. Experience with dbt, cloud platforms (AWS, Azure), and DevOps.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications:
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
- Excellent coding skills in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- Implemented at least 2 projects end-to-end in Databricks.
- Experience with Databricks components including Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of creating complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Worked on both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Worked on at least one cloud (Azure, AWS, GCP) and its most common services: ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, cloud databases.
- Strong in writing unit and integration tests.
- Strong communication skills; has worked on teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications:
- Unity Catalog and basic governance knowledge.
- Databricks SQL endpoint understanding.
- CI/CD experience to build pipelines for Databricks jobs.
- Experience on migration projects to build a unified data platform.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
4.0 - 9.0 years
11 - 19 Lacs
Chennai
Work from Office
Role & responsibilities: Python, Dataproc, Airflow, PySpark, Cloud Storage, DBT, Dataform, NAS, Pub/Sub, Terraform, API, BigQuery, Data Fusion, GCP, Tekton

Preferred candidate profile: Data Engineer in Python - GCP
Location: Chennai only
Experience: 4+ years
Posted 1 month ago
6.0 - 8.0 years
22 - 25 Lacs
Pune
Work from Office
We are looking for an immediate joiner who can join us within 30 days for the below position.

Senior Snowflake DBT Developer

Primary Role: We are seeking a skilled Senior Snowflake DBT Developer to join our data engineering team. The ideal candidate will have solid experience developing ETL/ELT pipelines on Snowflake and DBT, strong SQL skills, and hands-on expertise working with Snowflake and DBT on the Azure cloud platform. This role involves designing, building, and maintaining scalable data transformation workflows and data models to support analytics and business intelligence.

Key Responsibilities:
- Design, develop, and maintain data transformation pipelines using DBT to build modular, reusable, and scalable data models on Snowflake.
- Develop and optimize SQL queries and procedures for data loading, transformation, and analysis in Snowflake.
- Load and manage data efficiently in Snowflake from various sources, ensuring data quality and integrity.
- Analyze and profile data using SQL to support business requirements and troubleshooting.
- Collaborate with data engineers, analysts, and business stakeholders to understand data needs and translate them into technical solutions.
- Implement best practices for DBT project structure, version control (Git), testing, and documentation.
- Work on the Azure cloud platform, leveraging its integration capabilities with Snowflake.
- Participate in code reviews, unit testing, and deployment processes to ensure high-quality deliverables.
- Troubleshoot and optimize data pipelines for performance and cost-effectiveness.

Desired Skills and Qualification: Bachelor's degree in Science, Engineering, or related disciplines.

Work Experience:
- 5-7 years of experience in data engineering or development roles, with at least 2 years of hands-on experience in Snowflake and 2 years with DBT.
- Experience developing ETL/ELT pipelines and working with data warehouse concepts.
- Strong proficiency in SQL, including complex query writing, data analysis, and performance tuning.
- Proven experience loading and transforming data in Snowflake.
- Hands-on experience working on the Azure cloud platform and integrating Snowflake with Azure services.
- Familiarity with DBT Core features such as models, macros, tests, hooks, and modular project structure.
- Good understanding of data modeling concepts and dimensional modeling (star/snowflake schemas).
- Experience with version control systems like Git and CI/CD workflows is a plus.
- Strong analytical, problem-solving, and communication skills.
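For context, a minimal sketch of running dbt models programmatically against a Snowflake profile; it assumes dbt-core 1.5+ with dbt-snowflake configured, and the "staging" selector is hypothetical rather than this team's actual project layout.

    from dbt.cli.main import dbtRunner, dbtRunnerResult

    dbt = dbtRunner()

    # Build the staging models, then run their tests; fail fast if either step breaks.
    for args in (["run", "--select", "staging"], ["test", "--select", "staging"]):
        res: dbtRunnerResult = dbt.invoke(args)
        if not res.success:
            raise RuntimeError(f"dbt {args[0]} failed")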
Posted 1 month ago
5.0 - 9.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Tech stack: GCP services, SQL, Python, DBT, Terraform, basics of Azure (optional, depending on migration team needs)
Soft skills: GitHub, DevOps, CI/CD, Agile methodologies
Contact: 9916086641 | anamika@makevisionsoutsourcing.in
Posted 1 month ago
12.0 - 15.0 years
37 - 40 Lacs
Pune
Work from Office
We are looking for an immediate joiner who can join us within 30 days for the below position.

Primary Role: We are looking for a seasoned Snowflake Architect/Lead to design, build, and optimize enterprise-scale data warehouse solutions on the Snowflake Data Cloud. The ideal candidate will have extensive experience in Snowflake architecture, cloud data platforms, and enterprise data warehousing, with a strong background in the banking domain and Azure cloud. This role requires leadership skills to guide technical teams and deliver robust, scalable data solutions aligned with business needs.

Key Responsibilities:
- Architect and lead the implementation of scalable, secure, and high-performance Snowflake data warehouse solutions, including enterprise data warehouse (EDW) projects.
- Develop and oversee ETL/ELT pipelines using tools such as DBT, ensuring data quality, transformation, and orchestration.
- Manage Snowflake environments on Azure cloud, leveraging cloud-native features for compute and storage scalability.
- Implement data governance, security policies, and compliance standards suitable for the banking domain.
- Collaborate with cross-functional teams including data engineers, analysts, and business stakeholders to translate requirements into technical architecture.
- Lead performance tuning, query optimization, and cost management strategies within Snowflake.
- Mentor and lead data engineering teams, enforcing best practices and coding standards.
- Stay updated on Snowflake platform advancements and cloud data trends to drive continuous improvement.
- Support solution design, code reviews, and architectural governance.

Desired Skills and Qualification: Bachelor's degree in Science, Engineering, or related disciplines.

Work Experience:
- 12+ years of experience in data engineering, data architecture, or related roles.
- Minimum 3-5 years of hands-on experience architecting and implementing Snowflake solutions.
- Proven experience implementing enterprise data warehouses (EDW) on Snowflake.
- Strong experience with the Azure cloud platform and its integration with Snowflake.
- Expertise in DBT for data transformations and pipeline orchestration.
- Deep knowledge of banking domain data requirements, compliance, and security.
- Proficiency in SQL and ETL.
- Experience with cloud data platform concepts such as separation of compute and storage, multi-cluster architecture, and scalability.
- Solid understanding of data modeling (star/snowflake schemas) and data governance.
- Strong leadership, communication, and stakeholder management skills.
- Experience leading and mentoring technical teams.
Posted 1 month ago
5.0 - 9.0 years
1 - 6 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 5-7 years
Location: Bangalore
Job Type: Full-time with NAM

Job Summary: We are seeking an experienced Data Engineer with 5 to 7 years of experience in building and optimizing data pipelines and architectures on modern cloud data platforms. The ideal candidate will have strong expertise across Google Cloud Platform (GCP), DBT, Snowflake, Apache Airflow, and Data Lake architectures.

Key Responsibilities:
- Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
- Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
- Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
- Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability.
- Orchestrate workflows and pipeline dependencies using Apache Airflow.
- Design and support Data Lake architecture for raw and curated data zones.
- Collaborate with data analysts, scientists, and product teams to ensure availability and quality of data.
- Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
- Follow best practices in CI/CD, testing, data governance, and security standards.

Required Skills:
- 5-7 years of professional experience in data engineering roles.
- Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
- Expertise in Fivetran and experience integrating APIs and external sources.
- Proficient in writing modular SQL transformations and data modeling using DBT.
- Deep understanding of Snowflake warehousing: performance tuning, cost optimization, security.
- Experience with Airflow for pipeline orchestration and DAG management.
- Familiarity with designing and implementing Data Lake solutions.
- Proficient in Python and/or SQL.
- Strong understanding of data governance, data quality frameworks, and DevOps practices.

Preferred Qualifications:
- GCP Professional Data Engineer certification is a plus.
- Experience in agile development environments.
- Exposure to data catalog tools and data observability platforms.

Send profiles to narasimha@nam-it.com

Thanks & regards,
Narasimha B
Staffing Executive
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9182480146 (India)
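A minimal Airflow sketch of the orchestration pattern listed above (ingestion check followed by dbt transformations); the schedule, project path, and task commands are hypothetical and assume Airflow 2.4+.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule="0 2 * * *",  # nightly at 02:00
        catchup=False,
    ) as dag:
        # Placeholder step standing in for a Fivetran sync check.
        ingest = BashOperator(
            task_id="fivetran_sync_check",
            bash_command="echo 'verify Fivetran sync completed'",
        )
        # Run dbt transformations once ingestion is confirmed.
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/analytics && dbt run --target prod",
        )
        ingest >> transform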
Posted 1 month ago
5.0 - 9.0 years
1 - 6 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 5-7 years
Location: Bangalore
Job Type: Full-time with NAM

Job Summary: We are seeking an experienced Data Engineer with 5 to 7 years of experience in building and optimizing data pipelines and architectures on modern cloud data platforms. The ideal candidate will have strong expertise across Google Cloud Platform (GCP), DBT, Snowflake, Apache Airflow, and Data Lake architectures.

Key Responsibilities:
- Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
- Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
- Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
- Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability.
- Orchestrate workflows and pipeline dependencies using Apache Airflow.
- Design and support Data Lake architecture for raw and curated data zones.
- Collaborate with data analysts, scientists, and product teams to ensure availability and quality of data.
- Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
- Follow best practices in CI/CD, testing, data governance, and security standards.

Required Skills:
- 5-7 years of professional experience in data engineering roles.
- Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
- Expertise in Fivetran and experience integrating APIs and external sources.
- Proficient in writing modular SQL transformations and data modeling using DBT.
- Deep understanding of Snowflake warehousing: performance tuning, cost optimization, security.
- Experience with Airflow for pipeline orchestration and DAG management.
- Familiarity with designing and implementing Data Lake solutions.
- Proficient in Python and/or SQL.
- Strong understanding of data governance, data quality frameworks, and DevOps practices.

Preferred Qualifications:
- GCP Professional Data Engineer certification is a plus.
- Experience in agile development environments.
- Exposure to data catalog tools and data observability platforms.

Send profiles to narasimha@nam-it.com

Thanks & regards,
Narasimha B
Staffing Executive
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9182480146 (India)
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Minimum 5+ years of developing, designing, and implementing data engineering solutions. Collaborate with data engineers and architects to design and optimize data models for the Snowflake Data Warehouse. Optimize query performance and data storage in Snowflake by utilizing clustering, partitioning, and other optimization techniques. Experience working on projects housed within an Amazon Web Services (AWS) cloud environment. Experience working on projects using Tableau and DBT. Work closely with business stakeholders to understand requirements and translate them into technical solutions. Excellent presentation and communication skills, both written and verbal, with the ability to problem-solve and design in an environment with unclear requirements.
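By way of illustration, a small sketch of the clustering optimization mentioned above, issued through snowflake-connector-python; the account, warehouse, and table names are placeholders, not the employer's environment.

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="xy12345", user="ENGINEER", password="***",
        warehouse="ANALYTICS_WH", database="SALES_DB", schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Cluster a large fact table on the columns most queries filter by.
        cur.execute("ALTER TABLE FACT_ORDERS CLUSTER BY (ORDER_DATE, REGION)")
        # Check how well the table is clustered on those keys.
        cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_ORDERS', '(ORDER_DATE, REGION)')")
        print(cur.fetchone()[0])
    finally:
        conn.close()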
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Chennai, Mumbai (All Areas)
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: Experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Principal Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications:
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
- Excellent coding skills in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- Implemented at least 4 projects end-to-end in Databricks.
- Must-have skills: Azure Data Factory, Azure Databricks, Python, and PySpark.
- Expert with database technologies and ETL tools.
- Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Delta Lake, Databricks workflows orchestration, Python, PySpark, etc.
- Good knowledge of the Azure, AWS, and GCP cloud platform service stacks.
- Good knowledge of Unity Catalog implementation.
- Good knowledge of integration with other tools, such as DBT and other transformation tools.
- Good knowledge of Unity Catalog integration with Snowflake.
- Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of creating complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Worked on both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Worked on at least one cloud (Azure, AWS, GCP) and its most common services: ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, cloud databases.
- Strong in writing unit and integration tests.
- Strong communication skills; has worked on teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications:
- Unity Catalog and basic governance knowledge.
- Databricks SQL endpoint understanding.
- CI/CD experience to build pipelines for Databricks jobs.
- Experience on migration projects to build a unified data platform.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation
- Make an impact - drive change for global enterprises and solve business challenges that matter
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad, Bangalore Rural, Bengaluru
Work from Office
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working in Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
Posted 1 month ago
6.0 - 9.0 years
18 - 20 Lacs
Bengaluru
Hybrid
Job Title: Data Engineer
Experience Range: 6-9 years
Location: Bengaluru
Notice Period: Immediate to 15 days

Job Summary: We are looking for a skilled Data Engineer to design, build, and maintain robust, scalable data pipelines and infrastructure. This role is essential in enabling data accessibility, quality, and insights across the organization. You will work with modern cloud and big data technologies such as Azure Databricks, Snowflake, and DBT, collaborating with cross-functional teams to power data-driven decision-making.

Key Responsibilities (for external candidates):
- Data Pipeline Development: Build and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks, Snowflake, and DBT.
- Data Modeling & Architecture: Design efficient data models and structures within Snowflake, ensuring optimal performance and accessibility.
- Data Transformation: Implement standardized and reusable data transformations in DBT for reliable analytics and reporting.
- Performance Optimization: Monitor and tune data workflows for performance, scalability, and fault tolerance.
- Cross-Team Collaboration: Partner with data scientists, analysts, and business users to support analytics and machine learning projects with reliable, well-structured datasets.

Additional Responsibilities (internal candidates):
- Implement and manage CI/CD pipelines using tools such as Jenkins, Azure DevOps, or GitHub.
- Develop data lake solutions using Scala and Python in a Hadoop/Spark ecosystem.
- Work with Azure Data Factory and orchestration tools to schedule, monitor, and maintain workflows.
- Apply a deep understanding of Hadoop architecture, Spark, Hive, and storage optimization.

Mandatory Skills:
- Hands-on experience with Azure Databricks (data processing and orchestration), Snowflake (data warehousing), DBT (data transformation), and Azure Data Factory (pipeline orchestration)
- Strong SQL and data modeling capabilities
- Proficiency in Scala and Python for data engineering use cases
- Experience with big data ecosystems: Hadoop, Spark, Hive
- Knowledge of CI/CD pipelines (Jenkins, GitHub, Azure DevOps)

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 6-9 years of relevant experience in data engineering or data infrastructure roles
Posted 1 month ago
8.0 - 13.0 years
8 - 12 Lacs
Navi Mumbai, Maharashtra, India
On-site
Years of experience: 8+ years
Number of interviews: 2 (1 internal and 1 client interview)
Must have: Tableau plus CRM Analytics
Data Analytics: CRMA (2)

Primary responsibility (must have): Apollo is searching for a hands-on, business-focused technology lead. The ideal candidate will have functional knowledge of sales and marketing processes and core data engineering technologies such as databases, BI, Alteryx, and Tableau tools. The individual will work closely and partner with global sales enablement, analytics, client management, operations, and finance teams to deliver on projects that provide the capabilities defined in the target operating model.
- Strong database management skills: Alteryx, DBT, Tableau, and CRM Analytics.
- Build/maintain data analytics and intelligence capabilities; support management of the data and analytics ecosystem.
- Build data intelligence solutions including data quality tools, data on demand, and executive reports/dashboards.
- Collaborate extensively with clients to gain a deep understanding of their data requirements and translate them into robust technical solutions.
- Provide support throughout the product and platform enhancement lifecycle; manage user queries.

Secondary responsibility (good to have):
- Database management skills: Snowflake/Redshift/BigQuery, SQL, Oracle, Python, and DBT.
- Cloud infrastructure, e.g., ADLS and ADF, or S3 and Glue, or Airflow, or Databricks.
- Relational and dimensional data modeling.
- Top-notch lead engineering talent with 7+ years of experience building and managing data-related solutions, preferably in the financial industry.
- Designing and developing efficient data ingestion processes, data models, and data pipelines leveraging the power of Snowflake/Redshift/BigQuery.
- Implementing end-to-end ETL/ELT workflows to seamlessly extract, transform, and load data into Snowflake/Redshift/BigQuery.
- Conducting thorough data quality assessments and implementing effective data governance best practices.
Posted 1 month ago
10.0 - 20.0 years
10 - 20 Lacs
Navi Mumbai, Maharashtra, India
On-site
Domain: Financial Services (Asset Management, buy/sell side) - good to have for all positions, as it helps with client screening after internal selection.
Communication: Must be very strong.
Notice period: 1 week to 15 days joiners only.
Candidate preferences: As much as possible, candidates should be from financial setups/Tier 1 companies; check for any overlaps or documentation issues.

Position: Data Engineer
Primary skills: Snowflake, SQL, Python, and DBT; SnowPro certification is mandatory.
Secondary skills: Alteryx/any ETL tool, DBT.
Number of interviews: 2 (L1 client interview)
Posted 1 month ago
8.0 - 13.0 years
8 - 13 Lacs
Pune, Maharashtra, India
On-site
Consultant Data Engineer
Tools & Technology: Snowflake, SnowSQL, AWS, DBT, Snowpark, Airflow, DWH, Unix, SQL, Shell Scripting, PySpark, Git, Visual Studio, ServiceNow.
Duties and Responsibilities:
- Act as Consultant Data Engineer.
- Understand business requirements and design, develop, and maintain scalable, automated data pipelines and ETL processes to ensure efficient data processing and storage.
- Create a robust, extensible architecture to meet client/business requirements.
- Develop Snowflake objects integrated with AWS services and DBT.
- Work on different types of data ingestion pipelines as per requirements.
- Develop in DBT (Data Build Tool) for data transformation as per requirements.
- Work on integrating multiple AWS services with Snowflake.
- Work with integration of structured and semi-structured data sets.
- Work on performance tuning and cost optimization.
- Implement CDC or SCD Type 2.
- Design and build solutions for near real-time stream as well as batch processing.
- Implement best practices for data management, data quality, and data governance.
- Take responsibility for data collection, cleaning, and pre-processing using Snowflake and DBT.
- Investigate production issues and fine-tune data pipelines.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery.
- Coordinate with and support software developers, database architects, data analysts, and data scientists on data initiatives.
- Orchestrate pipelines using Airflow.
- Suggest improvements to processes, products, and services.
- Interact with users, management, and technical personnel to clarify business issues, identify problems, and suggest changes/solutions to business and developers.
- Create technical documentation on Confluence to aid knowledge sharing.

Associate Data Engineer
Tools & Technology: Snowflake, DBT, AWS, Airflow, ETL, Data Warehouse, Shell Scripting, SQL, Git, Confluence, Python.
Duties and Responsibilities:
- Act as offshore Data Engineer for enhancements and testing.
- Design and build solutions for near real-time stream processing as well as batch processing.
- Develop Snowflake objects, making use of their unique features.
- Implement data integration and transformation workflows using DBT.
- Integrate AWS services with Snowflake.
- Participate in implementation plans and respond to production issues.
- Take responsibility for data collection, cleaning, and pre-processing.
- Develop UDFs, Snowflake procedures, streams, and tasks.
- Troubleshoot customer data issues: manual loads for any missed data, data duplication checks, and handling with RCA.
- Investigate production job failures with RCA.
- Develop ETL processes and data integration solutions.
- Understand the business needs of the client and provide technical solutions.
- Monitor the overall functioning of processes, identify improvement areas, and implement them with scripting.
- Handle major outages effectively, with effective communication to business, users, and development partners.
- Define and create run book entries and knowledge articles based on incidents experienced in production.

Associate Engineer
Tools and Technology: Unix, Oracle, Shell Scripting, ETL, Hadoop, Spark, Sqoop, Hive, Control-M, Tectia, SQL, Jira, HDFS, Snowflake, DBT, AWS.
Duties and Responsibilities:
- Worked as a Senior Production/Application Support Engineer.
- Worked as a production support member for loading, processing, and reporting of files and generating reports.
- Monitored multiple batches, jobs, and processes; analyzed issues related to job failures and handled FTP failures and connectivity issues behind batch/job failures.
- Performed data analysis on files and generated/sent files to destination servers depending on job functionality.
- Created shell scripts to automate daily tasks or as requested by the service owner.
- Tuned jobs to improve performance and performed daily checks.
- Coordinated with Middleware, DWH, CRM, and other teams in case of any CRQ issues.
- Monitored the overall functioning of processes, identified improvement areas, and implemented them with scripting.
- Raised PBIs after approval from the service owner.
- Carried out performance-improvement automation activities to decrease manual workload.
- Ingested data from RDBMS systems to HDFS/Hive through Sqoop.
- Understood customer problems and provided appropriate technical solutions.
- Handled major outages effectively with proper communication to business, users, and development partners.
- Coordinated with clients and on-site personnel and joined bridge calls for any issues.
- Handled daily issues based on application and job performance.
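For reference, a hedged sketch of the SCD Type 2 pattern these duties mention, expressed as two statements executed via snowflake-connector-python; the dimension, staging table, and column names are hypothetical.

    import snowflake.connector

    # Close out current rows whose tracked attributes changed in the latest staging load.
    EXPIRE_CHANGED = """
    MERGE INTO DIM_CUSTOMER d
    USING STG_CUSTOMER s
      ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHEN MATCHED AND (d.EMAIL <> s.EMAIL OR d.SEGMENT <> s.SEGMENT) THEN
      UPDATE SET d.IS_CURRENT = FALSE, d.VALID_TO = CURRENT_TIMESTAMP()
    """

    # Insert a fresh current row for customers with no current version (new or just expired).
    INSERT_NEW_VERSIONS = """
    INSERT INTO DIM_CUSTOMER (CUSTOMER_ID, EMAIL, SEGMENT, VALID_FROM, VALID_TO, IS_CURRENT)
    SELECT s.CUSTOMER_ID, s.EMAIL, s.SEGMENT, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM STG_CUSTOMER s
    LEFT JOIN DIM_CUSTOMER d
      ON d.CUSTOMER_ID = s.CUSTOMER_ID AND d.IS_CURRENT = TRUE
    WHERE d.CUSTOMER_ID IS NULL
    """

    conn = snowflake.connector.connect(account="xy12345", user="ETL_USER", password="***",
                                       warehouse="ETL_WH", database="DWH", schema="CORE")
    try:
        cur = conn.cursor()
        cur.execute(EXPIRE_CHANGED)
        cur.execute(INSERT_NEW_VERSIONS)
    finally:
        conn.close()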
Posted 1 month ago
8.0 - 13.0 years
8 - 13 Lacs
Chennai, Tamil Nadu, India
On-site
Consultant Data Engineer Tools & Technology: Snowflake, Snowsql, AWS, DBT, Snowpark, Airflow, DWH, Unix, SQL, Shell Scripting, Pyspark, GIT, Visual Studio, Service Now. Duties and Responsibilities Act as Consultant Data Engineer Understand business requirement and designing, developing & maintaining scalable automated data pipelines & ETL processes to ensure efficient data processing and storage Create a robust, extensible architecture to meet the client/business requirements Snowflake objects with integration with AWS services and DBT Involved in different types of data ingestion pipelines as per requirements Development in DBT (Data Build Tool) for data transformation as per the requirements Working on multiple AWS services integration with Snowflake Working with integration of structured data & Semi-Structured data sets Work on Performance Tuning and cost optimization Work on implementing CDC or SCD type 2 Design and build solutions for near real-time stream as well as batch processing Implement best practices for data management, data quality, and data governance Responsible for data collection, data cleaning & pre-processing using Snowflake and DBT Investigate production issues and fine-tune our data pipelines Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery Coordinate and support software developers, database architects, data analysts and data scientists on data initiatives Orchestrate the pipeline using Airflow Suggest improvements to processes, products and services Interact with users, management, and technical personnel to clarify business issues, identify problems, and suggest changes/solutions to business and developers Create technical documentation on confluence to aim knowledge sharing Associate Data Engineer Tools & Technology: Snowflake, DBT, AWS, Airflow, ETL, Datawarehouse, Shell Scripting, SQL, Git, Confluence, Python Duties and Responsibilities Act as offshore Data engineer and enhancement & testing Design and build solutions for near real-time stream processing as well as batch processing Development in snowflake objects with their unique features implemented Implementing data integration and transformation workflows using DBT Integration with AWS services with Snowflake Participate in implementation plan, respond to production issues Responsible for data collection, data cleaning & pre-processing Experience in developing UDF, Snowflake Procedures, Streams, and Tasks Involved in troubleshooting customer data issues, manual load if any data missed, data duplication checking and handling with RCA Investigate production job failures with RCA investigation Development of ETL processes and data integration solutions Understanding the business needs of the client and provide technical solution Monitoring the overall functioning of processes, identifying improvement areas and implementing with scripting Handling major outages effectively along with effective communication to business, users & development partners Define and create Run Book entries and knowledge articles based on incidents experienced in production Associate Engineer Tools and Technology: UNIX, ORACLE, Shell Scripting, ETL, Hadoop, Spark, Sqoop, Hive, Control-m, Techtia, SQL, Jira, HDFS, Snowflake, DBT, AWS Duties and Responsibilities Worked as a Senior Production/Application Support Engineer Worked as Production support member for loading, processing and reporting of files and generating reports Monitoring multiple batches, jobs, processes 
and analyzing issues related to job failures and handling FTP failure, connectivity issues of batch/job failures Performing data analysis on files and generating/sending files to destination server depending on functionality of job Creating shell scripts for automating daily tasks or as requested by service owner Involved in tuning jobs to improve performance and performing daily checks Coordinating with Middleware, DWH, CRM and other teams in case of any CRQ issues Monitoring overall functioning of processes, identifying improvement areas and implementing with scripting Raising PBI after approval from service owner Involved in performance improvement automation activities to decrease manual workload Data ingestion from RDBMS system to HDFS/Hive through SQOOP Understanding customer problems and providing appropriate technical solutions Handling major outages effectively with proper communication to business, users & development partners Coordinating with client, on-site personnel and joining bridge calls for any issues Handling daily issues based on application and job performance
Posted 1 month ago
8.0 - 13.0 years
8 - 13 Lacs
Bengaluru, Karnataka, India
On-site
Consultant Data Engineer Tools & Technology: Snowflake, Snowsql, AWS, DBT, Snowpark, Airflow, DWH, Unix, SQL, Shell Scripting, Pyspark, GIT, Visual Studio, Service Now. Duties and Responsibilities Act as Consultant Data Engineer Understand business requirement and designing, developing & maintaining scalable automated data pipelines & ETL processes to ensure efficient data processing and storage Create a robust, extensible architecture to meet the client/business requirements Snowflake objects with integration with AWS services and DBT Involved in different types of data ingestion pipelines as per requirements Development in DBT (Data Build Tool) for data transformation as per the requirements Working on multiple AWS services integration with Snowflake Working with integration of structured data & Semi-Structured data sets Work on Performance Tuning and cost optimization Work on implementing CDC or SCD type 2 Design and build solutions for near real-time stream as well as batch processing Implement best practices for data management, data quality, and data governance Responsible for data collection, data cleaning & pre-processing using Snowflake and DBT Investigate production issues and fine-tune our data pipelines Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery Coordinate and support software developers, database architects, data analysts and data scientists on data initiatives Orchestrate the pipeline using Airflow Suggest improvements to processes, products and services Interact with users, management, and technical personnel to clarify business issues, identify problems, and suggest changes/solutions to business and developers Create technical documentation on confluence to aim knowledge sharing Associate Data Engineer Tools & Technology: Snowflake, DBT, AWS, Airflow, ETL, Datawarehouse, Shell Scripting, SQL, Git, Confluence, Python Duties and Responsibilities Act as offshore Data engineer and enhancement & testing Design and build solutions for near real-time stream processing as well as batch processing Development in snowflake objects with their unique features implemented Implementing data integration and transformation workflows using DBT Integration with AWS services with Snowflake Participate in implementation plan, respond to production issues Responsible for data collection, data cleaning & pre-processing Experience in developing UDF, Snowflake Procedures, Streams, and Tasks Involved in troubleshooting customer data issues, manual load if any data missed, data duplication checking and handling with RCA Investigate production job failures with RCA investigation Development of ETL processes and data integration solutions Understanding the business needs of the client and provide technical solution Monitoring the overall functioning of processes, identifying improvement areas and implementing with scripting Handling major outages effectively along with effective communication to business, users & development partners Define and create Run Book entries and knowledge articles based on incidents experienced in production Associate Engineer Tools and Technology: UNIX, ORACLE, Shell Scripting, ETL, Hadoop, Spark, Sqoop, Hive, Control-m, Techtia, SQL, Jira, HDFS, Snowflake, DBT, AWS Duties and Responsibilities Worked as a Senior Production/Application Support Engineer Worked as Production support member for loading, processing and reporting of files and generating reports Monitoring multiple batches, jobs, processes 
and analyzing issues related to job failures and handling FTP failure, connectivity issues of batch/job failures Performing data analysis on files and generating/sending files to destination server depending on functionality of job Creating shell scripts for automating daily tasks or as requested by service owner Involved in tuning jobs to improve performance and performing daily checks Coordinating with Middleware, DWH, CRM and other teams in case of any CRQ issues Monitoring overall functioning of processes, identifying improvement areas and implementing with scripting Raising PBI after approval from service owner Involved in performance improvement automation activities to decrease manual workload Data ingestion from RDBMS system to HDFS/Hive through SQOOP Understanding customer problems and providing appropriate technical solutions Handling major outages effectively with proper communication to business, users & development partners Coordinating with client, on-site personnel and joining bridge calls for any issues Handling daily issues based on application and job performance
Posted 1 month ago
3.0 - 8.0 years
0 - 3 Lacs
Bengaluru
Remote
If you are passionate about Snowflake, data warehousing, and cloud-based analytics, we'd love to hear from you! Apply now to be a part of our growing team.

Perks and benefits: Interested candidates can go through the link below to apply directly and complete the first round of technical discussion: https://app.hyrgpt.com/candidate-job-details?jobId=67ecc88dda1154001cc8b88f

Job Summary: We are looking for a skilled Snowflake Engineer with 3-10 years of experience in designing and implementing cloud-based data warehousing solutions. The ideal candidate will have hands-on expertise in Snowflake architecture, SQL, ETL pipeline development, and performance optimization. This role requires proficiency in handling structured and semi-structured data, data modeling, and query optimization to support business intelligence and analytics initiatives. The ideal candidate will work on a project for one of our key Big 4 consulting customers and will have immense learning opportunities.

Key Responsibilities:
- Design, develop, and manage high-performance data pipelines for ingestion, transformation, and storage in Snowflake.
- Optimize Snowflake workloads, ensuring efficient query execution and cost management.
- Develop and maintain ETL processes using SQL, Python, and orchestration tools.
- Implement data governance, security, and access control best practices within Snowflake.
- Work with structured and semi-structured data formats such as JSON, Parquet, Avro, and XML.
- Design and maintain fact and dimension tables, ensuring efficient data warehousing and reporting.
- Collaborate with data analysts and business teams to support reporting, analytics, and business intelligence needs.
- Troubleshoot and resolve data pipeline issues, ensuring high availability and reliability.
- Monitor and optimize Snowflake storage and compute usage to improve efficiency and performance.

Required Skills & Qualifications:
- 3-10 years of experience in Snowflake, SQL, and data engineering.
- Strong hands-on expertise in Snowflake development, including data sharing, cloning, and Time Travel.
- Proficiency in SQL scripting for query optimization and performance tuning.
- Experience with ETL tools and frameworks (e.g., DBT, Airflow, Matillion, Talend).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their integration with Snowflake.
- Strong understanding of data warehousing concepts, including fact and dimension modeling.
- Ability to work with semi-structured data formats like JSON, Avro, Parquet, and XML.
- Knowledge of data security, governance, and access control within Snowflake.
- Excellent problem-solving and troubleshooting skills.

Preferred Qualifications:
- Experience in Python for data engineering tasks.
- Familiarity with CI/CD pipelines for Snowflake development and deployment.
- Exposure to streaming data ingestion and real-time processing.
- Experience with BI tools such as Tableau, Looker, or Power BI.
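To make the semi-structured data requirement concrete, a small illustrative query over JSON stored in a Snowflake VARIANT column, run via snowflake-connector-python; the table, column, and attribute names are hypothetical.

    import snowflake.connector

    # Flatten the items array inside each JSON event and project typed columns.
    QUERY = """
    SELECT
        e.payload:customer.id::STRING AS customer_id,
        i.value:sku::STRING           AS sku,
        i.value:qty::NUMBER           AS qty
    FROM RAW_EVENTS e,
         LATERAL FLATTEN(input => e.payload:items) i
    WHERE e.payload:event_type::STRING = 'order_placed'
    """

    conn = snowflake.connector.connect(account="xy12345", user="ANALYST", password="***",
                                       warehouse="ADHOC_WH", database="RAW", schema="EVENTS")
    try:
        for row in conn.cursor().execute(QUERY):
            print(row)
    finally:
        conn.close()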
Posted 1 month ago