4.0 - 9.0 years
14 - 24 Lacs
Hyderabad
Remote
Responsibilities
Design and implement scalable and efficient data pipelines using dbt and Snowflake.
Work collaboratively within a diverse team to spearhead the migration of data from multiple ERPs and SQL Server systems using Extract and Load tools. Apply your technical expertise to ensure efficient and accurate data integration.
Leverage your skills to maintain and enhance existing legacy systems and reports. Engage in reverse engineering to understand these systems and incrementally improve them by applying patches, optimizing functionality, and transitioning data pipelines to the Modern Data Platform (MDP).
Practice clean programming techniques, write self-documenting code, and manage the codebase using Git version control.
Contribute to automation efforts by implementing CI/CD pipelines to streamline deployments.
Work closely with onshore and offshore team members, as well as global stakeholders, to promote effective teamwork and a solution-oriented mindset. Tackle technical challenges with a "we got this" mentality to achieve shared goals.
Play an active role in continuously improving data integration processes, orchestrating workflows for maximum efficiency and reliability.
Preferred candidate profile
Experience: 5+ years of experience working with data integration and transformation, including a strong understanding of SQL for data querying and manipulation.
Technical Skills:
Must have:
Cloud data warehousing exposure: experience with Snowflake or comparable cloud-based data systems and tools.
Proficiency in Python and SQL.
Strong adherence to clean programming practices, producing self-documenting code using coding best practices.
Hands-on experience with CI/CD tools (e.g., Jenkins, GitHub CI, CircleCI).
Nice to have:
Experience implementing and orchestrating data integrations.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
Familiarity with configuration management tools (e.g., Ansible, Puppet, Chef).
Knowledge of cloud platforms (AWS, Azure, GCP).
Technical proficiency and problem-solving: deep understanding of data integration tools and methods, coupled with a proven ability to troubleshoot complex technical challenges.
Communication and Agile experience: excellent communication skills for translating technical concepts to non-technical stakeholders, with comfort in Agile methodologies and project management tools.
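For illustration only (not part of the posting): a minimal sketch of the kind of CI step such a role might automate for dbt-on-Snowflake deployments, assuming dbt is installed and profiles.yml reads credentials from environment variables; the variable and target names are hypothetical.

```python
# Minimal CI helper that runs dbt build against a Snowflake target.
# Assumes dbt-snowflake is installed and profiles.yml reads credentials
# from the environment variables referenced below (hypothetical names).
import os
import subprocess
import sys

REQUIRED_ENV = ["SNOWFLAKE_ACCOUNT", "SNOWFLAKE_USER", "SNOWFLAKE_PASSWORD"]

def main() -> int:
    missing = [v for v in REQUIRED_ENV if not os.environ.get(v)]
    if missing:
        print(f"Missing credentials: {missing}", file=sys.stderr)
        return 1
    # 'dbt build' compiles, runs, and tests models in dependency order.
    result = subprocess.run(
        ["dbt", "build", "--target", "ci", "--fail-fast"],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```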
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)!
In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
Experience in the IT industry.
Working experience with building productionized data ingestion and processing data pipelines in Snowflake.
Strong understanding of Snowflake architecture.
Fully well-versed with data warehousing concepts.
Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing systems.
Able to create data pipelines for ETL/ELT.
Good to have DBT experience.
Excellent presentation and communication skills, both written and verbal.
Ability to problem-solve and architect in an environment with unclear requirements.
Able to create high-level and low-level design documents based on requirements.
Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
Awareness of data visualisation tools and methodologies.
Work independently on business problems and generate meaningful insights.
Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
Should have experience implementing Snowflake best practices.
Snowflake SnowPro Core Certification will be an added advantage.
Roles and Responsibilities:
Requirement gathering, creating design documents, providing solutions to customers, working with the offshore team, etc.
Writing SQL queries against Snowflake and developing scripts to do Extract, Load, and Transform data.
Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python and PySpark.
Should have some experience with Snowflake RBAC and data security.
Should have good experience implementing CDC or SCD type 2.
Should have good experience implementing Snowflake best practices.
In-depth understanding of Data Warehouse and ETL concepts and Data Modelling.
Experience in requirement gathering, analysis, design, development, and deployment.
Should have experience building data ingestion pipelines.
Optimize and tune data pipelines for performance and scalability.
Able to communicate with clients and lead a team.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.
Qualifications we seek in you!
Minimum qualifications
B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Senior Snowflake Data Engineer.
Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, Data Modeling & Data Warehousing concepts
Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
Make an impact - Drive change for global enterprises and solve business challenges that matter
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
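For illustration only (not part of the posting): a minimal sketch of an SCD type 2 load of the kind referenced above, using the Snowflake Python connector; the table and column names are hypothetical.

```python
# Sketch: expire changed rows and insert new versions (SCD type 2)
# using the Snowflake Python connector. Table/column names are made up.
import os
import snowflake.connector

MERGE_SQL = """
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND d.address <> s.address THEN UPDATE SET
  d.is_current = FALSE,
  d.valid_to   = CURRENT_TIMESTAMP()
"""

INSERT_SQL = """
INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
SELECT s.customer_id, s.address, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="DIM",
) as conn:
    with conn.cursor() as cur:
        cur.execute(MERGE_SQL)   # expire current rows whose attributes changed
        cur.execute(INSERT_SQL)  # insert new current versions / new customers
```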
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Lead Consultant - Sr. Data Engineer (DBT + Snowflake)!
In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities.
Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy.
Implement conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents.
Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows.
Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance.
Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis.
Develop and maintain data documentation, best practices, and data governance protocols.
Ensure data security, privacy, and compliance with organizational and regulatory guidelines.
Roles and Responsibilities:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
Experience in data engineering, with at least 3 years of experience working with Snowflake.
Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and conversational AI.
Strong proficiency in SQL, Python, and data modeling.
Experience with data integration tools (e.g., Matillion, Talend, Informatica).
Knowledge of cloud platforms such as AWS, Azure, or GCP.
Excellent problem-solving skills, with a focus on data quality and performance optimization.
Strong communication skills and the ability to work effectively in a cross-functional team.
Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations.
Understanding of data lineage and metadata management concepts, and ability to track and document data transformations using DBT's lineage capabilities.
Understanding of software engineering best practices and ability to apply these principles to DBT development, including version control, code reviews, and automated testing.
Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
Should have good experience implementing CDC or SCD type 2.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience with repository tools like GitHub/GitLab or Azure Repos.
Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts
Qualifications/Minimum qualifications
B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets.
Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
Make an impact - Drive change for global enterprises and solve business challenges that matter
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
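For illustration only (not part of the posting): a minimal sketch of the Airflow-scheduled dbt workflow this listing describes, assuming Airflow 2.x with dbt installed on the worker; the DAG id, schedule, and project path are hypothetical.

```python
# Sketch: schedule dbt transformations and tests on Snowflake with Airflow.
# Assumes the dbt project lives at the (hypothetical) path below and that
# its profiles.yml points at the target Snowflake account.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/snowflake_project"  # hypothetical project location

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # nightly at 02:00
    catchup=False,
) as dag:
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    run_models >> test_models  # only test after models build successfully
```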
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities
Design, develop, and manage scalable data models and transformation pipelines using DBT.
Optimize and manage data warehousing solutions on Snowflake, including performance tuning and cost optimization.
Implement and enforce best practices for data modeling, ELT/ETL processes, and version control.
Collaborate with cross-functional teams to gather requirements and deliver actionable data solutions.
Monitor and maintain data quality, reliability, and integrity across systems.
Automate data workflows and monitor data pipeline performance.
Maintain documentation for data models, transformations, and architecture.
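For illustration only (not part of the posting): a small sketch of the kind of performance-and-cost triage query such a role might automate, using the Snowflake Python connector against the ACCOUNT_USAGE schema; connection details are placeholders.

```python
# Sketch: list yesterday's slowest queries to guide warehouse tuning.
# Uses SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY; credentials are read from
# placeholder environment variables.
import os
import snowflake.connector

SQL = """
SELECT warehouse_name,
       query_id,
       total_elapsed_time / 1000 AS elapsed_s,
       bytes_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
try:
    for wh, qid, elapsed_s, scanned in conn.cursor().execute(SQL):
        print(f"{wh}\t{qid}\t{elapsed_s:.1f}s\t{scanned} bytes")
finally:
    conn.close()
```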
Posted 1 month ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Assistant Vice President - Data Engineering Legacy - Snowflake Architect!
In this role, the Snowflake Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Responsibilities
Strong experience in building/designing data warehouses, data lakes, and data marts, with end-to-end implementation experience focusing on large enterprise-scale Snowflake implementations on any of the hyperscalers.
Strong understanding of Snowflake architecture.
Able to create the design and data modelling independently.
Able to create high-level and low-level design documents based on requirements.
Strong experience with building productionized data ingestion and data pipelines in Snowflake.
Should have prior experience as an architect interacting with customers, team/delivery leaders, and vendor teams.
Should have strong experience in migration/greenfield projects in Snowflake.
Should have experience implementing Snowflake best practices for network policies, storage integration, data governance, cost optimization, resource monitoring, data ingestion, transformation, and consumption layers.
Should have good experience with Snowflake RBAC and data security.
Should have good experience implementing a strategy for CDC or SCD type 2.
Strong experience with Snowflake features, including new Snowflake features.
Should have good experience in Python.
Should have experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF).
Should have experience in DBT.
Must have Snowflake SnowPro Core or SnowPro Advanced Architect certification.
Should have experience/knowledge of orchestration and scheduling tools.
Should have a good understanding of ETL processes and ETL tools.
Good understanding of agile methodology.
Good to have some understanding of GenAI.
Good to have exposure to other databases such as Redshift, Databricks, SQL Server, Oracle, PostgreSQL, etc.
Able to create POCs, roadmaps, solution architectures, estimations, and implementation plans.
Experience with Snowflake integrations with other data processing systems.
Qualifications we seek in you!
Minimum qualifications
B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and strong experience as a Snowflake Architect.
Skill Matrix: Snowflake, Python, AWS/Azure, Data Modeling, Design Patterns, DBT, ETL process and Data Warehousing concepts.
Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
Make an impact - Drive change for global enterprises and solve business challenges that matter
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
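For illustration only (not part of the posting): a minimal sketch of the Snowflake RBAC setup this architect role references, executed through the Python connector; the role, warehouse, and database names are hypothetical.

```python
# Sketch: create a read-only analyst role and grant it the minimum
# privileges it needs. Object names are placeholders.
import os
import snowflake.connector

RBAC_STATEMENTS = [
    "CREATE ROLE IF NOT EXISTS ANALYST_RO",
    "GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE ANALYST_RO",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_RO",
    "GRANT USAGE ON SCHEMA ANALYTICS.MART TO ROLE ANALYST_RO",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.MART TO ROLE ANALYST_RO",
    "GRANT SELECT ON FUTURE TABLES IN SCHEMA ANALYTICS.MART TO ROLE ANALYST_RO",
    "GRANT ROLE ANALYST_RO TO ROLE SYSADMIN",  # keep the role hierarchy tidy
]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SECURITYADMIN",  # creating roles and granting typically needs this
)
try:
    cur = conn.cursor()
    for stmt in RBAC_STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```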
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
, India
On-site
Job Description:
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future.
We are looking forward to hiring Snowflake professionals in the following areas:
Snowflake, SnowSQL, PL/SQL, any ETL tool
Job Description:
3+ years of IT experience in analysis, design, development, and unit testing of data warehousing applications using industry-accepted methodologies and procedures.
Write complex SQL queries to implement ETL (Extract, Transform, Load) processes and for Business Intelligence reporting.
Strong experience with Snowpipe execution and the Snowflake data warehouse, with a deep understanding of Snowflake architecture and processing; creating and managing automated data pipelines for both batch and streaming data using DBT.
Designing and implementing data models and schemas to support data warehousing and analytics within Snowflake.
Writing and optimizing SQL queries for efficient data retrieval and analysis.
Deliver robust solutions through query optimization, ensuring data quality.
Should have experience in writing functions and stored procedures.
Strong understanding of the principles of data warehousing using fact tables, dimension tables, and star and snowflake schema modelling.
Analyse and translate functional specifications/user stories into technical specifications.
Good to have experience in design/development in any ETL tool.
Good interpersonal skills, and experience in handling communication and interactions between different teams.
At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
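For illustration only (not part of the posting): a minimal sketch of the Snowpipe setup this listing mentions, created through the Python connector; the stage, pipe, and table names are hypothetical and the external stage (with its cloud notification setup) is assumed to already exist.

```python
# Sketch: define a Snowpipe that auto-ingests CSV files landing in an
# existing external stage into a raw table. All object names are made up.
import os
import snowflake.connector

STATEMENTS = [
    """
    CREATE TABLE IF NOT EXISTS RAW.SALES_RAW (
        order_id STRING,
        amount   NUMBER(12, 2)
    )
    """,
    """
    CREATE PIPE IF NOT EXISTS RAW.SALES_PIPE
      AUTO_INGEST = TRUE
      AS COPY INTO RAW.SALES_RAW
         FROM @RAW.SALES_STAGE
         FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """,
]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```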
Posted 1 month ago
6.0 - 8.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description:
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future.
We are looking forward to hiring Snowflake professionals in the following areas:
Snowflake, SnowSQL, PL/SQL, any ETL tool
Job Description:
Minimum 6-7 years of IT experience in analysis, design, development, and unit testing of data warehousing applications using industry-accepted methodologies and procedures.
Write complex SQL queries to implement ETL (Extract, Transform, Load) processes and for Business Intelligence reporting.
Strong experience with Snowpipe execution and the Snowflake data warehouse, with a deep understanding of Snowflake architecture and processing; creating and managing automated data pipelines for both batch and streaming data using DBT.
Designing and implementing data models and schemas to support data warehousing and analytics within Snowflake.
Writing and optimizing SQL queries for efficient data retrieval and analysis.
Deliver robust solutions through query optimization, ensuring data quality.
Should have experience in writing functions and stored procedures.
Strong understanding of the principles of data warehousing using fact tables, dimension tables, and star and snowflake schema modelling.
Analyse and translate functional specifications/user stories into technical specifications.
Good to have experience in design/development in any ETL tool such as SnapLogic, Informatica, or DataStage.
Good interpersonal skills, and experience in handling communication and interactions between different teams.
At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
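For illustration only (not part of the posting): a minimal sketch of the kind of function and stored procedure work the listing asks about, created in Snowflake through the Python connector; all object, column, and schema names are hypothetical.

```python
# Sketch: create a simple SQL UDF and a Snowflake Scripting stored
# procedure, then call the procedure. Names are placeholders.
import os
import snowflake.connector

CREATE_UDF = """
CREATE OR REPLACE FUNCTION ANALYTICS.UTIL.NET_AMOUNT(gross FLOAT, tax_rate FLOAT)
RETURNS FLOAT
AS
$$
    gross / (1 + tax_rate)
$$
"""

CREATE_PROC = """
CREATE OR REPLACE PROCEDURE ANALYTICS.UTIL.PURGE_STAGING(days_to_keep NUMBER)
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
    DELETE FROM ANALYTICS.STAGING.ORDERS_STG
    WHERE load_date < DATEADD(day, -days_to_keep, CURRENT_DATE());
    RETURN 'purged rows older than ' || days_to_keep || ' days';
END;
$$
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
try:
    cur = conn.cursor()
    cur.execute(CREATE_UDF)
    cur.execute(CREATE_PROC)
    cur.execute("CALL ANALYTICS.UTIL.PURGE_STAGING(30)")
finally:
    conn.close()
```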
Posted 1 month ago
5.0 - 8.0 years
18 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
We're Hiring: Snowflake Developer | 5 to 8 Years Experience
Skills: Snowflake, DBT, Python, ADF, solid SQL development and optimization skills, and experience working with modern data pipelines and cloud-based data platforms.
Posted 1 month ago
5.0 - 8.0 years
18 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
We're Hiring: Snowflake/Python Developer | 5 to 8 Years Experience
Skills: Snowflake, Python, DBT, ADF, solid SQL development and optimization skills, and experience working with modern data pipelines and cloud-based data platforms.
Posted 1 month ago
9.0 - 14.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities
JD: Primary skills: Data Vault 2.0, Snowflake, and DBT. Secondary skill: Data Modelling.
Responsibilities:
Collaborate with business analysts and stakeholders to understand the business needs and data requirements.
Analyze business processes and identify critical data elements for modeling.
Conduct data profiling and analysis to identify data quality issues and potential data modeling challenges.
Develop conceptual data models to capture high-level business entities, attributes, and relationships within the database structure.
Use Data Vault to create data models that detail the different parts of a business, such as hubs, satellites, and links.
Build automated loading processes and patterns to minimize development expenses and operational costs.
Good knowledge of Dimensional Modeling and design patterns.
Lead and manage data operations projects, demonstrating strong experience in managing the end-to-end data analytics workflow.
Lead the data engineering team in designing, building, and maintaining a data lake on Snowflake using ADF.
Analyze and interpret data from the Snowflake data warehouse to identify key trends and insights.
Document work processes and maintain clear documentation for future reference.
Solid understanding of data modeling concepts, including dimensional and fact modeling.
Bring hands-on experience from at least one implementation project related to reporting and visualization.
Apply best practices for dimensional and fact modeling for optimal performance and scalability.
Qualifications:
8+ years of experience in dimensional modeling and design patterns.
Proven experience in designing and developing data models that detail business parts such as hubs, satellites, and links.
Familiarity with Snowflake and designing models on top of it.
Strong SQL skills for data retrieval and manipulation.
Experience with data visualization best practices and principles.
Ability to work independently and collaboratively within a team.
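For illustration only (not part of the posting): a minimal sketch of a Data Vault 2.0 hub load of the kind described above (hash key derived from the business key, insert-only pattern), run through the Snowflake Python connector; table and column names are hypothetical.

```python
# Sketch: load HUB_CUSTOMER from a staging table using an MD5 hash key
# and an insert-only pattern (only business keys not yet in the hub).
import os
import snowflake.connector

HUB_LOAD_SQL = """
INSERT INTO DV.HUB_CUSTOMER (hub_customer_hk, customer_bk, load_date, record_source)
SELECT MD5(TRIM(UPPER(s.customer_bk))) AS hub_customer_hk,
       s.customer_bk,
       CURRENT_TIMESTAMP()             AS load_date,
       'crm'                           AS record_source
FROM STG.CUSTOMER s
LEFT JOIN DV.HUB_CUSTOMER h
  ON h.hub_customer_hk = MD5(TRIM(UPPER(s.customer_bk)))
WHERE h.hub_customer_hk IS NULL
QUALIFY ROW_NUMBER() OVER (PARTITION BY s.customer_bk ORDER BY s.customer_bk) = 1
"""

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
) as conn:
    conn.cursor().execute(HUB_LOAD_SQL)
```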
Posted 1 month ago
5.0 - 8.0 years
6 - 16 Lacs
Chennai
Hybrid
Essential Skills & Experience:
Snowflake: Minimum of 5 years of experience is preferred. Familiarity with virtual data warehouses. Experience with Snowpipe. Understanding of Snowflake scaling behaviors.
DBT (Data Build Tool): Minimum of 1.5 years of experience is required. Basic understanding of DBT concepts, including creating models, setup, and testing. Candidates with 5 years of Snowflake experience and basic DBT knowledge are still considered good fits, as DBT is a new technology for the team.
SQL: Strong SQL skills are crucial, ranging from basic transformations to complex queries. Candidates must be comfortable finding solutions independently when encountering unfamiliar query formats or issues (e.g., searching for solutions online and testing them). An independent mindset is highly valued.
SSIS Packages: Ability to reverse engineer SSIS packages to understand their functionality. Experience in optimizing and rewriting SSIS package logic in Snowflake (e.g., converting a 15-step SSIS process into a single step in Snowflake). This requires strong data modelling skills.
Data Competency: High data competency is essential.
Communication Skills: Good communication skills are required.
Solutioning & Documentation: Experience in both solutioning and documentation.
Desired (Nice-to-Have) Skills:
Python: Any experience with Python.
PySpark: Any exposure to PySpark.
Analysis/Requirement Gathering: While not a primary requirement, the individual will be given opportunities to drive analysis if they demonstrate the competency and confidence. Onshore resources typically handle most analysis and business-facing tasks.
Other Important Points:
Data Modeling: Implicitly required due to the 5 years of Snowflake experience and the need to reverse engineer SSIS packages and rewrite them efficiently in Snowflake. Candidates are expected to have driven data modeling conversations and potentially smaller-scale data model designs in their past roles.
Cloud Experience: Any cloud experience (AWS or Azure) is acceptable, as the interviewer believes that once familiar with one cloud environment, learning another is straightforward. There is no specific preference for Azure.
Rate Restrictions: As per Kaushal's team.
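For illustration only (not part of the posting): a minimal sketch of the kind of consolidation described above, collapsing what might have been several SSIS staging steps (filter, derive, aggregate, load) into a single CTE-based Snowflake statement executed from Python; all table and column names are hypothetical.

```python
# Sketch: replace a multi-step SSIS flow with one set-based Snowflake
# statement built from CTEs. Object names are placeholders.
import os
import snowflake.connector

CONSOLIDATED_SQL = """
INSERT OVERWRITE INTO MART.DAILY_SALES (sale_date, region, net_revenue)
WITH valid_orders AS (
    SELECT order_id, order_date, region, gross_amount, tax_amount
    FROM RAW.ORDERS
    WHERE status = 'COMPLETED'
),
derived AS (
    SELECT order_date::DATE AS sale_date,
           region,
           gross_amount - tax_amount AS net_amount
    FROM valid_orders
)
SELECT sale_date, region, SUM(net_amount) AS net_revenue
FROM derived
GROUP BY sale_date, region
"""

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
    warehouse="TRANSFORM_WH",
) as conn:
    conn.cursor().execute(CONSOLIDATED_SQL)
```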
Posted 1 month ago
5.0 - 10.0 years
19 - 30 Lacs
Hyderabad
Work from Office
For Data Engineer: years of experience 3-5, number of openings 2.
For Sr. Data Engineer: years of experience 6-10, number of openings 2.
About Us
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, which leads to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. With deep, big-four consulting experience in business transformation and efficient processes, Logic Pursuits is a game-changer in any operations strategy.
Job Description
We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt and be able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.
Key Responsibilities
Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files.
Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms.
Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets.
Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design.
Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows.
Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code.
Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets.
Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery.
Required Qualifications
Data Engineering Skills
3-5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt.
Capable of building and maintaining ELT pipelines using dbt and Snowflake with guidance on architecture and best practices.
Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/dimensional).
Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems.
Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations).
Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control-M.
Experience using Git for version control and exposure to CI/CD workflows in team environments.
Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake.
Working knowledge of Python for basic automation and data manipulation tasks.
Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR.
Data Quality & Documentation
Familiar with dbt testing and documentation practices (e.g., dbt tests, dbt docs).
Awareness of standard data validation and monitoring techniques for reliable pipeline development.
Soft Skills & Collaboration
Strong problem-solving skills and ability to debug SQL and transformation logic effectively.
Able to document work clearly and communicate technical solutions to a cross-functional team.
Experience working in Agile settings, participating in sprints, and handling shifting priorities.
Comfortable collaborating with analysts, data scientists, and architects across onshore/offshore teams.
High attention to detail, proactive attitude, and adaptability in dynamic project environments.
Nice to Have
Experience working in client-facing or consulting roles.
Exposure to AI/ML data pipelines or tools like feature stores and MLflow.
Familiarity with enterprise-grade data quality tools.
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Certifications such as Snowflake SnowPro or dbt Certified Developer in Data Engineering are a plus.
Additional Information
Why Join Us?
Opportunity to work on diverse and challenging projects in a consulting environment.
Collaborative work culture that values innovation and curiosity.
Access to cutting-edge technologies and a focus on professional development.
Competitive compensation and benefits package.
Be part of a dynamic team delivering impactful data solutions.
Required Qualification: Bachelor of Engineering - Bachelor of Technology (B.E./B.Tech.)
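For illustration only (not part of the posting): a minimal sketch of external staging and ingestion from cloud storage into Snowflake, as referenced above, run through the Python connector; the storage integration, stage, and table names are hypothetical and the integration is assumed to be pre-configured by an account administrator.

```python
# Sketch: create an external stage over an S3 prefix (via an existing
# storage integration) and bulk-load files into a staging table.
import os
import snowflake.connector

STATEMENTS = [
    """
    CREATE STAGE IF NOT EXISTS STAGING.ORDERS_S3_STAGE
      URL = 's3://example-bucket/orders/'
      STORAGE_INTEGRATION = S3_INT        -- assumed to exist
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """,
    """
    COPY INTO STAGING.ORDERS_STG
    FROM @STAGING.ORDERS_S3_STAGE
    ON_ERROR = 'ABORT_STATEMENT'
    """,
]

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
    warehouse="LOAD_WH",
) as conn:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
```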
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Job Description: Data Engineer
Role Overview
The Data Engineer will be responsible for ensuring the availability, quality, and transformation of claims and operational data required for model development and integration. The role demands strong data pipeline design and engineering capabilities to support a scalable forecasting and capacity planning framework.
Key Responsibilities
Gather and process data from multiple sources including claims systems and operational databases.
Build and maintain data pipelines to support segmentation and forecasting models.
Ensure data integrity, transformation, and enrichment to align with modeling requirements.
Collaborate with the Data Scientist to provide model-ready datasets.
Support data versioning, storage, and automation for periodic refreshes.
Assist in deployment/integration of data flows into operational dashboards or planning tools.
Skills & Experience
5+ years of experience in data engineering or ETL development.
Proficiency in SQL, Python, and data pipeline tools (e.g., Airflow, dbt, Spark, etc.).
Experience with cloud-based data platforms (e.g., Azure, AWS, GCP).
Understanding of data architecture and governance best practices.
Prior experience working with insurance or operations-related data is a plus.
Posted 1 month ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Senior Data Engineer (4-6 Years Experience)
Location: Kotak Life HO
Department: Data Science & Analytics
Employment Type: Full-Time
About the Role:
We are seeking a highly skilled Data Engineer with 4-6 years of hands-on experience in designing and developing scalable, reliable, and efficient data solutions. The ideal candidate will have a strong background in cloud platforms (AWS or Azure), experience in building both batch and streaming data pipelines, and familiarity with modern data architectures including event-driven and medallion architectures.
Key Responsibilities:
Design, build, and maintain scalable data pipelines (batch and streaming) to process structured and unstructured data from various sources.
Develop and implement solutions based on event-driven architectures using technologies like Kafka, Event Hubs, or Kinesis.
Architect and manage data workflows based on the medallion architecture (Bronze, Silver, Gold layers).
Work with cloud platforms (AWS or Azure) to manage data infrastructure and storage, compute, and orchestration services.
Leverage cloud-native or open-source tools for data transformation, orchestration, monitoring, and quality checks.
Collaborate with data scientists, analysts, and product managers to deliver high-quality data solutions.
Ensure best practices in data governance, security, lineage, and observability.
Required Skills & Qualifications:
4-6 years of professional experience in data engineering or related roles.
Strong experience in cloud platforms: AWS (e.g., S3, Glue, Lambda, Redshift) or Azure (e.g., Data Lake, Synapse, Data Factory, Functions).
Proven expertise in building batch and streaming pipelines using tools like Spark, Flink, Kafka, Kinesis, or similar.
Practical knowledge of event-driven architectures and experience with message/event brokers.
Hands-on experience implementing the medallion architecture or similar layered data architectures.
Familiarity with data orchestration tools (e.g., Airflow, Azure Data Factory, AWS Step Functions).
Proficiency in SQL, Python, or Scala for data processing and pipeline development.
Exposure to open-source tools in the modern data stack (e.g., dbt, Delta Lake, Apache Hudi, Great Expectations).
Preferred Qualifications:
Experience with containerization and CI/CD for data workflows (Docker, GitHub Actions, etc.).
Knowledge of data quality frameworks and observability tooling.
Experience with Delta Lake or Lakehouse implementations.
Strong problem-solving skills and ability to work in fast-paced environments.
What We Offer:
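For illustration only (not part of the posting): a minimal sketch of an event-driven Bronze-layer ingestion step of the kind described above, assuming the kafka-python client; the topic name and landing path are hypothetical.

```python
# Sketch: consume raw events from Kafka and append them unmodified to a
# Bronze landing area as JSON lines (one file per run). Names are made up.
import json
from datetime import datetime, timezone
from pathlib import Path

from kafka import KafkaConsumer  # pip install kafka-python

BRONZE_DIR = Path("/data/bronze/claims_events")  # hypothetical landing path

consumer = KafkaConsumer(
    "claims-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="bronze-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=10_000,           # stop when no new messages arrive
)

BRONZE_DIR.mkdir(parents=True, exist_ok=True)
out_file = BRONZE_DIR / f"{datetime.now(timezone.utc):%Y%m%dT%H%M%S}.jsonl"

with out_file.open("w", encoding="utf-8") as fh:
    for message in consumer:
        # Bronze keeps the payload as-is; enrichment happens in Silver.
        record = {"offset": message.offset, "payload": message.value}
        fh.write(json.dumps(record) + "\n")

consumer.close()
```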
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Req ID: 325282
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Snowflake Consultant to join our team in Hyderabad, Telangana (IN-TG), India (IN).
Snowflake and Data Vault 2 (optional) Consultant
Extensive expertise in DBT, including macros, modeling, and automation techniques.
Proficiency in SQL, Python, or other scripting languages for automation.
Experience leveraging Snowflake for scalable data solutions.
Familiarity with Data Vault 2.0 methodologies is an advantage.
Strong capability in optimizing database performance and managing large datasets.
Excellent problem-solving and analytical skills.
Minimum of 3+ years of relevant experience, with a total of 5+ years of overall experience.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - ML Engineer!
In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate ML algorithms.
Responsibilities
Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products.
Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers.
Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
Build and implement machine learning models and prototype solutions for proof-of-concept.
Scale existing ML models into production on a variety of cloud platforms.
Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Qualifications we seek in you!
Minimum Qualifications / Skills
Good years of experience.
Bachelor's degree in computer science engineering or information technology, or a BSc in Computer Science, Mathematics, or a similar field. A Master's degree is a plus.
Integration - APIs, microservices, and ETL/ELT patterns
DevOps (good to have) - Ansible, Jenkins, ELK
Containerization - Docker, Kubernetes, etc.
Orchestration - Google Cloud Composer
Languages and scripting - Python, Scala, Java, etc.
Cloud services - GCP, Snowflake
Analytics and ML tooling - SageMaker, ML Studio
Execution paradigm - low latency/streaming, batch
Preferred Qualifications / Skills
Data platforms - DBT, Fivetran, and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.)
Visualization tools - Power BI, Tableau
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - ML Engineer!
In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate ML algorithms.
Responsibilities
Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products.
Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers.
Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
Build and implement machine learning models and prototype solutions for proof-of-concept.
Scale existing ML models into production on a variety of cloud platforms.
Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Qualifications we seek in you!
Minimum Qualifications / Skills
Good years of experience.
Bachelor's degree in computer science engineering or information technology, or a BSc in Computer Science, Mathematics, or a similar field. A Master's degree is a plus.
Integration - APIs, microservices, and ETL/ELT patterns
DevOps (good to have) - Ansible, Jenkins, ELK
Containerization - Docker, Kubernetes, etc.
Orchestration - Google Cloud Composer
Languages and scripting - Python, Scala, Java, etc.
Cloud services - GCP, Snowflake
Analytics and ML tooling - SageMaker, ML Studio
Execution paradigm - low latency/streaming, batch
Preferred Qualifications / Skills
Data platforms - DBT, Fivetran, and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.)
Visualization tools - Power BI, Tableau
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Inviting applications for the role of Lead Consultant - Data Engineer!
Design, document, and implement the data pipelines that feed data models for subsequent consumption in Snowflake, using dbt and Airflow.
Ensure correctness and completeness of the data being transformed via engineering pipelines for end consumption in analytical dashboards.
Actively monitor and triage technical challenges in critical situations that require immediate resolution.
Evaluate viable technical solutions and share MVPs or PoCs in support of the research.
Develop relationships with external stakeholders to maintain awareness of data and security issues and trends.
Review work from other tech team members and provide feedback for growth.
Implement data performance and data security policies that align with governance objectives and regulatory requirements.
Effectively mentor and develop your team members.
You have experience in data warehousing, data modeling, and the building of data engineering pipelines.
You are well versed in data engineering methods, such as ETL and ELT techniques, through scripting and/or tooling.
You are good at analyzing performance bottlenecks and providing enhancement recommendations. You have a passion for customer service and a desire to learn and grow as a professional and a technologist.
Strong analytical skills related to working with structured, semi-structured, and unstructured datasets.
Collaborating with product owners to identify requirements, define desired outcomes, and deliver trusted results.
Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
In this role, SQL is heavily used. An ideal candidate must have hands-on experience with SQL database design, plus Python.
Demonstrably deep understanding of SQL (level: advanced) and analytical data warehouses (Snowflake preferred).
Demonstrated ability to write new code that is well-documented and stored in a version control system (we use GitHub and Bitbucket).
Extremely talented in applying SCD, CDC, and DQ/DV frameworks.
Familiar with JIRA and Confluence.
Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake.
Desire to continually keep up with advancements in data engineering practices.
Qualifications we seek in you!
Minimum qualifications:
Essential Education
Bachelor's degree or equivalent combination of education and experience. Bachelor's degree in information science, data management, computer science, or a related field preferred.
Essential Experience & Job Requirements
IT experience with a major focus on data warehouse/database-related projects.
Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake.
Experience in other data platforms: Oracle, SQL Server, MDM, etc.
Expertise in writing SQL and database objects - stored procedures, functions, and views. Hands-on experience in ETL/ELT and data security, SQL performance optimization, and job orchestration tools and technologies, e.g., dbt, APIs, Apache Airflow, etc.
Experience in data modeling and relational database design.
Well-versed in applying SCD, CDC, and DQ/DV frameworks.
Demonstrated ability to write new code that is well-documented and stored in a version control system (we use GitHub and Bitbucket).
Good to have experience with cloud platforms such as AWS, Azure, GCP, and Snowflake.
Good to have strong programming/scripting skills (Python, PowerShell, etc.).
Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs).
Excellent written and oral communication and presentation skills to present architecture, features, and solution recommendations to global functional product portfolio technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, and functional area teams across levels, as well as Global Data Product Portfolio Management and teams (Enterprise Data Model, Data Catalog, Master Data Management).
Preferred Qualifications
Knowledge of AWS cloud and Python is a plus.
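For illustration only (not part of the posting): a minimal sketch of the kind of completeness check this role would run on data transformed for dashboards, comparing source and target row counts via the Snowflake Python connector; table names are hypothetical.

```python
# Sketch: fail fast if the transformed table lost rows relative to its
# staging source (a simple completeness/DQ check). Names are placeholders.
import os
import sys
import snowflake.connector

SOURCE_TABLE = "STAGING.ORDERS_STG"
TARGET_TABLE = "MART.FCT_ORDERS"

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
) as conn:
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {SOURCE_TABLE}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {TARGET_TABLE}").fetchone()[0]

if tgt_count < src_count:
    print(f"DQ check failed: {TARGET_TABLE} has {tgt_count} rows, "
          f"expected at least {src_count}", file=sys.stderr)
    sys.exit(1)
print("DQ check passed")
```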
Posted 1 month ago
5.0 - 8.0 years
2 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Data Warehouse Solution Design & Development: Lead the design and implementation of batch and real-time ingestion architectures for data warehouses. Ensure that solutions are scalable, reliable, and optimized for performance.
Team Leadership & Mentoring: Lead and mentor a team of data engineers, fostering a collaborative environment to encourage knowledge sharing and continuous improvement. Ensure that the team meets high standards of quality and performance.
Hands-on Technical Delivery: Actively engage in hands-on development and ensure seamless delivery of data solutions. Provide technical direction and hands-on support for complex issues.
Issue Resolution & Troubleshooting: Capable of troubleshooting issues that arise during runtime and providing quick resolutions to minimize disruptions and maintain system stability.
API Management: Oversee the integration and management of APIs using APIM for seamless communication between internal and external systems. Implement and maintain API gateways and monitor API performance.
Client Communication: Interact directly with clients, ensuring clear and convincing communication of technical ideas and project progress. Translate customer requirements into technical solutions and drive the implementation process.
Cloud & DevOps: Ensure that data solutions are designed with cloud-native technologies such as Azure, Snowflake, and DBT. Use Azure DevOps for continuous integration and deployment pipelines.
Mentoring & Best Practices: Guide the team on best practices for data engineering, code reviews, and performance optimization. Ensure the adoption of modern tools and techniques to improve delivery efficiency.
Mandatory Skills:
Python for data engineering
Snowflake and Postgres development experience
Proficiency in API Management (APIM) and DBT
Strong experience with Azure DevOps for CI/CD
Proven experience in data warehouse solution design, development, and implementation
Desired Skills:
Experience with Apache Kafka, Azure Event Hub, Apache Airflow, Apache Flink
Familiarity with Grafana, Prometheus, Terraform, Kubernetes
Power BI for reporting and data visualization
Posted 1 month ago
5.0 - 10.0 years
3 - 14 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Key Responsibilities:
Design & Implement Data Architecture: Design, implement, and maintain the overall data platform architecture, ensuring the scalability, security, and performance of the platform.
Data Technologies Integration: Select, integrate, and configure data technologies (cloud platforms like AWS, Azure, and GCP, data lakes, data warehouses, streaming platforms like Kafka, and containerization technologies).
Infrastructure Management: Set up and manage the infrastructure for data pipelines, data storage, and data processing across platforms like Kubernetes and Airflow.
Develop Frameworks & Tools: Develop internal frameworks to improve the efficiency and usability of the platform for other teams such as Data Engineers and Data Scientists.
Data Platform Monitoring & Observability: Implement and manage monitoring and observability for the data platform, ensuring high availability and fault tolerance.
Collaboration: Work closely with software engineering teams to integrate the data platform with other business systems and applications.
Capacity & Cost Optimization: Be involved in capacity planning and cost optimization for data infrastructure, ensuring efficient utilization of resources.
Tech Stack Requirements:
Apache Iceberg (version 0.13.2): Experience in managing table formats for scalable data storage.
Apache Spark (version 3.4 and above): Expertise in building and maintaining batch and streaming data processing capabilities.
Apache Kafka (version 3.9 and above): Proficiency in managing messaging platforms for real-time data streaming.
Role-Based Access Control (RBAC): Experience with Apache Ranger (version 2.6.0) for implementing and administering security and access controls.
RDBMS: Experience working with near real-time data storage solutions, specifically Oracle (version 19c).
Great Expectations (version 1.3.4): Familiarity with implementing Data Quality (DQ) frameworks to ensure data integrity and consistency.
Data Lineage & Cataloging: Experience with OpenLineage and DataHub (version 0.15.0) for managing data lineage and catalog solutions.
Trino (version 4.7.0): Proficiency with query engines for batch processing.
Container Platforms: Hands-on experience in managing container platforms such as SKE (version 1.29 on AKS).
Airflow (version 2.10.4): Experience using workflow and scheduling tools for orchestrating and managing data pipelines.
DBT (Data Build Tool): Proficiency in using ETL/ELT frameworks like DBT for data transformation and automation.
Data Tokenization: Experience with data tokenization technologies like Protegrity (version 9.2) for ensuring data security.
Desired Skills:
Domain Expertise: Familiarity with the Banking domain is a plus, including working with financial data and regulatory requirements.
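For illustration only (not part of the posting): a minimal PySpark sketch of a batch job over an Iceberg table of the kind this platform stack implies, assuming the Spark session is already configured with an Iceberg catalog; the catalog, table, and column names are hypothetical.

```python
# Sketch: batch-aggregate an Iceberg table with Spark and write the result
# back as another Iceberg table. Assumes spark-defaults already registers
# an Iceberg catalog called "lake" (a hypothetical name).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_txn_rollup")
    .getOrCreate()
)

# Read the raw transactions table from the Iceberg catalog.
txns = spark.table("lake.raw.transactions")

daily = (
    txns
    .withColumn("txn_date", F.to_date("txn_ts"))
    .groupBy("txn_date", "account_id")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)

# Overwrite the rollup table; Iceberg handles snapshots and atomic replace.
daily.writeTo("lake.mart.daily_account_rollup").createOrReplace()

spark.stop()
```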
Posted 1 month ago
6.0 - 11.0 years
17 - 30 Lacs
Kolkata, Hyderabad/Secunderabad, Bangalore/Bengaluru
Hybrid
Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
Experience in the IT industry.
Working experience with building productionized data ingestion and processing pipelines in Snowflake.
Strong understanding of Snowflake architecture.
Fully well-versed with data warehousing concepts.
Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools.
Able to create data pipelines for ETL/ELT.
Excellent presentation and communication skills, both written and verbal.
Ability to problem-solve and architect in an environment with unclear requirements.
Able to create high-level and low-level design documents based on requirements.
Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
Awareness of data visualisation tools and methodologies.
Work independently on business problems and generate meaningful insights.
Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
Should have experience implementing Snowflake best practices.
Snowflake SnowPro Core Certification will be an added advantage.
Roles and Responsibilities:
Requirement gathering, creating design documents, providing solutions to the customer, working with the offshore team, etc.
Writing SQL queries against Snowflake and developing scripts to Extract, Load, and Transform data.
Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, the query optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit.
Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark.
Should have some experience with Snowflake RBAC and data security.
Should have good experience implementing CDC or SCD Type 2.
Should have good experience implementing Snowflake best practices.
In-depth understanding of data warehouse and ETL concepts and data modelling.
Experience in requirement gathering, analysis, design, development, and deployment.
Should have experience building data ingestion pipelines.
Optimize and tune data pipelines for performance and scalability.
Able to communicate with clients and lead a team.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in deployment using CI/CD tools and experience with repositories such as Azure Repos, GitHub, etc.
Qualifications we seek in you! Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Snowflake Data Engineer.
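As a purely illustrative example of the CDC/SCD Type 2 experience asked for above, a minimal Python sketch using the Snowflake connector is shown below. The connection details and the CUSTOMERS_STG / DIM_CUSTOMER tables and columns are hypothetical placeholders, not a prescribed implementation.

```python
# Illustrative sketch of one step of an SCD Type 2 load in Snowflake, driven from Python.
# Connection details and table/column names are hypothetical placeholders.
import snowflake.connector

SCD2_MERGE = """
MERGE INTO DIM_CUSTOMER d
USING CUSTOMERS_STG s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, segment, valid_from, valid_to, is_current)
  VALUES (s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE);
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="DW",
)
try:
    # Expire changed current rows and insert brand-new keys.
    # A second pass (not shown) would insert replacement rows for the expired keys,
    # since a single MERGE cannot both expire an old row and insert its successor.
    conn.cursor().execute(SCD2_MERGE)
finally:
    conn.close()
```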
Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and data warehousing concepts
Posted 1 month ago
7.0 - 10.0 years
15 - 30 Lacs
Pune
Hybrid
Looking for 7–10 years of experience (4+ in data modeling, 2–3 in Data Vault 2.0). Must know DBT, Dagster/Airflow, GCP (BigQuery, CloudSQL), and data modeling. Hands-on Data Vault 2.0 experience is a must; Docker is a plus.
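Purely as an illustration of the Data Vault 2.0 modeling mentioned here, a tiny Python sketch of hash-key generation for a hub load follows. The column names and sample records are hypothetical, and in practice this logic usually lives inside DBT models rather than standalone scripts.

```python
# Illustrative sketch of Data Vault 2.0-style hash keys for a hub load.
# Column names and sample rows are hypothetical placeholders.
import hashlib

def hash_key(*business_keys: str) -> str:
    """Deterministic surrogate key from the concatenated, normalized business key(s)."""
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

source_rows = [{"customer_no": "C-1001"}, {"customer_no": "c-1002 "}]

hub_customer = [
    {
        "hub_customer_hk": hash_key(row["customer_no"]),
        "customer_no": row["customer_no"].strip().upper(),
    }
    for row in source_rows
]
print(hub_customer)
```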
Posted 1 month ago
2.0 - 6.0 years
7 - 17 Lacs
Chennai
Work from Office
About the Role:
We are looking for a highly skilled and passionate Data Engineer to join our dynamic data team. In this role, you will be instrumental in designing, building, and optimizing our data infrastructure, with a strong emphasis on leveraging Snowflake for data warehousing and dbt (data build tool) for data transformation. You will work across various cloud environments (AWS, Azure, GCP), ensuring our data solutions are scalable, reliable, and efficient. This position requires a deep understanding of data warehousing principles, ETL/ELT methodologies, and a commitment to data quality and governance.
Responsibilities:
Design, develop, and maintain robust and scalable data pipelines using various data integration tools and techniques within a cloud environment.
Build and optimize data models and transformations in Snowflake using dbt, ensuring data accuracy, consistency, and performance.
Manage and administer Snowflake environments, including performance tuning, cost optimization, and security configurations.
Develop and implement data ingestion strategies from diverse source systems (APIs, databases, files, streaming data) into Snowflake.
Write, optimize, and maintain complex SQL queries for data extraction, transformation, and loading (ETL/ELT) processes.
Implement data quality checks, validation rules, and monitoring solutions within dbt and Snowflake.
Collaborate closely with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into efficient data solutions.
Promote and enforce data governance best practices, including metadata management, data lineage, and documentation.
Participate in code reviews, contribute to architectural discussions, and champion best practices in data engineering and dbt development.
Troubleshoot and resolve data-related issues, ensuring data availability and reliability.
Stay current with industry trends and new technologies in the data engineering space, particularly around Snowflake, dbt, and cloud platforms.
Qualifications (Required):
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
3 to 5 years of professional experience as a Data Engineer.
Expert-level proficiency with Snowflake for data warehousing, including performance optimization and resource management.
Extensive hands-on experience with dbt (data build tool) for data modeling, testing, and documentation.
Strong proficiency in SQL, with the ability to write complex, optimized queries.
Solid programming skills in Python for data manipulation, scripting, and automation.
Experience with at least one major cloud platform (AWS, Azure, or GCP) and its core data services.
Proven understanding of data warehousing concepts, dimensional modeling, and ETL/ELT principles.
Experience with version control systems (e.g., Git).
Excellent analytical, problem-solving, and debugging skills.
Strong communication and collaboration abilities, with a capacity to work effectively with cross-functional teams.
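As an illustration of the file ingestion and dbt transformation steps described above, a minimal Python sketch follows. The stage, table, file path, and dbt selector are hypothetical placeholders; credentials would normally come from a secrets manager, and orchestration would be handled by a scheduler rather than an ad hoc script.

```python
# Illustrative sketch: ingest a local CSV extract into Snowflake, then trigger dbt transformations.
# Stage, table, file path, and dbt selector names are hypothetical placeholders.
import subprocess
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="SALES",
)
cur = conn.cursor()
try:
    # Upload the file to a named internal stage, then COPY it into the raw table
    cur.execute("PUT file:///tmp/orders_extract.csv @RAW.SALES.ORDERS_STAGE AUTO_COMPRESS=TRUE")
    cur.execute("""
        COPY INTO RAW.SALES.ORDERS
        FROM @RAW.SALES.ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    cur.close()
    conn.close()

# Run only the dbt models downstream of the raw orders source (hypothetical selector)
subprocess.run(["dbt", "run", "--select", "source:raw_sales+"], check=True)
```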
Posted 1 month ago
10.0 - 15.0 years
30 - 45 Lacs
Pune
Work from Office
What You'll Do
The Global Analytics & Insights (GAI) team is seeking a Data & Analytics Engineering Manager to lead our team in designing, developing, and maintaining data pipelines and analytics infrastructure. As a Data & Analytics Engineering Manager, you will play a pivotal role in empowering a team of engineers to build and enhance analytics applications and a modern data platform using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will become an expert in Avalara's financial, marketing, sales, and operations data. The ideal candidate will have deep SQL experience, an understanding of modern data stacks and technology, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. This position will report to a Senior Manager.
What Your Responsibilities Will Be
Mentor a team of data engineers, providing guidance and support to ensure a high level of quality and career growth.
Lead a team of data engineers in the development and maintenance of data pipelines, data modelling, code reviews, and data products.
Collaborate with cross-functional teams to understand requirements and translate them into scalable data solutions.
Drive innovation and continuous improvement within the data engineering team.
Build maintainable and scalable processes and playbooks to ensure consistent delivery and quality across projects.
Drive adoption of best practices in data engineering and data modelling.
Be the visible lead of the team: coordinate communication, releases, and status updates to various stakeholders.
What You'll Need to be Successful
Bachelor's degree in Computer Science, Engineering, or a related field.
10+ years of experience in the data engineering field, with deep SQL knowledge.
2+ years of management experience, including direct technical reports.
5+ years of experience with data warehousing concepts and technologies.
4+ years of working with Git, and demonstrated experience using such tools to facilitate the growth of engineers.
4+ years working with Snowflake.
3+ years working with dbt (dbt Core preferred).
Preferred Qualifications:
Snowflake, dbt, and AWS certifications.
3+ years working with Infrastructure as Code, preferably Terraform.
2+ years working with CI/CD, and demonstrated ability to build and operate pipelines.
Experience and understanding of Snowflake administration and security principles.
Demonstrated experience with Airflow.
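For illustration of the Airflow and dbt orchestration named in this stack, a minimal DAG sketch follows. The DAG id, schedule, project path, and profiles directory are hypothetical placeholders, not the team's actual configuration.

```python
# Illustrative sketch of an Airflow DAG that orchestrates daily dbt runs and tests.
# DAG id, schedule, project path, and profiles directory are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_FLAGS = "--project-dir /opt/dbt/analytics --profiles-dir /opt/dbt"

with DAG(
    dag_id="analytics_daily_dbt",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",   # hypothetical daily 06:00 schedule
    catchup=False,
) as dag:
    dbt_run = BashOperator(task_id="dbt_run", bash_command=f"dbt run {DBT_FLAGS}")
    dbt_test = BashOperator(task_id="dbt_test", bash_command=f"dbt test {DBT_FLAGS}")

    # Only test the models after they have been built
    dbt_run >> dbt_test
```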
Posted 1 month ago
4.0 - 8.0 years
5 - 9 Lacs
Chennai
Work from Office
Skill: Talend with SQL, plus DBT and Snowflake knowledge.
Talend: Designing, developing, and documenting existing Talend ETL processes, technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data and ensure data quality in a big data environment.
Snowflake SQL: Writing SQL queries against Snowflake and developing scripts (Unix shell, Python, etc.) to Extract, Load, and Transform data.
Strong on DBT with Snowflake SQL.
Perform data analysis, troubleshoot data issues, and provide technical support to end users.
Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
Complex problem-solving capability and a continuous-improvement approach.
Talend/Snowflake certification is desirable.
Excellent SQL coding skills.
Excellent communication and documentation skills.
Familiar with the Agile delivery process.
Must be analytical, creative, and self-motivated.
Work effectively within a global team environment.
Excellent communication skills.
Experience: 4+ years | Location: Chennai | Notice period: Immediate to 30 days max
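To illustrate the data-analysis and troubleshooting side of this role, a minimal Python sketch of post-load data-quality checks against Snowflake is shown below. The connection details, table, and columns are hypothetical placeholders, and thresholds would normally come from configuration rather than being hard-coded.

```python
# Illustrative sketch: simple post-load data-quality checks against Snowflake.
# Connection details and the ORDERS table/columns are hypothetical placeholders.
import snowflake.connector

CHECKS = {
    "row_count_today": "SELECT COUNT(*) FROM ANALYTICS.DW.ORDERS WHERE load_date = CURRENT_DATE()",
    "null_order_ids": "SELECT COUNT(*) FROM ANALYTICS.DW.ORDERS WHERE order_id IS NULL",
    "duplicate_order_ids": """
        SELECT COUNT(*) FROM (
            SELECT order_id FROM ANALYTICS.DW.ORDERS GROUP BY order_id HAVING COUNT(*) > 1
        )
    """,
}

conn = snowflake.connector.connect(
    account="my_account", user="dq_user", password="***",
    warehouse="REPORT_WH", database="ANALYTICS", schema="DW",
)
try:
    cur = conn.cursor()
    for name, sql in CHECKS.items():
        value = cur.execute(sql).fetchone()[0]
        print(f"{name}: {value}")
        # Fail fast on hard rules; the row-count check is informational only
        if name != "row_count_today" and value > 0:
            raise ValueError(f"Data quality check failed: {name} = {value}")
finally:
    conn.close()
```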
Posted 1 month ago