8.0 - 13.0 years
25 - 37 Lacs
Hyderabad
Work from Office
SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses such as Snowflake.
ETL/ELT Tools: Extensive experience building and maintaining data pipelines with tools such as SnapLogic, StreamSets, or DBT.
Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
Data Warehousing: Experience managing large datasets and data marts, and optimizing databases for performance.
Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.
Posted 1 month ago
1.0 - 3.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
About Zeta
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was co-founded by Ramki Gaddipati. Its flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform, and Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees across the US, EMEA, and Asia, with 70%+ of roles in R&D. Backed by SoftBank, Mastercard, and other investors, we raised $330M at a $2B valuation in 2025.

About the Role
In this role, you'll design robust data models using SQL, dbt, and Redshift, while driving best practices across development, deployment, and monitoring. You'll also collaborate closely with product and engineering to ensure data quality and impactful delivery.

Responsibilities
Create optimized data models with SQL, DBT, and Redshift
Write functional and column-level tests for models
Build reports from the data models
Collaborate with product to clarify requirements and create design documents
Get designs reviewed by Architect/Principal/Lead Engineers
Contribute to code reviews
Set up and monitor Airflow DAGs
Set up and use CI/CD pipelines
Leverage Kubernetes operators for deployment automation
Ensure data quality
Drive best practices in data model development, deployment, and monitoring

Skills
Bachelor's/Master's degree in engineering
Strong expertise in SQL for complex data querying and optimization
Hands-on experience with Apache Airflow for orchestration and scheduling
Good understanding of data modeling and data warehousing concepts
Experience with dbt (Data Build Tool) for data transformation and modeling
Exposure to Amazon Redshift or other cloud data warehouses
Familiarity with CI/CD tools such as Jenkins
Experience using Bitbucket for version control
Working knowledge of JIRA for agile project tracking
Ability to work with cross-functional and dependent teams and own delivery end to end
Excellent problem-solving skills and ability to work independently or as part of a team
Strong communication and interpersonal skills to collaborate effectively with cross-functional teams

Experience and Qualifications
Bachelor's/Master's degree in engineering (computer science, information systems)
At least 1-2 years of experience working on data, especially reporting and data analysis
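For illustration, the Airflow-plus-dbt workflow this role describes (build the dbt models, then run their functional and column-level tests) can be sketched roughly as below. This is a minimal, hypothetical DAG; the dag_id, project path, and schedule are assumptions, not Zeta's actual setup.

```python
# Hypothetical Airflow DAG sketch: schedule dbt model builds and tests.
# Names (dag_id, project path, target) are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_models",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Build the models (staging -> marts) defined in the dbt project
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    # Execute the functional and column-level tests declared in schema.yml files
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test
```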
Posted 1 month ago
6.0 - 10.0 years
9 - 17 Lacs
Gurugram, Chennai, Bengaluru
Work from Office
Role & responsibilities
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues to find gaps in existing pipelines and fix them
Work with the business to understand reporting-layer needs and develop data models to fulfil them
Help junior team members resolve issues and technical challenges
Drive technical discussions with the client architect and team members
Orchestrate data pipelines in schedulers (UC4 and Airflow)

Preferred candidate profile
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
Should have experience in at least one end-to-end implementation of a Snowflake cloud data warehouse and two end-to-end on-premise data warehouse implementations
Expertise in Snowflake data modelling, ELT using SQL, implementing stored procedures, and standard DWH and ETL concepts
Hands-on experience with Snowflake utilities, SnowSQL, and Snowpipe
Experience in data migration from RDBMS to the Snowflake cloud data warehouse
Deep understanding of star and snowflake dimensional modelling
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
Certified in Snowflake (SnowPro Core) (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
Should have experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
Self-motivated, collaborative, innovative, eager to learn, and hands-on
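As a rough illustration of the ELT pattern this role centres on (load raw data into Snowflake, then transform with SQL inside the warehouse), here is a minimal sketch using the Snowflake Python connector. The account, stage, and table names are placeholders.

```python
# Minimal ELT sketch against Snowflake; all connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account locator
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Land raw files from an external stage into a raw table (the E and L of ELT)
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.S3_ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # Transform in-database (the T of ELT) into a reporting schema
    cur.execute("""
        INSERT INTO MART.FCT_ORDERS
        SELECT order_id, customer_id, order_ts::DATE, amount
        FROM RAW.ORDERS
        WHERE amount IS NOT NULL
    """)
finally:
    conn.close()
```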
Posted 1 month ago
4.0 - 6.0 years
20 - 25 Lacs
Pune
Work from Office
Greetings from Peoplefy Infosolutions!
We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 4+ years of experience in the skills below.
Primary skills: Python, DBT, SSIS, Snowflake, Linux, Datadog, Prometheus, Grafana
Interested candidates for the above position, kindly share your CV at chitralekha.so@peoplefy.com with the following details:
Experience:
CTC:
Expected CTC:
Notice Period:
Location:
Posted 1 month ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview:
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office.

Qualifications:
B.E./B.Tech in Computer Science, IT, or related discipline
MCS/MCA or equivalent preferred

Key Responsibilities:
Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub
Define standards and best practices for data ingestion, transformation, and storage
Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines
Lead Snowflake environment setup, configuration, performance tuning, and optimization
Integrate Azure Data Services with Snowflake to support diverse business use cases
Implement governance, metadata management, and security policies
Mentor junior developers and data engineers on cloud data technologies and best practices

Experience and Skills Required:
5 to 9 years of overall experience in data architecture or data engineering roles
Strong hands-on expertise in Snowflake, including design, development, and performance tuning
Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.)
Understanding of cloud data integration techniques and ELT/ETL frameworks
Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory
Proven ability to handle structured, semi-structured, and unstructured data
Strong analytical, problem-solving, and communication skills

Nice to Have:
Certifications in Snowflake and/or Microsoft Azure
Experience with CI/CD tools like GitHub for code versioning and deployment
Familiarity with real-time or near-real-time data ingestion

Why Join Diacto Technologies?
Work with a cutting-edge tech stack and cloud-native architectures
Be part of a data-driven culture with opportunities for continuous learning
Collaborate with industry experts and build transformative data solutions
Competitive salary and benefits with a collaborative work environment in Baner, Pune

How to Apply:
Option 1 (Preferred): Copy and paste the following link in your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrcIhRQqJKDXiCEfrQG8Rtsk46Etg4-K8eiwqJ_GELL6ewSC9vl4BjaTwUAHzXZTE3nOtgaiQLCso_vWzieLkoV9Nw==/
Option 2:
1. Visit our website's career section at https://www.diacto.com/careers/
2. Scroll down to the "Who are we looking for?" section
3. Find the listing for "Data Architect (Snowflake)"
4. Proceed with the virtual interview by clicking on "Apply Now"
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Hybrid
Role & responsibilities
Design and implement end-to-end data pipelines using DBT and Snowflake
Create and structure DBT models (staging, transformation, marts), YAML configurations for models and tests, and dbt seeds
Hands-on work with DBT Jinja templating, macro development, dbt jobs, and snapshot management for slowly changing dimensions
Develop Python scripts for data cleaning, transformation, and automation of repetitive tasks
Load structured and semi-structured data from AWS S3 to Snowflake by designing file formats, configuring storage integration, and automating data loads using Snowpipe
Design scalable incremental models for handling large datasets while reducing resource usage

Preferred candidate profile
Candidate must have 5+ years of experience
Early joiner who can join within a month
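The S3-to-Snowflake loading path mentioned above (file format, external stage, Snowpipe) might look roughly like the sketch below. It assumes a pre-created storage integration named S3_INT; the bucket and all other object names are placeholders.

```python
# Illustrative automation of an S3 -> Snowflake load path: file format, stage, and Snowpipe.
# Assumes a storage integration named S3_INT already exists; object names are placeholders.
import snowflake.connector

DDL_STATEMENTS = [
    """CREATE FILE FORMAT IF NOT EXISTS RAW.JSON_FMT
       TYPE = JSON STRIP_OUTER_ARRAY = TRUE""",
    """CREATE STAGE IF NOT EXISTS RAW.S3_EVENTS_STAGE
       URL = 's3://example-bucket/events/'
       STORAGE_INTEGRATION = S3_INT
       FILE_FORMAT = RAW.JSON_FMT""",
    # AUTO_INGEST lets S3 event notifications trigger the pipe automatically
    """CREATE PIPE IF NOT EXISTS RAW.EVENTS_PIPE AUTO_INGEST = TRUE AS
       COPY INTO RAW.EVENTS_RAW (payload)
       FROM @RAW.S3_EVENTS_STAGE""",
]

conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
try:
    cur = conn.cursor()
    for stmt in DDL_STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```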
Posted 1 month ago
9.0 - 14.0 years
9 - 14 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of what differentiates us. As part of Aladdin Studio, the Aladdin Data Cloud (ADC) Engineering team is responsible for building and maintaining a data-as-a-service solution for all data management and transformation needs. We engineer high-performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data surface capabilities.

As a Data Engineer in the ADC Engineering team, you will:
Work alongside our engineers to help design and build scalable data pipelines while evolving the data surface
Help prove out and deliver cloud-native infrastructure and tooling to support a scalable data cloud
Have fun as part of an amazing team

Specific Responsibilities:
Lead and work as part of a multi-disciplinary squad to establish our next generation of data pipelines and tools
Be involved from the inception of projects: understanding requirements, designing and developing solutions, and incorporating them into the designs of our platforms
Mentor team members on technology and standard processes
Maintain excellent knowledge of the technical landscape for data and cloud tooling
Assist in solving issues and support the operation of production software
Design solutions and document them

Desirable Skills
8+ years of industry experience in the data engineering area
Passion for engineering and optimizing data sets, data pipelines, and architecture
Ability to build processes that support data transformation, workload management, data structures, lineage, and metadata
Knowledge of SQL and performance tuning; experience with Snowflake is preferred
Good understanding of languages such as Python/Java
Understanding of software deployment and orchestration technologies such as Airflow
Experience with dbt is helpful
Working knowledge of building and deploying distributed systems
Experience in creating and evolving CI/CD pipelines with GitLab or Azure DevOps
Experience leading a multi-disciplinary team and mentoring them
Posted 1 month ago
2.0 - 5.0 years
12 - 15 Lacs
Mumbai, Maharashtra, India
On-site
Responsibilities
Actively participate in chapter ceremony meetings and contribute to project planning and estimation
Coordinate work with product managers, data owners, platform teams, and other stakeholders throughout the SDLC
Use Airflow, Python, Snowflake, dbt, and related technologies to enhance and maintain EDP acquisition, ingestion, processing, orchestration, and DQ frameworks
Adopt new tools and technologies to enhance framework capabilities
Build and conduct end-to-end tests to ensure production operations run successfully after every release cycle
Document and present accomplishments and challenges to internal and external stakeholders
Demonstrate deep understanding of modern data engineering tools and best practices
Design and build solutions that are performant, consistent, and scalable
Contribute to design decisions for complex systems
Provide L2/L3 support for technical and/or operational issues

Qualifications
At least 5 years of experience as a data engineer
Expertise with SQL, stored procedures, and UDFs
Advanced-level Python programming or advanced core Java programming
Experience with Snowflake or similar cloud-native databases
Experience with orchestration tools, especially Airflow
Experience with declarative transformation tools like dbt
Experience with Azure services, especially ADLS (or equivalent)
Exposure to real-time streaming platforms and message brokers (e.g., Snowpipe Streaming, Kafka)
Experience with Agile development concepts and related tools (ADO, Aha)
Experience conducting root cause analysis and resolving issues
Experience with performance tuning
Excellent written and verbal communication skills
Ability to operate in a matrixed organization and fast-paced environment
Strong interpersonal skills with a can-do attitude under challenging circumstances
Bachelor's degree in computer science is strongly preferred
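As a hedged sketch of the kind of data-quality (DQ) checks such a framework might run after an Airflow-orchestrated load: the table names and rules below are illustrative assumptions, and the function is meant to be called from a downstream DAG task that alerts or halts on failures.

```python
# Lightweight DQ-check sketch; table names and thresholds are placeholders.
import snowflake.connector

DQ_CHECKS = [
    ("row_count", "SELECT COUNT(*) FROM MART.FCT_TRADES", lambda v: v > 0),
    ("null_keys", "SELECT COUNT(*) FROM MART.FCT_TRADES WHERE trade_id IS NULL", lambda v: v == 0),
]

def run_dq_checks(conn) -> list:
    """Return the names of failed checks so the orchestrator can alert or stop the DAG."""
    failures = []
    cur = conn.cursor()
    for name, sql, passes in DQ_CHECKS:
        value = cur.execute(sql).fetchone()[0]
        if not passes(value):
            failures.append(name)
    return failures

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="xy12345", user="DQ_USER", password="***",
        warehouse="DQ_WH", database="ANALYTICS", schema="MART",
    )
    try:
        failed = run_dq_checks(conn)
        if failed:
            raise RuntimeError(f"DQ checks failed: {failed}")
    finally:
        conn.close()
```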
Posted 1 month ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work?
At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)!
In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
Experience in the IT industry
Working experience building productionized data ingestion and processing pipelines in Snowflake
Strong understanding of Snowflake architecture
Fully well-versed with data warehousing concepts
Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools
Able to create data pipelines for ETL/ELT
Good to have DBT experience
Excellent presentation and communication skills, both written and verbal
Ability to problem-solve and architect in an environment with unclear requirements
Able to create high-level and low-level design documents based on requirements
Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud
Awareness of data visualisation tools and methodologies
Work independently on business problems and generate meaningful insights
Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI (not mandatory)
Should have experience implementing Snowflake best practices
Snowflake SnowPro Core certification will be an added advantage

Roles and Responsibilities:
Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data
Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, the optimizer, Metadata Manager, data sharing, stored procedures, UDFs, and Snowsight
Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems
Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF)
Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts
Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark
Should have some experience with Snowflake RBAC and data security
Should have good experience implementing CDC or SCD Type 2
Should have good experience implementing Snowflake best practices
In-depth understanding of data warehouse and ETL concepts and data modelling
Experience in requirement gathering, analysis, design, development, and deployment
Should have experience building data ingestion pipelines
Optimize and tune data pipelines for performance and scalability
Able to communicate with clients and lead a team
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs
Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Senior Snowflake Data Engineer
Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, Data Modeling & Data Warehousing concepts

Why join Genpact?
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation
Make an impact - drive change for global enterprises and solve business challenges that matter
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
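One common way to implement the CDC work mentioned above is Snowflake's stream-plus-task pattern. The sketch below uses placeholder object names and is submitted through the Python connector; the MERGE shown is a simple type-1 upsert, and a full SCD Type 2 flow would need additional statements to insert new versions of changed rows.

```python
# Hypothetical stream + task setup for incremental CDC merges; all object names are placeholders.
import snowflake.connector

STATEMENTS = [
    # Capture inserts/updates/deletes on the raw table
    "CREATE STREAM IF NOT EXISTS RAW.CUSTOMERS_STREAM ON TABLE RAW.CUSTOMERS",
    # Periodically merge captured changes into the dimension (simple type-1 upsert)
    """
    CREATE TASK IF NOT EXISTS RAW.MERGE_CUSTOMERS
      WAREHOUSE = LOAD_WH
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW.CUSTOMERS_STREAM')
    AS
      MERGE INTO DW.DIM_CUSTOMER d
      USING RAW.CUSTOMERS_STREAM s ON d.customer_id = s.customer_id
      WHEN MATCHED THEN UPDATE SET d.name = s.name, d.updated_at = CURRENT_TIMESTAMP()
      WHEN NOT MATCHED THEN INSERT (customer_id, name, updated_at)
        VALUES (s.customer_id, s.name, CURRENT_TIMESTAMP())
    """,
    # Tasks are created suspended; resume to start the schedule
    "ALTER TASK RAW.MERGE_CUSTOMERS RESUME",
]

conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
try:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```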
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Data Engineer - Azure
This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.

You Will:
Design and build resilient and efficient data pipelines for batch and real-time streaming
Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable and data-driven platforms and tools
Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns, and implementation approaches
Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities
Address aspects such as data privacy and security, data ingestion and processing, data storage and compute, analytical and operational consumption, data modeling, data virtualization, self-service data preparation and analytics, AI enablement, and API integrations
Execute projects with an Agile mindset
Build software frameworks to solve data problems at scale

Technical Requirements:
3+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse; prior experience using DBT and Power BI is a plus
Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services such as firewall, storage, and key vault
Strong programming/scripting experience using SQL, Python, and Spark
Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket
Experience with Agile development methods in data-oriented projects

Other Requirements:
Highly motivated self-starter and team player with demonstrated success in prior roles
Track record of success working through technical challenges within enterprise organizations
Ability to prioritize deals, training, and initiatives through highly effective time management
Excellent problem-solving, analytical, presentation, and whiteboarding skills
Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems
Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations
Certifications in Azure Data Engineering and related technologies
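As a rough, Databricks-specific sketch of the batch/streaming pipeline work described above: Auto Loader reading JSON landing files from ADLS and writing them to a Delta table. The storage paths, container names, and target table are assumptions, and the snippet requires a Databricks runtime that provides Auto Loader.

```python
# Hypothetical Databricks snippet: incremental ingestion from ADLS with Auto Loader into Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

landing_path = "abfss://raw@examplelake.dfs.core.windows.net/events/"            # placeholder
checkpoint_path = "abfss://raw@examplelake.dfs.core.windows.net/_checkpoints/events/"

stream = (
    spark.readStream.format("cloudFiles")              # Auto Loader source
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", checkpoint_path)
         .load(landing_path)
)

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", checkpoint_path)
       .trigger(availableNow=True)                     # process available files, then stop
       .toTable("bronze.events"))
```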
Posted 1 month ago
3.0 - 8.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Role & responsibilities
Design, build, and maintain scalable data pipelines using DBT and Airflow
Develop and optimize SQL queries and data models in Snowflake
Implement ETL/ELT workflows, ensuring data quality, performance, and reliability
Work with Python for data processing, automation, and integration tasks
Handle JSON data structures for data ingestion, transformation, and APIs
Leverage AWS services (e.g., S3, Lambda, Glue, Redshift) for cloud-based data solutions
Collaborate with data analysts, engineers, and business teams to deliver high-quality data products

Preferred candidate profile
Strong expertise in SQL, Snowflake, and DBT for data modeling and transformation
Proficiency in Python and Airflow for workflow automation
Experience working with AWS cloud services
Ability to handle JSON data formats and integrate APIs
Strong problem-solving skills and experience in optimizing data pipelines
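A small sketch of the JSON handling and AWS-to-Snowflake integration described above, assuming JSON-lines files in S3; the bucket, key, and table names are made up for illustration, and the target table is assumed to already exist.

```python
# Illustrative ingestion utility: read JSON lines from S3, flatten, and append to Snowflake.
import json

import boto3
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def load_json_events(bucket: str, key: str, conn) -> int:
    """Read a JSON-lines object from S3, normalise it, and append it to a Snowflake table."""
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
    records = [json.loads(line) for line in body.decode().splitlines() if line]
    df = pd.json_normalize(records)              # flatten nested JSON into columns
    _, _, nrows, _ = write_pandas(conn, df, table_name="RAW_EVENTS", schema="RAW")
    return nrows

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="xy12345", user="ETL_USER", password="***",
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    try:
        print(load_json_events("example-bucket", "events/2024-01-01.jsonl", conn))
    finally:
        conn.close()
```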
Posted 1 month ago
13.0 - 20.0 years
40 - 45 Lacs
Bengaluru
Work from Office
Principal Architect - Platform & Application Architect

Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software & data platform architecture and technology strategy, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle - ingestion, processing, storage, and analytics - while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components)
Define and document data blueprints, data domain models, and architectural standards
Lead build vs. buy evaluations for platform components and recommend best-fit tools and technologies
2. Data Ingestion & Processing
Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte
Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster)
Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data
3. Storage & Modeling
Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi)
Develop efficient data strategies for both OLAP and OLTP workloads
Guide schema evolution, data versioning, and performance tuning
4. Governance, Security, and Compliance
Establish data governance, cataloging, and lineage tracking frameworks
Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
Promote standardization and best practices across business units
5. Platform Engineering & DevOps
Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines
Ensure observability, reliability, and cost efficiency of the platform
Define SLAs, capacity planning, and disaster recovery plans
6. Collaboration & Mentorship
Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals
Mentor teams on architecture principles, technology choices, and operational excellence

Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
12+ years of experience in software engineering, including 5+ years in architectural leadership roles
Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js
Strong hands-on experience building scalable data platforms in on-premise/hybrid/cloud environments
Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg
Familiarity with data mesh, data fabric, and lakehouse paradigms
Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles
Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms
Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels
Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable
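The real-time ingestion layer referenced in the responsibilities above often starts with a consumer loop like the sketch below (confluent-kafka), which would hand events off to the landing zone of the lake/lakehouse. The broker address, topic, and group id are illustrative assumptions.

```python
# Minimal Kafka consumer-loop sketch for a real-time ingestion path; names are placeholders.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "ingestion-sensor-events",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["sensor-events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # hand off to the landing layer (e.g., object storage or a lakehouse table)
        print(event.get("device_id"), event.get("ts"))
finally:
    consumer.close()
```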
Posted 1 month ago
5.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Data Engineer

Key Responsibilities
As a Senior Data Engineer, you will:
Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python
AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage
ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow)
Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability
SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases
Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources
Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives
Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools

Mandatory Skills & Experience
Strong programming skills in Python and PySpark
Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift)
Proficiency in SQL and experience with DBT for data transformation
Experience with Snowflake for data warehousing
Knowledge of MongoDB, Kafka, and data streaming concepts
Good understanding of data architecture, data modeling, and data governance
Familiarity with large-scale data platforms

Essential Professional Skills
Excellent problem-solving skills
Ability to work independently or as part of a team
Experience with CI/CD and DevOps practices in a data engineering environment (plus)

Qualifications
Proven hands-on experience working with large-scale data platforms
Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT
Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka
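A hedged sketch of the PySpark-plus-Snowflake pipeline work this role describes, using the Snowflake Spark connector. The bucket, connection options, and table names are assumptions, and the connector and JDBC driver JARs must be on the Spark classpath.

```python
# Rough PySpark sketch: read raw files from S3, clean them, and append to a Snowflake table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3_to_snowflake").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/raw/orders/")      # placeholder path
cleaned = (
    orders.dropDuplicates(["order_id"])
          .withColumn("load_ts", F.current_timestamp())
)

sf_options = {
    "sfURL": "xy12345.snowflakecomputing.com",    # placeholder account URL
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RAW",
    "sfWarehouse": "LOAD_WH",
}

(cleaned.write
    .format("net.snowflake.spark.snowflake")      # Snowflake Spark connector
    .options(**sf_options)
    .option("dbtable", "RAW.ORDERS")
    .mode("append")
    .save())
```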
Posted 1 month ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview:
Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable and efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office.

Qualifications:
B.E./B.Tech in Computer Science, IT, or related discipline
MCS/MCA or equivalent preferred

Key Responsibilities:
Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions
Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery
Define and implement best practices for data modeling, integration, governance, and security
Collaborate with engineering and analytics teams to ensure data solutions meet business needs
Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control
Troubleshoot data issues and ensure system performance, reliability, and scalability
Guide and mentor junior data engineers and developers

Experience and Skills Required:
5 to 12 years of experience in data architecture, engineering, or analytics roles
Hands-on expertise in Databricks, especially Azure Databricks
Proficient in Snowflake, with working knowledge of DBT, Airflow, and GitHub
Experience with Google BigQuery and cloud-native data processing workflows
Strong knowledge of modern data architecture, data lakes, warehousing, and ETL pipelines
Excellent problem-solving, communication, and analytical skills

Nice to Have:
Certifications in Azure, Snowflake, or GCP
Experience with containerization (Docker/Kubernetes)
Exposure to real-time data streaming and event-driven architecture

Why Join Diacto Technologies?
Collaborate with experienced data professionals and work on high-impact projects
Exposure to a variety of industries and enterprise data ecosystems
Competitive compensation, learning opportunities, and an innovation-driven culture
Work from our collaborative office space in Baner, Pune

How to Apply:
Option 1 (Preferred): Copy and paste the following link in your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrTQoMsfqaoNwTxsE_qwWYcpcRyYJk7NzSUmO3LKb6rM-8FcU58CUPYQKc65n66feHor-TGdCEfyouj0NmKdgYcNbA==/
Option 2:
1. Visit our website's career section at https://www.diacto.com/careers/
2. Scroll down to the "Who are we looking for?" section
3. Find the listing for "Data Architect (Data Bricks)"
4. Proceed with the virtual interview by clicking on "Apply Now"
Posted 1 month ago
9.0 - 10.0 years
9 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Qualification: Total 9 years of experience, with a minimum of 5 years working as a DBT administrator

DBT Core & Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud; administer and manage DBT Cloud environments including users, permissions, job scheduling, and Git integration; onboard and enable DBT users on the DBT Cloud platform; work closely with users to support DBT adoption and usage
SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks
Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management
Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling
Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing
Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging
Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access
Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams
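For the job-scheduling side of DBT Cloud administration, runs can also be triggered programmatically through the dbt Cloud REST API (v2), as in the hedged sketch below. The account ID, job ID, and token are placeholders; verify the endpoint against current dbt Cloud documentation.

```python
# Illustrative trigger of a dbt Cloud job via the v2 REST API; all identifiers are placeholders.
import requests

ACCOUNT_ID = 12345                  # placeholder dbt Cloud account id
JOB_ID = 67890                      # placeholder job id
TOKEN = "dbt-cloud-api-token"       # placeholder service token

resp = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"cause": "Triggered from orchestration script"},
)
resp.raise_for_status()
print(resp.json()["data"]["id"])    # run id, useful for polling run status afterwards
```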
Posted 1 month ago
5.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities:
Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake
Implement data transformation workflows using DBT (Data Build Tool)
Write efficient, reusable, and reliable code in Python
Optimize and tune data solutions for performance and scalability
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions
Ensure data quality and integrity through rigorous testing and validation
Stay updated with the latest industry trends and technologies in data engineering

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Proven experience as a Data Engineer or in a similar role
Strong proficiency in AWS and Snowflake
Expertise in DBT and Python programming
Experience with data modeling, ETL processes, and data warehousing
Familiarity with cloud platforms and services
Excellent problem-solving skills and attention to detail
Strong communication and teamwork abilities
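Where DBT transformations need to be wired into a Python-driven pipeline like the one described above, dbt-core (1.5+) exposes a programmatic entry point. This is a minimal sketch; the project directory and model selector are assumptions.

```python
# Sketch of invoking dbt programmatically from Python (requires dbt-core 1.5+ installed).
from dbt.cli.main import dbtRunner

runner = dbtRunner()
result = runner.invoke([
    "build",                                  # run models, tests, seeds, and snapshots
    "--project-dir", "/opt/dbt/analytics",    # placeholder project path
    "--select", "marts.orders+",              # placeholder selector
])
if not result.success:
    raise RuntimeError("dbt build failed")
```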
Posted 1 month ago
5.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities
As a Senior Data Engineer, you will:
Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python
AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage
ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow)
Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability
SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases
Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources
Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives
Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools

Mandatory Skills & Experience
Strong programming skills in Python and PySpark
Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift)
Proficiency in SQL and experience with DBT for data transformation
Experience with Snowflake for data warehousing
Knowledge of MongoDB, Kafka, and data streaming concepts
Good understanding of data architecture, data modeling, and data governance
Familiarity with large-scale data platforms

Essential Professional Skills
Excellent problem-solving skills
Ability to work independently or as part of a team
Experience with CI/CD and DevOps practices in a data engineering environment (plus)

Qualifications
Proven hands-on experience working with large-scale data platforms
Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT
Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are looking for a Data Engineer to join our team and help us improve the platform that supports one of the best experimentation tools in the world. You will work side by side with other data engineers and site reliability engineers to improve the reliability, scalability, maintenance, and operations of all the data products that are part of the experimentation tool at Booking.com. Your day-to-day work includes, but is not limited to: maintenance and operation of data pipelines and products that handle data at big scale; development of capabilities for monitoring, alerting, testing, and troubleshooting of the data ecosystem of the experiment platform; and delivery of data products that produce metrics for experimentation at scale. You will collaborate with colleagues in Amsterdam to achieve results the right way, including engineering managers, product managers, engineers, and data scientists.

Key Responsibilities and Duties
Take ownership of multiple data pipelines and products and provide innovative solutions to reduce the operational workload required to maintain them
Rapidly develop next-generation scalable, flexible, and high-performance data pipelines
Contribute to the development of data platform capabilities such as testing, monitoring, debugging, and alerting to improve the development environment of data products
Solve issues with data and data pipelines, prioritizing based on customer impact
Take end-to-end ownership of data quality in complex datasets and data pipelines
Experiment with new tools and technologies, driving innovative engineering solutions to meet business requirements regarding performance, scaling, and data quality
Provide self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise
Serve as the main point of contact for technical and business stakeholders regarding data engineering issues, such as pipeline failures and data quality concerns

Role requirements
Minimum 5 years of hands-on experience in data engineering as a Data Engineer or as a Software Engineer developing data pipelines and products
Bachelor's degree in Computer Science, Computer or Electrical Engineering, Mathematics, or a related field, or 5 years of progressively responsible experience in the specialty as equivalent
Solid experience in at least one programming language; we use Java and Python
Experience building production data pipelines in the cloud, setting up data lakes and serverless solutions
Hands-on experience with schema design and data modeling
Experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Knowledge of Flink, CDC, Kafka, Airflow, Snowflake, DBT, or equivalent tools
Practical experience building data platform capabilities like testing, alerting, monitoring, debugging, and security
Experience working with big data
Experience working with teams located in different time zones is a plus
Experience with experimentation, statistics, and A/B testing is a plus
Posted 1 month ago
4.0 - 8.0 years
3 - 13 Lacs
Pune, Maharashtra, India
On-site
What Your Responsibilities Will Be
Design, develop, and maintain efficient ETL pipelines using DBT and Airflow to move and transform data from multiple sources into a data warehouse
Lead the development and optimization of data models (e.g., star and snowflake schemas) and data structures to support reporting
Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to manage and scale data storage, processing, and transformation processes
Work with business teams, marketing, and sales departments to understand data requirements and translate them into actionable insights and efficient data structures
Use advanced SQL and Python skills to query, manipulate, and transform data for multiple use cases and reporting needs
Implement data quality checks and ensure that data adheres to governance best practices, maintaining consistency and integrity across datasets
Use Git for version control and collaborate on data engineering projects

What You'll Need to be Successful
Bachelor's degree with 6+ years of experience in Data Engineering
ETL/ELT Expertise: experience building and improving ETL/ELT processes
Data Modeling: experience designing and implementing data models such as star and snowflake schemas, and working with denormalized tables to optimize reporting performance
Experience with cloud-based data platforms (AWS, Azure, Google Cloud)
SQL and Python Proficiency: advanced SQL skills for querying large datasets and Python for automation, data processing, and integration tasks
DBT Experience: hands-on experience with DBT (Data Build Tool) for transforming and managing data models

Good to Have Skills:
Familiarity with AI concepts such as machine learning (ML), natural language processing (NLP), and generative AI
Work with AI-driven tools and models for data analysis, reporting, and automation
Oversee and implement DBT models to improve the data transformation process
Experience in the marketing and sales domain, with lead management, marketing analytics, and sales data integration
Familiarity with business intelligence reporting tools such as Power BI for building data models and generating insights
Posted 1 month ago
8.0 - 13.0 years
2 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
What Your Responsibilities Will Be
Avalara is looking for a data analytics engineer who can solve and scale real-world big data challenges, with end-to-end analytics experience and the ability to tell a complex data story with data models and reliable, applicable metrics.
Build and deploy data science models using complex SQL, Python, DBT data modelling, and re-usable visualization components (Power BI/Tableau/Hex/R Shiny, etc.)
Solve needs at large scale by applying your software engineering and complex data skills
Lead and help develop a roadmap for the area and the team
Analyze fault tolerance, high availability, performance, and scale challenges, and solve them
Lead programs and collaborate with engineers, product managers, and technical program managers across teams
Understand the trade-offs between consistency, durability, and cost to build solutions that can meet the demands of growing services
Ensure operational readiness of the services and meet commitments to our customers regarding availability and performance
Manage end-to-end project plans and ensure on-time delivery; communicate status and the big picture to the project team and management
Work with business and engineering teams to identify scope, constraints, dependencies, and risks
Identify risks and opportunities across the business and guide solutions

What You'll Need to be Successful
Bachelor's degree in Engineering (Computer Science or a related field)
8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics and engineering projects
Expert-level experience in Power BI, SQL, and Snowflake
Experience with data visualization, Python, data modeling, and data storytelling
Experience architecting complex data marts applying DBT
Ability to architect and build data solutions that apply data quality and anomaly detection best practices
Experience building production analytics on the Snowflake data platform
Experience with AWS and Snowflake tools and services

Good to have:
Snowflake certification
Relevant certifications in data warehousing or cloud platforms
Experience architecting complex data marts applying DBT and Airflow
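A simple illustration of the "data quality and anomaly detection best practices" mentioned above: a z-score check on daily load volumes pulled from Snowflake. The query, table name, and threshold are assumptions for the sketch.

```python
# Z-score style anomaly check on a daily metric; table, query, and threshold are placeholders.
import pandas as pd
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="ANALYTICS_USER", password="***",
    warehouse="BI_WH", database="ANALYTICS", schema="MART",
)
try:
    df = pd.read_sql(
        "SELECT load_date, row_count FROM MART.DAILY_LOAD_STATS ORDER BY load_date",
        conn,
    )
finally:
    conn.close()

mean, std = df["ROW_COUNT"].mean(), df["ROW_COUNT"].std()
latest = df["ROW_COUNT"].iloc[-1]
z = (latest - mean) / std if std else 0.0
if abs(z) > 3:                      # flag loads more than 3 standard deviations from typical
    print(f"Anomalous load volume detected: {latest} rows (z={z:.1f})")
```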
Posted 1 month ago
8.0 - 13.0 years
3 - 11 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
What Your Responsibilities Will Be
Avalara is looking for a data analytics engineer who can solve and scale real-world big data challenges, with end-to-end analytics experience and the ability to tell a complex data story with data models and reliable, applicable metrics.
Build and deploy data science models using complex SQL, Python, DBT data modelling, and re-usable visualization components (Power BI/Tableau/Hex/R Shiny, etc.)
Solve needs at large scale by applying your software engineering and complex data skills
Lead and help develop a roadmap for the area and the team
Analyze fault tolerance, high availability, performance, and scale challenges, and solve them
Lead programs and collaborate with engineers, product managers, and technical program managers across teams
Understand the trade-offs between consistency, durability, and cost to build solutions that can meet the demands of growing services
Ensure operational readiness of the services and meet commitments to our customers regarding availability and performance
Manage end-to-end project plans and ensure on-time delivery; communicate status and the big picture to the project team and management
Work with business and engineering teams to identify scope, constraints, dependencies, and risks
Identify risks and opportunities across the business and guide solutions

What You'll Need to be Successful
Bachelor's degree in Engineering (Computer Science or a related field)
8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics and engineering projects
Expert-level experience in Power BI, SQL, and Snowflake
Experience with data visualization, Python, data modeling, and data storytelling
Experience architecting complex data marts applying DBT
Ability to architect and build data solutions that apply data quality and anomaly detection best practices
Experience building production analytics on the Snowflake data platform
Experience with AWS and Snowflake tools and services

Good to have:
Snowflake certification
Relevant certifications in data warehousing or cloud platforms
Experience architecting complex data marts applying DBT and Airflow
Posted 1 month ago
5.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Data Engineer

Key Responsibilities
As a Senior Data Engineer, you will:
Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python
AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage
ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow)
Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability
SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases
Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources
Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and Machine Learning (ML) initiatives
Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools

Mandatory Skills & Experience
Strong programming skills in Python and PySpark
Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift)
Proficiency in SQL and experience with DBT for data transformation
Experience with Snowflake for data warehousing
Knowledge of MongoDB, Kafka, and data streaming concepts
Good understanding of data architecture, data modeling, and data governance
Familiarity with large-scale data platforms

Essential Professional Skills
Excellent problem-solving skills
Ability to work independently or as part of a team
Experience with CI/CD and DevOps practices in a data engineering environment (plus)

Qualifications
Proven hands-on experience working with large-scale data platforms
Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT
Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka
Posted 1 month ago
7.0 - 8.0 years
1 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
As a Technical Lead - Azure Snowflake DBT, you will be part of an Agile team building healthcare applications and implementing new features while adhering to the best coding and development standards.

Responsibilities:
Design, develop, and maintain data processing systems using Azure Snowflake
Design and develop robust data integration solutions using Data Build Tool (DBT) and other data pipeline tools
Work with complex SQL functions and transform large data sets to meet business requirements
Drive creation and maintenance of data models that support analytics use cases and business objectives
Collaborate with various stakeholders, including technical teams, functional SMEs, and business users, to understand and address data needs
Create low-level design documents and unit test strategies and plans in adherence to defined processes and guidelines
Perform code reviews and unit test plan reviews to ensure high quality of code and deliverables
Ensure data quality and integrity through validation, cleansing, and enrichment processes
Support end-to-end testing and validation, including UAT and product testing
Take ownership of problems, demonstrate a proactive approach to problem solving, and lead solutions to completion

Educational Qualifications:
Engineering degree: BE/ME/BTech/MTech/BSc/MSc
Technical certification in multiple technologies is desirable

Mandatory Technical Skills:
Over 5 years of experience in cloud data architecture and analytics
Proficient in Azure, Snowflake, SQL, and DBT
Extensive experience in designing and developing data integration solutions using DBT and other data pipeline tools
Excellent communication and teamwork skills
Self-initiated problem solver with a strong sense of ownership

Good to Have Skills:
Experience in other data processing tools and technologies
Familiarity with agile development methodologies
Strong analytical and problem-solving skills
Experience in the healthcare domain
Posted 1 month ago
5.0 - 10.0 years
12 - 18 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Skills (required combinations):
SQL, Snowflake, Tableau
SQL, Snowflake, DBT, Data warehousing
SQL, Snowflake, Python, DBT, Data warehousing
SQL, Snowflake, Data warehousing, any ETL tool (Matillion preferred)
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Noida, Pune, Bengaluru
Hybrid
We are looking for a Snowflake Developer with deep expertise in Snowflake and DBT or SQL to help us build and scale our modern data platform.

Key Responsibilities:
Design and build scalable ELT pipelines in Snowflake using DBT/SQL
Develop efficient, well-tested DBT models (staging, intermediate, and marts layers)
Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy
Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency
Collaborate with cross-functional teams to gather data requirements and deliver data solutions

Required Qualifications:
5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake
Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management
Strong understanding of ELT patterns and modern data stack principles
Advanced SQL skills and experience with performance tuning in Snowflake

Interested candidates, please share your CV at himani.girnar@alikethoughts.com with the details below:
Candidate's name
Email and alternate email ID
Contact and alternate contact number
Total experience
Relevant experience
Current organization
Notice period
CCTC
ECTC
Current location
Preferred location
PAN card number
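One way to picture the incremental-model work described above: dbt also supports Python models on Snowflake (dbt-core 1.3+ via Snowpark), so the incremental pattern can be sketched as below. The model, column, and timestamp names are illustrative, and the equivalent SQL-plus-Jinja model is the more common form in practice.

```python
# Hedged sketch of a dbt Python model implementing an incremental load on Snowflake/Snowpark.
# Model, column, and timestamp names are illustrative assumptions.
def model(dbt, session):
    dbt.config(
        materialized="incremental",
        unique_key="event_id",
    )
    # Upstream staging model (e.g., data landed from S3 via Snowpipe)
    events = dbt.ref("stg_events")

    if dbt.is_incremental:
        # Only process rows newer than what is already in this model's table
        max_ts = session.sql(
            f"select max(loaded_at) from {dbt.this}"
        ).collect()[0][0]
        if max_ts is not None:
            events = events.filter(events["LOADED_AT"] > max_ts)

    return events
```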
Posted 1 month ago