5.0 - 10.0 years
4 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities Advanced understanding of AWS services Understanding of Cloud based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran) Hands on experience with Release Management and deployment Advanced understanding of Linux Administration (log files, command line, system services, custom and managed package installations). Knowledge of network protocols, security and compliance Strong knowledge of scripting (Python, PHP, Bash) Knowledge of application integration technologies (API, Middleware, Webhooks)
Posted 3 months ago
6.0 - 11.0 years
20 - 25 Lacs
Noida, Mumbai
Work from Office
Responsibilities:
- Act as the data domain expert for Snowflake in a collaborative environment, demonstrating understanding of data management best practices and patterns.
- Design and implement robust data architectures that meet and support business requirements, leveraging Snowflake platform capabilities.
- Develop and enforce data modeling standards and best practices for Snowflake environments.
- Develop, optimize, and maintain Snowflake data warehouses.
- Leverage Snowflake features such as clustering, materialized views, and semi-structured data processing to enhance data solutions.
- Ensure data architecture solutions meet performance, security, and scalability requirements.
- Stay current with the latest developments and features in Snowflake and related technologies, continually enhancing our data capabilities.
- Collaborate with cross-functional teams to gather business requirements, translate them into effective Snowflake data solutions, and provide data-driven insights.
- Stay updated on the latest trends and advancements in data architecture and Snowflake technologies.
- Provide mentorship and guidance to junior data engineers and architects.
- Troubleshoot and resolve data architecture-related issues effectively.

Skills Requirement:
- 5+ years of proven experience as a Data Engineer, with 3+ years as a Data Architect.
- Proficiency in Snowflake, with hands-on experience in features such as clustering, materialized views, and semi-structured data processing.
- Experience designing and building manual or auto-ingestion data pipelines using Snowpipe.
- Experience designing and developing automated monitoring processes on Snowflake using a combination of Python, PySpark, and Bash with SnowSQL.
- SnowSQL experience: developing stored procedures and writing queries to analyze and transform data.
- Working experience with ETL tools such as Fivetran, dbt, and MuleSoft.
- Expertise in Snowflake concepts such as setting up resource monitors, RBAC controls, scalable virtual warehouses, SQL performance tuning, zero-copy clone, and time travel, and automating them.
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration abilities.
- Relevant certifications (e.g., SnowPro Core / Advanced) are a must-have.
- Must have expertise in the AWS, Azure, and Salesforce Platform-as-a-Service (PaaS) models and their integration with Snowflake to load/unload data.
- Strong communication skills; an exceptional team player with effective problem-solving skills.

Educational Qualification Required: Master's degree in Business Management (MBA/PGDM) or Bachelor's degree in Computer Science, Information Technology, or a related field.
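For illustration, a minimal Python sketch of the kind of Snowflake automation the skills list describes (resource monitors, zero-copy clone, time travel), assuming the snowflake-connector-python package; account, table, and warehouse names are placeholders:

import snowflake.connector

# Connection parameters are placeholders.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: instant, storage-free copy for testing.
cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")

# Time travel: inspect the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print("Row count one hour ago:", cur.fetchone()[0])

# Resource monitor: cap monthly credits and suspend at 100% usage.
cur.execute("""
    CREATE OR REPLACE RESOURCE MONITOR analytics_rm
    WITH CREDIT_QUOTA = 100
         FREQUENCY = MONTHLY
         START_TIMESTAMP = IMMEDIATELY
         TRIGGERS ON 100 PERCENT DO SUSPEND
""")
cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET RESOURCE_MONITOR = analytics_rm")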
Posted 3 months ago
8.0 - 13.0 years
25 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Looking for a Cloud Engineering and Operations Specialist with a deep understanding of AWS services and cloud-based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran), plus hands-on experience with release management and deployment.
Posted 3 months ago
3.0 - 8.0 years
7 - 17 Lacs
Pune
Work from Office
vConstruct, a Pune-based construction technology company, is seeking a Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency.

Responsibilities:
- Design, develop, and maintain robust, scalable data pipelines and ETL/ELT processes to efficiently ingest, transform, and store data from diverse sources.
- Collaborate with cross-functional teams to design, implement, and sustain data-driven solutions that optimize data flow and system integration.
- Develop and maintain pipelines that move data in real-time (streaming), on-demand, and batch modes, whether inbound to a central data warehouse, outbound to other systems, or point-to-point, focusing on security, reusability, and data quality.
- Implement pipelines with comprehensive error-handling mechanisms that are visible to both technical and functional teams.
- Ensure optimized pipeline performance with timely data delivery, including appropriate alerts and notifications.
- Adhere to data engineering best practices for code management and automated deployments, incorporating validation and test automation across all data engineering efforts.
- Perform debugging, application issue resolution, and root cause analysis, and assist in proactive/preventive maintenance.
- Collaborate with the extended data team to define and enforce standards, guidelines, and data models that ensure data quality and promote best practices.
- Write and execute complete testing plans, protocols, and documentation for assigned portions of the data system or components; identify defects and create solutions for issues with code and integration into the data system architecture.
- Work closely with data analysts, business users, and developers to ensure the accuracy, reliability, and performance of data solutions.
- Monitor data performance, troubleshoot issues, and optimize existing solutions.
- Create and maintain technical documentation related to data architecture, integration flows, and processes.
- Organize and lead discussions with business and operational data stakeholders to understand requirements and deliver solutions.
- Partner with analysts, developers, and business users to build data solutions that are scalable, maintainable, and aligned with business objectives.

Qualifications:
- 3 to 6 years of experience as a Data Engineer, with a focus on building scalable data solutions.
- Over 3 years of experience in scripting languages such as Python for data processing, automation, and ETL development.
- 3+ years of hands-on experience working with Snowflake.
- 3+ years of experience with data integration tools such as Azure Data Factory, Fivetran, or Matillion.
- Strong experience writing complex, highly optimized SQL queries on large datasets (3+ years).
- Deep expertise in SQL, with a focus on database performance tuning and optimization.
- Experience working with data platforms like Snowflake, Azure Synapse, or Microsoft Fabric.
- Proven experience integrating APIs and handling diverse data sources.
- Ability to understand, consume, and utilize APIs, JSON, and web services for building data pipelines.
- Experience designing and implementing data pipelines using cloud platforms such as Azure or AWS.
- Familiarity with orchestration tools like Apache Airflow or equivalent.
- Experience with CI/CD practices and automation in data engineering workflows.
- Knowledge of dbt or similar tools for data transformation is a plus.
- Familiarity with Power BI or other data visualization tools is a plus.
- Strong problem-solving skills with the ability to troubleshoot complex data issues.
- Excellent communication skills and a collaborative mindset to work effectively in team environments.

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Equivalent academic and work experience can be considered.

About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and construction technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. For more information, please visit www.vconstruct.com

About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things (great teams, great buildings, great relationships), DPR is a truly great company. For more information, please visit www.dpr.com
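As a rough sketch of the error-handled, alert-raising pipeline step this posting describes (all names are hypothetical; the notifier is a stub for whatever Slack/email/PagerDuty hook a real pipeline would use):

import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def notify(message: str) -> None:
    # Stub: in practice this might post to Slack, email, or PagerDuty so
    # both technical and functional teams see the failure.
    log.error("ALERT: %s", message)

def run_step(name, fn, *args, retries=3):
    """Run one pipeline step with retries; alert on final failure."""
    for attempt in range(1, retries + 1):
        try:
            result = fn(*args)
            log.info("step %s succeeded on attempt %d", name, attempt)
            return result
        except Exception as exc:
            log.warning("step %s failed (attempt %d/%d): %s",
                        name, attempt, retries, exc)
    notify(f"pipeline step '{name}' failed after {retries} attempts")
    raise RuntimeError(f"step {name} exhausted retries")

# Usage: run_step("load_orders", load_orders, source_path)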
Posted 3 months ago
10.0 - 12.0 years
1 - 1 Lacs
Hyderabad
Hybrid
Role: Lead Data Engineer
Experience: 10+ years
Contract: 6+ months

Job Summary: We are seeking an experienced and results-oriented Lead Data Engineer to drive the design, development, and optimization of enterprise data solutions. This onsite role requires deep expertise in Fivetran, Snowflake, SQL, Python, and data modeling, as well as a demonstrated ability to lead teams and mentor both Data Engineers and BI Engineers. The role will play a critical part in shaping the data architecture, improving analytics readiness, and enabling self-service business intelligence through scalable star schema designs.

Key Responsibilities:
- Lead end-to-end data engineering efforts, including architecture, ingestion, transformation, and delivery.
- Architect and implement Fivetran-based ingestion pipelines and Snowflake data models.
- Create optimized star schemas to support analytics, self-service BI, and KPI reporting.
- Analyze and interpret existing report documentation and KPIs to guide modeling and transformation strategies.
- Design and implement efficient, scalable data workflows using SQL and Python.
- Review and extend existing reusable data engineering templates and frameworks.
- Provide technical leadership and mentorship to Data Engineers and BI Engineers, ensuring best practices in coding, modeling, performance tuning, and documentation.
- Collaborate with business stakeholders to gather requirements and translate them into scalable data solutions.
- Work closely with BI teams to enable robust reporting and dashboarding capabilities.

Required Skills:
- 7+ years of hands-on data engineering experience, with 2+ years in a technical leadership or lead role.
- Deep expertise in Fivetran, Snowflake, and SQL development.
- Proficiency in Python for data transformation and orchestration.
- Strong understanding of data warehousing principles, including star schema design and dimensional modeling.
- Experience analyzing business KPIs and reports to influence data model design.
- Demonstrated ability to mentor both Data Engineers and BI Engineers and provide architectural guidance.
- Excellent problem-solving, communication, and stakeholder management skills.

Share CV to: Careers@rwavesoftech.com
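For illustration, a minimal sketch of the star-schema loading pattern this role centers on: a type 1 upsert into a customer dimension, written for a DB-API cursor (for example, one from snowflake-connector-python). Table and column names are hypothetical; a production design would typically add a surrogate key and, for SCD type 2, effective-date columns.

def refresh_dim_customer(cursor) -> None:
    """Type 1 upsert from staging into the customer dimension of a star schema.

    Expects a DB-API cursor (e.g., from snowflake-connector-python).
    Table and column names are illustrative.
    """
    cursor.execute("""
        MERGE INTO dim_customer AS d
        USING stg_customer AS s
            ON d.customer_id = s.customer_id
        WHEN MATCHED THEN UPDATE SET
            d.customer_name = s.customer_name,
            d.segment = s.segment
        WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, segment)
            VALUES (s.customer_id, s.customer_name, s.segment)
    """)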
Posted 3 months ago
5.0 - 10.0 years
5 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Role & responsibilities Design, develop, and optimize scalable data pipelines for ETL/ELT processes. Develop and maintain Python-based data processing scripts and automation tools. Write and optimize complex SQL queries (preferably in Snowflake) for data transformation and analytics. Experience with Jenkins or other CI/CD tools. Experience developing with Snowflake as the data platform. Experience with ETL/ELT tools (preferably Fivetran, dbt). Implement version control best practices using Git or other tools to manage code changes. Collaborate with cross-functional teams (analysts, product managers, and engineers) to understand business needs and translate them into technical data solutions. Ensure data integrity, security, and governance across multiple data sources. Optimize query performance and database architecture for efficiency and scalability. Lead troubleshooting and debugging efforts for data-related issues. Document data workflows, architectures, and best practices to ensure maintainability and knowledge sharing. Preferred candidate profile 5+ years of experience in Data Engineering, Software Engineering, or a related field. Bachelors or masters degree in computer science, Computer Engineering, or a related discipline High proficiency in SQL (preferably Snowflake) for data modeling, performance tuning, and optimization. Strong expertise in Python for data processing and automation. Experience with Git or other version control tools in a collaborative development environment. Strong communication skills and ability to collaborate with cross-functional teams for requirements gathering and solution design. Experience working with large-scale, distributed data systems and cloud data warehouse.
Posted 3 months ago
5 - 7 years
15 - 25 Lacs
Pune, Mumbai (All Areas)
Hybrid
DUTIES AND RESPONSIBILITIES:
- Build ETL (extract, transform, load) jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.

SUPERVISORY RESPONSIBILITIES: This job has no supervisory responsibilities.

QUALIFICATIONS:
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 3-5 years of experience with strong SQL query/development skills.
- Ability to develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator.
- Well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open-source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
- Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 4 months ago
8 - 13 years
12 - 22 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Greetings of the day!

We have an URGENT on-rolls opening for the position of "Snowflake Architect" at one of our reputed clients, working from home.

Name of the Company: Confidential
Rolls: On-rolls
Mode of Employment: FTE / Sub-Con / Contract
Job Location: Remote
Job Work Timings: Night shift, 06:00 pm to 03:00 am IST
Nature of Work: Work from home
Working Days: 5 days weekly
Educational Qualification: Bachelor's degree in computer science, BCA, engineering, or a related field
Salary: Maximum CTC would be 23 LPA (salary and benefits package will be commensurate with experience and qualifications; PF and medical insurance cover available)
Languages Known: English, Hindi, and the local language
Experience: 9+ years of relevant experience in the same domain

Job Summary: We are seeking a highly skilled and experienced Snowflake Architect to lead the design, development, and implementation of scalable, secure, and high-performance data warehousing solutions on the Snowflake platform. The ideal candidate will possess deep expertise in data modelling, cloud architecture, and modern ELT frameworks. You will be responsible for architecting robust data pipelines, optimizing query performance, and ensuring enterprise-grade data governance and security. In this role, you will collaborate with data engineers, analysts, and business stakeholders to deliver efficient data solutions that drive informed decision-making across the organization.

Key Responsibilities:
- Manage and maintain the Snowflake platform to ensure optimal performance and reliability.
- Collaborate with data engineers and analysts to design and implement data pipelines.
- Develop and optimize SQL queries for efficient data retrieval and manipulation.
- Create custom scripts and functions using JavaScript and Python to automate platform tasks.
- Troubleshoot platform issues and provide timely resolutions.
- Implement security best practices to protect data within the Snowflake platform.
- Stay updated on the latest Snowflake features and best practices to continuously improve platform performance.

Required Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Minimum of nine years of experience managing any database platform.
- Proficiency in SQL for data querying and manipulation.
- Strong programming skills in JavaScript and Python.
- Experience in optimizing and tuning Snowflake for performance.

Preferred Skills: technical expertise, cloud and integration, performance and optimization, security and governance, soft skills.

THE PERSON SHOULD BE WILLING TO JOIN IN 7-10 DAYS' TIME OR BE AN IMMEDIATE JOINER.

Interested candidates, please share your updated resume with us at executivehr@monalisammllp.com; you can also call or WhatsApp us at 9029895581, along with the following details:
- Current/last net in hand (salary will be offered based on the interview/technical evaluation process)
- Notice period and LWD (was/will be)
- Reason for changing the job
- Total years of experience in the specific field
- Location you are from
- Do you hold any offer from any other association?

Regards,
Monalisa Group of Services
HR Department
9029895581 (call / WhatsApp)
executivehr@monalisammllp.com
Posted 4 months ago
5 - 8 years
0 - 1 Lacs
Hyderabad
Hybrid
Job Title: Sr. Data Engineer (Fivetran SDK Connector / Hightouch Developer)
Work Location: Hyderabad
Years of Experience: 5 to 8 years
Shift Timings: 3 PM to 12 AM

Skill Set: Fivetran and Fivetran SDK development; expertise in Python for connector development; understanding of Hightouch.

Roles & Responsibilities:
- Design, build, and maintain custom connectors using the Fivetran SDK.
- Develop and manage Reverse ETL pipelines using Hightouch.
- Integrate data from diverse APIs and source systems into cloud data warehouses.
- Ensure data reliability, quality, and performance across pipelines.
- Optimize SQL transformations and data workflows.
- Collaborate with data engineers, analysts, and stakeholders to deliver high-quality data solutions.
- Monitor and troubleshoot connector issues, ensuring robust logging and error handling.

Other Specifications:
- 3+ years of hands-on experience with Fivetran and the Fivetran SDK.
- Strong proficiency in Python, especially for SDK-based connector development.
- Advanced SQL skills for data manipulation and transformation.
- Practical experience with Hightouch for Reverse ETL use cases.
- Experience with cloud data warehouses: Snowflake, BigQuery, or Redshift.
- Strong understanding of REST APIs, webhooks, and authentication mechanisms.
- Solid knowledge of ETL/ELT pipelines, data modeling, and data syncing.
- Excellent problem-solving, debugging, and documentation skills.
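As a rough sketch of the Fivetran SDK connector work described above, following the SDK's documented schema/update/checkpoint pattern (the source endpoint, fields, and configuration keys are hypothetical; verify details against the current fivetran-connector-sdk docs):

import requests
from fivetran_connector_sdk import Connector, Operations as op

def schema(configuration: dict):
    # Declare destination tables and their primary keys.
    return [{"table": "ticket", "primary_key": ["id"]}]

def update(configuration: dict, state: dict):
    # Incremental pull from a hypothetical source API using a stored cursor.
    cursor = state.get("since", "1970-01-01T00:00:00Z")
    resp = requests.get(
        "https://api.example.com/tickets",  # hypothetical endpoint
        params={"updated_since": cursor},
        headers={"Authorization": f"Bearer {configuration['api_key']}"},
        timeout=30,
    )
    resp.raise_for_status()
    for row in resp.json()["data"]:
        yield op.upsert(table="ticket", data=row)
        cursor = max(cursor, row["updated_at"])
    # Checkpoint so the next sync resumes from the newest record seen.
    yield op.checkpoint({"since": cursor})

connector = Connector(update=update, schema=schema)

if __name__ == "__main__":
    connector.debug()  # local test run against the SDK's debug harness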
Posted 4 months ago
1 - 4 years
3 - 6 Lacs
Pune
Work from Office
The Data Integration Engineer will play a key role in designing, building, and maintaining data integrations between core business systems such as Salesforce and SAP and our enterprise data warehouse on Snowflake. This position is ideal for an early-career professional (1 to 4 years of experience) eager to contribute to transformative data integration initiatives and learn in a collaborative, fast-paced environment.

Duties & Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and translate them into data integration solutions.
- Develop and maintain ETL/ELT pipelines using modern tools like Informatica IDMC to connect source systems to Snowflake.
- Ensure data accuracy, consistency, and security in all integration workflows.
- Monitor, troubleshoot, and optimize data integration processes to meet performance and scalability goals.
- Support ongoing integration projects, including Salesforce and SAP data pipelines, while adhering to best practices in data governance.
- Document integration designs, workflows, and operational processes for effective knowledge sharing.
- Assist in implementing and improving data quality controls at the start of processes to ensure reliable outcomes.
- Stay informed about the latest developments in integration technologies and contribute to team learning and improvement.

Required Skills and Experience:
- 5+ years of hands-on experience in data integration, ETL/ELT development, or data engineering.
- Proficiency in SQL and experience working with relational databases such as Snowflake, PostgreSQL, or SQL Server.
- Familiarity with data integration tools such as Fivetran, Informatica Intelligent Data Management Cloud (IDMC), or similar platforms.
- Basic understanding of cloud platforms like AWS, Azure, or GCP.
- Experience working with structured and unstructured data in varying formats (e.g., JSON, XML, CSV).
- Strong problem-solving skills and the ability to troubleshoot data integration issues effectively.
- Excellent verbal and written communication skills, with the ability to document technical solutions clearly.

Preferred Skills and Experience:
- Exposure to integrating business systems such as Salesforce or SAP into data platforms.
- Knowledge of data warehousing concepts and hands-on experience with Snowflake.
- Familiarity with APIs, event-driven pipelines, and automation workflows.
- Understanding of data governance principles and data quality best practices.

Education: Bachelor's degree in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
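For illustration, a small self-contained Python sketch of the format-handling skill listed above: flattening nested JSON records (Salesforce-style, with hypothetical fields) into CSV rows ready to stage into a warehouse:

import csv
import io
import json

# Hypothetical nested source record, as a Salesforce-style API might return.
raw = '[{"Id": "001", "Name": "Acme", "Address": {"City": "Pune", "Country": "IN"}}]'

def flatten(record, parent="", sep="_"):
    """Recursively flatten nested dicts: {"Address": {"City": ...}} -> Address_City."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

rows = [flatten(r) for r in json.loads(raw)]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())  # CSV ready to PUT into a warehouse stage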
Posted 4 months ago
5.0 - 8.0 years
22 - 30 Lacs
Gurugram
Work from Office
Overview: We are looking for highly experienced Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
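As a rough illustration of the Medallion layering this role drives, a minimal Dagster sketch with bronze/silver/gold assets (in-memory stubs stand in for the Snowflake reads and writes a real pipeline would perform):

from dagster import asset, materialize

@asset
def bronze_orders():
    # Raw ingest layer (e.g., landed by Fivetran); stub rows for illustration.
    return [{"order_id": 1, "amount": "120.50", "status": " SHIPPED "}]

@asset
def silver_orders(bronze_orders):
    # Cleansed/conformed layer: typed, trimmed values.
    return [{**r, "amount": float(r["amount"]), "status": r["status"].strip()}
            for r in bronze_orders]

@asset
def gold_revenue_by_status(silver_orders):
    # Business-facing mart layer: aggregates for reporting.
    totals = {}
    for r in silver_orders:
        totals[r["status"]] = totals.get(r["status"], 0.0) + r["amount"]
    return totals

if __name__ == "__main__":
    result = materialize([bronze_orders, silver_orders, gold_revenue_by_status])
    print(result.success)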
Posted Date not available
5.0 - 8.0 years
22 - 30 Lacs
Chennai
Work from Office
Overview: We are looking for highly experienced Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
Posted Date not available
5.0 - 8.0 years
22 - 30 Lacs
Hyderabad
Work from Office
Overview: We are looking for highly experienced Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
Posted Date not available
5.0 - 8.0 years
22 - 30 Lacs
Pune
Work from Office
Overview: We are looking for highly experienced Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
Posted Date not available
5.0 - 8.0 years
22 - 30 Lacs
Bengaluru
Work from Office
Overview: We are looking for highly experienced Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
Posted Date not available
7.0 - 12.0 years
0 - 3 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As a part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in data engineering, focusing on building and maintaining data environments.
- Have at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Have a strong background in developing data products and APIs, and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Are familiar with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce and NetSuite systems.
- Experience in SaaS environments.
- Designed and deployed ML models.
- Experience with events and streaming data.
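For illustration, a minimal Airflow sketch of the extract/check/load pattern described above, using the TaskFlow API (Airflow 2.x assumed; the extract and load bodies are stubs, and the quality check fails fast to protect downstream SLAs):

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def elt_daily():
    @task
    def extract():
        # Stub: pull from a third-party API, as the posting describes.
        return [{"id": 1, "mrr": 99.0}]

    @task
    def quality_check(rows):
        # Fail fast if the feed is empty, so the SLA breach is visible early.
        if not rows:
            raise ValueError("no rows extracted")
        return rows

    @task
    def load(rows):
        # Stub: write to the warehouse via the relevant provider hook.
        print(f"loaded {len(rows)} rows")

    load(quality_check(extract()))

elt_daily()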
Posted Date not available
8.0 - 12.0 years
7 - 17 Lacs
Hyderabad
Hybrid
We are looking for a Senior DevOps Engineer to join our Data & Analytics team and lead infrastructure and deployment efforts for our modern data platform. In this role, you will work closely with data engineers, architects, and analysts to operationalize and support critical components of our cloud-based data ecosystem. This is a contract role for an experienced engineer with deep expertise in GitHub Actions, Terraform, and Microsoft Azure, along with hands-on experience managing infrastructure and deployments for Snowflake, Databricks, Coalesce, Confluent (Kafka), Astronomer (Airflow), and Azure Data Factory. You'll also help establish and manage engineering standards around GitHub repositories, branching strategies, access policies, and team documentation.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using GitHub Actions to support infrastructure and data pipeline deployments.
- Develop and maintain infrastructure using Terraform across the Azure cloud platform.
- Define and implement branching strategies, access controls, and repository policies in GitHub.
- Set up and maintain GitHub repositories, actions, secrets, and environment configurations for the Data & Analytics team.
- Own and support deployment and infrastructure management for: Snowflake (data warehouse), Databricks (analytics and compute), Coalesce (data transformation), Confluent Kafka (event streaming), Astronomer / Apache Airflow (workflow orchestration), and Azure Data Factory (data integration and ETL).
- Collaborate with engineering teams to enable scalable, secure, and reliable infrastructure.
- Implement monitoring, alerting, and logging to ensure system health and availability.
- Contribute to best practices for security, automation, and operational efficiency.

Required Qualifications:
- 7+ years of experience in DevOps, cloud infrastructure, or platform engineering roles.
- Strong, hands-on experience with GitHub Actions (CI/CD automation), Terraform (infrastructure-as-code), and Microsoft Azure (cloud services and infrastructure).
- Proven track record managing infrastructure and deployments for Snowflake, Databricks, Coalesce, Confluent (Kafka), Astronomer / Airflow, and Azure Data Factory.
- Proficiency in scripting (Python, Bash, or PowerShell).
- Experience with containerization (Docker) and an understanding of cloud networking, IAM, and monitoring tools.

Preferred Qualifications:
- Experience with Fivetran, Azure SQL Database, or similar data integration tools.
- Familiarity with data security, access management, and compliance in cloud environments.
- Strong communication and collaboration skills with distributed teams.
Posted Date not available
8.0 - 12.0 years
9 - 19 Lacs
Hyderabad
Remote
We are looking for an experienced Fivetran Developer to configure, manage, and troubleshoot data replication pipelines as part of our cloud data integration initiatives. The role focuses on setting up Change Data Capture (CDC) jobs and ensuring accurate, real-time or near-real-time data replication from various source systems such as SQL Server, Oracle, Salesforce, and APIs into Snowflake, Azure Data Lake, and Iceberg formats/catalogs.

Key Responsibilities:
- Configure and manage Fivetran connectors for replicating data from SQL Server, Oracle, Salesforce, and APIs.
- Set up, monitor, and troubleshoot CDC jobs on source systems to maintain real-time or near-real-time data replication.
- Ensure accurate and reliable data delivery into Snowflake, Azure Data Lake, and Iceberg formats/catalogs.
- Collaborate with source system owners to validate CDC configurations and necessary permissions.
- Monitor the performance, reliability, and data consistency of Fivetran pipelines.
- Resolve replication issues and data synchronization failures, and adapt to schema changes in partnership with technical and business teams.
- Maintain thorough documentation of all pipeline configurations, changes, and troubleshooting steps.

Required Experience & Skills:
- Minimum 2+ years of hands-on experience with Fivetran, focusing on data replication use cases.
- Expertise in configuring CDC for SQL Server, Oracle, Salesforce, and/or APIs.
- Proven experience replicating data into Snowflake and Azure Data Lake; knowledge of Iceberg formats/catalogs is a plus.
- Strong understanding of data replication concepts, incremental loads, and sync frequency.
- Proficiency in SQL for data validation and troubleshooting.
- Excellent communication and documentation skills.

Preferred Qualifications:
- Experience with data security and compliance during data transfer processes.
- Familiarity with cloud data ecosystems such as Azure, AWS, or GCP.

Note: The candidate needs to work in the Dubai time zone.
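As a rough sketch of the pipeline monitoring described above, using Fivetran's public REST API (the key, secret, and connector id are placeholders; endpoint paths and response fields should be verified against current Fivetran documentation):

import requests

API = "https://api.fivetran.com/v1"
AUTH = ("FIVETRAN_API_KEY", "FIVETRAN_API_SECRET")  # placeholders
CONNECTOR_ID = "my_connector_id"                    # placeholder

# Check connector health; Fivetran reports setup and sync state per connector.
resp = requests.get(f"{API}/connectors/{CONNECTOR_ID}", auth=AUTH, timeout=30)
resp.raise_for_status()
status = resp.json()["data"]["status"]
print("setup:", status.get("setup_state"), "| sync:", status.get("sync_state"))

# If the connector is healthy but idle, trigger a fresh sync.
if status.get("setup_state") == "connected" and status.get("sync_state") != "syncing":
    requests.post(f"{API}/connectors/{CONNECTOR_ID}/sync", auth=AUTH, timeout=30)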
Posted Date not available
6.0 - 11.0 years
6 - 14 Lacs
Pune
Hybrid
Project Role Description: A Snowflake Developer will be responsible for designing and developing data solutions within the Snowflake cloud data platform using Snowpark, Apache Airflow, Data Build Tool (dbt), and Fivetran.

Work location: Pune/Remote.

Education: Graduate or Post-Graduate in Computer Science / Information Technology / Engineering.

Job Requirements (must-have skills):
- 6 to 11 years of IT experience as a Snowflake Developer.
- Experience in the Telecom domain (BSS/OSS).
- Minimum 4+ years of experience on Snowflake is a must.
- Strong experience with Snowflake (data modeling, performance tuning, security).
- Proficiency in dbt (Data Build Tool) for data transformation is a must (model creation, Jinja templates, macros, and testing).
- Advanced SQL skills are a must: writing, debugging, and performance-tuning queries.
- Workflow orchestration proficiency with Apache Airflow is a must (developing, scheduling, and monitoring).
- Experience with the integration tool Fivetran is a must.
- Experience working with dataframes using Snowpark is a must.
- Experience automating data workflows and integrating with Azure DevOps CI/CD pipelines is a must.
- Strong Python and Java scripting for data transformation and automation.
- Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, the optimizer, Metadata Manager, data sharing, and stored procedures.
- Managing sets of XML, JSON, and CSV from different sources.
- Build, monitor, and optimize ETL and ELT processes with data models.
- Continually review and audit data models for enhancement.
- Hands-on experience with code updates, new code development, and reverse engineering.
- Ownership from start to finish for the allocated project work.
- Client interaction experience is a must for demonstrating multiple data solutions.
- Snowflake SnowPro certified professionals preferred.
- Regular engagement with teams for status reporting and routine activities.
- Implementation of data streaming solutions from different sources for data migration and transformation.

Soft Skills:
- Hands-on analytical, problem-solving, and debugging skills.
- Ability to work under pressure.
- Flexible to work independently or in a team.
- Excellent communication skills and the ability to present results concisely to technical and non-technical stakeholders.
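For illustration, a minimal Snowpark sketch of the dataframe work this role requires, assuming the snowflake-snowpark-python package (connection parameters, table, and column names are placeholders):

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Connection parameters are placeholders.
session = Session.builder.configs({
    "account": "your_account", "user": "your_user", "password": "...",
    "warehouse": "ETL_WH", "database": "TELCO", "schema": "RAW",
}).create()

# Dataframe-style transformation, pushed down to Snowflake for execution.
orders = session.table("RAW.ORDERS")
shipped = (orders
           .filter(col("STATUS") == "SHIPPED")
           .group_by("REGION")
           .agg(sum_("AMOUNT").alias("TOTAL_AMOUNT")))
shipped.write.mode("overwrite").save_as_table("CURATED.REVENUE_BY_REGION")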
Posted Date not available
10.0 - 12.0 years
12 - 14 Lacs
Chennai, Bengaluru
Work from Office
Role Overview: Join the My We team to lead the transformation of the pet owner digital experience. As a Senior Full Stack Engineer, you'll help build scalable, modern experiences across the UX, API, and service layers.

What You'll Do:
- Develop new features end-to-end: UX to APIs to services
- Write clean, scalable, and maintainable code
- Implement REST/GraphQL APIs and database migrations
- Collaborate with designers and PMs to deliver intuitive user experiences
- Lead discussions around architecture and technical decisions
- Participate in scrum ceremonies and peer code reviews
- Optimize performance and reduce technical debt
- Uphold accessibility and testing best practices

What You Should Have:
- Degree in CS or a related field (or equivalent experience)
- 5+ yrs client-side JS (Vue.js) and REST/GraphQL API experience
- 5+ yrs with relational DBs (MySQL preferred)
- 2+ yrs with server-driven UI frameworks (e.g., Symfony UX, Laravel Livewire)
- Familiarity with Figma, TailwindCSS, and UI systems
- Experience writing tests (unit, E2E, feature)
- Strong Git/version control and SDLC understanding
- Experience working in cross-functional agile teams
- Excellent communication, code review, and architecture leadership skills
- A problem-solving mindset and team-oriented spirit

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted Date not available
5.0 - 9.0 years
15 - 30 Lacs
Pune, Chennai, Bengaluru
Hybrid
Key Skills: Snowflake, Fivetran, Azure, Python

Overview: We are looking for highly experienced Snowflake Data Engineers to drive the data platform modernization strategy for a global organization. In this senior role, you will architect and oversee the development of scalable, high-performance data pipelines and implement advanced data engineering patterns on Snowflake. You'll lead the integration of diverse data sources, optimize transformation workflows, and contribute to enterprise-wide data architecture initiatives.

Key Responsibilities:
- Architect and lead the development of robust, scalable data pipelines to ingest and process data from sources such as Azure Blob Storage, Business Central ERP, CRM, marketing platforms, and offline channels.
- Oversee the implementation and optimization of ingestion and orchestration tools like Fivetran and Dagster (or similar), ensuring pipeline resilience and extensibility.
- Drive the implementation of the Medallion architecture on Snowflake, collaborating with Data Architects and BI teams to ensure alignment with the broader data strategy.
- Lead data transformation and modeling initiatives using dbt, ensuring accuracy, reusability, and maintainability of transformation logic.
- Design and manage domain-specific DataMarts across business verticals such as Marketing, Supply Chain, and Finance, supporting both operational and analytical use cases.
- Define and enforce standards for data quality, observability, performance optimization, and data governance within the engineering lifecycle.
- Act as a technical lead for offshore/onshore engineering teams, guiding best practices around CI/CD, version control, testing, and delivery frameworks.
- Contribute to the evolution of data engineering tooling and practices, including metadata management, cataloging, and lineage tracking.
- Partner with downstream teams using Power BI and other analytics tools to ensure alignment of data structures with reporting requirements.

Qualifications:
- Experience in data engineering with a strong focus on cloud data platforms (Snowflake preferred).
- Deep expertise in data ingestion frameworks (Fivetran, custom pipelines) and workflow orchestration tools (Dagster, Airflow, Prefect).
- Advanced proficiency in dbt and data modeling techniques (dimensional, star/snowflake schemas, Medallion architecture).
- Proven experience designing and optimizing cloud-based data architectures integrating multiple structured and semi-structured data sources.
- Solid understanding of CI/CD, Git-based workflows, and production-grade pipeline management.
- Experience with data quality frameworks, lineage, and governance solutions.
- Strong cross-functional collaboration skills and the ability to lead technical conversations with Architects, Analysts, and Business Stakeholders.
- Familiarity with BI tools such as Power BI and their data dependencies is a strong plus.
- Prior leadership experience or mentoring of junior engineers preferred.
Posted Date not available