
252 ETL Pipelines Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

20 - 32 Lacs

Bengaluru

Hybrid

Job Title: Senior Data Engineer
Experience: 9+ Years
Location: Whitefield, Bangalore
Notice Period: Currently serving or immediate joiners

Role & Responsibilities: Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools. Develop robust data models to support analytical and reporting requirements. Automate data engineering processes using appropriate scripting languages and frameworks. Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions. Serve as a liaison between engineering and business teams on all data-related initiatives. Automate monitoring and alerting for data pipelines, products, and dashboards; provide support for issue resolution, including on-call responsibilities. Write optimized and modular SQL queries, including view and table creation as required. Define and implement best practices for data validation, ensuring alignment with enterprise standards. Manage QA data environments, including test data creation and maintenance.

Qualifications: 9+ years of experience in data engineering or a related field. Proven experience with Agile software development practices. Strong SQL skills and experience working with both RDBMS and NoSQL databases. Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift. Proficiency with cloud technologies, preferably AWS. Deep knowledge of data modeling, data warehousing, and data lake concepts. Practical experience with ETL/ELT tools and frameworks. 5+ years of experience in application development using Python, SQL, Scala, or Java. Experience working with real-time data streaming and associated platforms.

Note: The professional should be based in Bangalore, as one technical round will be conducted face-to-face (F2F) at the Bellandur, Bangalore office.
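As an illustration of the pipeline and modular-SQL work this role describes, here is a minimal, hedged sketch of an extract-transform-load step, assuming a pandas/SQLAlchemy stack; the connection URLs, table names, and the orders_summary view are hypothetical placeholders rather than details from the posting.

```python
# Hypothetical sketch only: connection strings, tables, and the view name
# below are illustrative, not from the job posting.
import pandas as pd
from sqlalchemy import create_engine, text

source = create_engine("postgresql://user:pass@source-host/sales")      # hypothetical
warehouse = create_engine("postgresql://user:pass@wh-host/analytics")   # hypothetical

def run_pipeline() -> None:
    # Extract: pull orders from the operational store.
    df = pd.read_sql("SELECT order_id, amount, placed_at FROM orders", source)

    # Transform: basic cleansing and validation before load.
    df = df.dropna(subset=["order_id", "amount"])
    if (df["amount"] < 0).any():
        raise ValueError("negative amounts indicate bad source data")

    # Load: append into a warehouse staging table.
    df.to_sql("stg_orders", warehouse, if_exists="append", index=False)

    # Modular SQL: expose a reporting view over the staged data.
    with warehouse.begin() as conn:
        conn.execute(text(
            "CREATE OR REPLACE VIEW orders_summary AS "
            "SELECT CAST(placed_at AS DATE) AS day, SUM(amount) AS revenue "
            "FROM stg_orders GROUP BY CAST(placed_at AS DATE)"
        ))

if __name__ == "__main__":
    run_pipeline()
```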

Posted 1 month ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are interested in individuals who are passionate about engineering, motivated by solving complex problems, and inspired by building systems that scale. You should be comfortable navigating modern data platforms, cloud technologies, and DevOps practices.

Responsibilities: Partner with strategists and portfolio managers to deliver tools that enable quantitative research, automation, and operational efficiency. Implement DevOps best practices, including CI/CD, infrastructure as code, observability, and automated testing, to ensure reliability and scalability of systems. Participate in code reviews, design discussions, and technical architecture decisions. Contribute to the technical direction of engineering projects, ensuring maintainability, security, and performance.

Qualifications: Bachelor's or advanced degree in Computer Science, Engineering, or a related technical field. Proficiency in at least one programming language and proven experience in the production software development life cycle (SDLC). Solid understanding of DevOps principles, including containerization, CI/CD workflows, and infrastructure automation. Familiarity with data modeling, ETL pipelines, and data governance concepts is beneficial. Excellent communication skills. A self-starter with the ability to thrive in a fast-paced, global team environment.

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Must have a minimum of 4 years of relevant experience. Proficient in Python with hands-on experience building ETL pipelines for data extraction, transformation, and validation. Strong SQL skills for working with structured data. Familiar with Grafana or Kibana for data visualization and monitoring/dashboards. Experience with databases such as MongoDB, Elasticsearch, and MySQL. Comfortable working in Linux environments using common Unix tools. Hands-on experience with Git, Docker, and virtual machines.
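To make the extract-and-validate requirement concrete, here is a small hedged sketch assuming a PyMongo client; the URI, database, collection, and field names are invented for illustration.

```python
# Illustrative sketch: the Mongo URI, database/collection, and required
# fields are hypothetical, not taken from the posting.
from pymongo import MongoClient

REQUIRED_FIELDS = {"event_id", "timestamp", "status"}

def extract_and_validate() -> list:
    client = MongoClient("mongodb://localhost:27017")  # hypothetical URI
    events = client["telemetry"]["events"]             # hypothetical names
    good, rejected = [], 0
    for doc in events.find({}, limit=1000):
        if REQUIRED_FIELDS <= doc.keys():
            good.append(doc)
        else:
            rejected += 1
    # Emit counts a Grafana/Kibana dashboard could chart or alert on.
    print(f"validated={len(good)} rejected={rejected}")
    return good

if __name__ == "__main__":
    extract_and_validate()
```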

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 10 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Within Goldman Sachs Asset Management, software engineers are at the heart of our platform, building scalable systems, designing robust frameworks, and delivering high-quality software that drives our business. Working in close collaboration with quantitative researchers and portfolio managers, software engineers enable critical capabilities across our asset management business. As a member of our engineering team, you will bring strong design and development experience to create modern, resilient, and data-driven systems.

Who We Look For: We are interested in individuals who are passionate about engineering, motivated by solving complex problems, and inspired by building systems that scale. You should be comfortable navigating modern data platforms, cloud technologies, and DevOps practices.

Responsibilities: Partner with strategists and portfolio managers to deliver tools that enable quantitative research, automation, and operational efficiency. Implement DevOps best practices, including CI/CD, infrastructure as code, observability, and automated testing, to ensure reliability and scalability of systems. Participate in code reviews, design discussions, and technical architecture decisions. Contribute to the technical direction of engineering projects, ensuring maintainability, security, and performance.

Qualifications: Bachelor's or advanced degree in Computer Science, Engineering, or a related technical field. Proficiency in at least one programming language and proven experience in the production software development life cycle (SDLC). Solid understanding of DevOps principles, including containerization, CI/CD workflows, and infrastructure automation. Familiarity with data modeling, ETL pipelines, and data governance concepts is beneficial. Excellent communication skills. A self-starter with the ability to thrive in a fast-paced, global team environment.

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Hyderabad

Remote

Role: SQL Data Engineer - ETL, DBT & Snowflake Specialist
Location: Remote
Duration: 14+ Months
Timings: 5:30 PM IST - 1:30 AM IST
Note: Immediate joiners only.

Required Experience:
Advanced SQL Proficiency: Writing and optimizing complex queries, stored procedures, functions, and views. Experience with query performance tuning and database optimization.
ETL/ELT Development: Building and maintaining ETL/ELT pipelines. Familiarity with ETL tools or processes and orchestration frameworks.
Data Modeling: Designing and implementing data models. Understanding of dimensional modeling and normalization.
Snowflake Expertise: Hands-on experience with Snowflake's architecture and features. Experience with Snowflake databases, schemas, procedures, and functions.
DBT (Data Build Tool): Building data models and transformations using DBT. Implementing DBT best practices, including testing, documentation, and CI/CD integration.
Programming and Automation: Proficiency in Python is a plus. Experience with version control systems (e.g., Git, Azure DevOps). Experience with Agile methodologies and DevOps practices.
Collaboration and Communication: Working effectively with data analysts and business stakeholders. Translating technical concepts into clear, actionable insights. Prior experience in a fast-paced, data-driven environment.
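As a hedged illustration of the Snowflake side of this role, the sketch below runs a performance-minded aggregate through the Snowflake Python connector; the account, credentials, warehouse, and fact_orders table are placeholders, not details from the posting.

```python
# Illustrative only: account, credentials, warehouse, and table names are
# hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical
    user="etl_user",           # hypothetical
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)
cur = conn.cursor()

# Filter early and select only needed columns so Snowflake can prune
# micro-partitions instead of scanning the whole table.
cur.execute("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM fact_orders
    WHERE order_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY customer_id
""")
for customer_id, total_amount in cur.fetchall():
    print(customer_id, total_amount)

cur.close()
conn.close()
```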

Posted 1 month ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overview: We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom’s branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world’s creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We currently have 4000+ awesome colleagues (in Annalect India) who are committed to solving our clients’ pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together.

Responsibilities: Collaborate internally between departments and act as a data facilitator to identify potential erroneous data and report and fix identified issues. Act as a data entry specialist while maintaining speed and accuracy in day-to-day operations. Provide support to internal members with the agency’s Hyperlocal platform. Ensure the security, integrity, and data governance of all stored information. Possess and maintain awareness of best practices related to data acumen, business trends, and evolving technologies. Develop a strong understanding of internal and external data sources. Must be a strong, honest, and proactive communicator, acting as a collaborative liaison between business and technology teams. Assist the Retail Tech Data team in regular data audits. Knowledge of AdTech, MarTech, CRM metrics, and related business concepts is a big plus. Understand best data practices, normalization, and data governance. Effectively and efficiently explain and understand the agency’s basic data needs.

Qualifications: B.A./B.S. degree or equivalent in Information Systems, Statistics, or a comparable field of study. Hands-on experience working with data, data integration technologies, and databases. Experience with data governance rules and models. Comfortable with new technologies and iterating quickly. Able to balance multiple concurrent projects. Experience with BigQuery, ETL pipelines, API requirements, and BI tools is a plus. Strong attention to detail and communication skills when validating data and reporting on data quality and integrity.

Posted 1 month ago

Apply

4.0 - 8.0 years

8 - 11 Lacs

Bengaluru

Work from Office

Job Title: Business Intelligence Developer - Engineering & Delivery Metrics

About the Role: Skyhigh Security is hiring a Business Intelligence Developer to join our Engineering Excellence organization. Reporting to the VP of Engineering Excellence, you'll play a key role in helping the company measure, visualize, and optimize software delivery performance by integrating data from across the engineering toolchain and building actionable, insightful dashboards. Working alongside engineering teams, DevOps, and engineering-quality leaders, you'll design and implement dashboards that surface DORA metrics and other critical indicators of engineering velocity, quality, and stability. Your work will provide engineering leaders and teams with the visibility they need to drive continuous improvement, reduce bottlenecks, and increase delivery confidence. This role combines deep technical BI skills with an understanding of software development workflows. You will own the integration of data from tools like Jira, GitHub, Jenkins, and CI/CD pipelines, and apply your expertise in SQL, Python, and visualization tools to transform raw data into insights that matter.

Responsibilities: Design and build dashboards that track DORA metrics and other Agile delivery KPIs. Integrate data from developer tools (e.g., Jira, GitHub, CI/CD platforms) via APIs, webhooks, and ETL pipelines. Define data models and transformations to support clear, reliable visualizations. Partner with engineering leaders to define metric strategies aligned to business and technical outcomes. Ensure reporting solutions are scalable, accurate, and tailored to various stakeholder needs.

Requirements: Strong skills in SQL and Python for data processing and integration. Hands-on experience with Tableau, EazyBI, Power BI, or similar tools. Deep understanding of DORA metrics, Agile metrics, and engineering performance indicators. Familiarity with toolchain APIs (e.g., Jira, GitHub, Jenkins, ArgoCD) and ETL practices. Strong data modeling and storytelling skills with a focus on clarity and actionability.
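For a concrete feel of the metrics work, here is a hedged sketch that counts production deployments per day from the GitHub REST deployments endpoint, a simple proxy for the DORA deployment-frequency metric; the repository path and token are invented for illustration.

```python
# Illustrative sketch: the repo path and token are hypothetical. Uses the
# public GitHub REST API's deployments endpoint.
from collections import Counter

import requests

TOKEN = "ghp_..."  # hypothetical token
URL = "https://api.github.com/repos/acme/payments/deployments"  # hypothetical repo

resp = requests.get(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"per_page": 100},
    timeout=30,
)
resp.raise_for_status()

# Deployment frequency: production deployments bucketed per day.
per_day = Counter()
for dep in resp.json():
    if dep.get("environment") == "production":
        per_day[dep["created_at"][:10]] += 1  # YYYY-MM-DD prefix

for day, count in sorted(per_day.items()):
    print(day, count)
```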

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 17 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Job Summary: We are looking for a Senior Engineer to join our data team and drive the development of reliable, scalable, and high-performance data systems. This role requires a strong foundation in cloud platforms, data engineering best practices, and data warehousing. The ideal candidate has hands-on experience in building robust ETL/ELT pipelines and designing data models to support business analytics and reporting needs.

Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines for batch and real-time data processing. Build and optimise cloud-native data platforms and data warehouses (e.g., Snowflake, Redshift, BigQuery). Design and implement data models, including normalised and dimensional models (star/snowflake schema). Collaborate with cross-functional teams to gather requirements and deliver reliable data solutions. Ensure data quality, consistency, governance, and security across data platforms. Optimise and tune SQL queries and data workflows for performance and cost efficiency. Lead or mentor junior data engineers and contribute to team-level planning and design.

Must-Have Qualifications: Cloud Expertise: Strong experience with at least one cloud platform (AWS, Azure, or GCP). Programming: Proficiency in Python, SQL, and shell scripting. Data Warehousing & Modeling: Deep understanding of warehousing concepts and best practices. ETL/ELT Pipelines: Proven experience building pipelines using orchestration tools like Airflow or DBT. Experience with CI/CD tools and version control (Git). Familiarity with distributed data processing and performance optimisation.

Good-to-Have Skills: Hands-on experience with UI-based ETL tools like Talend, Informatica, or Azure Data Factory. Exposure to visualisation and BI tools such as Power BI, Tableau, or Looker. Knowledge of data governance frameworks and metadata management tools (e.g., Collibra, Alation). Experience in leading data engineering teams or mentoring team members. Understanding of data security, access control, and compliance standards (e.g., GDPR, HIPAA).
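Since the posting names Airflow as an orchestration option, here is a minimal hedged DAG sketch, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are illustrative placeholders.

```python
# Minimal Airflow 2.x sketch; the pipeline name and task logic are
# hypothetical stand-ins for real extract/transform/load code.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source system")

def transform():
    print("apply business rules")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="daily_sales_etl",        # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```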

Posted 1 month ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Mumbai, Maharashtra, India

On-site

We are seeking a skilled Sr. Production Support Engineer to join our dynamic Engineering team. The ideal candidate will take ownership of debugging day-to-day issues, identifying root causes, improving broken processes, and ensuring the smooth operation of our systems. You will work closely with cross-functional teams to analyze, debug, and enhance system performance, contributing to a more efficient and reliable infrastructure.

Key Responsibilities: Incident Debugging and Resolution: Investigate and resolve daily production issues, minimizing downtime and ensuring stability. Perform root cause analysis and implement solutions to prevent recurring issues. Data Analysis and Query Writing: Write and optimize custom queries for MySQL, Postgres, MongoDB, Redshift, or other data systems to debug processes and verify data integrity. Analyze system and application logs to identify bottlenecks or failures. Scripting and Automation: Develop and maintain custom Python scripts for data exports, data transformation, and debugging. Create automated solutions to address inefficiencies in broken processes. Process Improvement: Collaborate with engineering and operations teams to enhance system performance and reliability. Proactively identify and implement process improvements to optimize workflows. Collaboration: Act as the first point of contact for production issues, working closely with developers, QA teams, and other stakeholders. Document findings, resolutions, and best practices to build a knowledge base.

Required Skills and Qualifications: Experience: 3-5 years of hands-on experience in debugging, Python scripting, and production support in a technical environment. Technical Proficiency: Strong experience with Python for scripting and automation with Pandas. Proficient in writing and optimizing queries for MySQL, Postgres, MongoDB, Redshift, or similar databases. Familiarity with ETL pipelines, APIs, or data integration tools is a plus. Problem-Solving: Exceptional analytical and troubleshooting skills to quickly diagnose and resolve production issues. Process Improvement: Ability to identify inefficiencies and implement practical solutions to enhance system reliability and workflows. Communication: Excellent verbal and written communication skills for cross-functional collaboration and documentation.

Nice-to-Have Skills: Exposure to tools like Airflow, Pandas, or NumPy for data manipulation and debugging. Familiarity with production monitoring tools like New Relic or Datadog. Experience with cloud platforms such as AWS, GCP, or Azure. Basic knowledge of CI/CD pipelines for deployment support.
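As an example of the day-to-day scripting this role involves, here is a hedged cross-database row-count reconciliation sketch using pandas and SQLAlchemy; the connection strings and table names are placeholders.

```python
# Hypothetical sketch: connection URLs and table names are illustrative.
import pandas as pd
from sqlalchemy import create_engine

oltp = create_engine("mysql+pymysql://user:pass@oltp-host/app")         # hypothetical
warehouse = create_engine("postgresql://user:pass@wh-host/analytics")   # hypothetical

def row_count(engine, table: str) -> int:
    # Table names can't be bound parameters, so they must come from a
    # trusted allow-list, never from user input.
    return int(pd.read_sql(f"SELECT COUNT(*) AS n FROM {table}", engine)["n"].iloc[0])

source_n = row_count(oltp, "orders")
target_n = row_count(warehouse, "stg_orders")

# A mismatch usually points at a stalled or partially failed ETL run.
if source_n != target_n:
    print(f"DRIFT: source={source_n} target={target_n} diff={source_n - target_n}")
else:
    print(f"OK: {source_n} rows on both sides")
```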

Posted 1 month ago

Apply

4.0 - 9.0 years

13 - 18 Lacs

Bengaluru

Work from Office

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS’s India Capability & Expertise Center (CEC) houses more than 60% of ZS people across three offices in New Delhi, Pune and Bengaluru. Our teams work with colleagues around the world to deliver real-world solutions to the clients who drive our business. The CEC maintains standards of analytical, operational and technologic excellence to deliver superior results to our clients. ZS’s Beyond Healthcare Analytics (BHCA) Team is shaping one of the key growth vectors for ZS. Beyond Healthcare engagements comprise clients from industries like quick service restaurants, technology, food & beverage, hospitality, travel, insurance, consumer goods, and other such industries across the North America, Europe and South East Asia regions. The BHCA India team currently has a presence across the New Delhi, Pune and Bengaluru offices and is continuously expanding at a great pace. The BHCA India team works with colleagues across clients and geographies to create and deliver real-world pragmatic solutions leveraging AI SaaS products and platforms, Generative AI applications, and other advanced analytics solutions at scale.

What You’ll Do: Design and implement highly available data pipelines using Spark and other big data technologies. Work with the data science team to develop new features to increase model accuracy and performance. Create standardized data models to increase standardization across client deployments. Troubleshoot and resolve issues in existing ETL pipelines. Complete proofs of concept to demonstrate capabilities and connect to new data sources. Instill best practices for software development, ensure designs meet requirements, and deliver high-quality work on schedule. Document application changes and development updates.

What You’ll Bring: A master’s or bachelor’s degree in computer science or a related field from a top university. 4+ years' overall experience; 2+ years’ experience in data engineering using Apache Spark and SQL. 2+ years of experience in building and leading a strong data engineering team. Experience with full software lifecycle methodology, including coding standards, code reviews, source control management, build processes, testing, and operations. In-depth knowledge of Python, SQL, PySpark, distributed computing, analytical databases and other big data technologies. Strong knowledge of one or more cloud environments such as AWS, GCP, and Azure. Familiarity with the data science and machine learning development process. Familiarity with orchestration tools such as Apache Airflow. Strong analytical skills and the ability to develop processes and methodologies. Experience working with cross-functional teams, including UX, business (e.g. Marketing, Sales), product management and/or technology/IT/engineering, is a plus. Characteristics of a forward thinker and self-starter who thrives on new challenges and adapts quickly to learning new knowledge.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find out more at www.zs.com
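To ground the Spark requirement, here is a hedged PySpark batch-job sketch, assuming Spark 3.x; the S3 paths and column names are invented for illustration.

```python
# Illustrative PySpark job: paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical path

# Aggregate completed orders into a daily revenue mart.
daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("placed_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")
spark.stop()
```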

Posted 1 month ago

Apply

1.0 - 6.0 years

8 - 12 Lacs

Pune

Work from Office

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

Data Engineer - Data Engineering & Analytics

What you'll do: Create and maintain optimal data pipeline architecture. Identify, design, and implement internal process improvements, automating manual processes, optimizing data delivery, and re-designing infrastructure for scalability. Design, develop and deploy high-volume ETL pipelines to manage complex and near-real-time data collection. Develop and optimize SQL queries and stored procedures to meet business requirements. Design, implement, and maintain REST APIs for data interaction between systems. Ensure performance, security, and availability of databases. Handle common database procedures such as upgrade, backup, recovery, migration, etc. Collaborate with other team members and stakeholders. Prepare documentation and specifications.

What you'll bring: Bachelor’s degree in computer science, Information Technology, or a related field. 1+ years of experience with SQL, T-SQL, and Azure Data Factory, Synapse, or relevant ETL technology. Strong analytical skills (impact/risk analysis, root cause analysis, etc.). Proven ability to work in a team environment, creating partnerships across multiple levels. Demonstrated drive for results, with appropriate attention to detail and commitment. Hands-on experience with Azure SQL Database.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find out more at www.zs.com

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mysuru

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Work from Office

JD Analysis: Data Service Engineer

Role Summary: A Data Service Engineer is expected to act as an individual contributor responsible for designing, building, and maintaining data pipelines and integrations that support enterprise applications. The role requires hands-on experience in Python, SQL, ETL tools, and API-based integrations.

Core Responsibilities: Design, develop, and maintain robust, scalable data pipelines using Python and SQL. Work on end-to-end integration of enterprise applications using RESTful APIs, webhooks, or other middleware tools. Optimize ETL workflows and data transformation frameworks for performance and reliability. Monitor and troubleshoot data flows and integrated services to ensure smooth operations. Collaborate with application teams and stakeholders to understand data requirements and build integration solutions accordingly. Ensure data quality, security, and governance standards are met across all integrations.

Key Skills & Experience:
Experience: ~5 years (4.5-6 years acceptable range)
Programming: Strong with Python and SQL
Data Pipelines / ETL: Solid experience in building pipelines using ETL frameworks or tools
APIs & Integration: Strong understanding of REST APIs, JSON/XML parsing, and integration tools
Cloud Exposure (Preferred): Familiarity with cloud platforms (AWS/GCP/Azure) is a plus
Soft Skills: Self-starter, excellent problem-solving ability, stakeholder communication

Preferred Qualifications: Bachelor’s/Master’s degree in Computer Science, IT, or a related field. Certifications in ETL, API integration, or cloud data services. Prior experience in enterprise-scale application integration is highly desirable.
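As a hedged sketch of the API-based integration pattern described above, the example below pulls records from a hypothetical REST endpoint and upserts them into a local table; the endpoint, schema, and field names are invented.

```python
# Illustrative integration: the endpoint URL and record fields are
# hypothetical placeholders.
import json
import sqlite3

import requests

resp = requests.get("https://api.example.com/v1/customers", timeout=30)
resp.raise_for_status()
records = resp.json()

conn = sqlite3.connect("integration.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS customers (id TEXT PRIMARY KEY, name TEXT, raw TEXT)"
)

# Upsert each record and keep the raw JSON payload for auditing/debugging.
for rec in records:
    conn.execute(
        "INSERT OR REPLACE INTO customers (id, name, raw) VALUES (?, ?, ?)",
        (rec["id"], rec.get("name"), json.dumps(rec)),
    )

conn.commit()
conn.close()
```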

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

We are looking for a Snowflake Data Engineer with deep expertise in Snowflake and DBT to help us build and scale our modern data platform.

Key Responsibilities: Design and build scalable ELT pipelines in Snowflake using DBT. Develop efficient, well-tested DBT models (staging, intermediate, and marts layers). Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy. Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency. Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications: 5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake. Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management. Strong understanding of ELT patterns and modern data stack principles. Advanced SQL skills and experience with performance tuning in Snowflake.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the below details:
Candidate's name
Email and alternate email ID
Contact and alternate contact no.
Total experience
Relevant experience
Current organization
Notice period
CCTC
ECTC
Current location
Preferred location
PAN card no.

Posted 2 months ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Kochi

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Kolkata, Gurugram, Bengaluru

Work from Office

Job Opportunity for GCP Data Engineer

Role: Data Engineer
Location: Gurugram / Bangalore / Kolkata (5 days work from office)
Experience: 4+ years

Key Skills:
Data Analysis / Data Preparation - Expert
Dataset Creation / Data Visualization - Expert
Data Quality Management - Advanced
Data Engineering - Advanced
Programming / Scripting - Intermediate
Data Storytelling - Intermediate
Business Analysis / Requirements Analysis - Intermediate
Data Dashboards - Foundation
Business Intelligence Reporting - Foundation
Database Systems - Foundation
Agile Methodologies / Decision Support - Foundation

Technical Skills:
• Cloud - GCP - Expert
• Database systems (SQL and NoSQL / BigQuery / DBMS) - Expert
• Data warehousing solutions - Advanced
• ETL tools - Advanced
• Data APIs - Advanced
• Python, Java, Scala, etc. - Intermediate
• Some knowledge of the basics of distributed systems - Foundation
• Some knowledge of algorithms and optimal data structures for analytics - Foundation
• Soft skills and time management skills - Foundation

Posted 2 months ago

Apply

6.0 - 8.0 years

10 - 12 Lacs

Hyderabad

Work from Office

Seeking ETL Developer with expertise in Informatica IICS/PowerCenter, strong SQL skills, Snowflake integration, and cloud apps. Design high-quality ETL pipelines, ensure performance, mentor juniors, and collaborate across teams.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing.

Posted 2 months ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Pytest
Good to have skills: DevOps, AWS Glue
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Job Title: Data QA Engineer

Key Responsibilities: Ensure the quality and reliability of data pipelines and workflows within the AWS ecosystem. Design and implement comprehensive test strategies for data validation, transformation, and integration processes. Collaborate with development teams to modernize applications and ensure data quality across strategic platforms and technologies, such as Amazon Web Services. Develop automated testing frameworks and scripts using Python to validate medium to complex data processing logic. Perform rigorous testing of ETL pipelines, ensuring scalability, efficiency, and adherence to data quality standards. Maintain detailed documentation for test cases, data validation logic, and quality assurance processes. Follow agile methodologies and CI/CD practices to integrate testing seamlessly into development workflows.

Technical Experience: Expertise in testing scalable ETL pipelines developed using AWS Glue (PySpark) for large-scale data processing. Proficiency in validating data integration from diverse sources, including Amazon S3, Redshift, RDS, APIs, and on-prem systems. Experience in testing data ingestion, validation, transformation, and enrichment processes to ensure high data quality and consistency. Advanced skills in data cleansing, deduplication, transformation, and enrichment testing. Familiarity with job monitoring, error handling, and alerting mechanisms using AWS CloudWatch and SNS. Experience in maintaining technical documentation for data workflows, schema logic, and business transformations. Proficiency in agile methodologies and CI/CD practices with tools like GitLab and Docker.

Good to have: Experience in Power BI for data visualization and reporting. Familiarity with building CI/CD pipelines using Git.

Professional Attributes: Excellent communication, collaboration, and analytical skills. Flexibility to work shifts if necessary.
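Since Pytest is the must-have skill here, a brief hedged sketch of automated data-quality checks follows; the load_output helper and its sample rows are hypothetical stand-ins for real pipeline output.

```python
# Illustrative pytest data checks; load_output and its dataset are
# hypothetical stand-ins for reading real pipeline output (e.g., from S3).
import pytest

def load_output():
    return [
        {"order_id": 1, "amount": 10.0, "country": "IN"},
        {"order_id": 2, "amount": 25.5, "country": "US"},
    ]

@pytest.fixture
def rows():
    return load_output()

def test_no_duplicate_keys(rows):
    ids = [r["order_id"] for r in rows]
    assert len(ids) == len(set(ids))

def test_amounts_are_non_negative(rows):
    assert all(r["amount"] >= 0 for r in rows)

@pytest.mark.parametrize("field", ["order_id", "amount", "country"])
def test_required_fields_present(rows, field):
    assert all(field in r for r in rows)
```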

Posted 2 months ago

Apply

6.0 - 8.0 years

32 - 37 Lacs

Pune

Work from Office

Job Title: AFC Transaction Monitoring - Senior Engineer, VP
Location: Pune, India

Role Description: You will be joining the Anti-Financial Crime (AFC) Technology team and will work as part of a multi-skilled agile squad, specializing in designing, developing, and testing engineering solutions, as well as troubleshooting and resolving technical issues, to enable the Transaction Monitoring (TM) systems to identify money laundering or terrorism financing. You will have the opportunity to work on challenging problems with large, complex datasets and play a crucial role in managing and optimizing the data flows within Transaction Monitoring. You will have the opportunity to work across Cloud and Big Data technologies, optimizing the performance of existing data pipelines as well as designing and creating new ETL frameworks and solutions. You will have the opportunity to work on challenging problems, building high-performance systems to process large volumes of data, using the latest technologies. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: Best in class leave policy. Gender neutral parental leave. 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for those 35 yrs. and above.

Your key responsibilities: As a Vice President, your role will include management and leadership responsibilities, such as: Leading by example, by creating efficient ETL workflows to extract data from multiple sources, transform it according to business requirements, and load it into the TM systems. Implementing data validation and cleansing techniques to maintain high data quality, and detective controls to ensure the integrity and completeness of data being prepared through our data pipelines. Working closely with other developers and architects to design and implement solutions that meet business needs while ensuring that solutions are scalable, supportable and sustainable. Ensuring that all engineering work complies with industry and DB standards, regulations, and best practices.

Your skills and experience: Good analytical problem-solving capabilities with excellent communication skills, written and oral, enabling authoring of documents that will support a technical team in performing development work. Experience in Google Cloud Platform is preferred, but other cloud solutions such as AWS would be considered. 5+ years' experience in Oracle, Control-M, Linux and Agile methodology, and prior experience of working in an environment using internally engineered components (database, operating system, etc.). 5+ years' experience in Hadoop, Hive, Oracle, Control-M, and Java development is required, while experience in OpenShift and PySpark is preferred. Strong understanding of designing and delivering complex ETL pipelines in a regulatory space.

How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

Posted 2 months ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Karnataka

Work from Office

Develop and manage ETL pipelines using Python. Responsible for transforming and loading data efficiently from source to destination systems, ensuring clean and accurate data.

Posted 2 months ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Karnataka

Work from Office

Focus on designing, developing, and maintaining Snowflake data environments. Responsible for data modeling, ETL pipelines, and query optimization to ensure efficient and secure data processing.

Posted 2 months ago

Apply