
905 Data Flow Jobs - Page 34

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

15.0 - 20.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Data Modeler
Project Role Description: Work with key business representatives, data owners, end users, application designers, and data architects to model current and new data.
Must-have skills: Python (Programming Language)
Good-to-have skills: Large Language Models
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Modeler, you will engage with key business representatives, data owners, end users, application designers, and data architects to model both current and new data. Your typical day will involve collaborating with various stakeholders to understand their data needs, analyzing existing data structures, and designing effective data models that support business objectives. You will also be responsible for ensuring that the data models are aligned with best practices and meet the requirements of the organization, facilitating smooth data flow and accessibility across different systems.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training sessions for junior team members to enhance their understanding of data modeling.
- Continuously evaluate and improve data modeling processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Python (Programming Language).
- Good-to-have: Experience with Large Language Models.
- Strong understanding of data modeling techniques and methodologies.
- Experience with database management systems and data warehousing concepts.
- Familiarity with data governance and data quality principles.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
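
Purely as an editorial illustration of the Python-based data modeling work this listing describes (not part of the posting), here is a minimal sketch of a logical model expressed as SQLAlchemy ORM classes; all table and column names are hypothetical:

# Illustrative only: a logical customer/order model as SQLAlchemy ORM classes.
# Table and column names are hypothetical.
from sqlalchemy import Column, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    customer_id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    __tablename__ = "orders"
    order_id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.customer_id"), nullable=False)
    amount = Column(Numeric(12, 2), nullable=False)
    customer = relationship("Customer", back_populates="orders")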

Posted Date not available

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), Python (Programming Language), Google BigQuery
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Data Engineering, Oracle PL/SQL, Python (Programming Language), Google BigQuery.
- Good-to-have: Experience with data warehousing solutions and cloud-based data platforms.
- Strong understanding of ETL processes and data pipeline architecture.
- Experience with data modeling and database design principles.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
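
As an editorial illustration of the BigQuery ETL work such a role involves (not part of the posting), a minimal sketch using the google-cloud-bigquery client; project, dataset, and table names are hypothetical, and credentials come from the environment:

# Illustrative ETL step: transform with SQL, land the result in a target table.
# Project/dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my_project.sales.transactions`
    GROUP BY customer_id
"""
job_config = bigquery.QueryJobConfig(
    destination="my_project.sales.customer_totals",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.query(sql, job_config=job_config).result()  # blocks until the job finishes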

Posted Date not available

Apply

2.0 - 7.0 years

18 - 22 Lacs

Navi Mumbai

Work from Office

About The Role
Job Title: Data Science/Data Engineering ML9 - Sales Excellence COE - Data Engineering Specialist
Management Level: ML9
Location: Open
Must-have skills: strong GCP cloud technology experience, BigQuery, Data Science basics
Good-to-have skills: building and maintaining data models from different sources
Experience: Minimum 5 year(s) of experience is required
Educational Qualification: Graduate/Postgraduate

Job Summary: The Center of Excellence (COE) makes sure that the sales and pricing methods and offerings of Sales Excellence are effective. The COE supports salespeople through its business partners and Analytics and Sales Operations teams. The Data Engineer helps manage data sources and environments, utilizing large data sets and maintaining their integrity to create models and apps that deliver insights to the organization.

Roles & Responsibilities:
- Build and manage data models that bring together data from different sources.
- Understand the existing data model in SQL Server and help redesign/migrate it to GCP BigQuery (see the sketch after this listing).
- Help consolidate and cleanse data for use by the modeling and development teams.
- Structure data for use in analytics applications.
- Lead a team of Data Engineers effectively.

Professional & Technical Skills:
- A bachelor's degree or equivalent.
- A minimum of 2 years of strong GCP cloud technology experience.
- A minimum of 2 years of advanced SQL knowledge and experience working with relational databases.
- A minimum of 2 years of familiarity and hands-on experience with SQL objects such as stored procedures, functions, and views.
- A minimum of 2 years building data flow components and processing systems to extract, transform, load, and integrate data from various sources.
- Basic knowledge of Data Science models and tools.

Additional Information - extra credit if you have:
- Understanding of sales processes and systems.
- Master's degree in a technical field.
- Experience with Python.
- Experience with quality assurance processes.
- Experience in project management.

You May Also Need:
- Ability to work flexible hours according to business needs.
- A good internet connection and a distraction-free environment for working at home, in accordance with local guidelines.

About Our Company | Accenture
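
As an editorial illustration of the SQL Server-to-BigQuery migration mentioned above (not part of the posting), a minimal one-table sketch staged through pandas; the connection string and table names are hypothetical, and pyodbc plus pyarrow are assumed to be installed:

# Illustrative one-table migration: SQL Server -> pandas -> BigQuery.
# Connection string and table names are hypothetical.
import pandas as pd
import sqlalchemy
from google.cloud import bigquery

engine = sqlalchemy.create_engine("mssql+pyodbc://user:password@my_dsn")  # hypothetical DSN
df = pd.read_sql("SELECT * FROM dbo.sales_orders", engine)

client = bigquery.Client()
client.load_table_from_dataframe(df, "my_project.sales.sales_orders").result()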

Posted Date not available

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and optimize data pipelines to ensure efficient data flow and processing (see the sketch after this listing).
- Monitor and troubleshoot data quality issues to maintain high standards of data integrity.

Professional & Technical Skills:
- Must-have: Proficiency in the Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Apache Spark and cloud-based data solutions.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data governance and data security best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
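
As an editorial illustration of the Databricks pipeline work described (not part of the posting), a minimal bronze-layer ingestion sketch; the landing path and table names are hypothetical, and `spark` is preconfigured in a Databricks notebook:

# Illustrative bronze-layer ingestion into a Delta table on Databricks.
# Paths and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("/mnt/landing/orders/")  # hypothetical landing path
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("ingested_at", F.current_timestamp()))
clean.write.format("delta").mode("append").saveAsTable("bronze.orders")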

Posted Date not available

Apply

15.0 - 20.0 years

3 - 7 Lacs

Pune

Work from Office

Project Role: Business and Integration Practitioner
Project Role Description: Assists in documenting the integration strategy, endpoints, and data flows. Is familiar with the entire project life cycle, including requirements analysis, coding, testing, deployment, and operations, to ensure successful integration. Under the guidance of the Architect, ensures the integration strategy meets business goals.
Must-have skills: Personal Insurance
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Business and Integration Practitioner, your typical day involves collaborating with various teams to document integration strategies, endpoints, and data flows. You will engage in discussions to ensure that the integration aligns with business objectives while navigating the project life cycle, which includes requirements analysis, coding, testing, deployment, and operations. Your role is pivotal in facilitating successful integration by working closely with the Architect and other stakeholders to address any challenges that arise throughout the process.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of integration strategies and data flows to ensure clarity and alignment with business goals.
- Collaborate with cross-functional teams to gather requirements and provide insights during the project life cycle.

Professional & Technical Skills:
- Must-have: Proficiency in Personal Insurance.
- Strong understanding of integration strategies and data flow documentation.
- Experience with project life-cycle management, including requirements analysis and deployment.
- Ability to work collaboratively in a team environment and contribute to discussions.
- Familiarity with testing methodologies and operational processes to ensure successful integration.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Personal Insurance.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted Date not available

Apply

15.0 - 20.0 years

13 - 18 Lacs

Pune

Work from Office

Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, contributing to the overall success of data-driven initiatives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Develop and maintain documentation related to data architecture and design.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Databricks.
- Strong understanding of data modeling techniques and best practices.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud data storage solutions and architectures.
- Ability to design scalable and efficient data pipelines.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted Date not available

Apply

3.0 - 7.0 years

11 - 15 Lacs

Bengaluru

Work from Office

A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.

Job Description - Grade Specific
A strong grasp of the principles and practices associated with data platform engineering, particularly within cloud environments, and demonstrated proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability. Key responsibilities encompass:
- Community Engagement: Actively participating in the professional data platform engineering community, sharing insights, and staying up to date with the latest trends and best practices.
- Project Contributions: Making substantial contributions to client delivery, particularly in the design, construction, and maintenance of cloud-based data platforms and infrastructure.
- Technical Expertise: Demonstrating a sound understanding of data platform engineering principles and knowledge in areas such as cloud data storage solutions (e.g., AWS S3, Azure Data Lake), data processing frameworks (e.g., Apache Spark), and data orchestration tools.
- Independent Work and Initiative: Taking ownership of independent tasks, displaying initiative and problem-solving skills when confronted with intricate data platform engineering challenges.
- Emerging Leadership: Commencing leadership roles, which may encompass mentoring junior engineers, leading smaller project teams, or taking the lead on specific aspects of data platform projects.

Posted Date not available

Apply

7.0 - 10.0 years

25 - 35 Lacs

Hyderabad

Remote

Role: Salesforce Architect

Job Summary: We are looking for an experienced Salesforce Architect to lead the design, implementation, and optimization of Salesforce solutions across the enterprise. The ideal candidate will have deep technical expertise in Salesforce architecture, a strong understanding of business processes, and the ability to align platform capabilities with strategic goals. You will serve as a trusted advisor to stakeholders, guiding technical teams and ensuring scalable and maintainable Salesforce solutions.

Key Responsibilities:
- Design and architect scalable Salesforce solutions across Sales Cloud, CPQ, PSA (Certinia), and other Salesforce products and managed packages.
- Lead end-to-end solution architecture for complex business problems.
- Collaborate with business stakeholders, developers, and administrators to define requirements and translate them into technical designs.
- Ensure architectural best practices are followed, including performance tuning, security, and integration.
- Review and approve technical designs and code to ensure quality and alignment with enterprise standards.
- Oversee data modeling, integration design, and data migration planning.
- Serve as the SME for all Salesforce-related initiatives, providing mentorship to technical teams.
- Stay updated on Salesforce platform enhancements and industry trends to guide long-term strategies.

Required Qualifications:
- 8-10+ years of experience with Salesforce, including 5+ years in an architectural role.
- Salesforce certifications such as Application Architect, System Architect, and/or Certified Technical Architect (CTA) highly preferred.
- Hands-on experience with Apex, Lightning Components, SOQL, and Salesforce APIs.
- Deep knowledge of Salesforce architecture patterns, governor limits, and security models.
- Proven experience in Salesforce integrations using middleware tools like SnapLogic, MuleSoft, Dell Boomi, etc.
- Strong communication and stakeholder management skills.

Preferred Skills:
- Experience with Agile methodologies and DevOps practices.
- Background in other CRM platforms or enterprise applications is a plus.
- Knowledge of Sales Cloud, CPQ, PSA, or industry-specific Salesforce clouds.
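
As an editorial illustration of API-level Salesforce integration work of the kind this role oversees (not part of the posting), a minimal SOQL query via the third-party simple-salesforce Python library; credentials and the revenue filter are hypothetical:

# Illustrative SOQL query through the simple-salesforce library.
# Credentials and filter values are hypothetical.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",
    password="password",
    security_token="token",
)
result = sf.query("SELECT Id, Name FROM Account WHERE AnnualRevenue > 1000000")
for record in result["records"]:
    print(record["Id"], record["Name"])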

Posted Date not available

Apply

8.0 - 13.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Skills:
- Extensive experience with Google Data Products (Cloud Data Fusion, BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Dataprep, etc.).
- Expertise in Cloud Data Fusion, BigQuery, and Dataproc.
- Experience with MDM, metadata management, data quality, and data lineage tools.
- End-to-end data engineering and lifecycle management (including non-functional requirements and operations).
- Experience with SQL and NoSQL modern data stores.
- End-to-end solution design skills: prototyping, usability testing, and data visualization literacy.
- Excellent knowledge of the software development life cycle.

Posted Date not available

Apply

5.0 - 10.0 years

18 - 27 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Senior Data Engineer (GCP) - Chennai (5 to 8 Years Experience)
Location: Chennai (4 days in-office)
Work Mode: Hybrid
Experience: 5+ Years

About the Role: We are seeking a highly skilled Senior Data Engineer with strong expertise in Google Cloud Platform (GCP) to join our dynamic team supporting one of the world's leading automotive clients. If you have solid experience in designing and implementing scalable data pipelines and a passion for data engineering, we want to hear from you!

Key Responsibilities:
- Design and develop scalable, robust data pipelines on GCP.
- Develop and optimize complex SQL queries for data transformation and analysis.
- Work extensively with BigQuery, Cloud Storage, Dataflow, and Pub/Sub (see the sketch after this listing).
- Ensure data quality, security, and high performance across pipelines.
- Collaborate closely with data analysts, architects, and business stakeholders.
- Participate in incident management, troubleshooting, and performance tuning.

Must-Have Skills:
- Hands-on experience with SQL and Google BigQuery.
- Strong knowledge of GCP services: BigQuery, Cloud Dataflow, Cloud Composer, Pub/Sub, Cloud Functions, Cloud Storage.
- Experience with Python or Java for data engineering tasks (preferred).
- Solid understanding of data warehousing, ETL/ELT processes, and data modeling.
- Ability to work in a fast-paced, collaborative environment.

Why Join Us?
- Work on cutting-edge GCP technologies with industry-leading clients.
- Hybrid work model with 4 days in-office at a prime Chennai location.
- Competitive salary with attractive benefits and growth opportunities.
- Collaborative and innovative work culture.

Interested? Send your updated resume to layeeq@reveilletechnologies.com
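
As an editorial illustration of the Pub/Sub-to-BigQuery pipelines named above (not part of the posting), a minimal Apache Beam streaming sketch targeting the Dataflow runner; project, topic, bucket, and table names are hypothetical:

# Illustrative streaming pipeline: Pub/Sub -> transform -> BigQuery, on Dataflow.
# Project, topic, bucket, and table names are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
     | "ToRow" >> beam.Map(lambda msg: {"payload": msg.decode("utf-8")})
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           "my-project:analytics.events", schema="payload:STRING"))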

Posted Date not available

Apply

2.0 - 5.0 years

4 - 8 Lacs

Mumbai

Work from Office

About The Role
- BA experience in the Investment Banking / Financial Services domain.
- In-depth understanding and hands-on experience of working on equity and non-equity indices & ETFs, including composition data.
- Should have worked with reference data and pricing vendors for index data, such as SOLA, Reuters, BBG, Ultumus, etc.
- Excellent communication and coordination skills.
- Experience with instrument data (ETD / Equities / Bonds) will be a distinct advantage.
- Documentation of data flows.
- Hands-on experience with SQL queries.
- Demonstrated experience of working on data flows and data analysis.
- Well-versed in SDLC concepts, with the ability to work cohesively with Dev and QA team members.

Secondary Skills:
- Capability to work in a dynamic and global team environment.
- Work with multiple stakeholders for requirements gathering.
- Create Functional Requirements documentation.
- Document data flows and impact assessments for any changes.
- Conduct vendor data analysis and track DQ issues; work with the development team to ensure the technical solution is developed in line with business requirements.
- Assist in coordinating implementation details and handle release-related activities.
- Provide root cause analysis for any issues encountered.

Posted Date not available

Apply

3.0 - 8.0 years

15 - 25 Lacs

Hyderabad, Chennai

Hybrid

Job Title: GCP Data Engineer - BigQuery, Airflow, SQL, Python, dbt
Experience Required: 3+ Years
Location: Chennai / Hyderabad (preferred - the 2nd round will be face-to-face)
Notice Period: Immediate joiners preferred / candidates with a 30-day notice period (serving notice welcome)
Employment Type: Full-time

Job Description: We are looking for a skilled GCP Data Engineer with strong hands-on experience in BigQuery, Airflow, SQL, Python, and dbt to work on high-impact data engineering projects.

Key Responsibilities:
- Design, develop, and optimize data pipelines on GCP.
- Work with BigQuery for data warehousing and analytics.
- Orchestrate workflows using Airflow (see the sketch after this listing).
- Develop and maintain data transformation scripts using Python and dbt.
- Collaborate with analytics and business teams to deliver data solutions.
- Ensure best practices in performance optimization, data quality, and security.

Required Skills & Experience:
- Minimum 3 years of experience as a Data Engineer.
- Hands-on experience with Google Cloud Platform services.
- Strong SQL skills.
- Experience with Airflow for job scheduling/orchestration.
- Expertise in Python scripting for data processing.
- Experience with dbt for data transformation.
- Strong problem-solving and communication skills.

Interview Process:
- 3 technical rounds.
- The 2nd round will be face-to-face at the Chennai or Hyderabad office.

How to Apply: Interested candidates (Chennai / Hyderabad profiles preferred) can share their CV to ngongadala@randomtrees.com with the subject line "GCP Data Engineer - Chennai/Hyderabad".
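
As an editorial illustration of the Airflow orchestration this role calls for (not part of the posting), a minimal DAG that runs a daily BigQuery transformation; the DAG id, SQL, and table names are hypothetical, and Airflow 2.4+ with the Google provider package is assumed:

# Illustrative Airflow DAG running a daily BigQuery transformation.
# DAG id, SQL, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my_project.marts.daily_sales` AS
                    SELECT order_date, SUM(amount) AS revenue
                    FROM `my_project.sales.orders`
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )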

Posted Date not available

Apply

1.0 - 4.0 years

3 - 7 Lacs

Gurugram

Work from Office

About The Role
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Amazon Connect
Good-to-have skills: Python (Programming Language), Angular
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot problems, analyzing system performance, and ensuring that all applications run smoothly to support business operations effectively. You will engage with different teams to gather insights and feedback, which will help in enhancing the overall system functionality and user experience.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor system performance and proactively address potential issues.

Professional & Technical Skills:
- Must-have: Proficiency in Amazon Connect.
- Good-to-have: Experience with Python (Programming Language), Angular.
- Strong understanding of cloud-based application support.
- Experience in troubleshooting and resolving application issues.
- Familiarity with system integration and data flow management.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Amazon Connect.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted Date not available

Apply

7.0 - 11.0 years

13 - 18 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must-have skills: Data & AI Solution Architecture
Good-to-have skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly throughout the organization, contributing to the overall efficiency and effectiveness of data management practices.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with industry trends and best practices.
- Assist in the development of data governance policies and procedures.

Professional & Technical Skills:
- Must-have: Proficiency in Data & AI Solution Architecture.
- Strong understanding of data modeling techniques and best practices.
- Experience with cloud-based data storage solutions.
- Familiarity with data integration tools and methodologies.
- Ability to design scalable and efficient data architectures.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Data & AI Solution Architecture.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted Date not available

Apply

2.0 - 4.0 years

4 - 8 Lacs

Mumbai

Work from Office

About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery, Microsoft SQL Server, GitHub, Google Cloud Data Services
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.

Secondary Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market, and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must-have skills: Google BigQuery

Roles & Responsibilities:
1. Dataproc, Pub/Sub, Dataflow, Kafka streaming, Looker, SQL (No FLEX).
2. Proven track record of delivering data integration and data warehousing solutions.
3. Strong SQL, hands-on (No FLEX).
4. Experience with data integration and migration projects.
5. Proficient in the BigQuery SQL language (No FLEX).
6. Understanding of cloud-native services: bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes.
7. Experience with cloud solutions, mainly data platform services; GCP certifications.
8. Experience in shell scripting, Python (No FLEX), Oracle, and SQL.

Professional & Technical Skills:
1. Expert in Python (No FLEX). Strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures such as dictionaries, arrays, lists, and trees; experience with pytest and code coverage preferred (see the sketch after this listing).
2. Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes (No FLEX).
3. Proficiency with tools to automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence.
4. Open mindset and the ability to quickly adopt new technologies.
5. Performance tuning of BigQuery SQL scripts.
6. GCP certification preferred.
7. Experience working in an agile environment.

Professional Attributes:
1. Good communication skills.
2. Ability to collaborate with different teams and suggest solutions.
3. Ability to work independently with little supervision or as part of a team.
4. Good analytical and problem-solving skills.
5. Good team-handling skills.

Educational Qualification: 15 years of full-time education.
Additional Information: The candidate should be ready for Shift B and work as an individual contributor.
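
As an editorial illustration of the Pandas/pytest skills named above (not part of the posting), a minimal transformation with a unit test; column names are hypothetical:

# Illustrative Pandas transformation with a pytest-style unit test.
# Column names are hypothetical.
import pandas as pd

def dedupe_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the most recent row per order_id."""
    return df.sort_values("updated_at").drop_duplicates("order_id", keep="last")

def test_dedupe_orders_keeps_latest():
    df = pd.DataFrame({
        "order_id": [1, 1, 2],
        "updated_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
    })
    out = dedupe_orders(df)
    assert len(out) == 2
    assert out.loc[out["order_id"] == 1, "updated_at"].item() == "2024-01-02"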

Posted Date not available

Apply


7.0 - 11.0 years

13 - 18 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must-have skills: Data & AI Solution Architecture
Good-to-have skills: NA
Minimum 15 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly across systems, contributing to the overall efficiency and effectiveness of the application. Your role will also require you to stay updated with the latest trends in data architecture and apply best practices to enhance the data management processes within the organization.

Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and discussions to gather requirements and feedback from stakeholders.
- Mentor junior professionals in data architecture best practices and methodologies.

Professional & Technical Skills:
- Must-have: Proficiency in Data & AI Solution Architecture.
- Strong understanding of data modeling techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Proficient in data integration tools and methodologies.
- Ability to design scalable and efficient data storage solutions.

Additional Information:
- The candidate should have a minimum of 15 years of experience in Data & AI Solution Architecture.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted Date not available

Apply

6.0 - 10.0 years

2 - 6 Lacs

Hyderabad

Work from Office

- Develop SAP BW-IP data flows in an S/4HANA system.
- Provide inputs on data modelling between BW 7.4 on HANA and native HANA using CompositeProviders, ADSOs, and Open ODS views.
- Excellent verbal and written communication skills in English are required.
- Self-motivated, capable of managing own workload with minimum supervision.
- Create complex, enterprise-transforming applications within a dynamic, progressive, technically diverse environment.

Location: Pan India

Posted Date not available

Apply

10.0 - 12.0 years

6 - 9 Lacs

Chennai, Bengaluru

Work from Office

Location: Bangalore, Chennai

- Extract data from source systems using Data Factory pipelines.
- Massage and cleanse the data.
- Transform data based on business rules.
- Expose the data for reporting needs and exchange data with downstream applications.
- Standardize the various integration flows (e.g., decom ALDML Init integration, simplify ALDML Delta integration).

Posted Date not available

Apply

3.0 - 8.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Hiring for GCP Cloud Engineer / GCP Data Engineer roles.
We are looking for candidates with 3+ years of experience.
Skills: Airflow, GCP Cloud, Hadoop, SQL, ETL, Python, BigQuery
We are looking for immediate joiners (15 - 30 days).

Posted Date not available

Apply

5.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Cloud Data Engineer

Req number: R5934
Employment type: Full time
Worksite flexibility: Remote

Who we are: CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right - whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary: We are seeking a motivated Cloud Data Engineer with experience in building data products using Databricks and related technologies. This is a full-time, remote position.

What You'll Do:
- Analyze and understand existing data warehouse implementations to support migration and consolidation efforts.
- Reverse-engineer legacy stored procedures (PL/SQL, SQL) and translate business logic into scalable Spark SQL code within Databricks notebooks (see the sketch after this listing).
- Design and develop data lake solutions on AWS using S3 and Delta Lake architecture, leveraging Databricks for processing and transformation.
- Build and maintain robust data pipelines using ETL tools, with ingestion into S3 and processing in Databricks.
- Collaborate with data architects to implement ingestion and transformation frameworks aligned with enterprise standards.
- Evaluate and optimize data models (star, snowflake, flattened) for performance and scalability in the new platform.
- Document ETL processes, data flows, and transformation logic to ensure transparency and maintainability.
- Perform foundational data administration tasks, including job scheduling, error troubleshooting, performance tuning, and backup coordination.
- Work closely with cross-functional teams to ensure a smooth transition and integration of data sources into the unified platform.
- Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming.
- Triage, debug, and fix technical issues related to data lakes.
- Maintain and manage code repositories like Git.

What You'll Need:
- 5+ years of experience working with Databricks, including Spark SQL and Delta Lake implementations.
- 3+ years of experience in designing and implementing data lake architectures on Databricks.
- Strong SQL and PL/SQL skills, with the ability to interpret and refactor legacy stored procedures.
- Hands-on experience with data modeling and warehouse design principles.
- Proficiency in at least one programming language (Python, Scala, Java).
- Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Experience working in Agile environments and contributing to iterative development cycles.
- Databricks cloud certification is a big plus.
- Exposure to enterprise data governance and metadata management practices.

Physical Demands: This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.

Reasonable accommodation statement: If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
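
As an editorial illustration of refactoring legacy stored-procedure logic into Spark SQL on Delta Lake, as described above (not part of the posting), a minimal sketch; table and view names are hypothetical, and `spark` is preconfigured on Databricks:

# Illustrative refactor target: a PL/SQL-style upsert expressed as a Delta Lake MERGE.
# Table and view names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    MERGE INTO silver.customers AS target
    USING staging_customer_updates AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")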

Posted Date not available

Apply

6.0 - 11.0 years

17 - 30 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi/NCR

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of GCP Sr Data Engineer.

We are seeking a highly experienced and visionary Senior Google Cloud Data Engineer to spearhead the design, development, and optimization of our data infrastructure and pipelines on the Google Cloud Platform (GCP). With over 10 years of hands-on experience in data engineering, you will be instrumental in building scalable, reliable, and performant data solutions that power our advanced analytics, machine learning initiatives, and real-time reporting. You will provide technical leadership, mentor team members, and champion best practices for data engineering within a GCP environment.

Responsibilities:
- Architect, design, and implement end-to-end data pipelines on GCP using services like Dataflow, Cloud Composer (Airflow), Pub/Sub, and BigQuery.
- Build and optimize data warehousing solutions leveraging BigQuery's capabilities for large-scale data analysis.
- Design and implement data lakes on Google Cloud Storage, ensuring efficient data organization and accessibility (see the sketch after this listing).
- Develop and maintain scalable ETL/ELT processes to ingest, transform, and load data from diverse sources into GCP.
- Implement robust data quality checks, monitoring, and alerting mechanisms within the GCP data ecosystem.
- Collaborate closely with data scientists, analysts, and business stakeholders to understand their data requirements and deliver high-impact solutions on GCP.
- Lead the evaluation and adoption of new GCP data engineering services and technologies.
- Implement and enforce data governance policies, security best practices, and compliance requirements within the Google Cloud environment.
- Provide technical guidance and mentorship to other data engineers on the team, promoting knowledge sharing and skill development within the GCP context.
- Troubleshoot and resolve complex data-related issues within the GCP infrastructure.
- Contribute to the development of data engineering standards, best practices, and comprehensive documentation specific to GCP.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of progressive experience in data engineering roles, with a strong focus on cloud technologies.
- Deep and demonstrable expertise with the Google Cloud Platform (GCP) and its core data engineering services (e.g., BigQuery, Dataflow, Cloud Composer, Cloud Storage, Pub/Sub, Cloud Functions).
- Extensive experience designing, building, and managing large-scale data pipelines and ETL/ELT workflows specifically on GCP.
- Strong proficiency in SQL and at least one programming language relevant to data engineering on GCP (e.g., Python).
- Comprehensive understanding of data warehousing concepts, data modeling techniques optimized for BigQuery, and NoSQL database options on GCP (e.g., Cloud Bigtable, Firestore).
- Solid grasp of data governance principles, data security best practices within GCP (IAM, KMS), and compliance frameworks.
- Excellent problem-solving, analytical, and debugging skills within a cloud environment.
- Exceptional communication, collaboration, and presentation skills, with the ability to articulate technical concepts clearly to various audiences.

Preferred Qualifications / Skills:
- Google Cloud certifications relevant to data engineering (e.g., Professional Data Engineer).
- Experience with infrastructure-as-code tools for GCP (e.g., Terraform, Deployment Manager).
- Familiarity with data streaming technologies on GCP (e.g., Dataflow, Pub/Sub).
- Experience with machine learning workflows and MLOps on GCP (e.g., Vertex AI).
- Knowledge of containerization technologies (Docker, Kubernetes) and their application within GCP data pipelines (e.g., Dataflow FlexRS).
- Experience with data visualization tools that integrate well with GCP (e.g., Looker).
- Familiarity with data cataloging and data lineage tools on GCP (e.g., Data Catalog).
- Proven experience in leading technical teams and mentoring junior engineers in a GCP environment.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
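
As an editorial illustration of the lake-to-warehouse ingestion described above (not part of the posting), a minimal batch load from Cloud Storage into BigQuery; bucket, project, dataset, and table names are hypothetical:

# Illustrative batch ingestion: Parquet files in GCS loaded into a BigQuery table.
# Bucket, project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/landing/events/*.parquet",
    "my_project.lake.events",
    job_config=job_config,
)
load_job.result()  # waits for completion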

Posted Date not available

Apply

6.0 - 8.0 years

20 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using BigQuery, Cloud Storage, and Pub/Sub (see the sketch after this listing).
- Develop real-time simulation models to analyze the behavior of complex systems under various scenarios.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
- Troubleshoot issues related to data flow, storage, and processing in a distributed computing environment.
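
As an editorial illustration of the Pub/Sub side of such pipelines (not part of the posting), a minimal publisher; project and topic names are hypothetical:

# Illustrative Pub/Sub publisher feeding a real-time pipeline.
# Project and topic names are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "sensor-events")

event = {"sensor_id": 42, "reading": 17.3}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message id:", future.result())  # blocks until the broker acks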

Posted Date not available

Apply

2.0 - 4.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Contract duration: 1 year

We are looking for a passionate and skilled Data Engineer with a minimum of 2 years of experience to join our team. As a Data Engineer, you will play a key role in building and maintaining scalable data pipelines and architectures on Google Cloud Platform (GCP). You will work with cross-functional teams to ensure the efficient management of data systems, ensuring the quality and availability of data for analytics and business intelligence.

Key Responsibilities:
- Design, implement, and optimize data pipelines using GCP tools such as BigQuery, Cloud Storage, Dataflow, and Pub/Sub, as well as third-party tools.
- Develop and maintain ETL processes to transform and load data from multiple sources into cloud-based data storage solutions.
- Work with solution architects and analysts to deliver high-quality datasets that support various business intelligence and machine learning models.
- Collaborate with the team to manage and scale data infrastructure on Google Cloud Platform.
- Ensure data integrity and quality through automated validation checks and monitoring (see the sketch after this listing).
- Troubleshoot and optimize data pipelines and storage solutions for performance and reliability.
- Automate routine data engineering tasks and workflows for operational efficiency.
- Implement best practices for data security and compliance on cloud platforms.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- At least 2 years of experience working as a Data Engineer or in a similar role.
- Hands-on experience with data modeling, ETL processes, and data pipeline development.
- Hands-on experience with Google Cloud Platform (GCP) services like BigQuery, Cloud Storage, Dataflow, Pub/Sub, and Cloud Functions.
- Strong proficiency in SQL and experience with NoSQL databases.
- Programming experience in languages like Python, Java, or Scala.
- Knowledge of cloud computing concepts and cloud services architecture.
- Experience with containerization tools like Docker is a plus.
- Good understanding of data security practices in the cloud.

Preferred Skills:
- Familiarity with Apache Spark, Apache Beam, or similar data processing frameworks.
- Experience with CI/CD processes and cloud automation tools like Terraform.
- GCP certifications (e.g., Google Cloud Professional Data Engineer) are a plus.
- Experience with data warehousing (e.g., BigQuery or Redshift).
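
As an editorial illustration of the automated data-quality validation mentioned above (not part of the posting), a minimal null-check against BigQuery; table and column names are hypothetical:

# Illustrative automated data-quality check: fail loudly if a key column has NULLs.
# Table and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

def assert_no_nulls(table: str, column: str) -> None:
    sql = f"SELECT COUNT(*) AS bad_rows FROM `{table}` WHERE {column} IS NULL"
    bad_rows = list(client.query(sql).result())[0].bad_rows
    if bad_rows:
        raise ValueError(f"{bad_rows} NULL values found in {table}.{column}")

assert_no_nulls("my_project.sales.orders", "order_id")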

Posted Date not available

Apply

8.0 - 12.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Responsibilities:
- Be responsible for the development of the conceptual, logical, and physical data models, and the implementation of RDBMS, operational data store (ODS), data marts, and data lakes on target platforms (a dimensional-model sketch follows this listing).
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational & dimensional - MUST; NoSQL - optional) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Must have a payments background.

Skills:
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts required.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others) required.
- Experience in team management, communication, and presentation.
- Experience with Erwin, Visio, or any other relevant tool.
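
As an editorial illustration of the dimensional modeling this role centers on (not part of the posting), a minimal star schema for a payments domain; SQLite is used only so the sketch is self-contained and runnable, and all names are hypothetical:

# Illustrative star schema for a payments domain: two dimensions and one fact table.
# SQLite keeps the sketch self-contained; all names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT NOT NULL
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,   -- e.g. 20240131
        iso_date TEXT NOT NULL
    );
    CREATE TABLE fact_payment (
        payment_key INTEGER PRIMARY KEY,
        customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
        date_key INTEGER NOT NULL REFERENCES dim_date(date_key),
        amount_cents INTEGER NOT NULL
    );
""")
con.close()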

Posted Date not available

Apply