Jobs
Interviews

905 Data Flow Jobs - Page 6

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Path/Level: R3

Note: Roles are posted at the lowest level of a band; however, employees should search across all levels of the band to identify all opportunities. Employees hired on banded positions (ex: P1-P3, R1-R2, B1-B3, etc.) transfer at their current level, despite the level indicated on the job posting. For example, if a P2 candidate is selected for a P1-P3 banded position, the candidate will remain a P2 in the new role.

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our 39,000 employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.

The Clinical Study Build Programmer - eDC is responsible for programming and testing clinical trial data collection databases, including the mapping, testing and normalization of data into a clinical data warehouse. This requires an in-depth understanding of data technology, data flow, data standards, database programming, normalization and testing. The Clinical Study Build Programmer will collaborate with Data and Analytics colleagues such as the Clinical Data Associate, Clinical Data Manager and other key stakeholders to deliver standardized data collection methods and innovative validation solutions for use in global clinical trials.

Responsibilities: This job description is intended to provide a general overview of the job requirements at the time it was prepared. The job requirements of any position may change over time and may include additional responsibilities not specifically described in the job description. Consult with your supervision regarding your actual job responsibilities and any related duties that may be required for the position.

Portfolio Delivery: Program and test data collection systems and associated data repository mappings for a trial or set of trials within a program using data standards library components. Ensure data collection systems and data warehouse mappings are delivered accurately, efficiently and in alignment with study objectives. Provide insights into study level deliverables (i.e. Data Management Plan, Project Plan, database, and observed datasets). Support submission, inspection and regulatory response activities. Lead cross Business Unit/Therapeutic Area projects or programs with high complexity. Develop and test new ideas and/or apply innovative solutions that create value to the portfolio.

Project Management: Increase speed, accuracy, and consistency in the development of systems solutions. Enable metrics reporting of study development timelines and pre- and postproduction changes to the database. Partner with Data and Analytics colleagues such as the Clinical Data Associate and Clinical Data Management Associate to deliver the study database per business need and before first patient visit. Comply with and influence data standard decisions and strategies for a study and/or program. Utilize therapeutic knowledge and possess a deep understanding of the technology used to collect clinical trial data. Effectively apply knowledge of applicable internal, external and regulatory requirements/expectations (MQA, CSQ, MHRA, FDA, ICH, GCP, PhRMA, Privacy knowledge, etc.) to study build deliverables. Integrate cross-functional and/or external information and apply technical knowledge to data-driven decision making.

Enterprise Leadership: Continually seek and implement means of improving processes to reduce study build cycle time, decrease work effort and enable the normalization of various sources of data into a common data repository in a way that allows for improved integration, consumption and downstream analysis. Represent Data and Analytics processes in cross-functional initiatives. Actively participate in shared learning across the Data and Analytics organization. Work to increase re-usability of forms and edits by improving the initial design. Work to reduce postproduction changes via the change control process. Anticipate and resolve key technical, operational or business problems that impact the Data and Analytics organization. Interact with regulators, business partners and outside stakeholders on business issues. Think with end to end in mind, consistently managing risk to minimize impact on delivery.

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status. #WeAreLilly

Posted 1 week ago

Apply

3.0 - 6.0 years

12 - 15 Lacs

gurugram

Work from Office

Job Purpose: This role is central to managing the airline's operational data, building scalable data pipelines, enabling organization-wide reporting, and driving automation using the Microsoft ecosystem. Key Accountabilities / Functional Activities: Lead a team of experienced data engineers and analysts. Design and maintain end-to-end data pipelines using SQL, Microsoft Fabric, and Python to support operations and planning analytics. Develop and manage centralized databases and reusable data models for seamless integration across Power BI, Power Apps, and other analytics platforms. Build and optimize Power BI dashboards and data models, implementing Row-Level Security (RLS), performance tuning, and enterprise-wide report distribution. Collaborate with stakeholders to gather reporting requirements and deliver actionable insights via automated dashboards and self-service tools. Integrate Power Automate and Power Apps for workflow automation and operational process digitization. Maintain data accuracy, schedule refreshes, and monitor Power BI services and dataflows for uninterrupted delivery. Support adoption of the Microsoft Fabric ecosystem including Dataflows, Lakehouses, Notebooks, and Pipelines for modern data architecture. Any other additional responsibility could be assigned to the role holder from time to time as a standalone project or regular work. The same would be suitably represented in the primary responsibilities and agreed between the incumbent, reporting officer and HR.
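By way of illustration only, below is a minimal Python sketch of one step of the kind of pipeline this role describes: pull operational records with SQL, clean them, and land a curated table for Power BI to refresh from. The connection string, schema, and column names are hypothetical placeholders, and a Microsoft Fabric implementation would typically use Fabric pipelines or notebooks rather than this generic SQLAlchemy approach.

```python
# Minimal sketch of one pipeline step: extract operational data with SQL,
# clean it in Python, and load a reporting table that Power BI reads.
# Connection string, table names, and columns are illustrative placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:password@ops-sql-server/opsdb?driver=ODBC+Driver+17+for+SQL+Server"
)

# Extract: raw flight-operations records (hypothetical schema)
raw = pd.read_sql(
    "SELECT flight_id, dep_station, arr_station, dep_delay_min, flight_date FROM raw_ops.flights",
    engine,
)

# Transform: basic cleansing and a derived on-time flag
raw = raw.dropna(subset=["flight_id", "flight_date"])
raw["on_time"] = raw["dep_delay_min"].fillna(0) <= 15

# Load: write a curated table for the Power BI semantic model to refresh from
raw.to_sql("flights_curated", engine, schema="reporting", if_exists="replace", index=False)
```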

Posted 1 week ago

Apply

4.0 - 7.0 years

0 - 2 Lacs

gurugram

Work from Office

Consultant - GCP Snowflake: Elevate Your Impact Through Innovation and Learning

Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning and skill development and work-life balance.

About Data Analytics: Data Analytics is one of the highest-growth practices within Evalueserve, providing you rewarding career opportunities. Established in 2014, the global DA team extends beyond 1000+ (and growing) data science professionals across data engineering, business intelligence, digital marketing, advanced analytics, technology, and product engineering. Our more tenured teammates, some of whom have been with Evalueserve since it started more than 20 years ago, have enjoyed leadership opportunities in different regions of the world across our seven business lines.

What you will be doing at Evalueserve: Data Pipeline Development: Design and implement scalable ETL (Extract, Transform, Load) pipelines using tools like Cloud Dataflow, Apache Beam or Spark, and BigQuery. Data Integration: Integrate various data sources into unified data warehouses or lakes, ensuring seamless data flow. Data Transformation: Transform raw data into analyzable formats using tools like dbt (data build tool) and Dataflow. Performance Optimization: Continuously monitor and optimize data pipelines for speed, scalability, and cost-efficiency. Data Governance: Implement data quality standards, validation checks, and anomaly detection mechanisms. Collaboration: Work closely with data scientists, analysts, and business stakeholders to align data solutions with organizational goals. Documentation: Maintain detailed documentation of workflows and adhere to coding standards.

What we're looking for: Proficiency in Python/PySpark and SQL for data processing and querying. Expertise in GCP services like BigQuery, Cloud Storage, Pub/Sub, Cloud Composer and Dataflow. Good knowledge of Snowflake, with at least one completed working project in Snowflake, not just a Snowflake migration. Familiarity with data warehouse and lakehouse principles and distributed data architectures. Strong problem-solving skills and the ability to handle complex projects under tight deadlines. Knowledge of data security and compliance best practices. Certification: GCP Professional Data Engineer.

Follow us on https://www.linkedin.com/company/evalueserve/. Click here to learn more about what our leaders are saying about achievements such as our AI-powered supply chain optimization solution built on Google Cloud, how Evalueserve is now leveraging NVIDIA NIM to enhance our AI and digital transformation solutions and accelerate AI capabilities, and how Evalueserve has climbed 16 places on the 50 Best Firms for Data Scientists in 2024! Want to learn more about our culture and what it's like to work with us? Write to us at: careers@evalueserve.com

Disclaimer: The following job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances. Please Note: We appreciate the accuracy and authenticity of the information you provide, as it plays a key role in your candidacy. As part of the Background Verification Process, we verify your employment, education, and personal details. Please ensure all information is factual and submitted on time. For any assistance, your TA SPOC is available to support you.
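For illustration only, here is a minimal Apache Beam sketch (Python) of the kind of Dataflow-to-BigQuery ETL pipeline described in this posting. The bucket, dataset, table, and field names are placeholder assumptions, not the employer's actual schema; the pipeline would run on Dataflow by passing the usual --runner=DataflowRunner, project, region, and temp location options.

```python
# Read raw CSV from Cloud Storage, transform rows, and load them into BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line):
    # Assumed CSV layout: order_id,amount,country
    order_id, amount, country = line.split(",")
    return {"order_id": order_id, "amount": float(amount), "country": country}

options = PipelineOptions()  # picks up --runner, --project, --temp_location, etc.

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read raw CSV" >> beam.io.ReadFromText("gs://example-bucket/raw/orders.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "Keep valid" >> beam.Filter(lambda r: r["amount"] >= 0)
        | "Write to BigQuery" >> beam.io.WriteToBigQuery(
            "example-project:analytics.orders",
            schema="order_id:STRING,amount:FLOAT,country:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```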

Posted 1 week ago

Apply

3.0 - 8.0 years

25 - 37 Lacs

hyderabad, chennai

Hybrid

Job Title: GCP Data Engineer (3+ Years Experience) Location: Hyderabad & Chennai (Hybrid Mode) Experience: 3+ Years Employment Type: Full-Time Joining Preference: Immediate Joiners (Max 20 Days Notice Period) About the Company: Random Trees is a leading Data & AI company, headquartered in Texas, USA, with development centers in Hyderabad and Chennai, India. With a strong team of 500+ professionals and growing 3X year-over-year, we specialize in AI product innovation and enterprise-scale strategic services. We are a proud strategic partner of many global clients across industries such as Pharma, Banking, Oil & Gas, Manufacturing, and Retail. Job Description: We are looking for a GCP Data Engineer with strong expertise in building scalable data pipelines and modern cloud technologies. The role involves designing, optimizing, and managing data solutions for business-critical applications. Key Responsibilities: Design, develop, and maintain data pipelines on GCP. Optimize SQL queries and ensure high-performance data processing. Work with BigQuery, Airflow, and Dataflow for pipeline orchestration. Implement ETL/ELT workflows and data models. Collaborate with cross-functional teams for scalable data solutions. Key Skills Required: Advanced SQL (query optimization, performance tuning) - 3+ years. Python programming for data engineering - 3+ years. Hands-on experience in Google Cloud Platform (GCP). Experience with BigQuery, Airflow, Dataflow. DBT knowledge/familiarity (hands-on is a plus). Strong background in ETL/ELT, data modeling, and pipeline orchestration. Good to Have: Terraform / Infrastructure as Code (IaC). Kubernetes, Data Fusion, Cloud Functions. Streaming data pipeline experience (Pub/Sub, Kafka). Contact: Interested candidates can share profiles at ngongadala@randomtrees.com
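As a hedged illustration of the SQL optimization and BigQuery work listed above (not the employer's actual schema), the sketch below builds a date-partitioned, clustered summary table from raw data using the google-cloud-bigquery Python client; the project, dataset, and column names are assumptions.

```python
# Create a partitioned, clustered destination table populated by a query job.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.QueryJobConfig(
    destination="example-project.analytics.trips_daily",
    write_disposition="WRITE_TRUNCATE",
    time_partitioning=bigquery.TimePartitioning(field="trip_date"),   # prune scans by date
    clustering_fields=["origin", "destination"],                      # cheapen common filters
)

sql = """
    SELECT trip_date, origin, destination,
           COUNT(*) AS trips,
           AVG(delay_min) AS avg_delay
    FROM `example-project.raw.trips`
    WHERE trip_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    GROUP BY trip_date, origin, destination
"""

client.query(sql, job_config=job_config).result()  # blocks until the job finishes
```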

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 27 Lacs

hyderabad, pune

Work from Office

Job Description: GCP Data Engineer. We are looking for a highly skilled and experienced GCP Data Engineer as a Lead Programmer Analyst. Candidates should have 4 to 7 years of data engineering experience. Should have strong development skills in GCP services like BigQuery, Dataproc, Dataflow, and Dataform. Should have good knowledge of SQL, Python, and PySpark. Good experience with Airflow, Git, and CI/CD pipelines. Good to have knowledge of Cloud SQL and Dataplex. Understanding of the SDLC and Agile methodologies. Communicate with the customer and produce the daily status report. Should have good oral and written communication. Should be a good team player. Should be proactive and adaptive. Strong communication skills. Analytical & problem-solving skills.
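For illustration, a minimal PySpark sketch of the sort of batch job a Dataproc-based pipeline in this role might run: read Parquet from Cloud Storage, aggregate, and write partitioned results back. All paths and column names are placeholder assumptions.

```python
# Daily sales rollup: read landing-zone Parquet, aggregate, write curated Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

sales = spark.read.parquet("gs://example-bucket/landing/sales/")

daily = (
    sales
    .withColumn("sale_date", F.to_date("sale_ts"))
    .groupBy("sale_date", "store_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
)

# Partition output by date so downstream readers can prune files
daily.write.mode("overwrite").partitionBy("sale_date").parquet(
    "gs://example-bucket/curated/daily_sales/"
)

spark.stop()
```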

Posted 1 week ago

Apply

5.0 - 8.0 years

25 - 40 Lacs

pune, gurugram, bengaluru

Hybrid

Salary: 25 to 40 LPA. Exp: 5 to 10 years. Location: Gurgaon/Pune/Bengaluru. Notice: Immediate to 30 days. Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units. As a Data Engineer, the candidate will work on assignments for one of our Utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes, driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations. Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs. Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions. Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes. Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments. Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics. Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud. Build scalable clusters for handling large datasets and complex computations in Databricks, optimizing performance and cost management. Must have: client engagement experience and collaboration with cross-functional teams; a data engineering background in Databricks; the ability to work effectively as an individual contributor or in collaborative team environments; effective communication and thought leadership with a proven record. Candidate Profile: Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research or related analytics areas. 3+ years of experience, which must be in data engineering. Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure. Prior experience in managing and delivering end-to-end projects. Outstanding written and verbal communication skills. Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges. Able to understand cross-cultural differences and can work with clients across the globe.
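As a minimal sketch (not the client's actual pipeline) of the Databricks/Azure work described above, the snippet below reads raw files from Azure storage, cleans them, and writes a Delta table that downstream BI tools can query. Storage paths, table names, and columns are assumptions.

```python
# Databricks-style cleansing step writing a Delta table.
from pyspark.sql import SparkSession, functions as F

# In Databricks notebooks `spark` is already provided; shown here for completeness.
spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/meter_readings/")
)

clean = (
    raw
    .dropDuplicates(["meter_id", "reading_ts"])
    .withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
    .filter(F.col("reading_kwh") >= 0)
)

# Delta format gives ACID writes and time travel, which simplifies reprocessing
clean.write.format("delta").mode("overwrite").saveAsTable("utilities.meter_readings_clean")
```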

Posted 1 week ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

pune, gurugram, bengaluru

Hybrid

Salary: 20 to 35 LPA. Exp: 5 to 8 years. Location: Gurgaon/Pune/Bengaluru. Notice: Immediate to 30 days. Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases. Troubleshoot issues related to data processing workflows and provide timely resolutions. Desired Candidate Profile: 5-9 years of experience in Data Engineering with expertise in GCP & BigQuery data engineering. Strong understanding of GCP Cloud Platform administration including Compute Engine (Dataproc), Kubernetes Engine (K8s), Cloud Storage, Cloud SQL, etc. Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

bengaluru

Work from Office

About The Role Project Role: Application Support Engineer Project Role Description: Act as software detectives, provide a dynamic service identifying and solving issues within multiple components of critical business systems. Must have skills: IBM Maximo Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education Summary: As an Application Support Engineer, you will act as software detectives, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot software problems, analyzing system performance, and ensuring that applications run smoothly to support business operations effectively. You will engage with users to understand their challenges and work towards implementing solutions that enhance system functionality and user experience. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the documentation of processes and procedures to enhance team knowledge. - Engage with stakeholders to gather requirements and provide feedback on system improvements. Professional & Technical Skills: - Must To Have Skills: Proficiency in IBM Maximo. - Good To Have Skills: Experience with application support and troubleshooting. - Strong understanding of system integration and data flow. - Familiarity with incident management and ticketing systems. - Ability to analyze and interpret system logs and performance metrics. Additional Information: - The candidate should have a minimum of 3 years of experience in IBM Maximo. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

hyderabad

Work from Office

About The Role Project Role: Application Support Engineer Project Role Description: Act as software detectives, provide a dynamic service identifying and solving issues within multiple components of critical business systems. Must have skills: IBM Maximo Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education Summary: As an Application Support Engineer, you will act as software detectives, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot problems, analyzing system performance, and ensuring that all applications run smoothly to support business operations effectively. You will engage with users to understand their challenges and work towards implementing solutions that enhance system functionality and user experience. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the documentation of processes and procedures to enhance team knowledge. - Engage with stakeholders to gather requirements and provide feedback on system performance. Professional & Technical Skills: - Must To Have Skills: Proficiency in IBM Maximo. - Good To Have Skills: Experience with application support and troubleshooting. - Strong understanding of system integration and data flow. - Familiarity with incident management and ticketing systems. - Ability to analyze logs and system metrics to identify issues. Additional Information: - The candidate should have a minimum of 3 years of experience in IBM Maximo. - This position is based at our Hyderabad office. - A 15 years full time education is required. Qualification: 15 years full time education

Posted 1 week ago

Apply

12.0 - 15.0 years

4 - 8 Lacs

gurugram

Work from Office

About The Role Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Performance Testing Strategy, Test Automation Strategy Good to have skills: NA. Minimum 12 year(s) of experience is required. Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. A typical day involves creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to optimize data workflows and enhance system performance, ensuring that data solutions meet the evolving needs of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Facilitate knowledge sharing and mentoring within the team to enhance overall capabilities. - Analyze and troubleshoot data-related issues to ensure seamless data flow and integrity. Professional & Technical Skills: - Must To Have Skills: Proficiency in Performance Testing Strategy, Test Automation Strategy. - Strong understanding of data architecture principles and best practices. - Experience with data integration tools and ETL processes. - Proficient in programming languages such as Python or Java for data manipulation. - Familiarity with cloud platforms and services for data storage and processing. Additional Information: - The candidate should have minimum 12 years of experience in Performance Testing Strategy. - This position is based at our Gurugram office. - A 15 years full time education is required. Qualification: 15 years full time education

Posted 1 week ago

Apply

7.0 - 11.0 years

13 - 18 Lacs

bengaluru

Work from Office

About The Role Project Role: Data Architect Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration. Must have skills: Microsoft Azure Databricks Good to have skills: AWS Architecture, Snowflake Data Warehouse, PySpark. Minimum 15 year(s) of experience is required. Educational Qualification: 15 years full time education Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, driving innovation and efficiency in data handling and utilization. Roles & Responsibilities: - Expected to be a Subject Matter Expert with deep knowledge and experience. - Should have influencing and advisory skills. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Facilitate workshops and discussions to gather requirements and feedback from stakeholders. - Develop and maintain comprehensive documentation of data architecture and design decisions. Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Azure Databricks. - Good To Have Skills: Experience with PySpark, Snowflake Data Warehouse, AWS Architecture. - Strong understanding of data modeling techniques and best practices. - Experience with data integration tools and methodologies. - Familiarity with cloud-based data storage solutions and architectures. Additional Information: - The candidate should have minimum 15 years of experience in Microsoft Azure Databricks. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education

Posted 1 week ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

bengaluru

Work from Office

Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and work directly with the customer. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Posted 1 week ago

Apply

6.0 - 9.0 years

18 - 33 Lacs

gurugram

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/dae4c183-25fe-4d3a-9cb9-fa58048769df Job Summary: The Principal Data Engineer - GCP role involves designing and implementing end-to-end data and analytics solutions on Google Cloud Platform. The position requires expertise in GCP services like BigQuery, Dataflow, and Cloud Storage, along with strong data modeling skills. The role includes mentoring teams, participating in pre-sales activities, and collaborating with leadership on cloud strategy. Key Responsibilities Design and drive end-to-end data and analytics solution architecture from concept to delivery on Google Cloud Platform (GCP) Design, develop, and support conceptual, logical, and physical data models for advanced analytics and ML-driven solutions Ensure integration of industry-accepted data architecture principles, standards, guidelines, and concepts Drive the design, sizing, provisioning, and setup of GCP environments and related services Provide mentoring and guidance on GCP-based data architecture to engineering, analytics, and business teams Review solution requirements and architecture for appropriate technology selection and integration Advise on emerging GCP trends and services, and recommend adoption strategies Participate in pre-sales engagements, PoCs, and contribute to thought leadership content Collaborate with founders and leadership team on cloud and data strategy Skills & Requirements Must Have Skills Google Cloud Platform (BigQuery, Dataflow, Cloud Storage, Looker) Data modeling techniques (Relational or Star or Snowflake or DataVault) Data warehousing and analytics services Data orchestration tools (Dataflow or Pub/Sub or Dataproc or Cloud Composer) SQL and Python programming Cloud security and infrastructure (IAM, VPCs, VPNs or firewall rules)

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 37 Lacs

hyderabad, chennai

Hybrid

Job Title: GCP Data Engineer Location: Hyderabad & Chennai (Hybrid mode of working) Experience: 5+ Years Employment Type: Full Time Key Responsibilities: Design, build, and optimize scalable data pipelines using GCP services (BigQuery, Dataflow, Airflow, Pub/Sub, Cloud Storage). Develop and maintain ETL/ELT workflows leveraging Airflow and DBT for data modeling and transformations. Write and optimize complex SQL queries in BigQuery. Implement data integration solutions to ingest structured, semi-structured, and unstructured data from multiple sources. Work closely with data analysts, data scientists, and business stakeholders to ensure reliable and timely data delivery. Apply Python programming for building automation, orchestration, and custom data engineering solutions. Monitor, troubleshoot, and enhance existing pipelines with a focus on scalability, performance, and cost optimization. Ensure adherence to data governance, security, and compliance standards. Required Skills & Experience: 5+ years of hands-on experience as a Data Engineer with a focus on the GCP ecosystem. Strong experience in advanced SQL (complex joins, window functions, query optimization, partitioning, clustering). Proficiency in Python programming for data pipelines, automation, and integration. Strong working knowledge of BigQuery, Dataflow, Airflow, and DBT. Experience with ETL/ELT pipeline design, data modeling (snowflake/star schema), and performance tuning. Good understanding of CI/CD, Git, and DevOps practices in data engineering. Ability to work in hybrid mode across Hyderabad/Chennai. Good to Have: Experience with Terraform or IaC for GCP resources. Knowledge of Kubernetes, Cloud Functions, or Data Fusion. Exposure to streaming data pipelines using Pub/Sub, Kafka, or similar.
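For illustration of the "advanced SQL" window-function work this posting asks for, here is a hedged sketch run through the BigQuery Python client: keep only the latest record per key using ROW_NUMBER. The dataset and column names are assumptions, not the employer's schema.

```python
# Deduplicate to the latest event per customer with a window function.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT * EXCEPT(rn)
    FROM (
      SELECT
        customer_id,
        event_ts,
        status,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY event_ts DESC) AS rn
      FROM `example-project.raw.customer_events`
    )
    WHERE rn = 1
"""

for row in client.query(sql).result():
    print(row.customer_id, row.status)
```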

Posted 1 week ago

Apply

5.0 - 10.0 years

17 - 32 Lacs

gurugram

Hybrid

The GCP Data Engineer will be responsible for designing, developing, and maintaining data pipelines and data infrastructure on Google Cloud Platform (GCP). This role requires expertise in data engineering best practices, cloud architecture, and big data technologies. The ideal candidate will work closely with data scientists, analysts, and other stakeholders to ensure the availability, reliability, and efficiency of data systems, enabling data-driven decision-making across the organization. Key Responsibilities Data Pipeline Development Design, develop, and maintain scalable and efficient ETL/ELT pipelines on GCP. Implement data ingestion processes from various data sources (e.g., APIs, databases, file systems). Ensure data quality, integrity, and reliability throughout the data lifecycle. Cloud Architecture Design and implement data architecture on GCP using services such as BigQuery, Dataflow, Pub/Sub, Cloud Storage, and Cloud Composer. Optimize and manage data storage and retrieval processes to ensure high performance and cost efficiency. Ensure data infrastructure is secure, scalable, and aligned with industry best practices. Big Data Processing Develop and manage large-scale data processing workflows using Apache Beam, Dataflow, and other big data technologies. Implement real-time data streaming solutions using Pub/Sub and Dataflow. Optimize data processing jobs for performance and cost. Collaboration and Communication Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs. Communicate technical concepts effectively to both technical and non-technical stakeholders. Participate in agile development processes, including sprint planning, stand-ups, and retrospectives. Data Management and Governance Implement and maintain data governance practices, including data cataloging, metadata management, and data lineage. Ensure compliance with data security and privacy regulations. Monitor and manage data quality and consistency. Troubleshooting and Support Debug and resolve technical issues related to data pipelines and infrastructure. Provide support and maintenance for existing data solutions. Continuously monitor and improve data pipeline performance and reliability. Qualifications Education: Bachelors degree in Computer Science, Information Technology, Data Science, or a related field. Experience: Minimum of 4-12 years of experience in data engineering. Proven experience with GCP data services and tools. Technical Skills: Proficiency in GCP services (e.g., BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Composer). Strong programming skills in languages such as Python Familiarity with big data technologies and frameworks (e.g., Apache Beam, Hadoop, Spark). Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus. Key Competencies Strong problem-solving skills and attention to detail. Excellent communication and teamwork skills. Ability to work in a fast-paced, dynamic environment. Self-motivated and able to work independently as well as part of a team. Continuous learning mindset and a passion for staying up-to-date with emerging technologies.
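As a small hedged illustration of the real-time ingestion pattern described above, the snippet below publishes a JSON event to a Pub/Sub topic that a Dataflow streaming job could then consume. The project and topic names are placeholders.

```python
# Publish an event to Pub/Sub for downstream streaming consumers.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "order-events")

event = {"order_id": "A-1001", "amount": 42.50, "status": "CREATED"}

# Pub/Sub messages are bytes; the future resolves to the server-assigned message ID
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("Published message", future.result())
```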

Posted 1 week ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

chennai

Hybrid

Technical competency in the following: - Experience in SQL and PL/SQL development - Oracle Database - Good understanding of ER diagrams and data flows - Good to have experience in DB design & modelling - Hands-on experience with performance tuning tools and debugging. Ability to perform technical analysis, design and identify impacts (functional/technical). Prior experience in high-volume / mission-critical systems is a plus. Contributing Responsibilities: Work in tandem with our offshore and on-site technical teams to coordinate the database initiatives. Perform detailed technical analysis with impacts (technical/functional) and prepare the technical specification document. Mentor and carry out database peer code reviews of the development team. Bug fixing & performance optimization. Keep the development team up-to-date about best practices and on-site feedback. Challenge the time-response performance and the maintainability of the treatments/queries. Maintain data standards and security measures; anonymize production data and import it into development and testing environments. Performance tuning and monitoring of all databases, proactively proposing solutions in case of issues. Develop and unit test the code: develop the code to satisfy the business requirements; unit test the code and fix all defects arising out of the unit testing; properly check in the code to avoid issues arising out of configuration management. Deploy and integration test the developed application: deploy the developed code into the IST environments and perform integration testing by working with the cross teams; fix all the defects arising out of IST and UAT testing. Keep the development team up-to-date about best practices and feedback.

Posted 1 week ago

Apply

5.0 - 10.0 years

1 - 1 Lacs

chennai

Hybrid

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities and the planet. Job Title: Data Engineering Engineer II Location: Chennai Work Type: Hybrid Position Description: Employees in this job function are responsible for designing, building, and maintaining data solutions including data infrastructure, pipelines, etc. for collecting, storing, processing and analyzing large volumes of data efficiently and accurately. Key Responsibilities: Collaborate with business and technology stakeholders to understand current and future data requirements. Design, build and maintain reliable, efficient and scalable data infrastructure for data collection, storage, transformation, and analysis. Plan, design, build and maintain scalable data solutions including data pipelines, data models, and applications for efficient and reliable data workflow. Design, implement and maintain existing and future data platforms like data warehouses, data lakes, data lakehouses, etc. for structured and unstructured data. Design and develop analytical tools, algorithms, and programs to support data engineering activities like writing scripts and automating tasks. Ensure optimum performance and identify improvement opportunities. Skills Required: Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API. Skills Preferred: GenAI. Experience Required: Engineer 2 - 4+ years of Data Engineering work experience. Experience Preferred: Strong proficiency and hands-on experience in both Python (must-have) and Java (nice to have). Experience building and maintaining data pipelines (batch or streaming), preferably on cloud platforms (especially GCP). Experience with at least one major distributed data processing framework (e.g., DBT, Dataform, Apache Spark, Apache Flink, or similar). Experience with workflow orchestration tools (e.g., Apache Airflow, Qlik Replicate, etc.). Experience working with relational databases (SQL) and understanding of data modeling principles. Experience with cloud platforms (preferably GCP; AWS or Azure will also do) and relevant data services (e.g., BigQuery, GCS, Data Factory, Dataproc, Dataflow, S3, EMR, Glue, etc.). Experience with data warehousing concepts and platforms (BigQuery, Snowflake, Redshift, etc.). Understanding of concepts related to integrating or deploying machine learning models into production systems. Experience working in an Agile development environment and hands-on in any Agile work management tool (Rally, JIRA, etc.). Experience with version control systems, particularly Git. Solid problem-solving, debugging, and analytical skills. Excellent communication and collaboration skills. Experience working in a production support team (L2/L3) for operational support. Preferred Skills and Qualifications (Nice to Have): Familiarity with data quality and data governance concepts. Experience building and consuming APIs (REST, gRPC) related to data or model serving. Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. Education Required: Bachelor's Degree. Education Preferred: Bachelor's Degree. TekWissen Group is an equal opportunity employer supporting workforce diversity.

Posted 1 week ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

noida

Work from Office

Candidate will be working on the IVP Delivery team on projects directly with clients or with IVP business analysts to customize & integrate IVP products and custom solutions. Candidate would be responsible for configuring & testing workflows, configuring & testing data flows, orchestrating dataflows, configuring reports / dashboards / charts, loading & validating data in IVP products, and writing SQL stored procedures & custom C# classes for implementing IVP products. In addition to project delivery, the Implementation Associate is also responsible for coordinating product change requests, product feature validations and maintenance requests between the business and the development team. Skills required: • Minimum 2+ years' experience, primarily in Engineering and Product Development roles • Experience of the software development lifecycle and Agile development methodologies • Good judgment with the ability to make timely and sound decisions • Computer Science or IT graduate or equivalent from a reputed college • React.js, C# (.NET Core, .NET Framework), SQL • Good analytical skills and a quick learner • Excellent communication skills (both oral and written). Optional Skills: • Kubernetes / MicroK8s will be a plus • Knowledge of Azure DevOps, CI/CD process • PowerShell / UNIX shell scripting a plus • Documentation and testing as needed

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 25 Lacs

chennai

Work from Office

Position Description: Employees in this job function are responsible for designing, building, and maintaining data solutions including data infrastructure, pipelines, etc. for collecting, storing, processing and analyzing large volumes of data efficiently and accurately Key Responsibilities: 1) Collaborate with business and technology stakeholders to understand current and future data requirements 2) Design, build and maintain reliable, efficient and scalable data infrastructure for data collection, storage, transformation, and analysis 3) Plan, design, build and maintain scalable data solutions including data pipelines, data models, and applications for efficient and reliable data workflow 4) Design, implement and maintain existing and future data platforms like data warehouses, data lakes, data lakehouse etc. for structured and unstructured data 5) Design and develop analytical tools, algorithms, and programs to support data engineering activities like writing scripts and automating tasks 6) Ensure optimum performance and identify improvement opportunities

Posted 1 week ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

chennai

Work from Office

Position Description: Representing the Ford Credit (FC) Data Engineering Organization as a Google Cloud Platform (GCP) Data Engineer, specializing in migration and transformation, you will be a developer on a global team building a complex data warehouse in the Google Cloud Platform. This role involves designing, implementing, and optimizing data pipelines, ensuring data integrity during migration, and leveraging GCP services to enhance data transformation processes for scalability and efficiency. This role is for a GCP Data Engineer who can build cloud analytics platforms to meet expanding business requirements with speed and quality using lean Agile practices. You will work on analyzing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the GCP. You will be responsible for designing the transformation and modernization on GCP. Experience with large-scale solutions and operationalizing data warehouses, data lakes and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions with an appropriate combination of GCP and 3rd party technologies for deploying on the Google Cloud Platform. Experience Required: 5+ years of experience in data engineering, with a focus on data warehousing and ETL development (including data modelling, ETL processes, and data warehousing principles). • 5+ years of SQL development experience • 3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale. • Strong understanding and experience of key GCP services, especially those related to data processing (batch/real time) leveraging Terraform, BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Build, Airflow, and Pub/Sub, alongside storage including Cloud Storage, Bigtable, Cloud Spanner • Experience developing with microservice architectures and container orchestration frameworks. • Designing pipelines and architectures for data processing • Excellent problem-solving skills, with the ability to design and optimize complex data pipelines. • Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large global and diverse team • Strong evidence of self-motivation to continuously develop own engineering skills and those of the team. • Proven record of working autonomously in areas of high ambiguity, without day-to-day supervisory support • Evidence of a proactive mindset to problem solving and willingness to take the initiative. • Strong prioritization, co-ordination, organizational and communication skills, and a proven ability to balance workload and competing demands to meet deadlines
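For illustration only (not Ford's actual pipeline), the sketch below shows the batch orchestration pattern this role describes as an Airflow DAG: load a GCS file into BigQuery, then run a transformation query. All project, bucket, dataset, and table IDs are placeholder assumptions.

```python
# Daily load-then-transform DAG using the Google provider operators.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_credit_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",
    catchup=False,
) as dag:

    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_payments",
        bucket="example-bucket",
        source_objects=["landing/payments/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.staging.payments",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    transform = BigQueryInsertJobOperator(
        task_id="build_payments_daily",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example-project.marts.payments_daily` AS
                    SELECT DATE(payment_ts) AS payment_date, SUM(amount) AS total_amount
                    FROM `example-project.staging.payments`
                    GROUP BY payment_date
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```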

Posted 1 week ago

Apply

4.0 - 9.0 years

14 - 18 Lacs

chennai

Work from Office

What's the role: The Analyst MICOE role requires an experienced designer of data visualizations and complex reporting with a strong understanding of financial metrics and how they influence business performance. This designer will work with an integrated team of developers, business analysts, and interface managers in creating complex data visualizations using Microsoft Power BI and other tools. Once a solution is live, run and maintain it, and take care of year roll-over and minor changes/enhancements to the solution. What you will be doing: Swift understanding of the business model, expectations from business, linking it to the business strategy and the way KPIs are measured. Design and create data visualizations in Microsoft Power BI. Expertise in creating info links to ERPs or source databases to fetch data into visualization tools. Facilitate design review sessions with senior analysts and key stakeholders to refine and elaborate the data visualizations. Work on validation and testing to ensure it meets the requirements. Create the design specification, deployment plans, and other technical documents for respective design activities. Create user reference documents and training videos on how to use them. Support troubleshooting of problems, providing workarounds, etc. Estimate the magnitude and time requirements to complete all tasks and provide accurate and timely updates to the team on progress. Ensure on-time, high-quality deliverables and meet project milestones and deadlines (Project Plan On A Page) within budget with minimal supervision. Assist peers in the business application, development technologies, etc. Participate in peer review of work products such as code, designs, and test plans produced by other team members. Ensure IRM compliance of tools, maintain evidence for the user access management and support any system audit. Support the team in creation, run & maintain, tool enhancements and deployment of the latest technologies and functionalities. Work in a global and cross-cultural environment, displaying strong personal effectiveness. What we need from you: Minimum of 4 years of data visualization experience, though this can be relaxed for the right person with the required drive and appetite for action. Must have good experience in creating data models and complex visualizations of data based upon identified information needs, the business model, dataset, ERP system and stakeholder requirements. Should have global reporting system exposure (GSAP, GPMR, HANA & ECC). Experienced in data modelling and data extraction through SQL. Coding skills would be a plus (Python, VBA, R, etc.). MUST be a Power BI developer, experienced in working on complex reporting/data visualizations using Microsoft Power BI. Strong understanding of data management (SQL/Azure). Demonstrated experience developing end-to-end data flow structures, resulting in intuitive BI dashboards with high uptake. Candidates should have in-depth experience working with end users to refine identified business needs through in-depth design reviews and information sessions. Candidates should be results-driven, detail-oriented and work well within a dynamic and creative team. Work exposure to MS Access, MS Excel and PowerPoint is essential. Possess good written and oral communication skills as well as presentation skills. Ability to learn quickly and adapt to new environments.
Self-driven and motivated individual who takes pride in the development of high-quality, on-time solutions, and can run and maintain the existing tools and support the team for other deliverables.

Posted 1 week ago

Apply

8.0 - 12.0 years

4 - 7 Lacs

bengaluru, karnataka, india

On-site

Act as a Technical Lead with 8 to 12 years of experience in Cloud (GCP) Work within an agile, multidisciplinary devops team Possess Hadoop knowledge and NiFi/Kafka experience Be an expert in Python, Data Flow, Pubsub, and Big Query Have knowledge and experience with GCP components such as GCS, BigQuery, AirFlow, Cloud SQL, PubSub/Kafka, DataFlow and Google Cloud SDK Understand Terraform script and Shell script Have experience working with RDBMS Possess GCP Data Engineer certification (advantageous) Provide technical leadership for a team of engineers Lead and contribute to multiple pods Interface technically with a range of stakeholders Solve complex problems Migrate and re-engineer existing services from on-premises data centers to Cloud (GCP/AWS) Understand business requirements and provide real-time solutions Use project development tools like JIRA, Confluence and GIT Write python/shell scripts to automate operations and server management Build and maintain operations tools for monitoring, notifications, trending, and analysis Define, create, test, and execute operations procedures Document current and future configuration processes and policies
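As a small illustrative example of the "write python/shell scripts to automate operations and server management" and "build and maintain operations tools for monitoring" bullets above (a hedged sketch, not the employer's tooling), the script below checks disk usage on a host and logs warnings for follow-up notification. Thresholds and mount points are placeholder assumptions.

```python
# Simple disk-usage monitor: log a warning when any mount crosses a threshold.
import logging
import shutil

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

THRESHOLD_PCT = 80
MOUNT_POINTS = ["/", "/var", "/data"]

for mount in MOUNT_POINTS:
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        logging.warning("Mount point %s not found, skipping", mount)
        continue
    used_pct = usage.used / usage.total * 100
    if used_pct >= THRESHOLD_PCT:
        logging.warning("%s is %.1f%% full - raise a notification/ticket", mount, used_pct)
    else:
        logging.info("%s is %.1f%% full", mount, used_pct)
```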

Posted 1 week ago

Apply

8.0 - 12.0 years

4 - 7 Lacs

hyderabad, telangana, india

On-site

Act as a Technical Lead with 8 to 12 years of experience in Cloud (GCP) Work within an agile, multidisciplinary devops team Possess Hadoop knowledge and NiFi/Kafka experience Be an expert in Python, Data Flow, Pubsub, and Big Query Have knowledge and experience with GCP components such as GCS, BigQuery, AirFlow, Cloud SQL, PubSub/Kafka, DataFlow and Google Cloud SDK Understand Terraform script and Shell script Have experience working with RDBMS Possess GCP Data Engineer certification (advantageous) Provide technical leadership for a team of engineers Lead and contribute to multiple pods Interface technically with a range of stakeholders Solve complex problems Migrate and re-engineer existing services from on-premises data centers to Cloud (GCP/AWS) Understand business requirements and provide real-time solutions Use project development tools like JIRA, Confluence and GIT Write python/shell scripts to automate operations and server management Build and maintain operations tools for monitoring, notifications, trending, and analysis Define, create, test, and execute operations procedures Document current and future configuration processes and policies

Posted 1 week ago

Apply

8.0 - 12.0 years

4 - 7 Lacs

delhi, india

On-site

Act as a Technical Lead with 8 to 12 years of experience in Cloud (GCP) Work within an agile, multidisciplinary devops team Possess Hadoop knowledge and NiFi/Kafka experience Be an expert in Python, Data Flow, Pubsub, and Big Query Have knowledge and experience with GCP components such as GCS, BigQuery, AirFlow, Cloud SQL, PubSub/Kafka, DataFlow and Google Cloud SDK Understand Terraform script and Shell script Have experience working with RDBMS Possess GCP Data Engineer certification (advantageous) Provide technical leadership for a team of engineers Lead and contribute to multiple pods Interface technically with a range of stakeholders Solve complex problems Migrate and re-engineer existing services from on-premises data centers to Cloud (GCP/AWS) Understand business requirements and provide real-time solutions Use project development tools like JIRA, Confluence and GIT Write python/shell scripts to automate operations and server management Build and maintain operations tools for monitoring, notifications, trending, and analysis Define, create, test, and execute operations procedures Document current and future configuration processes and policies

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 7 Lacs

hyderabad, telangana, india

On-site

Possess mandatory Google Cloud BigQuery scripting skills, or Snowflake, AWS, or Cloud SQL knowledge. Should have knowledge of any one of these ETL tools: Talend, DataStage, Informatica ISS (Cloud version), Data Fusion, Dataflow, Dataproc. Must have SQL and PL/SQL scripting experience. Should possess mandatory Linux/Unix skills. Expertise in Python, Dataflow, Pub/Sub, BigQuery, CI/CD. Must have good experience/knowledge of GCP components like GCS, BigQuery, Airflow, Cloud SQL, Pub/Sub/Kafka, Dataflow and the Google Cloud SDK. Should have experience with any of the RDBMS. Holding GCP Data Engineer certifications would be an added advantage. Must have Hadoop knowledge and NiFi/Kafka experience. Strong scheduler knowledge - preferably Control-M; at least one of UC4 Automic, Airflow Composer, or Control-M is mandatory. Must be aware of the Agile life cycle. Ability to manage changes, incidents and problems.

Posted 1 week ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies