
1103 Dataflow Jobs - Page 4

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 years

15 - 20 Lacs

Madurai, Tamil Nadu

On-site

Dear Candidate, Greetings of the day!! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is to deliver strategic technology solutions that advance its business partners' goals. We are a leading software and mobile app development company, driven by the mantra "Client's Vision is our Mission", and we hold firmly to it. Our vision is to be a technologically advanced and well-regarded organization providing high-quality, cost-efficient services built on long-term client relationships. We operate in the USA (Chicago, Atlanta), the UAE (Dubai), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/

Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate

About TechMango: TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary: As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 5+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3 years of hands-on experience with GCP data services
- Proficient in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, Star/Snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience in leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect Certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year

Application Questions:
- Current CTC?
- Expected CTC?
- Notice period? (If you are serving your notice period, please mention your last working day.)

Experience:
- GCP Data Architecture: 3 years (Required)
- BigQuery: 3 years (Required)
- Cloud Composer (Airflow): 3 years (Required)

Location: Madurai, Tamil Nadu (Required)
Work Location: In person
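For context on the ingestion-pipeline work this posting describes, here is a minimal Apache Beam sketch of a streaming path from Pub/Sub into BigQuery, of the kind typically run on Dataflow. The project, subscription, table, and schema names are placeholders, not details taken from the role.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message into a row matching the BigQuery schema below."""
    event = json.loads(message.decode("utf-8"))
    return {
        "event_id": event["id"],
        "event_ts": event["ts"],
        "payload": json.dumps(event),
    }


def run() -> None:
    # streaming=True marks this as an unbounded pipeline; on Dataflow you would
    # also pass --runner=DataflowRunner, --project, --region, and a temp location.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```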

Posted 4 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Overview of Role: As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.

Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
- AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
- Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
- Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
- Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
- Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
- Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
- Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
- Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.

Qualifications:
- 1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
- Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
- Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
- SQL skills for complex data manipulation, querying, and optimization.
- Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
- Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts. Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges. Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders. 66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
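As a rough illustration of the "prepare datasets for model training" responsibility above, a sketch that materializes a training snapshot in BigQuery and exports it as Parquet to Cloud Storage, where a Vertex AI training job could pick it up. The project, dataset, column, and bucket names are hypothetical.

```python
from google.cloud import bigquery


def export_training_snapshot(project: str, dataset: str, table: str, gcs_uri: str) -> None:
    """Materialize a cleaned training snapshot and export it as Parquet for ML training."""
    client = bigquery.Client(project=project)
    snapshot = f"{project}.{dataset}.{table}_training_snapshot"

    # Keep only the columns and rows the model needs (placeholder query).
    client.query(
        f"""
        CREATE OR REPLACE TABLE `{snapshot}` AS
        SELECT user_id, feature_1, feature_2, label
        FROM `{project}.{dataset}.{table}`
        WHERE label IS NOT NULL
        """
    ).result()

    # Export to Parquet on GCS for downstream training jobs.
    job_config = bigquery.ExtractJobConfig(destination_format="PARQUET")
    client.extract_table(snapshot, gcs_uri, job_config=job_config).result()


export_training_snapshot("my-project", "ml_data", "events",
                         "gs://my-bucket/training/events-*.parquet")
```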

Posted 5 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at an organization that ranks among the world's 500 largest companies. Envision innovative possibilities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description: We are looking for an experienced and motivated Senior GCP Data Engineer to join our dynamic data team. In this role, you will be responsible for designing, building, and optimizing data pipelines, implementing advanced analytics solutions, and maintaining robust data infrastructure using Google Cloud Platform (GCP) services. You will play a key role in enabling data-driven decision-making and enhancing the performance and scalability of our data ecosystem.

Key Responsibilities: Design, implement, and optimize data pipelines using Google Cloud Platform (GCP) services, including Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB. Lead the design and optimization of schema for large-scale data systems, ensuring data consistency, integrity, and scalability. Work closely with cross-functional teams to understand data requirements and deliver efficient, high-performance solutions. Design and execute complex SQL queries for BigQuery and other databases, ensuring optimal performance and efficiency. Implement efficient data processing workflows and streaming data solutions using Cloud Pub/Sub and Dataflow. Develop and maintain data models, schemas, and data marts to ensure consistency and scalability across datasets. Ensure the scalability, reliability, and security of cloud-based data architectures. Optimize cloud storage, compute, and query performance, driving cost-effective solutions. Collaborate with data scientists, analysts, and software engineers to create actionable insights and drive business outcomes. Implement best practices for data management, including governance, quality, and monitoring of data pipelines. Provide mentorship and guidance to junior data engineers and collaborate with them to achieve team goals.

Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience). 5+ years of experience in data engineering, with a strong focus on Google Cloud Platform (GCP). Extensive hands-on experience with GCP Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB. Strong expertise in SQL for query optimization and performance tuning in large-scale datasets. Solid experience in designing data schemas, data pipelines, and ETL processes. Strong understanding of data modeling techniques, and experience with schema design for both transactional and analytical systems. Proven experience optimizing BigQuery performance, including partitioning, clustering, and cost optimization strategies. Experience with managing and processing streaming data and batch data processing workflows. Knowledge of AlloyDB for managing transactional databases in the cloud and integrating them into data pipelines.
Familiarity with data security, governance, and compliance best practices on GCP. Excellent problem-solving skills, with the ability to troubleshoot complex data issues and find efficient solutions. Strong communication and collaboration skills, with the ability to work with both technical and non-technical stakeholders.

Preferred Qualifications: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. Familiarity with infrastructure as code tools like Terraform or Cloud Deployment Manager. GCP certifications (e.g., Google Cloud Professional Data Engineer or Cloud Architect).

Contract Type: Permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
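To illustrate the BigQuery partitioning and clustering optimizations this role calls out, a small sketch using the Python client; the project, dataset, table, and field names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("my-project.analytics.transactions", schema=schema)
# Daily time partitioning keeps scans (and cost) bounded to the dates queried.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
# Clustering on customer_id co-locates rows commonly filtered or joined together.
table.clustering_fields = ["customer_id"]

client.create_table(table, exists_ok=True)

# Queries that filter on the partitioning column prune partitions automatically.
query = """
    SELECT customer_id, SUM(amount) AS total
    FROM `my-project.analytics.transactions`
    WHERE event_ts >= TIMESTAMP('2024-01-01')
    GROUP BY customer_id
"""
for row in client.query(query).result():
    print(row.customer_id, row.total)
```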

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

Location - Open Position: Data Engineer (GCP) – Technology

If you are an extraordinary developer who loves to push the boundaries to solve complex business problems using creative solutions, then we wish to talk with you. As an Analytics Technology Engineer, you will work on the Technology team that helps deliver our Data Engineering offerings at large scale to our Fortune clients worldwide. The role is responsible for innovating, building, and maintaining technology services.

Responsibilities:
- Be an integral part of large-scale client business development and delivery engagements
- Develop the software and systems needed for end-to-end execution on large projects
- Work across all phases of the SDLC, and use software engineering principles to build scaled solutions
- Build the knowledge base required to deliver increasingly complex technology projects

Qualifications & Experience:
- A bachelor's degree in Computer Science or a related field with 5 to 10 years of technology experience

Desired Technical Skills:
- Data Engineering and Analytics on Google Cloud Platform: basic cloud computing concepts
- BigQuery, Google Cloud Storage, Cloud SQL, Pub/Sub, Dataflow, Cloud Composer, GCP Data Transfer, gcloud CLI
- Python, Google Cloud Python SDK, SQL
- Experience in working with any NoSQL / Columnar / MPP database
- Experience in working with any ETL tool (Informatica / DataStage / Talend / Pentaho etc.)
- Strong knowledge of database concepts, data modeling in RDBMS vs. NoSQL, OLTP vs. OLAP, and MPP architecture

Other Desired Skills:
- Excellent communication and coordination skills
- Problem understanding, articulation, and solutioning
- Quick learner, adaptable to new technologies
- Ability to research and solve technical issues

Responsibilities:
- Developing data pipelines (batch/streaming)
- Developing complex data transformations
- ETL orchestration
- Data migration
- Developing and maintaining data warehouses / data lakes

Good to Have:
- Experience in working with Apache Spark / Kafka
- Machine Learning concepts
- Google Cloud Professional Data Engineer Certification

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!
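As a concrete example of the batch-pipeline work listed above, a minimal load of CSV exports from Cloud Storage into BigQuery using the Google Cloud Python SDK; the bucket, path, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # skip the header row
    autodetect=True,              # infer the schema from the files
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load every file matching the wildcard into a staging table in one job.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders_*.csv",
    "my-project.staging.orders",
    job_config=job_config,
)
load_job.result()  # block until the load completes (raises on failure)

table = client.get_table("my-project.staging.orders")
print(f"Loaded {table.num_rows} rows into staging.orders")
```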

Posted 5 days ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior Data Architect – Fintech Data Lakes
Location: Remote
Department: Enterprise Data & Analytics / Technology
Reports to: Chief Data Officer (CDO) or Head of Data Engineering

Role Highlights:
- Senior-level technical architect with strong cloud experience (GCP + Azure)
- Specialized in data lakes, compliance, and real-time & batch pipelines
- Financial services / fintech domain knowledge (e.g., ledgers, payment rails, PII compliance)
- Expertise in SQL plus Python/Scala/Java
- Mentoring, governance, and cross-functional advisory

Factors Affecting Range:
- Strong cloud certifications (GCP, Azure Architect)
- Deep domain in compliance frameworks (PCI, SOX, GLBA)
- Hands-on vs. purely strategic

About the Role: We are seeking a highly experienced Senior Data Architect to lead the architecture and governance of our fintech data platforms, spanning Google Cloud Platform (GCP) for real-time production systems and Azure for regulatory and business reporting. This role is critical to building secure, governed, and scalable data lakes that support both operational finance systems and strategic analytics. You will be responsible for designing robust data architectures that ingest, process, and govern both structured data (e.g., transactions, accounts, ledgers) and unstructured data (e.g., scanned documents, KYC images, PDFs, voice logs)—ensuring compliance with financial regulations and enabling insights across the organization.

Key Responsibilities:

Data Lake & Architecture Strategy
- Architect and maintain GCP-based production data lakes for real-time transactional ingestion and processing (e.g., payment processing, KYC, fraud detection).
- Design Azure-based reporting data lakes for BI, regulatory, and financial reporting workloads (e.g., ledger audits, compliance reports).
- Build multi-zone lake structures (raw, refined, curated) across both clouds, incorporating schema evolution, data contracts, and role-based access control.

Financial Data Modeling & Pipeline Design
- Model financial datasets (ledger data, user profiles, transactions, pricing) using dimensional, normalized, and vault approaches.
- Build and optimize real-time and batch pipelines with GCP (BigQuery, Pub/Sub, Dataflow) and Azure (Data Factory, Synapse, ADLS Gen2).
- Enable unified analytics on structured data (MySQL) and unstructured content (OCR'd documents, audio transcripts, logs).

Compliance, Governance & Risk Controls
- Implement data access, retention, and classification policies that meet regulatory requirements (GLBA, PCI-DSS, SOX, GDPR).
- Collaborate with infosec, legal, and audit teams to ensure auditability and lineage tracking across data flows.
- Define controls for PII, financial data sensitivity, and third-party data sharing.

Cross-Functional Enablement
- Serve as a technical advisor to business and compliance teams for data design and provisioning.
- Mentor data engineers and analysts on financial data structures, accuracy, and business rules.
- Help define enterprise standards for metadata, data cataloging, and data quality monitoring using tools like Azure Purview and GCP Data Catalog.

Required Qualifications:
- 8+ years in data architecture, with significant experience in financial services, fintech, or banking environments.
- Strong experience with Google Cloud Platform (BigQuery, Dataflow, Cloud Storage, Pub/Sub) and Azure Data Lake / Synapse Analytics.
- Deep understanding of financial data modeling, including ledgers, double-entry accounting, payment rails, and regulatory audit structures.
Experience with batch and streaming architectures, including handling of high-velocity financial transactions. Proficient in SQL and at least one programming language (Python, Scala, or Java). Strong understanding of data compliance frameworks, particularly in regulated financial environments. Preferred Qualifications Prior experience with data lakehouse design using Delta Lake, Iceberg, or BigLake. Experience integrating data platforms with BI/reporting tools like Power BI, Looker, Tableau, or internal compliance dashboards. Familiarity with fraud analytics, anti-money laundering (AML) data flows, or KYC enrichment pipelines. Python programming, Web scraping, API integration, Data analysis, Machine learning, and Linux. What Success Looks Like A resilient, compliant, and scalable data foundation that enables accurate financial operations and reporting. Efficient, observable data pipelines with proactive data quality monitoring and failure alerting. High trust in the reporting data lake from internal audit, compliance, and executive stakeholders. A streamlined data access and provisioning process that supports agility while meeting governance requirements.
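As a sketch of the raw-to-refined zone promotion described in the multi-zone lake responsibilities above (not the client's actual layout), a simple Cloud Storage copy that skips obviously invalid objects; bucket names, prefixes, and the validation rule are placeholders.

```python
from google.cloud import storage

client = storage.Client()
lake = client.bucket("fintech-lake")

RAW_PREFIX = "raw/transactions/2024-01-01/"


def promote_to_refined() -> None:
    """Copy validated raw objects into the refined zone, preserving the path layout."""
    for blob in client.list_blobs("fintech-lake", prefix=RAW_PREFIX):
        if blob.size == 0:
            # Placeholder validation: real checks would enforce schema/data contracts
            # and PII handling before anything leaves the raw zone.
            print(f"Skipping empty object: {blob.name}")
            continue
        refined_name = blob.name.replace("raw/", "refined/", 1)
        lake.copy_blob(blob, lake, new_name=refined_name)
        print(f"Promoted {blob.name} -> {refined_name}")


if __name__ == "__main__":
    promote_to_refined()
```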

Posted 5 days ago

Apply

0.0 - 10.0 years

0 Lacs

Delhi

On-site

Job requisition ID: 84960
Date: Jul 27, 2025
Location: Delhi
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed. India's impact on the global economy has increased at an exponential rate and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The team: As a member of the Operations, Industry and Domain Solutions team, you will embark on an exciting and fulfilling journey with a group of intelligent, innovative, globally aware individuals. We work in conjunction with various institutions, solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.

Your Work Profile: DevOps Engineer
Qualifications: B.E. / B.Tech. / MCA / M.E. / M.Tech.
Required Experience: 10 years or more
Desirable: Experience in Govt. IT Projects / Govt. Health IT Projects

Demonstrated technical qualifications:
- Rich experience in analyzing enterprise application performance, determining root causes, and optimizing resources up and down the stack
- Scaling application workloads in Linux and VMware
- Administering and utilizing Jenkins / GitLab CI at scale for build management and continuous integration
- Very strong in Kubernetes, Envoy, Consul, service mesh, and API gateways
- Substantial knowledge of monitoring tools like Zipkin, Kibana, Grafana, Prometheus, and SonarQube
- Strong CI/CD experience
- Relevant experience on any cloud platform
- Creating Docker images and managing Docker containers
- Scripting for configuration management
- Experience in Airflow, ELK, and Dataflow for ETL
- Good to have: infrastructure-as-code, secrets management, deployment strategies, cloud networking
- Familiarity with primitives like Deployments and CronJobs
- Scripting experience
- Supporting highly available open-source production applications and tools

How you'll grow

Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.

Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.

Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.

Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one size fits all career path, and global, cross-business mobility and up / re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone’s welcome… entrust your happiness to us Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you. Interview tips We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Maharashtra

On-site

We are seeking a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer to join a team pivotal to the development of a cutting-edge reporting platform. This platform is designed to measure and optimize online marketing campaigns effectively. Your role will focus on data engineering, the ML model lifecycle, and cloud-native technologies.

You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. Additionally, you will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. Furthermore, you will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translating complex model insights into actionable business recommendations and presenting findings to stakeholders will also be part of your responsibilities.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
- Experience: 5-10 years with the mentioned skillset and relevant hands-on experience.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
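To ground the campaign-measurement modeling described above, a toy scikit-learn sketch that fits a conversion model on synthetic campaign features and reports holdout AUC; the features and data are entirely made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic campaign exposure data (placeholder features, not a real schema).
df = pd.DataFrame({
    "impressions": rng.poisson(20, n),
    "clicks": rng.poisson(2, n),
    "days_since_last_touch": rng.integers(0, 30, n),
    "spend": rng.gamma(2.0, 5.0, n),
})
# Conversion probability loosely tied to clicks and recency.
logit = 0.4 * df["clicks"] - 0.05 * df["days_since_last_touch"] - 1.5
df["converted"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="converted"), df["converted"], test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```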

Posted 5 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled Data Governance Engineer to take charge of developing and overseeing robust data governance frameworks on Google Cloud Platform (GCP). Your role will involve leveraging your expertise in data management, metadata frameworks, compliance, and security within cloud environments to ensure the implementation of high-quality, secure, and compliant data practices aligned with organizational objectives.

With a minimum of 4 years of experience in data governance, data management, or data security, you should possess hands-on proficiency with Google Cloud Platform (GCP) tools such as BigQuery, Dataflow, Dataproc, and Google Data Catalog. Additionally, a strong command of metadata management, data lineage, and data quality tools like Collibra and Informatica is crucial. A deep understanding of data privacy laws and compliance frameworks, coupled with proficiency in SQL and Python for governance automation, is essential. Experience with RBAC, encryption, and data masking techniques, and familiarity with ETL/ELT pipelines and data warehouse architectures, will be advantageous.

Your responsibilities will include developing and executing comprehensive data governance frameworks with a focus on metadata management, lineage tracking, and data quality. You will be tasked with defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards using GCP-native services like IAM, DLP, and KMS. Managing metadata repositories using tools such as Collibra, Informatica, Alation, or Google Data Catalog will also be part of your role. Collaborating with data engineering and analytics teams to ensure compliance with regulatory standards like GDPR, CCPA, and SOC 2, and automating processes for data classification, monitoring, and reporting using Python and SQL, will be key responsibilities. Supporting data stewardship initiatives and optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices will also be part of your role.

At GlobalLogic, we offer a culture of caring, emphasizing inclusivity and personal growth. You will have access to continuous learning and development opportunities, engaging and meaningful work, as well as a healthy work-life balance. Join our high-trust organization where integrity is paramount, and collaborate with us to engineer innovative solutions that have a lasting impact on industries worldwide.
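As one small example of the governance automation with Python and SQL mentioned above, a sketch that scans a BigQuery dataset's INFORMATION_SCHEMA for column names that look like PII so they can be reviewed, tagged, or masked; the project, dataset, and name patterns are assumptions.

```python
from google.cloud import bigquery

PII_PATTERN = r"(ssn|aadhaar|pan|email|phone|dob|address|account_number)"


def find_candidate_pii_columns(project: str, dataset: str) -> list:
    """Return (table, column) pairs whose names match common PII keywords."""
    client = bigquery.Client(project=project)
    query = f"""
        SELECT table_name, column_name
        FROM `{project}.{dataset}.INFORMATION_SCHEMA.COLUMNS`
        WHERE REGEXP_CONTAINS(LOWER(column_name), r'{PII_PATTERN}')
        ORDER BY table_name, column_name
    """
    return [(row.table_name, row.column_name) for row in client.query(query).result()]


for table, column in find_candidate_pii_columns("my-project", "customer_data"):
    # In practice these candidates would feed a review queue, policy-tag
    # assignment, or a DLP inspection job rather than just a print.
    print(f"Review for PII handling: {table}.{column}")
```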

Posted 6 days ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

About KPMG in India: KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Jaipur, Hyderabad, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

We are seeking an experienced and highly skilled Senior Google Cloud Analytics & Vertex AI Specialist for the position of Associate Director with 12-15 years of experience, specifically focusing on Google Vertex AI. The ideal candidate will have a deep understanding of Google Cloud Platform (GCP) and extensive hands-on experience with Google Cloud analytics services and Vertex AI. The role involves leading projects, designing scalable data solutions, driving the adoption of AI and machine learning practices within the organization, and supporting pre-sales activities. A minimum of 2 years of hands-on experience with Vertex AI is required.

Key Responsibilities:
- Architect and Implement: Design and implement end-to-end data analytics solutions using Google Cloud services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Vertex AI Development: Develop, train, and deploy machine learning models using Vertex AI. Utilize Vertex AI's integrated tools for model monitoring, versioning, and CI/CD pipelines. Implement custom machine learning pipelines using Vertex AI Pipelines. Utilize Vertex AI Feature Store for feature management and Vertex AI Model Registry for model tracking.
- Data Integration: Integrate data from various sources, ensuring data quality and consistency across different systems.
- Performance Optimization: Optimize data pipelines and analytics processes for maximum efficiency and performance.
- Leadership and Mentorship: Lead and mentor a team of data engineers and data scientists, providing guidance and support on best practices in GCP and AI/ML.
- Collaboration: Work closely with stakeholders to understand business requirements and translate them into technical solutions.
- Innovation: Stay updated with the latest trends and advancements in Google Cloud services and AI technologies, advocating for their adoption when beneficial.
- Pre-Sales Support: Collaborate cross-functionally to understand client requirements, design tailored solutions, prepare and deliver technical presentations and product demonstrations, and assist in proposal and RFP responses.
- Project Delivery: Manage and oversee the delivery of data analytics and AI/ML projects, ensuring timely and within-budget completion while coordinating with cross-functional teams.

Qualifications:
- Experience: 12-15 years in data engineering, data analytics, and AI/ML with a focus on Google Cloud Platform.
- Technical Skills: Proficient in Google Cloud services (BigQuery, Dataflow, Pub/Sub, Cloud Storage, Vertex AI), strong programming skills in Python and SQL, experience with machine learning frameworks (TensorFlow, PyTorch), and data visualization tools (Looker, Data Studio).
- Pre-Sales and Delivery Skills: Experience in supporting pre-sales activities and in managing and delivering complex data analytics and AI/ML projects.
- Certifications: Google Cloud Professional Data Engineer or Professional Machine Learning Engineer certification is a plus.
- Soft Skills: Excellent problem-solving, communication, and leadership skills.

Educational Qualifications: B.E. / B.Tech / Post Graduate
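To make the Vertex AI deployment workflow above a bit more concrete, a hedged sketch using the Vertex AI Python SDK to register a trained model and deploy it to an endpoint; the project, bucket, and container image are placeholders, and real pipelines would typically wrap these steps in Vertex AI Pipelines with model monitoring attached.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-staging-bucket")

# Register a model artifact (e.g., a scikit-learn model saved to GCS) in the Model Registry.
model = aiplatform.Model.upload(
    display_name="campaign-conversion-model",
    artifact_uri="gs://my-models/conversion/v1/",
    # Prebuilt serving image; the exact tag here is illustrative.
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

# Deploy to a managed endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4", min_replica_count=1)

# Online prediction with a placeholder feature vector.
prediction = endpoint.predict(instances=[[12.0, 3.0, 7.0, 42.5]])
print(prediction.predictions)
```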

Posted 6 days ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements:
- BA / BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities:
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL / ELT, and reporting / analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS / Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and / or agile methodologies.
- Google Data Engineer certification.

What We Offer:
- Culture of caring: We prioritize a culture of caring, where people come first, fostering an inclusive environment of acceptance and belonging.
- Learning and development: Commitment to continuous learning and growth, offering various programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting & meaningful work: Engage in impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: Embrace work-life balance with diverse career areas, roles, and work arrangements to support personal well-being.
- High-trust organization: Join a high-trust organization with a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.

Posted 6 days ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Pipeline Architect at our company, you will be responsible for designing, developing, and maintaining optimal data pipeline architecture. You will monitor incidents, perform root cause analysis, and implement appropriate actions to ensure smooth operations. Additionally, you will troubleshoot issues related to abnormal job execution and data corruption, and automate jobs, notifications, and reports for efficiency. Your role will also involve optimizing existing queries, reverse engineering for data research and analysis, and calculating the impact of issues on downstream processes for effective communication. You will support failures, address data quality issues, and ensure the overall health of the environment. Maintaining ingestion and pipeline runbooks, portfolio summaries, and DBAR will be part of your responsibilities. Furthermore, you will enable the roadmap for infrastructure changes, enhancements, and updates, and build the infrastructure for optimal extraction, transformation, and loading of data from various sources using big data technologies, Python, or web-based APIs. Conducting and participating in code reviews with peers, ensuring effective communication, and understanding requirements will be essential in this role.

To qualify for this position, you should hold a Bachelor's degree in Engineering/Computer Science or a related quantitative field. You must have a minimum of 8 years of programming experience with Python and SQL, as well as hands-on experience with GCP, BigQuery, Dataflow, data warehousing, Apache Beam, and Cloud Storage. Experience with massively parallel processing systems like Spark or Hadoop, source code control systems (Git), and CI/CD processes is required. Involvement in designing, prototyping, and delivering software solutions within the big data ecosystem, developing generative AI models, and ensuring code quality through reviews are key aspects of this role. Experience with Agile development methodologies, improving data governance and quality, and increasing data reliability are also important.

Joining our team at EXL Analytics offers you the opportunity to work in a dynamic and innovative environment alongside experienced professionals. You will gain insights into various business domains, develop teamwork and time-management skills, and receive training in analytics tools and techniques. Our mentoring program and growth opportunities ensure that you have the support and guidance needed to excel in your career. Sky is the limit for our team members, and the experiences gained at EXL Analytics pave the way for personal and professional development within our company and beyond.
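A tiny illustration of the job automation and failure-notification duties described above: a retry wrapper for pipeline jobs, with a hook where an email/Slack/pager alert would go. The retry counts and notification hook are assumptions, not anything specified by the posting.

```python
import functools
import logging
import time


def monitored_job(retries: int = 3, delay_seconds: int = 60):
    """Retry a pipeline job and leave a hook for notifications on final failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    logging.exception("Job %s failed (attempt %d/%d)",
                                      func.__name__, attempt, retries)
                    if attempt == retries:
                        # Hook: send the alert (email, Slack, PagerDuty) here.
                        raise
                    time.sleep(delay_seconds)
        return wrapper
    return decorator


@monitored_job(retries=2, delay_seconds=5)
def ingest_daily_extract():
    # Placeholder for the real ingestion logic.
    print("Ingesting daily extract...")


if __name__ == "__main__":
    ingest_daily_extract()
```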

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

GCP Senior Data Engineer – Chennai, India

A skilled data engineering professional with 5 years of experience in GCP BigQuery and Oracle PL/SQL, specializing in designing and implementing end-to-end batch data processes in the Google Cloud ecosystem. Strong hands-on expertise with:

Core Skills & Tools:
- Mandatory: GCP, BigQuery
- Additional Tools: GCS, Dataflow, Cloud Composer, Pub/Sub, GCP Storage, Google Analytics Hub
- Nice to Have: Apache Airflow, GCP Dataproc, GCP DMS, Python

Technical Proficiency:
- Expert in BigQuery, BQL, and DBMS
- Well-versed in Linux and Python scripting
- Skilled in Terraform for GCP infrastructure automation
- Proficient in CI/CD tools such as GitHub, Jenkins, and Nexus
- Experience with GCP orchestration tools: Cloud Composer, Dataflow, and Pub/Sub

Additional Strengths:
- Strong communication and collaboration skills
- Capable of building scalable, automated cloud-based solutions
- Able to work across both data engineering and DevOps environments

This profile is well suited for roles involving cloud-based data architecture, automation, and pipeline orchestration within the GCP environment.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are looking for an experienced and motivated Senior GCP Data Engineer to join our dynamic data team. In this role, you will be responsible for designing, building, and optimizing data pipelines, implementing advanced analytics solutions, and maintaining robust data infrastructure using Google Cloud Platform (GCP) services. You will play a key role in enabling data-driven decision-making and enhancing the performance and scalability of our data ecosystem. Key Responsibilities Design, implement, and optimize data pipelines using Google Cloud Platform (GCP) services, including Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB. Lead the design and optimization of schema for large-scale data systems, ensuring data consistency, integrity, and scalability. Work closely with cross-functional teams to understand data requirements and deliver efficient, high-performance solutions. Design and execute complex SQL queries for BigQuery and other databases, ensuring optimal performance and efficiency. Implement efficient data processing workflows and streaming data solutions using Cloud Pub/Sub and Dataflow. Develop and maintain data models, schemas, and data marts to ensure consistency and scalability across datasets. Ensure the scalability, reliability, and security of cloud-based data architectures. Optimize cloud storage, compute, and query performance, driving cost-effective solutions. Collaborate with data scientists, analysts, and software engineers to create actionable insights and drive business outcomes. Implement best practices for data management, including governance, quality, and monitoring of data pipelines. Provide mentorship and guidance to junior data engineers and collaborate with them to achieve team goals. Required Qualifications Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience). 5+ years of experience in data engineering, with a strong focus on Google Cloud Platform (GCP). Extensive hands-on experience with GCP Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB. Strong expertise in SQL for query optimization and performance tuning in large-scale datasets. Solid experience in designing data schemas, data pipelines, and ETL processes. Strong understanding of data modeling techniques, and experience with schema design for both transactional and analytical systems. Proven experience optimizing BigQuery performance, including partitioning, clustering, and cost optimization strategies. Experience with managing and processing streaming data and batch data processing workflows. Knowledge of AlloyDB for managing transactional databases in the cloud and integrating them into data pipelines. Familiarity with data security, governance, and compliance best practices on GCP. Excellent problem-solving skills, with the ability to troubleshoot complex data issues and find efficient solutions. 
Strong communication and collaboration skills, with the ability to work with both technical and non-technical stakeholders.

Preferred Qualifications: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. Familiarity with infrastructure as code tools like Terraform or Cloud Deployment Manager. GCP certifications (e.g., Google Cloud Professional Data Engineer or Cloud Architect).

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: GCP Data Engineer 34306 Job Type: Full-Time Work Mode: Hybrid Location: Chennai Budget: ₹18–20 LPA Notice Period: Immediate Joiners Preferred Role Overview We are seeking a proactive Full Stack Data Engineer with a strong focus on Google Cloud Platform (GCP) and data engineering tools. The ideal candidate will contribute to building analytics products supporting supply chain insights and will be responsible for developing cloud-based data pipelines, APIs, and user interfaces. The role demands high standards of software engineering, agile practices like Test-Driven Development (TDD), and experience in modern data architectures. Key Responsibilities Design, build, and deploy scalable data pipelines and analytics platforms using GCP tools like BigQuery, Dataflow, Dataproc, Data Fusion, and Cloud SQL. Implement and maintain Infrastructure as Code (IaC) using Terraform and CI/CD pipelines using Tekton. Develop robust APIs using Python, Java, and Spring Boot, and deliver frontend interfaces using Angular, React, or Vue. Build and support data integration workflows using Airflow, PySpark, and PostgreSQL. Collaborate with cross-functional teams in an Agile environment, leveraging Jira, paired programming, and TDD. Ensure cloud deployments are secure, scalable, and performant on GCP. Mentor team members and promote continuous learning, clean code practices, and Agile principles. Mandatory Skills GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL Programming: Python, Java, Spring Boot Frontend: Angular, React, Vue, TypeScript, JavaScript Data Orchestration: Airflow, PySpark DevOps/CI-CD: Terraform, Tekton, Jenkins Databases: PostgreSQL, Cloud SQL, NoSQL API development and integration Experience 5+ years in software/data engineering Minimum 1 year in GCP-based deployment and cloud architecture Education Bachelor’s or Master’s in Computer Science, Engineering, or related technical discipline Desired Traits Passion for clean, maintainable code Strong problem-solving skills Agile mindset with an eagerness to mentor and collaborate Skills: typescript,data fusion,terraform,java,spring boot,dataflow,data integration,cloud sql,javascript,bigquery,react,postgresql,nosql,vue,data,pyspark,dataproc,sql,cloud,angular,python,tekton,api development,gcp services,jenkins,airflow,gcp

Posted 6 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Data Engineer Location: Chennai 34322 Job Type: Contract Budget: ₹18 LPA Notice Period: Immediate Joiners Only Role Overview We are seeking a highly capable Software Engineer (Data Engineer) to support end-to-end development and deployment of critical data products. The selected candidate will work across diverse business and technical teams to design, build, transform, and migrate data solutions using modern cloud technologies. This is a high-impact role focused on cloud-native data engineering and infrastructure. Key Responsibilities Develop and manage scalable data pipelines and workflows on Google Cloud Platform (GCP) Design and implement ETL processes using Python, BigQuery, and Terraform Support data product lifecycle from concept, development to deployment and DevOps Optimize query performance and manage large datasets with efficiency Collaborate with cross-functional teams to gather requirements and deliver solutions Maintain strong adherence to Agile practices, contributing to sprint planning and user stories Apply best practices in data security, quality, and governance Effectively communicate technical solutions to stakeholders and team members Required Skills & Experience Minimum 4 years of relevant experience in GCP Data Engineering Strong hands-on experience with BigQuery, Python programming, Terraform, Cloud Run, and GitHub Proven expertise in SQL, data modeling, and performance optimization Solid understanding of cloud data warehousing and pipeline orchestration (e.g., DBT, Dataflow, Composer, or Airflow DAGs) Background in ETL workflows and data processing logic Familiarity with Agile (Scrum) methodology and collaboration tools Preferred Skills Experience with Java, Spring Boot, and RESTful APIs Exposure to infrastructure automation and CI/CD pipelines Educational Qualification Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field Skills: etl,terraform,dbt,java,spring boot,etl workflows,data modeling,dataflow,data engineering,ci/cd,bigquery,agile,data,sql,cloud,restful apis,github,airflow dags,gcp,cloud run,composer,python
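To illustrate the Airflow DAG orchestration listed among the skills above, a minimal daily DAG that runs a BigQuery load step via the Python client; the DAG id, schedule, table names, and SQL are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_partition(**context) -> None:
    """Insert one day's rows into a reporting table (placeholder SQL)."""
    ds = context["ds"]  # execution date, e.g. "2024-01-01"
    client = bigquery.Client()
    client.query(
        f"""
        INSERT INTO `my-project.reporting.daily_sales`
        SELECT order_id, customer_id, amount, DATE('{ds}') AS load_date
        FROM `my-project.staging.orders`
        WHERE DATE(order_ts) = DATE('{ds}')
        """
    ).result()


with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_daily_partition",
        python_callable=load_daily_partition,
    )
```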

Posted 6 days ago

Apply

5.0 years

19 - 20 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Software Engineer 34332 Location: Chennai (Onsite) Job Type: Contract Budget: ₹20 LPA Notice Period: Immediate Joiners Only Role Overview We are looking for a highly skilled Senior Software Engineer to be a part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering. Key Responsibilities Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus, Grafana, etc. Architect and implement scalable data pipelines and microservices for real-time and batch data processing Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery Implement best practices in data governance, RBAC, encryption, and security within cloud environments Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery Core Skills Required Proficiency in Spring Boot, Angular, Java, and Python Experience in developing microservices and SOA-based systems Cloud-native development experience, preferably on Google Cloud Platform (GCP) Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks Experience with infrastructure automation and monitoring tools Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar Strong grasp of RESTful APIs, GitHub, and TDD methodologies Preferred Skills GCP Professional Certifications (e.g., Data Engineer, Cloud Developer) Hands-on experience with Terraform, Cloud SQL, Data Governance tools, and security frameworks Exposure to performance tuning, cost optimization, and observability best practices Experience Required 5+ years of experience in full-stack and cloud-based application development Strong track record in building distributed, scalable systems Prior experience with observability and performance monitoring tools is a plus Educational Qualifications Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory) Skills: java,data fusion,html,dataflow,terraform,spring boot,restful apis,python,angular,dataproc,microservices,apache beam,css,cloud sql,soa,typescript,tdd,kafka,javascript,airflow,github,pyspark,bigquery,,gcp
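To make the latency/traffic/error/saturation monitoring described above concrete, a small Prometheus instrumentation sketch for a processing service using the prometheus_client library; the metric names and the work being timed are placeholders.

```python
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

# The four golden signals for a processing service.
REQUESTS = Counter("records_processed_total", "Records processed", ["status"])      # traffic / errors
LATENCY = Histogram("record_processing_seconds", "Per-record processing latency")   # latency
QUEUE_DEPTH = Gauge("records_pending", "Records waiting to be processed")           # saturation


def process(record: dict) -> None:
    with LATENCY.time():  # observe latency for every record
        try:
            time.sleep(random.uniform(0.01, 0.05))  # placeholder for real work
            REQUESTS.labels(status="ok").inc()
        except Exception:
            REQUESTS.labels(status="error").inc()
            raise


if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes /metrics here; Grafana visualizes it
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))
        process({"id": 1})
```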

Posted 6 days ago

Apply

5.0 years

6 - 8 Lacs

Chennai

On-site

As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit.

Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Requirements:
- GCP certified Professional Data Engineer
- Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions
- 5+ years of complex SQL development experience
- 2+ years of experience with programming languages such as Python, Java, or Apache Beam
- Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions
- In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, and Docker
- Expert in designing, optimizing, and troubleshooting complex data pipelines
- Experience developing and deploying microservices architectures leveraging container orchestration frameworks
- Experience in designing pipelines and architectures for data processing
- Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques
- Self-directed, works independently with minimal supervision, and adapts to ambiguous environments
- Evidence of a proactive problem-solving mindset and willingness to take the initiative
- Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas with cross-functional teams and all levels of management
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity
- Master's degree in computer science, software engineering, information systems, data engineering, or a related field
- Data engineering or development experience gained in a regulated financial environment
- Experience in coaching and mentoring data engineers
- Project management tools like Atlassian Jira
- Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment
- Experience with data security, governance, and compliance best practices in the cloud
- Experience using data science concepts on production datasets to generate insights

Responsibilities:
- Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.
- Design new solutions to better serve AI/ML needs, and lead teams to expand our AI-enabled services.
- Partner with governance teams to tackle key business needs.
- Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives.
- Partner with analytics teams to understand how value is created using data, and with central teams to leverage existing solutions to drive future products.
- Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage.
- Create insights into existing data to fuel the creation of new data products.
- Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows.
- Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality.
- Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance.
- Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures.
- Provide production support by addressing production issues as per SLAs.
- Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.
- Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD).
- Continuously enhance your domain knowledge, stay current on the latest data engineering practices, and contribute to the company's technical direction while maintaining a customer-centric approach.
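As a small illustration of the Pub/Sub-based streaming ingestion referenced in these responsibilities, a publisher sketch using the Pub/Sub Python client; the project, topic, and message shape are placeholders, not details from the posting.

```python
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "receivables-events")


def publish_event(event: dict) -> None:
    """Publish one event; downstream it would be consumed (e.g., by Dataflow) into BigQuery."""
    data = json.dumps(event).encode("utf-8")
    # Attribute values must be strings; "source" here is an illustrative attribute.
    future = publisher.publish(topic_path, data, source="originations")
    print(f"Published message {future.result()}")  # result() blocks until the server acks


publish_event({"account_id": "A-123", "amount": 250.00, "event_type": "payment_received"})
```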

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Minimum qualifications: Bachelor’s degree or equivalent practical experience. 5 years of experience with software development in one or more programming languages, and with data structures/algorithms. 3 years of experience testing, maintaining, or launching software products. 1 year of experience with software design and architecture. 1 year of experience in generative AI and machine learning. 1 year of experience implementing core AI/ML concepts. Preferred qualifications: Master's degree or PhD in Computer Science, or a related technical field. 1 year of experience in a technical leadership role. Experience with Python, Notebooks, ML Frameworks (e.g., TensorFlow). Experience in large-scale data systems. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full stack as we continue to push technology forward. In this role, you will be responsible for designing and developing next-generation software systems at the intersection of data analytics (data warehousing, business intelligence, Spark, Dataflow, Data Catalog, and more) and generative AI. You will work closely with our team of experts to research, explore and develop innovative solutions that will bring generative AI to the forefront of Google Cloud Platform (GCP) Data Analytics for our customers. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Write and test product or system development code. Collaborate with peers and stakeholders through design and code reviews to ensure best practices amongst available technologies (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality. Design and implement solutions in one or more specialized Machine Learning (ML) areas, leverage ML infrastructure, and demonstrate experience in a chosen field. Google is proud to be an equal opportunity workplace and is an affirmative action employer. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
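The listing above mentions ML frameworks such as TensorFlow among the preferred qualifications. Purely as a reference point, a minimal Keras training sketch on synthetic data might look like the following; the dataset, architecture, and hyperparameters are illustrative assumptions, not anything described in the posting.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy dataset: 1,000 rows of 20 random features with a simple threshold label.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

# Small binary classifier: one hidden layer, sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Short training run just to show the fit/evaluate cycle.
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"loss={loss:.3f} accuracy={acc:.3f}")
```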

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

We are looking for a skilled Data Governance Engineer to spearhead the development and supervision of robust data governance frameworks on Google Cloud Platform (GCP). You should have a deep understanding of data management, metadata frameworks, compliance, and security within cloud environments to ensure the adoption of high-quality, secure, and compliant data practices aligned with organizational objectives. The ideal candidate should possess:
- Over 4 years of experience in data governance, data management, or data security.
- Hands-on expertise with Google Cloud Platform (GCP) tools like BigQuery, Dataflow, Dataproc, and Google Data Catalog.
- Proficiency in metadata management, data lineage, and data quality tools such as Collibra and Informatica.
- Comprehensive knowledge of data privacy laws and compliance frameworks.
- Strong skills in SQL and Python for governance automation.
- Experience with RBAC, encryption, and data masking techniques.
- Familiarity with ETL/ELT pipelines and data warehouse architectures.
Your main responsibilities will include:
- Developing and implementing comprehensive data governance frameworks emphasizing metadata management, lineage tracking, and data quality.
- Defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards utilizing GCP-native services like IAM, DLP, and KMS.
- Managing metadata repositories using tools like Collibra, Informatica, Alation, or Google Data Catalog.
- Collaborating with data engineering and analytics teams to ensure compliance with GDPR, CCPA, SOC 2, and other regulatory standards.
- Automating processes for data classification, monitoring, and reporting using Python and SQL.
- Supporting data stewardship initiatives including the creation of data dictionaries and governance documentation.
- Optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices.
At GlobalLogic, we offer:
- A culture of caring that prioritizes inclusivity, acceptance, and personal connections.
- Continuous learning and development opportunities to enhance your skills.
- Engagement in interesting and meaningful work with cutting-edge solutions.
- Balance and flexibility to help you integrate work and life effectively.
- A high-trust organization committed to integrity and ethical practices.
GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to world-renowned companies, focusing on creating innovative digital products and experiences. Join us to collaborate on transforming businesses through intelligent products, platforms, and services.
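As an illustration of the "governance automation using Python and SQL" responsibility above, here is a hedged sketch that audits BigQuery tables for a missing stewardship label using the google-cloud-bigquery client. The project, dataset, and the "owner" label key are assumptions made for the example, not requirements from the posting.

```python
from google.cloud import bigquery


def tables_missing_owner_label(project_id: str, dataset_id: str) -> list[str]:
    """Return table IDs in a dataset that lack an 'owner' label -- a simple stewardship check."""
    client = bigquery.Client(project=project_id)
    missing = []
    for table_item in client.list_tables(f"{project_id}.{dataset_id}"):
        # list_tables returns lightweight items; fetch the full table to read its labels.
        table = client.get_table(table_item.reference)
        if "owner" not in (table.labels or {}):
            missing.append(table.table_id)
    return missing


if __name__ == "__main__":
    # Hypothetical project and dataset names.
    print(tables_missing_owner_label("my-project", "analytics"))
```

In practice a check like this would feed a report or ticketing workflow rather than a print statement, but it shows the metadata-driven style of automation the listing describes.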

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs. What You'll Do Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions. Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP). Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face. Process and enrich large volumes of unstructured data, including text and document embeddings. Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery. Implement and maintain best practices for data quality, observability, lineage, and API-first designs. What Sets You Apart 3+ years of experience building scalable Spark-based pipelines (PySpark or Scala). Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions. Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases. Expertise in text processing, unstructured data handling, and performance optimization. Agile mindset and the ability to thrive in a fast-paced startup or dynamic environment. Nice To Have Experience working with embeddings and semantic search. Exposure to MLOps or data observability tools. Background in deploying production-grade AI/ML workflows.
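To give a flavour of the Spark-on-GCP pipelines described above, the following hedged PySpark sketch reads raw JSON documents from Cloud Storage, does light cleanup, and writes to BigQuery via the spark-bigquery connector (assumed to be available on the cluster, as it typically is on Dataproc). Bucket, table, and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Spark session; on Dataproc the GCS and BigQuery connectors are usually preinstalled.
spark = SparkSession.builder.appName("genai-doc-etl").getOrCreate()

# Read raw JSON documents from a hypothetical GCS bucket.
docs = spark.read.json("gs://my-bucket/raw_docs/*.json")

# Light cleanup before any downstream embedding or LLM enrichment step.
clean = (
    docs
    .withColumn("text", F.trim(F.col("body")))
    .filter(F.length("text") > 0)
    .select("doc_id", "text", "source")
)

# Write to BigQuery; temporaryGcsBucket is required by the spark-bigquery connector
# for the indirect write path.
(clean.write
    .format("bigquery")
    .option("table", "my-project.genai.clean_docs")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save())
```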

Posted 1 week ago

Apply

5.0 - 13.0 years

0 Lacs

pune, maharashtra

On-site

You are a highly skilled and experienced Cloud Architect/Engineer with deep expertise in Google Cloud Platform (GCP). Your primary responsibility is to design, build, and manage scalable and reliable cloud infrastructure on GCP. You will leverage various GCP services such as Compute Engine, Cloud Run, BigQuery, Pub/Sub, Cloud Functions, Dataflow, Dataproc, IAM, and Cloud Storage to ensure high-performance cloud solutions. Your role also includes developing and maintaining CI/CD pipelines, automating infrastructure deployment using Infrastructure as Code (IaC) principles, and implementing best practices in cloud security, monitoring, performance tuning, and logging. Collaboration with cross-functional teams to deliver cloud solutions aligned with business objectives is essential. You should have 5+ years of hands-on experience in cloud architecture and engineering, with at least 3 years of practical experience on Google Cloud Platform (GCP). In-depth expertise in GCP services mentioned above is required. Strong understanding of networking, security, containerization (Docker, Kubernetes), and CI/CD pipelines is essential. Experience with monitoring, performance tuning, and logging in cloud environments is preferred. Familiarity with DevSecOps practices and tools such as HashiCorp Vault is a plus. Your role as a GCP Cloud Architect/Engineer will contribute to ensuring system reliability, backup, and disaster recovery strategies. This hybrid role is based out of Pune and requires a total of 10 to 13 years of relevant experience.
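Since the role above calls out Cloud Run alongside containerization and CI/CD, a minimal Python (Flask) service of the kind commonly deployed there could look like this sketch. The endpoints and port handling are illustrative assumptions; the posting does not prescribe any particular framework.

```python
import os

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Lightweight readiness endpoint for load balancers and uptime checks.
    return jsonify(status="ok")


@app.route("/")
def index():
    return jsonify(message="hello from a containerized service")


if __name__ == "__main__":
    # Cloud Run injects the PORT environment variable; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Packaging this behind a small Dockerfile and deploying through a CI/CD pipeline is the pattern the listing alludes to.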

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

You will be working as a Technical Lead Data Engineer for a leading data and AI/ML solutions provider based in Gurgaon. In this role, you will be responsible for designing, developing, and leading complex data projects primarily on Google Cloud Platform and other modern data stacks. Your key responsibilities will include leading the design and implementation of robust data pipelines, collaborating with cross-functional teams to deliver end-to-end data solutions, owning project modules, developing technical roadmaps, and implementing data governance frameworks on GCP. You will be required to integrate GCP data services like BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and GenAI with platforms such as Snowflake. Additionally, you will write efficient code in Python, SQL, and ETL/orchestration tools, utilize containerized solutions for scalable deployments, and apply expertise in PySpark, Kafka, and advanced data querying for high-volume data environments. Monitoring, optimizing, and troubleshooting system performance, reducing job run-times through architecture optimization, developing data warehouses, and mentoring team members will also be part of your role. To be successful in this position, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Extensive hands-on experience with Google Cloud Platform data services, Snowflake integration, strong programming skills in Python and SQL, proficiency in PySpark, Kafka, and data querying tools, and experience with containerized solutions using Google Kubernetes Engine are essential. Strong communication skills, documentation skills, experience with large distributed datasets, and the ability to balance short-term deliverables with long-term technical sustainability are also required. Prior leadership experience in data engineering teams and exposure to cloud data platforms are desirable. This role offers you the opportunity to lead high-impact data projects for reputed clients in a fast-growing data consulting environment, work with cutting-edge technologies, and collaborate in an innovative and growth-oriented culture.
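As a rough illustration of the PySpark-plus-Kafka experience this role asks for, the sketch below consumes a Kafka topic with Spark Structured Streaming and maintains a running aggregate. The broker address, topic, and event schema are assumptions for the example, and the job assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Hypothetical event schema for messages on the topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as an unbounded streaming DataFrame.
raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load())

# Kafka values arrive as bytes; cast to string and parse the JSON payload.
parsed = (raw
    .select(F.from_json(F.col("value").cast("string"), schema).alias("evt"))
    .select("evt.*"))

# Running total per order written to the console; a production job would
# target BigQuery, GCS, or another sink with checkpointing configured.
query = (parsed.groupBy("order_id").agg(F.sum("amount").alias("total"))
    .writeStream.outputMode("complete").format("console").start())
query.awaitTermination()
```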

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform. Responsibilities Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Design new solutions to better serve AI/ML needs. Lead teams to expand our AI-enabled services. Partner with governance teams to tackle key business needs. Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives. Partner with analytics teams to understand how value is created using data. Partner with central teams to leverage existing solutions to drive future products. Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage. Create insights into existing data to fuel the creation of new data products. Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows. Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality. Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance. Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Provide production support by addressing production issues as per SLAs. Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD). Continuously enhance your domain knowledge. 
Stay current on the latest data engineering practices. Contribute to the company's technical direction while maintaining a customer-centric approach. Qualifications GCP-certified Professional Data Engineer. 5+ years of experience designing and implementing data warehouses and ETL processes, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions. In-depth understanding of GCP's underlying architecture and hands-on experience with core GCP services, especially those related to data processing (batch and real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage services such as Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, and Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing and deploying microservices architectures leveraging container orchestration frameworks. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques. Self-directed, works independently with minimal supervision, and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Master's degree in computer science, software engineering, information systems, data engineering, or a related field. Data engineering or development experience gained in a regulated financial environment. Experience in coaching and mentoring data engineers. Experience with project management tools like Atlassian JIRA. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience using data science concepts on production datasets to generate insights.
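For readers unfamiliar with Cloud Composer, the orchestration layer named in this posting, a minimal Airflow DAG that schedules a daily BigQuery load might look like the following sketch. The DAG ID, project, dataset, and table names are hypothetical, and the query is only a stand-in for a real transformation.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Daily DAG that materializes a mart table from a staging table.
with DAG(
    dag_id="receivables_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_mart = BigQueryInsertJobOperator(
        task_id="build_receivables_mart",
        configuration={
            "query": {
                # {{ ds }} is Airflow's templated execution date.
                "query": (
                    "SELECT * FROM `my-project.staging.receivables` "
                    "WHERE load_date = '{{ ds }}'"
                ),
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "marts",
                    "tableId": "receivables",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```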

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview Job Title: Technology Service Analyst, AS Location: Bangalore, India Role Description You will be operating within the Production Services team of the Trade Finance and Lending domain, a subdivision of Corporate Bank Production Services, as a Production Support Engineer. In this role, you will be accountable for the following: Resolve user support requests and troubleshoot functional, application, and infrastructure incidents in the production environment. Work on identified initiatives to automate manual work, improve application and infrastructure monitoring, and maintain platform hygiene. Provide eyes-on-glass monitoring of services and batch jobs. Prepare and fulfil data requests. Participate in incident, change, and problem management meetings as required. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy. Best-in-class leave policy. Gender-neutral parental leave. 100% reimbursement under the childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for those aged 35 and above. Your Key Responsibilities Provide hands-on technical support for a suite of applications/platforms within Deutsche Bank. Build up technical subject matter expertise on the applications/platforms being supported, including business flows, the application architecture, and the hardware configuration. Resolve service requests submitted by the application end users to the best of L2 ability and escalate any issues that cannot be resolved to L3. Conduct real-time monitoring to ensure application SLAs are achieved and maximum application availability (uptime). Ensure all knowledge is documented and that support runbooks and knowledge articles are kept up to date. Approach support with a proactive attitude, working to improve the environment before issues occur. Update the run book and KEDB as and when required. Participate in all BCP and component failure tests based on the run books. Understand the flow of data through the application infrastructure; it is critical to understand the dataflow so as to best provide operational support. Your Skills And Experience Must have: Programming language - Java. Operating systems - UNIX, Windows, and the underlying infrastructure environments.
Middleware - e.g., MQ, Kafka, or similar; WebLogic. Webserver environment - Apache, Tomcat. Database - Oracle, MS-SQL, Sybase, NoSQL. Batch monitoring - Control-M/Autosys. Scripting - UNIX shell and PowerShell, Perl, Python. Monitoring tools - Geneos, AppDynamics, Dynatrace, or Grafana. ITIL Service Management framework, including Incident, Problem, and Change processes. Preferably knowledge of and experience with GCP. Nice to have: 5+ years of experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function. Good analytical and problem-solving skills. ITIL / best-practice service context; ITIL Foundation certification is a plus. Ticketing tool experience - Service Desk, ServiceNow. Understanding of SRE concepts (SLAs, SLOs, SLIs). Knowledge and development experience in Ansible automation. Working knowledge of one cloud platform (AWS or GCP). Excellent communication skills, both written and verbal, with attention to detail. Ability to work in virtual teams and in matrix structures. How We’ll Support You Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
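To illustrate the kind of lightweight Python scripting used in L2 production support roles like this one (batch monitoring, proactive checks), here is a hedged sketch that scans a batch log for failure markers and exits non-zero so a scheduler or monitoring tool can raise an alert. The log path and failure patterns are assumptions for the example, not details from the posting.

```python
import re
import sys
from pathlib import Path

# Simple failure markers; real batch frameworks have their own status codes.
FAILURE_PATTERN = re.compile(r"(ERROR|FAILED|ABENDED)", re.IGNORECASE)


def scan_batch_log(log_path: str) -> list[str]:
    """Return lines that look like batch failures so L2 can triage before users notice."""
    hits = []
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        if FAILURE_PATTERN.search(line):
            hits.append(line.strip())
    return hits


if __name__ == "__main__":
    # Hypothetical default path; pass the real log file as an argument.
    target = sys.argv[1] if len(sys.argv) > 1 else "/var/log/batch/overnight.log"
    failures = scan_batch_log(target)
    if failures:
        print(f"{len(failures)} suspect lines found in {target}:")
        print("\n".join(failures[:20]))
        sys.exit(1)  # non-zero exit lets Control-M/Autosys or a monitor alert on it
    print("No failures detected.")
```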

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interface directly with customers. Preferred Education Master's Degree Required Technical And Professional Expertise 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred Technical And Professional Experience Intuitive individual with an ability to manage change and proven time management skills. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Keeps technical knowledge up to date by attending educational workshops and reviewing publications.
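As a small example of the Pub/Sub work referenced in this listing, the following hedged sketch publishes a JSON event with the google-cloud-pubsub client. The project, topic, and payload are placeholders chosen for illustration only.

```python
import json

from google.cloud import pubsub_v1


def publish_event(project_id: str, topic_id: str, payload: dict) -> str:
    """Publish a JSON event to a Pub/Sub topic and return the server-assigned message ID."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    # Pub/Sub payloads are bytes; encode the JSON document before publishing.
    future = publisher.publish(topic_path, json.dumps(payload).encode("utf-8"))
    return future.result()  # blocks until the publish is acknowledged


if __name__ == "__main__":
    # Hypothetical project, topic, and event.
    msg_id = publish_event("my-project", "orders", {"order_id": "A-1001", "amount": 499.0})
    print("published message", msg_id)
```

A consumer on the other side would typically be a Dataflow job or a Cloud Run/Cloud Functions subscriber, matching the service list in the listing.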

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies