
214 Dataproc Jobs

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Technology Service Specialist, AVP at our Pune location, you will be an integral part of the Technology, Data, and Innovation (TDI) Private Bank team. In this role, you will provide 2nd-level application support for business applications used in branches, by mobile sales, or via the internet. Your expertise in Incident Management and Problem Management will be crucial in ensuring the stability of these applications. Partnerdata, the central client reference data system in Germany, is a core banking system that integrates many banking processes and applications through numerous interfaces. With the recent migration to Google Cloud Platform (GCP), you will be involved in operating and further developing applications and functionalities on the cloud platform. Your focus will also extend to regulatory topics surrounding partner/client relationships. We are seeking individuals who can contribute to this contemporary and emerging cloud application area.

Key Responsibilities:
- Ensure optimum service level to supported business lines
- Oversee resolution of incidents and problems within the team
- Assist in managing business stakeholder relationships
- Define and manage OLAs with relevant stakeholders
- Monitor team performance, adherence to processes, and alignment with business SLAs
- Manage escalations and work with relevant functions to resolve issues quickly
- Identify areas for improvement and implement best practices in your area of expertise
- Mentor and coach Production Management Analysts within the team
- Fulfil service requests, communicate with the Service Desk function, and participate in major incident calls
- Document tasks, incidents, problems, changes, and knowledge bases
- Improve monitoring of applications and implement automation of tasks

Skills and Experience:
- Service Operations Specialist experience in a global operations context
- Extensive experience supporting complex application and infrastructure domains
- Ability to manage and mentor Service Operations teams
- Strong ITIL/best-practice service context knowledge
- Proficiency in interface technologies, communication protocols, and ITSM tools
- Bachelor's degree in IT or a Computer Science related discipline
- ITIL certification and experience with the ITSM tool ServiceNow preferred
- Knowledge of the banking domain and regulatory topics
- Experience with databases like BigQuery and understanding of Big Data and GCP technologies
- Proficiency in tools like GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, and Dataflow
- Architectural skills for big data solutions and interface architecture

Area-Specific Tasks/Responsibilities:
- Handle Incident/Problem Management and Service Request Fulfilment
- Analyze and resolve incidents escalated from 1st-level support
- Support the resolution of high-impact incidents and escalate when necessary
- Provide solutions for open problems and support service transition for new projects/applications

Joining our team, you will receive training, development opportunities, coaching from experts, and a culture of continuous learning to support your career progression. We value diversity and promote a positive, fair, and inclusive work environment at Deutsche Bank Group. Visit our company website for more information.

Posted 1 day ago


5.0 - 9.0 years

0 Lacs

karnataka

On-site

We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry. WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product- and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap.

As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at communicating effectively with senior management while also being hands-on with the code to deliver effective solutions. The technical environment you will work in includes C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elastic Search, Grafana Dashboard, and Docker.

Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers, who will handle it confidentially in compliance with privacy and data protection laws.

Posted 2 days ago


6.0 - 10.0 years

1 - 1 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider with operations throughout India and many other countries in the world. The client is a global company with shared ideals and a deep sense of family. From its earliest days as a pioneer of modern transportation, it has sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid

Position Description: At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions (see the sketch below). You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Skills Required: BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer (Airflow), Cloud SQL, Compute Engine, Terraform, Tekton, PostgreSQL, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes, Google Cloud Platform.

Experience Required:
- GCP Data Engineer certified.
- 5+ years designing and implementing data warehouses and ETL processes, delivering high-quality data solutions.
- 5+ years of complex SQL development experience.
- 2+ years of experience with programming languages such as Python, Java, or Apache Beam.
- Experienced cloud engineer with 3+ years of GCP expertise, specializing in taking cloud infrastructure and applications into production-scale solutions.

Experience Preferred:
- In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage.
- DevOps tools such as Tekton, GitHub, Terraform, and Docker.
- Expert in designing, optimizing, and troubleshooting complex data pipelines.
- Experience developing with microservice architecture on a container orchestration framework.
- Experience in designing pipelines and architectures for data processing.
- Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
- Self-directed; works independently with minimal supervision and adapts to ambiguous environments.
- Evidence of a proactive problem-solving mindset and willingness to take the initiative.
- Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas with cross-functional teams and all levels of management.
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
- Data engineering or development experience gained in a regulated financial environment.
- Experience in coaching and mentoring data engineers.
- Project management tools like Atlassian JIRA.
- Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Experience with data security, governance, and compliance best practices in the cloud.
- Experience with AI solutions or platforms that support AI solutions.
- Experience using data science concepts on production datasets to generate insights.

Experience Range: 5+ years
Education Required: Bachelor's degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
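The data-mart building described in this listing can be illustrated with a small, hedged sketch using the BigQuery Python client: a single ELT statement that rebuilds a partitioned, clustered mart table from previously landed source data. The project, dataset, table, and column names are assumptions for illustration only, not details from the posting.

```python
# Minimal sketch: rebuild a partitioned, clustered data mart table in BigQuery
# from previously landed source data. Project, dataset, and column names are
# illustrative assumptions, not taken from the job posting.
from google.cloud import bigquery

client = bigquery.Client(project="example-credit-dw")  # hypothetical project

DDL = """
CREATE OR REPLACE TABLE analytics_mart.receivables_daily
PARTITION BY DATE(snapshot_ts)
CLUSTER BY account_id AS
SELECT
  r.account_id,
  r.snapshot_ts,
  SUM(r.outstanding_amount) AS outstanding_amount,
  COUNT(*)                  AS open_items
FROM landing.receivables_raw AS r
GROUP BY r.account_id, r.snapshot_ts
"""

job = client.query(DDL)   # submits the DDL/ELT statement
job.result()              # blocks until the table is (re)built
print(f"Mart rebuilt, job_id={job.job_id}")
```

Partitioning on the snapshot date and clustering on the account key is one common way to keep BigQuery scan costs predictable for downstream analytics.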

Posted 2 days ago


5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Data Warehouse (DWH) professional with relevant experience in Google Cloud Platform (GCP), you will be responsible for developing and implementing robust data architectures. This includes designing data lakes, data warehouses, and data marts by utilizing GCP services such as BigQuery, Dataflow, DataProc, and Cloud Storage. Your role will involve designing and implementing data models that meet business requirements while ensuring data integrity, consistency, and accessibility. Your deep understanding of GCP services and best practices for data warehousing, data analytics, and machine learning will be crucial in this role.

You will also be tasked with planning and executing data migration strategies from on-premises or other cloud environments to GCP. Optimizing data pipelines and query performance to facilitate efficient data processing and analysis will be a key focus area. Additionally, your proven experience in managing teams and project delivery will be essential for success in this position. Collaborating closely with stakeholders to comprehend their requirements and deliver effective solutions will be a significant part of your responsibilities. Any experience with Looker will be considered advantageous for this role.

Posted 3 days ago


6.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Job Summary
Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). With your expertise, you'll enable data-driven decision-making, contribute to strategic business initiatives, and ensure robust data infrastructure. This position offers an opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth.

Software Requirements
Required:
- Proficiency in data engineering tools and frameworks such as Hive, Apache Spark, and Python (version 3.x)
- Extensive experience working with Google Cloud Platform (GCP) offerings including Dataflow, BigQuery, Cloud Storage, and Pub/Sub
- Familiarity with Git, Jira, and Confluence for version control and collaboration
Preferred:
- Experience with additional GCP services like DataProc, Data Studio, or Cloud Composer
- Exposure to other programming languages such as Java or Scala
- Knowledge of data security best practices and tools

Overall Responsibilities
- Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs
- Collaborate with cross-functional teams to translate business requirements into technical solutions
- Build and maintain data models, ensuring data quality, integrity, and security
- Participate actively in code reviews, adhering to best practices and standards
- Develop automated and efficient data workflows to improve system performance
- Stay updated with emerging data engineering trends and continuously improve technical skills
- Provide technical guidance and support to team members, fostering a collaborative environment
- Ensure timely delivery of deliverables aligned with project milestones

Technical Skills (by Category)
- Programming languages: Essential - Python; Preferred - Java, Scala
- Data management and databases: Experience with Hive, BigQuery, and relational databases; knowledge of data warehousing concepts and SQL proficiency
- Cloud technologies: Extensive hands-on experience with GCP services including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Composer; ability to build and optimize data pipelines leveraging GCP offerings
- Frameworks and libraries: Spark (PySpark preferred); Hadoop ecosystem experience is advantageous
- Development tools and methodologies: Agile/Scrum methodologies, version control with Git, project tracking via JIRA, documentation on Confluence
- Security protocols: Understanding of data security, privacy, and compliance standards

Experience Requirements
- Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development
- Proven experience in designing and implementing data solutions on cloud platforms, particularly GCP
- Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects
- Experience working with cross-disciplinary teams and understanding varied stakeholder requirements
- Exposure to industry best practices for data security, governance, and quality assurance is desired

Day-to-Day Activities
- Attend daily stand-up meetings and contribute to project planning sessions
- Collaborate with business analysts, data scientists, and other stakeholders to understand data needs
- Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability
- Perform regular code reviews, provide constructive feedback, and uphold coding standards
- Document technical solutions and maintain clear records of data workflows
- Troubleshoot and resolve technical issues in data processing environments
- Participate in continuous learning initiatives to stay abreast of technological developments
- Support team members by sharing knowledge and resolving technical challenges

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory
- Demonstrable experience in data engineering and cloud technologies

Professional Competencies
- Strong analytical and problem-solving skills, with a focus on outcome-driven solutions
- Excellent communication and interpersonal skills to collaborate effectively within teams and with stakeholders
- Ability to work independently with minimal supervision and manage multiple priorities effectively
- Adaptability to evolving technologies and project requirements
- Demonstrated initiative in driving tasks forward and a continuous-improvement mindset
- Strong organizational skills with a focus on quality and attention to detail

Synechron's Diversity & Inclusion Statement
Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice

Posted 3 days ago


3.0 - 7.0 years

13 - 18 Lacs

Pune

Work from Office

About The Role:
Job Title: Technical Specialist - Big Data (PySpark) Developer
Location: Pune, India

Role Description
This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology (see the sketch below). They should be hands-on and able to work independently, requiring minimal technical/tool guidance, and should be able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design and discuss your own solution for addressing user stories and tasks.
- Develop, unit-test, integrate, deploy, maintain, and improve software.
- Perform peer code review.
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meetings, sprint planning, and retrospectives.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables.
- Take responsibility for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.

Your skills and experience
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala).
- Hands-on experience in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps.
- Create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms (OpenShift/Kubernetes/Docker) configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR.
- Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow.
- Strong analytical skills and proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations; excellent team player.
- Open-minded and willing to learn the business and technology; keeps pace with technical innovation and understands the relevant business area.
- Ability to share information and transfer knowledge to team members.

How we'll support you
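To make the Spark/Hive expectations concrete, here is a minimal, hedged PySpark sketch of the kind of batch job described: read a Hive-managed table, aggregate, and write a partitioned curated table. Database, table, and column names are hypothetical, not taken from the posting.

```python
# Minimal PySpark sketch: read from Hive, aggregate, write back as a partitioned table.
# Database, table, and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("partner-data-aggregation")
    .enableHiveSupport()          # needed to read/write Hive-managed tables
    .getOrCreate()
)

raw = spark.table("staging.partner_events")

daily = (
    raw.filter(F.col("event_type") == "UPDATE")
       .groupBy("partner_id", F.to_date("event_ts").alias("event_date"))
       .agg(F.count("*").alias("update_count"))
)

(
    daily.write
         .mode("overwrite")
         .partitionBy("event_date")
         .saveAsTable("curated.partner_daily_updates")
)
spark.stop()
```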

Posted 3 days ago


8.0 - 12.0 years

30 - 35 Lacs

Pune

Work from Office

About The Role:
Job Title: Senior Engineer PD, AVP
Location: Pune, India

Role Description
Our team is part of the Technology, Data, and Innovation (TDI) Private Bank area. Within TDI, Partnerdata is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on premise and in the cloud, RESTful services, and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
- Support of the migration of current functionalities to Google Cloud
- Responsibility for the stability of the application landscape and support of software releases
- Support of L3 topics and application governance
- Coding in the CTM area as part of an agile team (Java, Scala, Spring Boot)

Your skills and experience
- Experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies
- Strong understanding of the Data Mesh approach and integration patterns
- Understanding of party data and integration with product data
- Architectural skills for big data solutions, especially interface architecture, allowing a fast start
- Experience in at least Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting
- Knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes
- Able to work well in teams as well as independently; constructive and target-oriented
- Good English skills, able to communicate professionally as well as informally in small talk with the team

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.

Posted 3 days ago


4.0 - 8.0 years

0 Lacs

karnataka

On-site

We are looking for a skilled Data Governance Engineer to take charge of developing and overseeing robust data governance frameworks on Google Cloud Platform (GCP). Your role will involve leveraging your expertise in data management, metadata frameworks, compliance, and security within cloud environments to ensure the implementation of high-quality, secure, and compliant data practices aligned with organizational objectives.

With a minimum of 4 years of experience in data governance, data management, or data security, you should possess hands-on proficiency with Google Cloud Platform (GCP) tools such as BigQuery, Dataflow, Dataproc, and Google Data Catalog. Additionally, a strong command of metadata management, data lineage, and data quality tools like Collibra and Informatica is crucial. A deep understanding of data privacy laws and compliance frameworks, coupled with proficiency in SQL and Python for governance automation, is essential. Experience with RBAC, encryption, and data masking techniques, and familiarity with ETL/ELT pipelines and data warehouse architectures, will be advantageous.

Your responsibilities will include developing and executing comprehensive data governance frameworks with a focus on metadata management, lineage tracking, and data quality. You will be tasked with defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards using GCP-native services like IAM, DLP, and KMS. Managing metadata repositories using tools such as Collibra, Informatica, Alation, or Google Data Catalog will also be part of your role. Collaborating with data engineering and analytics teams to ensure compliance with regulatory standards like GDPR, CCPA, and SOC 2, and automating processes for data classification, monitoring, and reporting using Python and SQL (see the sketch below) will be key responsibilities. Supporting data stewardship initiatives and optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices will also be part of your role.

At GlobalLogic, we offer a culture of caring, emphasizing inclusivity and personal growth. You will have access to continuous learning and development opportunities, engaging and meaningful work, as well as a healthy work-life balance. Join our high-trust organization where integrity is paramount, and collaborate with us to engineer innovative solutions that have a lasting impact on industries worldwide.
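As one hedged illustration of the governance automation mentioned above, the sketch below scans BigQuery datasets for tables missing required classification labels using the BigQuery Python client. The label policy and project name are assumptions, not part of the job description.

```python
# Minimal governance-automation sketch: flag BigQuery tables that are missing
# required classification labels. Required label keys and project are hypothetical.
from google.cloud import bigquery

REQUIRED_LABELS = {"data_owner", "classification"}   # hypothetical policy
client = bigquery.Client(project="example-governance-project")

violations = []
for dataset in client.list_datasets():
    for table_item in client.list_tables(dataset.reference):
        table = client.get_table(table_item.reference)
        missing = REQUIRED_LABELS - set(table.labels or {})
        if missing:
            violations.append((table.full_table_id, sorted(missing)))

for table_id, missing in violations:
    print(f"{table_id}: missing labels {missing}")
```

A report like this can feed classification dashboards or ticketing workflows as part of the stewardship processes described in the listing.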

Posted 5 days ago


6.0 - 10.0 years

0 Lacs

haryana

On-site

Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M/IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements
- BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments like Google Cloud Platform (mandatory) and AWS/Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and/or agile methodologies.
- Google Data Engineer certification.

What We Offer
- Culture of caring: we prioritize a culture of caring, where people come first, fostering an inclusive environment of acceptance and belonging.
- Learning and development: commitment to continuous learning and growth, offering various programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting and meaningful work: engage in impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: embrace work-life balance with diverse career areas, roles, and work arrangements to support personal well-being.
- High-trust organization: join a high-trust organization with a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.

Posted 5 days ago


4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

We are looking for a skilled Data Governance Engineer to spearhead the development and supervision of robust data governance frameworks on Google Cloud Platform (GCP). You should have a deep understanding of data management, metadata frameworks, compliance, and security within cloud environments to ensure the adoption of high-quality, secure, and compliant data practices aligned with organizational objectives.

The ideal candidate should possess:
- Over 4 years of experience in data governance, data management, or data security.
- Hands-on expertise with Google Cloud Platform (GCP) tools like BigQuery, Dataflow, Dataproc, and Google Data Catalog.
- Proficiency in metadata management, data lineage, and data quality tools such as Collibra and Informatica.
- Comprehensive knowledge of data privacy laws and compliance frameworks.
- Strong skills in SQL and Python for governance automation.
- Experience with RBAC, encryption, and data masking techniques.
- Familiarity with ETL/ELT pipelines and data warehouse architectures.

Your main responsibilities will include:
- Developing and implementing comprehensive data governance frameworks emphasizing metadata management, lineage tracking, and data quality.
- Defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards utilizing GCP-native services like IAM, DLP, and KMS.
- Managing metadata repositories using tools like Collibra, Informatica, Alation, or Google Data Catalog.
- Collaborating with data engineering and analytics teams to ensure compliance with GDPR, CCPA, SOC 2, and other regulatory standards.
- Automating processes for data classification, monitoring, and reporting using Python and SQL.
- Supporting data stewardship initiatives, including the creation of data dictionaries and governance documentation.
- Optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices.

At GlobalLogic, we offer:
- A culture of caring that prioritizes inclusivity, acceptance, and personal connections.
- Continuous learning and development opportunities to enhance your skills.
- Engagement in interesting and meaningful work with cutting-edge solutions.
- Balance and flexibility to help you integrate work and life effectively.
- A high-trust organization committed to integrity and ethical practices.

GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to world-renowned companies, focusing on creating innovative digital products and experiences. Join us to collaborate on transforming businesses through intelligent products, platforms, and services.

Posted 6 days ago


5.0 - 13.0 years

0 Lacs

pune, maharashtra

On-site

You are a highly skilled and experienced Cloud Architect/Engineer with deep expertise in Google Cloud Platform (GCP). Your primary responsibility is to design, build, and manage scalable and reliable cloud infrastructure on GCP. You will leverage various GCP services such as Compute Engine, Cloud Run, BigQuery, Pub/Sub, Cloud Functions, Dataflow, Dataproc, IAM, and Cloud Storage to ensure high-performance cloud solutions. Your role also includes developing and maintaining CI/CD pipelines, automating infrastructure deployment using Infrastructure as Code (IaC) principles, and implementing best practices in cloud security, monitoring, performance tuning, and logging. Collaboration with cross-functional teams to deliver cloud solutions aligned with business objectives is essential.

You should have 5+ years of hands-on experience in cloud architecture and engineering, with at least 3 years of practical experience on Google Cloud Platform (GCP). In-depth expertise in the GCP services mentioned above is required. A strong understanding of networking, security, containerization (Docker, Kubernetes), and CI/CD pipelines is essential. Experience with monitoring, performance tuning, and logging in cloud environments is preferred. Familiarity with DevSecOps practices and tools such as HashiCorp Vault is a plus. Your role as a GCP Cloud Architect/Engineer will contribute to ensuring system reliability, backup, and disaster recovery strategies. This hybrid role is based out of Pune and requires a total of 10 to 13 years of relevant experience.

Posted 6 days ago


5.0 - 9.0 years

0 Lacs

haryana

On-site

You will be working as a Technical Lead Data Engineer for a leading data and AI/ML solutions provider based in Gurgaon. In this role, you will be responsible for designing, developing, and leading complex data projects, primarily on Google Cloud Platform and other modern data stacks. Your key responsibilities will include leading the design and implementation of robust data pipelines, collaborating with cross-functional teams to deliver end-to-end data solutions, owning project modules, developing technical roadmaps, and implementing data governance frameworks on GCP. You will be required to integrate GCP data services like BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and GenAI with platforms such as Snowflake. Additionally, you will write efficient code in Python, SQL, and ETL/orchestration tools, utilize containerized solutions for scalable deployments, and apply expertise in PySpark, Kafka, and advanced data querying for high-volume data environments. Monitoring, optimizing, and troubleshooting system performance, reducing job run-times through architecture optimization, developing data warehouses, and mentoring team members will also be part of your role.

To be successful in this position, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Extensive hands-on experience with Google Cloud Platform data services, Snowflake integration, strong programming skills in Python and SQL, proficiency in PySpark, Kafka, and data querying tools, and experience with containerized solutions using Google Kubernetes Engine are essential. Strong communication and documentation skills, experience with large distributed datasets, and the ability to balance short-term deliverables with long-term technical sustainability are also required. Prior leadership experience in data engineering teams and exposure to cloud data platforms are desirable.

This role offers you the opportunity to lead high-impact data projects for reputed clients in a fast-growing data consulting environment, work with cutting-edge technologies, and collaborate in an innovative and growth-oriented culture.

Posted 6 days ago


5.0 - 7.0 years

5 - 14 Lacs

Pune, Gurugram, Bengaluru

Work from Office

• Hands-on experience in object-oriented programming using Python, PySpark, APIs, SQL, BigQuery, and GCP
• Building data pipelines for huge volumes of data (see the sketch below)
• Dataflow, Dataproc, and BigQuery
• Deep understanding of ETL concepts
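A brief, hedged PySpark sketch of such a pipeline is shown below: it reads from BigQuery via the spark-bigquery connector, aggregates, and writes the result back. Table names and the temporary GCS bucket are illustrative assumptions, and the connector must be available on the cluster (it ships with Dataproc images).

```python
# Small PySpark sketch: BigQuery -> aggregate -> BigQuery, via the spark-bigquery
# connector. Table names and the temporary bucket are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

orders = (
    spark.read.format("bigquery")
    .option("table", "example-project.sales.orders_raw")
    .load()
)

daily_totals = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("order_count"))
)

(
    daily_totals.write.format("bigquery")
    .option("table", "example-project.sales.daily_totals")
    .option("temporaryGcsBucket", "example-temp-bucket")   # required for indirect writes
    .mode("overwrite")
    .save()
)
```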

Posted 6 days ago


5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired:
- Strong at SQL (multi-pyramid SQL joins)
- Python skills (FastAPI or Flask framework)
- PySpark
- Commitment to work in overlapping hours
- GCP knowledge (BQ, DataProc, and Dataflow)
- Amex experience preferred (not mandatory)
- Power BI preferred (not mandatory)

Key skills: Flask, PySpark, Python, SQL

Posted 6 days ago


4.0 - 8.0 years

12 - 18 Lacs

Hyderabad

Hybrid

Egen is a fast-growing and entrepreneurial company with a data-first mindset. We bring together the best engineering talent working with the most advanced technology platforms, including Google Cloud and Salesforce, to help clients drive action and impact through data and insights. We are committed to being a place where the best people choose to work so they can apply their engineering and technology expertise to envision what is next for how data and platforms can change the world for the better. We are dedicated to learning, thrive on solving tough problems, and continually innovate to achieve fast, effective results.

Job Summary
We are seeking a talented and passionate Python Developer to join our dynamic team. In this role, you will be instrumental in designing, developing, and deploying scalable and efficient applications on the Google Cloud Platform. You will have the opportunity to work on exciting projects and contribute to the growth and innovation of our products and services. You will also provide mentorship to other engineers and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, develop, and maintain robust and scalable applications using Python.
- Build and consume RESTful APIs using FastAPI (see the sketch below).
- Deploy and manage applications on the Google Cloud Platform (GCP).
- Collaborate effectively with cross-functional teams, including product managers, designers, and other engineers.
- Write clean, well-documented, and testable code.
- Participate in code reviews to ensure code quality and adherence to best practices.
- Troubleshoot and debug issues in development and production environments.
- Create clear and effective documents.
- Stay up to date with the latest industry trends and technologies.
- Assist junior team members.

Required Skills and Experience
- 5+ years of relevant work experience in software development using Python
- Solid understanding and practical experience with the FastAPI framework
- Hands-on experience with the Google Cloud Platform (GCP) and its core services
- Experience with CI/CD pipelines
- Ability to write unit test cases and execute them
- Able to discuss and propose architectural changes
- Knowledge of security best practices
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Bachelor's degree in Computer Science or a related field (or equivalent practical experience)

Optional Skills (a plus)
- Experience with any front-end framework such as Angular, React, Vue.js, etc.
- Familiarity with DevOps principles and practices.
- Experience with infrastructure-as-code tools like Terraform.
- Knowledge of containerization technologies such as Docker and Kubernetes.
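As a small, hedged illustration of the FastAPI work described, the sketch below defines one typed endpoint plus a unit test using FastAPI's test client. The endpoint, model, and field names are invented for illustration and are not the client's actual API.

```python
# Minimal FastAPI sketch: a typed REST endpoint and a pytest-style unit test.
# Endpoint and field names are illustrative assumptions only.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    sku: str
    quantity: int

@app.post("/orders")
def create_order(order: Order) -> dict:
    # In a real service this would persist the order (e.g., Cloud SQL) instead.
    return {"status": "accepted", "sku": order.sku, "quantity": order.quantity}

client = TestClient(app)

def test_create_order():
    response = client.post("/orders", json={"sku": "ABC-123", "quantity": 2})
    assert response.status_code == 200
    assert response.json()["status"] == "accepted"
```

A service like this is commonly containerized and deployed to Cloud Run or GKE, which matches the GCP deployment responsibilities listed above.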

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Engineer, VP at our Pune location in India, you will be responsible for managing and performing work across various areas of the bank's IT platform and infrastructure. Your role will involve analysis, development, and administration, with possible oversight of engineering delivery for specific departments. Your day-to-day tasks will include planning and developing engineering solutions to achieve business goals, ensuring reliability and resiliency in solutions, and promoting maintainability and reusability. You will play a key role in architecting well-integrated solutions and reviewing engineering plans to enhance capability and reusability.

You will collaborate with a cross-functional agile delivery team, bringing an innovative approach to software development using the latest technologies and practices to deliver business value efficiently. Your focus will be on fostering a collaborative environment, open code sharing, and supporting all stages of software delivery from analysis to production support. In this role, you will enjoy benefits such as a best-in-class leave policy, gender-neutral parental leave, sponsorship for industry certifications, employee assistance programs, comprehensive insurance coverage, and health screening. You will be expected to lead engineering efforts, champion best practices, collaborate with stakeholders to achieve business outcomes, and acquire functional knowledge of the business capabilities being digitized.

Key skills required:
- GCP services: Composer, BigQuery, DataProc, GCP cloud architecture, etc.
- Big Data/Hadoop: Hive, HQL, HDFS
- Programming: Python, PySpark, SQL query writing
- Scheduler: Control-M or any other scheduler
- Experience with database engines (e.g., SQL Server, Oracle), ETL pipeline development, Tableau, Looker, and performance tuning
- Proficiency in architecture design, technical documentation, and mapping business requirements to technology

Desired skills:
- Understanding of workflow automation and Agile methodology
- Terraform coding and experience in project management
- Prior experience in the banking/finance domain and hybrid cloud solutions, preferably using GCP
- Product development experience

Join us to excel in your career with training, coaching, and continuous learning opportunities. Our culture promotes responsibility, commercial thinking, initiative, and collaboration. We value a positive, fair, and inclusive work environment where we celebrate the successes of our people. Embrace the empowering culture at Deutsche Bank Group and be part of our success together. For more information about our company and teams, please visit our website at https://www.db.com/company/company.htm.

Posted 1 week ago


2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a GCP Senior Data Engineer/Architect, you will play a crucial role in our team by designing, developing, and implementing robust and scalable data solutions on the Google Cloud Platform (GCP). Collaborating closely with Architects and Business Analysts, especially for our US clients, you will translate data requirements into effective technical solutions. Your responsibilities will include designing and implementing scalable data warehouse and data lake solutions, orchestrating complex data pipelines, leading cloud data lake implementation projects, participating in cloud migration projects, developing containerized applications, optimizing SQL queries, writing automation scripts in Python, and utilizing various GCP data services such as BigQuery, Bigtable, and Cloud SQL.

Your expertise in data warehouse and data lake design and implementation, experience in data pipeline development and tuning, hands-on involvement in cloud migration and data lake projects, proficiency in Docker and GKE, strong SQL and Python scripting skills, and familiarity with GCP services like BigQuery, Cloud SQL, Dataflow, and Composer will be essential for this role. Additionally, knowledge of data governance principles, experience with dbt, and the ability to work effectively within a team and adapt to project needs are highly valued. Strong communication skills, willingness to work in UK shift timings, and openness to giving and receiving feedback are important traits that will contribute to your success in this role.

Posted 1 week ago


8.0 - 13.0 years

0 Lacs

hyderabad, telangana

On-site

You are an experienced GCP Data Engineer with 8+ years of expertise in designing and implementing robust, scalable data architectures on Google Cloud Platform. Your role involves defining and leading the implementation of data architecture strategies using GCP services to meet business and technical requirements. As a visionary GCP Data Architect, you will be responsible for architecting and optimizing scalable data pipelines using Google Cloud Storage, BigQuery, Dataflow, Cloud Composer, Dataproc, and Pub/Sub. You will design solutions for large-scale batch processing and real-time streaming, leveraging tools like Dataproc for distributed data processing.

Your responsibilities also include establishing and enforcing data governance, security frameworks, and best practices for data management. You will conduct architectural reviews and performance tuning for GCP-based data solutions, ensuring cost-efficiency and scalability. Collaborating with cross-functional teams, you will translate business needs into technical requirements and deliver innovative data solutions.

The required skills for this role include strong expertise in GCP services such as Google Cloud Storage, BigQuery, Dataflow, Cloud Composer, Dataproc, and Pub/Sub. Proficiency in designing and implementing data processing frameworks for ETL/ELT, batch, and real-time workloads is essential. You should have an in-depth understanding of data modeling, data warehousing, and distributed data processing using tools like Dataproc and Spark. Hands-on experience with Python, SQL, and modern data engineering practices is required. Your knowledge of data governance, security, and compliance best practices on GCP will be crucial in this role. Strong problem-solving, leadership, and communication skills are necessary for guiding teams and engaging stakeholders effectively.

Posted 1 week ago


12.0 - 15.0 years

35 - 60 Lacs

Chennai, Bengaluru

Hybrid

Job Description:
Job Title: GCP Solution Architect
Location: Chennai | Bangalore
Experience: 12-15 years in IT

Key Responsibilities
- Architect and lead GCP-native data and AI solutions tailored to AdTech use cases such as real-time bidding, campaign analytics, customer segmentation, and lookalike modeling.
- Design high-throughput data pipelines, audience data lakes, and analytics platforms leveraging GCP services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, Vertex AI, etc.
- Collaborate with ad operations, marketing teams, and digital product owners to understand business goals and translate them into scalable and performant solutions.
- Integrate with third-party AdTech and MarTech platforms, including DSPs, SSPs, CDPs, DMPs, ad exchanges, and identity resolution systems.
- Ensure architectural alignment with data privacy regulations (GDPR, CCPA) and support consent management and data anonymization strategies.
- Drive technical leadership across multi-disciplinary teams (Data Engineering, MLOps, Analytics) and enforce best practices in data governance, model deployment, and cloud optimization.
- Lead discovery workshops, solution assessments, and architecture reviews during pre-sales and delivery cycles.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- GCP services: BigQuery, Cloud Pub/Sub, Dataflow, Dataproc, Cloud Composer (Airflow), Vertex AI, AI Platform, AutoML, Cloud Functions, Cloud Run, Looker, Apigee, Dataplex, GKE.
- Deep understanding of programmatic advertising (RTB, OpenRTB), cookie-less identity frameworks, and AdTech/MarTech data flows.
- Experience integrating or building components like Data Management Platforms (DMPs), Customer Data Platforms (CDPs), Demand-Side Platforms (DSPs), ad servers, attribution engines, and real-time bidding pipelines.
- Event-driven and microservices architecture using APIs, streaming pipelines, and edge delivery networks.
- Integration with platforms like Google Marketing Platform, Google Ads Data Hub, Snowplow, Segment, or similar.
- Strong understanding of IAM, data encryption, PII anonymization, and regulatory compliance (GDPR, CCPA, HIPAA if applicable).
- Experience with CI/CD pipelines (Cloud Build), Infrastructure as Code (Terraform), and MLOps pipelines using Vertex AI or Kubeflow.
- Strong experience in Python and SQL; familiarity with Scala or Java is a plus.
- Experience with version control (Git), Agile delivery, and architectural documentation tools.

If you know someone suitable, feel free to forward their resume to aarthi.murali@zucisystems.com.

Regards,
Aarthi Murali

Posted 1 week ago


5.0 - 10.0 years

25 - 35 Lacs

Noida, Pune, Bengaluru

Work from Office

Description:
We are seeking a proficient Data Governance Engineer to lead the development and management of robust data governance frameworks on Google Cloud Platform (GCP). The ideal candidate will bring in-depth expertise in data management, metadata frameworks, compliance, and security within cloud environments to ensure high-quality, secure, and compliant data practices aligned with organizational goals.

Requirements:
- 4+ years of experience in data governance, data management, or data security.
- Hands-on experience with Google Cloud Platform (GCP), including BigQuery, Dataflow, Dataproc, and Google Data Catalog.
- Strong command of metadata management, data lineage, and data quality tools (e.g., Collibra, Informatica).
- Deep understanding of data privacy laws and compliance frameworks.
- Proficiency in SQL and Python for governance automation.
- Experience with RBAC, encryption, and data masking techniques.
- Familiarity with ETL/ELT pipelines and data warehouse architectures.

Job Responsibilities:
- Develop and implement comprehensive data governance frameworks, focusing on metadata management, lineage tracking, and data quality.
- Define, document, and enforce data governance policies, access control mechanisms, and security standards using GCP-native services such as IAM, DLP, and KMS.
- Manage metadata repositories using tools like Collibra, Informatica, Alation, or Google Data Catalog.
- Collaborate with data engineering and analytics teams to ensure compliance with GDPR, CCPA, SOC 2, and other regulatory standards.
- Automate processes for data classification, monitoring, and reporting using Python and SQL.
- Support data stewardship initiatives, including the development of data dictionaries and governance documentation.
- Optimize ETL/ELT pipelines and data workflows to meet governance best practices.

What We Offer:
- Exciting projects: we focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative environment: you can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
- Work-life balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional development: our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skills training.
- Excellent benefits: we provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun perks: we want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, and we offer discounts for popular stores and restaurants!

Posted 1 week ago


4.0 - 7.0 years

18 - 20 Lacs

Pune

Hybrid

Job Title: GCP Data Engineer
Location: Pune, India
Experience: 4 to 7 Years
Job Type: Full-Time

Job Summary:
We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune. The ideal candidate should have strong experience working with Google Cloud Platform (GCP), including Dataproc and Cloud Composer (Apache Airflow), and must be proficient in Python, SQL, and Apache Spark. The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives.

Key Responsibilities:
- Design and implement scalable and efficient data pipelines on GCP, leveraging Dataproc, BigQuery, Cloud Storage, and Pub/Sub.
- Develop and manage ETL/ELT workflows using Apache Spark, SQL, and Python.
- Orchestrate and automate data workflows using Cloud Composer (Apache Airflow) (see the sketch below).
- Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources.
- Optimize pipeline performance and ensure cost-effective data processing.
- Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions.
- Implement and monitor data quality checks, validation, and transformation logic.

Required Skills:
- Strong hands-on experience with Google Cloud Platform (GCP)
- Proficiency with Dataproc for big data processing and Apache Spark
- Expertise in Python and SQL for data manipulation and scripting
- Experience with Cloud Composer/Apache Airflow for workflow orchestration
- Knowledge of data modeling, warehousing, and pipeline best practices
- Solid understanding of ETL/ELT architecture and implementation
- Strong troubleshooting and problem-solving skills

Preferred Qualifications:
- GCP Data Engineer or Cloud Architect Certification.
- Familiarity with BigQuery, Dataflow, and Pub/Sub.

Interested candidates can send your resume to pranitathapa@onixnet.com.
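A minimal, hedged Cloud Composer (Airflow) sketch of the orchestration described above: a daily DAG with a single task that submits a PySpark job to an existing Dataproc cluster. The project, region, cluster, and GCS paths are placeholder assumptions, not details from the posting.

```python
# Minimal Airflow DAG sketch: submit a PySpark job to an existing Dataproc cluster
# once a day. Project, cluster, bucket, and file names are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PYSPARK_JOB = {
    "reference": {"project_id": "example-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://example-bucket/jobs/transform_sales.py"},
}

with DAG(
    dag_id="daily_sales_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_transform = DataprocSubmitJobOperator(
        task_id="run_pyspark_transform",
        job=PYSPARK_JOB,
        region="us-central1",
        project_id="example-project",
    )
```

In Cloud Composer this file would simply be dropped into the environment's DAGs bucket; additional tasks (data quality checks, BigQuery loads) would be chained onto the same DAG.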

Posted 1 week ago


7.0 - 10.0 years

20 - 27 Lacs

Noida

Work from Office

Job Responsibilities:

Technical Leadership:
- Provide technical leadership and mentorship to a team of data engineers.
- Design, architect, and implement highly scalable, resilient, and performant data pipelines; experience using GCP technologies (e.g., Dataproc, Cloud Composer, Pub/Sub, BigQuery) is a plus.
- Guide the team in adopting best practices for data engineering, including CI/CD, infrastructure-as-code, and automated testing.
- Conduct code reviews and design reviews, and provide constructive feedback to team members.
- Stay up to date with the latest technologies and trends in data engineering.

Data Pipeline Development:
- Develop and maintain robust and efficient data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources.
- Implement data quality checks and monitoring systems to ensure data accuracy and integrity.
- Collaborate with cross-functional teams and business stakeholders to understand data requirements and deliver data solutions that meet their needs.

Platform Building & Maintenance:
- Design and implement secure and scalable data storage solutions.
- Manage and optimize cloud infrastructure costs related to data engineering workloads.
- Contribute to the development and maintenance of data engineering tooling and infrastructure to improve team productivity and efficiency.

Collaboration & Communication:
- Effectively communicate technical designs and concepts to both technical and non-technical audiences.
- Collaborate effectively with other engineering teams, product managers, and business stakeholders.
- Contribute to knowledge sharing within the team and across the organization.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering and software development.
- 7+ years of experience coding in SQL and Python/Java.
- 3+ years of hands-on experience building and managing data pipelines in a cloud environment such as GCP.
- Strong programming skills in Python or Java, with experience in developing data-intensive applications.
- Expertise in SQL and data modeling techniques for both transactional and analytical workloads.
- Experience with CI/CD pipelines and automated testing frameworks.
- Excellent communication, interpersonal, and problem-solving skills.
- Experience leading or mentoring a team of engineers.

Posted 1 week ago


7.0 - 10.0 years

1 - 6 Lacs

Chennai

Work from Office

Key Responsibilities
- Design and develop large-scale data pipelines using GCP services (BigQuery, Dataflow, Dataproc, Pub/Sub).
- Implement batch and real-time ETL/ELT pipelines using Apache Beam and Spark (see the sketch below).
- Manage and optimize BigQuery queries, partitioning, clustering, and cost control.
- Build distributed processing jobs on Dataproc (Hadoop/Spark) clusters.
- Develop and maintain streaming data pipelines with Pub/Sub and Dataflow.
- Work with Cloud Spanner to support highly available and globally scalable databases.
- Integrate data from various sources, manage schema evolution, and ensure data quality.
- Collaborate with data analysts, data scientists, and business teams to deliver scalable data solutions.
- Follow CI/CD, DevOps, and infrastructure-as-code best practices using tools like Terraform or Cloud Build.
- Monitor, debug, and tune data pipelines for performance and reliability.

Must-Have Skills
- GCP expertise: BigQuery, Dataflow, Dataproc, Cloud Spanner, Pub/Sub.
- Strong SQL skills and performance optimization in BigQuery.
- Solid experience in streaming (real-time) and batch processing.
- Proficiency in Apache Beam, Apache Spark, or similar frameworks.
- Python or Java for data processing logic.
- Understanding of data architecture, pipeline design patterns, and distributed systems.
- Experience with IAM roles, service accounts, and GCP security best practices.
- Familiarity with monitoring tools: Stackdriver, Dataflow job metrics, BigQuery query plans.
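The batch/streaming responsibilities above can be illustrated with a hedged Apache Beam sketch: a streaming pipeline that reads JSON events from Pub/Sub, parses them, and appends rows to BigQuery. Subscription, table, and field names are assumptions; in production this would run on the DataflowRunner.

```python
# Minimal Apache Beam sketch of a streaming pipeline: Pub/Sub -> parse -> BigQuery.
# Subscription, table, and field names are illustrative assumptions only.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message into a BigQuery-ready row."""
    event = json.loads(message.decode("utf-8"))
    return {"event_id": event["id"], "event_ts": event["ts"], "payload": json.dumps(event)}

options = PipelineOptions(streaming=True)  # pass DataflowRunner options in production

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/example-project/subscriptions/events-sub")
        | "ParseJson" >> beam.Map(parse_event)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example_dataset.events",
            schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```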

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Data Specialist, you will be responsible for utilizing your expertise in ETL fundamentals, SQL, BigQuery, Dataproc, Python, Data Catalog, data warehousing, and various other tools to contribute to the successful implementation of data projects. Your role will involve working with technologies such as Cloud Trace, Cloud Logging, Cloud Storage, and Data Fusion to build and maintain a modern data platform.

To excel in this position, you should possess a minimum of 5 years of experience in the data engineering field, with a focus on the GCP cloud data implementation suite, including BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, and Cloud Storage. Your strong understanding of very large-scale data architecture and hands-on experience in data warehouses, data lakes, and analytics platforms will be crucial for the success of our projects.

Key requirements:
- Minimum 5 years of experience in data engineering
- Hands-on experience with the GCP cloud data implementation suite
- Strong expertise in GBQ Query, Python, Apache Airflow, and SQL (BigQuery preferred)
- Extensive hands-on experience with SQL and Python for working with data

If you are passionate about data and have a proven track record of delivering results in a fast-paced environment, we invite you to apply for this exciting opportunity to be a part of our dynamic team.

Posted 1 week ago


5.0 - 10.0 years

4 - 9 Lacs

Chennai, Bengaluru

Work from Office

Dear Candidate,

This is with reference to your profile on the job portal. Deloitte India Consulting has an immediate requirement for the following role.

Notice period: Immediate to 4 weeks (max)
Location: Any
Skill: GCP Data Engineer

If you are interested, please share your updated resume along with the following details (mandatory) to Smouni@deloitte.com:
- Candidate Name
- Mobile No.
- Email ID
- Skill
- Total Experience
- Education Details
- Current Location
- Requested Location
- Current Firm
- Current CTC
- Expected CTC
- Notice Period/LWD
- Feedback

Posted 1 week ago
