
419 Dataproc Jobs - Page 2

Set up a job alert
JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. 
Required Technical and Professional Expertise 6–8 years of experience working as a Data Engineer or in Azure cloud modernization Good experience in Power BI for data visualization and dashboard development Strong experience in developing data pipelines and using tools such as AWS Glue, Azure Databricks, Synapse, or Google Dataproc Proficient in working with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB Excellent problem-solving, analytical, and critical thinking skills Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes Preferred Technical and Professional Experience Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 5 days ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interface directly with customers. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Posted 5 days ago

Apply

5.0 - 8.0 years

6 - 9 Lacs

Kochi

Work from Office

We are seeking experienced Data Engineers to join our growing team, with positions open for both senior (9+ years) and mid-level (5–9 years) professionals. This role focuses on developing scalable data solutions in cloud or relational database environments and requires strong hands-on experience with large datasets and implementation projects. The ideal candidate will have a strong foundation in Data Engineering or Data Warehousing (DWH), deep SQL skills, and a proven track record of working on real-world, implementation-focused data projects (not support roles). You'll be responsible for designing, developing, and optimising end-to-end data flows, ensuring clean, secure, and high-performing data movement across platforms. Core Responsibilities: Develop and maintain data pipelines and DWH systems across cloud or on-premise platforms Work on large, complex datasets to support analytical and operational reporting Contribute to design and execution of implementation projects, from planning to production Use advanced SQL for data extraction, transformation, and manipulation Collaborate with cross-functional teams including architects, analysts, and stakeholders Nice to Have: Proficiency in Python or any other scripting language Understanding of prompt engineering and AI-driven data processes Experience working on healthcare or life sciences data projects

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at Walmart Global Tech, you will be responsible for architecting, designing, and implementing high-performance data ingestion and integration processes in a complex, large-scale data environment. Your role will involve developing and implementing databases, data collection systems, data analytics, and other strategies to optimize statistical efficiency and quality. You will also oversee and mentor the data engineering team's practices to ensure data privacy and security compliance. Collaboration is key in this role as you will work closely with data scientists, data analysts, and other stakeholders to understand data needs and deliver on those requirements. Additionally, you will collaborate with all business units and engineering teams to develop a long-term strategy for data platform architecture. Your responsibilities will also include developing and maintaining scalable data pipelines, building new API integrations, and monitoring data quality to ensure accurate and reliable production data. To be successful in this role, you should have a Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field, along with at least 12 years of proven experience in data engineering, software development, or a similar data management role. You should have strong knowledge and experience with Big Data technologies such as Hadoop, Spark, and Kafka, as well as proficiency in scripting languages like Python, Java, Scala, etc. Experience with SQL and NoSQL databases, deep understanding of data structures and algorithms, and familiarity with machine learning algorithms and principles are also preferred. Excellent communication and leadership skills are essential for this role, along with hands-on experience in data processing and manipulation. Expertise with GCP cloud and GCP data processing tools like GCS, DataProc, DPaaS, BigQuery, Hive, as well as experience with Orchestration tools like Airflow, Automic, Autosys, are highly valued. Join Walmart Global Tech, where you can make a significant impact by leveraging your expertise to innovate at scale, influence millions, and shape the future of retail. With a hybrid work environment, competitive compensation, incentive awards, and a range of benefits, you'll have the opportunity to grow your career and contribute to a culture where everyone feels valued and included. Walmart Global Tech is committed to being an Equal Opportunity Employer, fostering a workplace culture where everyone is respected and valued for their unique contributions. Join us in creating opportunities for all associates, customers, and suppliers, and help us build a more inclusive Walmart for everyone.

Posted 6 days ago

Apply

2.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Engineer with 5-8 years of IT experience, including 2-3 years focused on GCP data services, you will be a valuable addition to our dynamic data and analytics team. Your primary responsibility will be to design, develop, and implement robust and insightful data-intensive solutions using GCP Cloud services. Your role will entail a deep understanding of data engineering, proficiency in SQL, and extensive experience with various GCP services such as BigQuery, DataFlow, DataStream, Pub/Sub, Dataproc, Cloud Storage, and other key GCP services for Data Pipeline Orchestration. You will be instrumental in the construction of a GCP native cloud data platform. Key Responsibilities: - Lead and contribute to the development, deployment, and lifecycle management of applications on GCP, utilizing services like Compute Engine, Kubernetes Engine (GKE), Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud SQL, Cloud Storage, and more. Required Skills and Qualifications: - Bachelor's degree in Computer Science, Information Technology, Data Analytics, or a related field. - 5-8 years of overall IT experience, with hands-on experience in designing and developing data applications on GCP Cloud. - In-depth expertise in GCP services and architectures, including Compute, Storage & Databases, Data & Analytics, and Operations & Monitoring. - Proven ability to translate business requirements into technical solutions. - Strong analytical, problem-solving, and critical thinking skills. - Effective communication and interpersonal skills for collaboration with technical and non-technical stakeholders. - Experience in Agile development methodology. - Ability to work independently, manage multiple priorities, and meet deadlines. Preferred Skills (Nice to Have): - Experience with other Hyperscalers. - Proficiency in Python or other scripting languages for data manipulation and automation. If you are a highly skilled and experienced Data Engineer with a passion for leveraging GCP data services to drive innovation, we invite you to apply for this exciting opportunity in Gurugram or Hyderabad.

Posted 6 days ago

Apply

4.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

At Solidatus, we are revolutionizing the way organizations comprehend their data. We are an award-winning, venture-backed software company often referred to as the Git for Metadata. Our platform enables businesses to extract, model, and visualize intricate data lineage flows. Through our unique lineage-first approach and active AI development, we offer organizations unparalleled clarity and robust control over their data's journey and significance. As a rapidly growing B2B SaaS business with fewer than 100 employees, your contributions play a pivotal role in shaping our product. Renowned for our innovation and collaborative culture, we invite you to join us as we expand globally and redefine the future of data understanding. We are currently looking for an experienced Data Pipeline Engineer/Data Lineage Engineer to support the development of data lineage solutions for our clients' existing data pipelines. In this role, you will collaborate with cross-functional teams to ensure the integrity, accuracy, and timeliness of the data lineage solution. Your responsibilities will involve working directly with clients to maximize the value derived from our product and assist them in achieving their contractual objectives. **Experience:** - 4-10 years of relevant experience **Qualifications:** - Proven track record as a Data Engineer or in a similar capacity, with hands-on experience in constructing and optimizing data pipelines and infrastructure. - Demonstrated experience working with Big Data and related tools. - Strong problem-solving and analytical skills to diagnose and resolve complex data-related issues. - Profound understanding of data engineering principles and practices. - Exceptional communication and collaboration abilities to work effectively in cross-functional teams and convey technical concepts to non-technical stakeholders. - Adaptability to new technologies, tools, and methodologies within a dynamic environment. - Proficiency in writing clean, scalable, and robust code using Python or similar programming languages. Background in software engineering is advantageous. **Desirable Languages/Tools:** - Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. - Experience with XML in transformation pipelines. - Familiarity with major Database technologies like Oracle, Snowflake, and MS SQL Server. - Strong grasp of data modeling concepts including relational and dimensional modeling. - Exposure to big data technologies and frameworks such as Databricks, Spark, Kafka, and MS Notebooks. - Knowledge of modern data architectures like lakehouse. - Experience with CI/CD pipelines and version control systems such as Git. - Understanding of ETL tools like Apache Airflow, Informatica, or SSIS. - Familiarity with data governance and best practices in data management. - Proficiency in cloud platforms and services like AWS, Azure, or GCP for deploying and managing data solutions. - Strong problem-solving and analytical skills for resolving complex data-related issues. - Proficiency in SQL for database management and querying. - Exposure to tools like Open Lineage, Apache Spark Streaming, Kafka, or similar for real-time data streaming. - Experience utilizing data tools in at least one cloud service - AWS, Azure, or GCP. **Key Responsibilities:** - Implement robust data lineage solutions utilizing Solidatus products to support business intelligence, analytics, and data governance initiatives.
- Collaborate with stakeholders to comprehend data lineage requirements and translate them into technical and business solutions. - Develop and maintain lineage data models, semantic metadata systems, and data dictionaries. - Ensure data quality, security, and compliance with relevant regulations. - Uphold Solidatus implementation and data lineage modeling best practices at client sites. - Stay updated on emerging technologies and industry trends to enhance data lineage architecture practices continually. **Qualifications:** - Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. - Proven experience in data architecture, focusing on large-scale data systems across multiple companies. - Proficiency in data modeling, database design, and data warehousing concepts. - Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark). - Strong understanding of data governance, data quality, and data security principles. - Excellent communication and interpersonal skills to thrive in a collaborative environment. **Why Join Solidatus?** - Participate in an innovative company that is shaping the future of data management. - Collaborate with a dynamic and talented team in a supportive work environment. - Opportunities for professional growth and career advancement. - Flexible working arrangements, including hybrid work options. - Competitive compensation and benefits package. If you are passionate about data architecture and eager to make a significant impact, we invite you to apply now and become a part of our team at Solidatus.

Posted 6 days ago

Apply

3.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a seasoned professional with extensive experience in machine learning operations (MLOps), you will be responsible for leading a talented team and developing a comprehensive technical strategy. Your expertise in Python programming and ML frameworks such as TensorFlow or PyTorch will be crucial in implementing MLOps best practices, including model versioning, monitoring, automated deployment, and infrastructure automation. Your in-depth knowledge of Google Cloud Platform services, including Data Fusion, Dataproc, and Dataflow, will play a key role in data processing and pipeline orchestration. Experience with PostgreSQL databases and data integration tools like Qlik Replicate will further enhance your capabilities in this role. Security and privacy considerations for machine learning systems will be a top priority, requiring expertise in data encryption, access control, and compliance with regulations such as GDPR and HIPAA. Strong communication and leadership skills are essential for engaging both technical and non-technical stakeholders effectively. With a background in Computer Science, Engineering, or related field, along with 5+ years of experience managing software engineering or MLOps teams, you will be well-equipped to take on this challenging role. Hands-on experience deploying and managing machine learning models in production for 3+ years will further strengthen your candidacy. Your overall IT industry experience of 14+ years, along with relevant certifications in MLOps and Cloud platforms (especially GCP Professional Machine Learning Engineer or Data Engineer), will be valuable assets in this position. In this role, you will be leading the development and execution of a comprehensive technical strategy for the end-to-end ML lifecycle and experimentation capabilities. Your responsibilities will include fostering a culture of continuous learning and innovation, participating hands-on in coding, code reviews, troubleshooting, and mentoring, and creating scalable MLOps frameworks and infrastructure to support the full machine learning pipeline. Collaboration with data scientists, data engineers, software developers, and business stakeholders will be essential to ensure robustness, scalability, and performance in integrating ML models into production. Implementation of rigorous security best practices to maintain compliance with industry standards and regulations will be a critical aspect of your role. Furthermore, you will be responsible for maintaining thorough technical documentation, guidelines, and knowledge-sharing resources to support the ongoing success of the MLOps initiatives.

Posted 6 days ago

Apply

5.0 - 10.0 years

18 - 33 Lacs

Japan, Chennai

Work from Office

C1X AdTech Pvt Ltd is a fast-growing product and engineering-driven AdTech company building next-generation advertising and marketing technology platforms. Our mission is to empower enterprise clients with the smartest marketing solutions, enabling seamless integration with personalization engines and delivering cross-channel marketing capabilities. We are dedicated to enhancing customer engagement and experiences while focusing on increasing Lifetime Value (LTV) through consistent messaging across all channels. Our engineering team spans front end (UI), back end (Java/Node.js APIs), Big Data, and DevOps, working together to deliver scalable, high-performance products for the digital advertising ecosystem. Role Overview As a Data Engineer, you will be a key member of our data engineering team, responsible for building and maintaining large-scale data products and infrastructure. You'll shape the next generation of data analytics tech stack by leveraging modern big data technologies. This role involves working closely with business stakeholders, product managers, and engineering teams to meet diverse data requirements that drive business insights and product innovation. Objectives Design, build, and maintain scalable data infrastructure for collection, storage, and processing. Enable easy access to reliable data for data scientists, analysts, and business users. Support data-driven decision-making and improve organizational efficiency through high-quality data products. Responsibilities Build large-scale batch and real-time data pipelines using frameworks like Apache Spark on AWS or GCP. Design, manage, and automate data flows between multiple data sources. Implement best practices for continuous integration, testing, and data quality assurance. Maintain data documentation, definitions, and governance practices. Optimize performance, scalability, and cost-effectiveness of data systems. Collaborate with stakeholders to translate business needs into data-driven solutions. Qualifications Bachelor's degree in Computer Science, Engineering, or related field (exceptional coding performance on platforms like LeetCode/HackerRank may substitute). 2+ years' experience working on full lifecycle Big Data projects. Strong foundation in data structures, algorithms, and software design principles. Proficiency in at least two programming languages – Python or Scala preferred. Experience with AWS services such as EMR, Lambda, S3, DynamoDB (GCP equivalents also relevant). Hands-on experience with Databricks Notebooks and Jobs API. Strong expertise in big data frameworks: Spark, MapReduce, Hadoop, Sqoop, Hive, HDFS, Airflow, Zookeeper. Familiarity with containerization (Docker) and workflow management tools (Apache Airflow). Intermediate to advanced knowledge of SQL (relational + NoSQL databases like Postgres, MySQL, Redshift, Redis). Experience with SQL tuning, schema design, and analytical programming. Proficient in Git (version control) and collaborative workflows. Comfortable working across diverse technologies in a fast-paced, results-oriented environment.
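
For illustration only, below is a minimal PySpark batch job of the kind this role describes (Spark pipelines over cloud object storage). The bucket paths, field names, and aggregation are invented for the sketch and are not from the posting.

```python
# Illustrative sketch: a small PySpark batch job reading raw JSON events from
# object storage, aggregating them, and writing partitioned Parquet back out.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

def main():
    spark = SparkSession.builder.appName("daily-events-batch").getOrCreate()

    # Read raw JSON events (a GCS path is assumed; an S3 path would work the same way).
    events = spark.read.json("gs://example-raw-bucket/events/dt=2024-01-01/")

    # Basic cleansing and aggregation: one row per campaign per day.
    daily = (
        events
        .filter(F.col("event_type") == "impression")
        .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
        .agg(F.count("*").alias("impressions"))
    )

    # Write the curated dataset back as date-partitioned Parquet.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "gs://example-curated-bucket/daily_impressions/"
    )
    spark.stop()

if __name__ == "__main__":
    main()
```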

Posted 6 days ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Hyderabad

Work from Office

Role & responsibilities / Preferred candidate profile: Bachelor's degree or equivalent experience in Computer Science, Mathematics, Information Technology or a related field. 5+ years of solid hands-on experience as a Data Engineer, demonstrating increasing levels of responsibility and technical leadership. Strong understanding of data warehousing concepts and data modeling principles. Proven experience with designing and implementing data pipelines using GCP BigQuery or a comparable cloud platform. Strong skills in SQL and scripting languages like Python (or similar). Expert knowledge of data quality tools and techniques. Excellent communication and collaboration skills. Ability to work independently and as part of a team as needed. Passion for data and a desire to learn and adapt to new technologies. Experience with other GCP services like Cloud Storage, Dataflow, Dataproc, BigQuery and Pub/Sub or similar cloud platforms. Experience with cloud deployment and automation tools like Terraform. Experience with data visualization tools like Tableau / Power BI / Looker. Experience with health care data.

Posted 6 days ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Job Description: GCP Lead (Google Cloud Platform) Location: Brookefield, Bangalore, India Department: Software Development Legal Entity: FGSI Why Join Fossil Group? At Fossil Group, we are part of an international team that dares to dream, disrupt, and deliver innovative watches, jewelry, and leather goods to the world. We're committed to long-term value creation, driven by technology and our core values: Authenticity, Grit, Curiosity, Humor, and Impact. If you are a forward-thinker who thrives in a diverse, global setting, we want to hear from you. Job Summary We are seeking a passionate and technically strong GCP Lead to join our Global Technology team at Fossil Group. This role is responsible for leading cloud-based architecture design, implementation, automation, and optimization efforts on Google Cloud Platform (GCP). You will serve as a technical mentor and thought leader, enabling the business with modern, scalable cloud infrastructure. Responsibilities Lead the design, development, and deployment of end-to-end GCP-based solutions. Architect cloud-native data pipelines, applications, and infrastructure using GCP tools such as BigQuery, Dataflow, Dataproc, Airflow and Cloud Composer. Collaborate with stakeholders across engineering, data, security, and operations to align cloud solutions with business goals. Provide technical guidance and mentorship to team members on GCP best practices and automation. Implement monitoring, logging, and alerting systems using Stackdriver, Cloud Logging, Cloud Monitoring, or similar tools. Oversee performance tuning, cost optimization, and security hardening of GCP workloads. Drive continuous integration and deployment pipelines using Terraform, Jenkins, GitOps, etc. Ensure compliance with cloud security standards, access control policies, and audit requirements. Participate in architectural reviews, sprint planning, and roadmap discussions. Manage escalations, troubleshoot cloud issues, and ensure uptime and service reliability. Requirements Bachelor's degree in Computer Science, Information Technology, or equivalent experience. 7-12 years of total experience, with 3+ years in GCP-focused roles and prior leadership/architect experience. Hands-on experience with GCP services: BigQuery, Dataproc, Dataflow, Airflow, Cloud Composer. Strong development background in Python or Java, with experience building data pipelines and APIs. Expertise in infrastructure as code (IaC) using Terraform, Deployment Manager, or similar tools. Working knowledge of containerization and orchestration (Docker, Kubernetes, GKE). Proficiency in CI/CD pipelines, version control (Git), and Agile delivery methodologies. Familiarity with data security, encryption, identity management, and compliance frameworks. Excellent problem-solving skills, system-level thinking, and performance tuning expertise. Strong communication and stakeholder management skills. EEO Statement At Fossil, we believe our differences not only make us stronger as a team, but also help us create better products and a richer community. We are an Equal Employment Opportunity Employer dedicated to a policy of nondiscrimination in all employment practices without regard to age, disability, gender identity or expression, marital status, pregnancy, race, religion, sexual orientation, or any other protected characteristic.

Posted 6 days ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Chennai

Work from Office

Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interface directly with customers. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses. Your Role Should have developed or worked on at least one Gen AI project. Has data pipeline implementation experience with any of these cloud providers - AWS, Azure, GCP. Experience with cloud storage, cloud database, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3. Has good knowledge of cloud compute services and load balancing. Has good knowledge of cloud identity management, authentication and authorization. Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions. Experience in using cloud data integration services for structured, semi-structured and unstructured data such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc. Your Profile Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling. Able to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud. Must understand networking, security, design principles and best practices in cloud. What you will love about working here We recognize the significance of flexible work arrangements to provide support. Be it remote work, or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Pune

Work from Office

About The Role Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. About The Role - Grade Specific The role supports the team in building and maintaining data infrastructure and systems within an organization. Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Google Big Table, Google Data Proc, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - RedHat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Bengaluru

Work from Office

About The Role Project Role: Data Science Practitioner Project Role Description: Formulate, design and deliver AI/ML-based decision-making frameworks and models for business outcomes. Measure and justify the value of AI/ML-based solutions. Must have skills: Google Cloud Platform Architecture Good to have skills: NA Minimum 18 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Technology Architect, you will design, develop, and deliver advanced analytics and machine learning solutions leveraging Google Cloud's AI/ML capabilities. You will define models and workflows to meet performance, scalability, and accuracy requirements, ensuring business insights are actionable and aligned with strategic goals. You will be responsible for the successful deployment of AI/ML solutions on Google Cloud. Roles & Responsibilities: Expected to be a Subject Matter Expert (SME) with deep expertise in data science and AI/ML on Google Cloud. Influence and guide analytical and AI-driven decision-making across multiple teams. Engage with stakeholders to identify high-impact data science opportunities. Design, build, and optimize ML models for predictive, prescriptive, and descriptive analytics. Collaborate with data engineering teams to ensure high-quality data pipelines for ML workloads. Ensure solutions adhere to ethical AI principles, security requirements, and compliance standards. Lead model deployment, monitoring, and continuous improvement in production environments. Professional & Technical Skills: Must Have Skills: Proficiency in Google Cloud AI/ML tools (Vertex AI, AI Platform, BigQuery ML, TensorFlow, AutoML). Strong understanding of statistical modeling, machine learning algorithms, and data mining techniques. Experience with large-scale data processing frameworks (Dataflow, Dataproc) and SQL-based analytics (BigQuery). Proficiency in Python/R for data science, with experience in ML frameworks (TensorFlow, Scikit-learn, PyTorch). Knowledge of MLOps practices, including CI/CD for ML and model governance. Additional Information: The candidate should have a minimum of 18 years of experience in data science, with a focus on Google Cloud AI/ML solutions. This position is based Pan India. Qualification: 15 years full time education
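
As a hedged illustration of the BigQuery ML capability listed above, the sketch below trains and evaluates a simple classification model from Python. The project, dataset, table, and column names are assumptions for the sketch, not values from the posting.

```python
# Illustrative sketch only: training and evaluating a BigQuery ML logistic
# regression model via the google-cloud-bigquery client. Dataset and column
# names are invented placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

create_model_sql = """
CREATE OR REPLACE MODEL `example-project.analytics.churn_model`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, support_tickets, churned
FROM `example-project.analytics.customer_features`
"""
client.query(create_model_sql).result()  # blocks until training completes

eval_sql = "SELECT * FROM ML.EVALUATE(MODEL `example-project.analytics.churn_model`)"
for row in client.query(eval_sql).result():
    print(dict(row))  # precision, recall, accuracy, etc.
```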

Posted 1 week ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Mumbai

Work from Office

About The Role Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery Good to have skills: Google Cloud Data Services, Microsoft SQL Server Minimum 3 year(s) of experience is required Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making. Must have Skills: Google BigQuery Good to Have Skills: No Technology Specialization
Job Requirements - Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX) 1: Proven track record of delivering data integration and data warehousing solutions 2: Strong SQL and hands-on experience (No FLEX); experience with data integration and migration projects 3: Proficient in BigQuery SQL language (No FLEX) 4: Understanding of cloud-native services: bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes; experience in cloud solutions, mainly data platform services; GCP certifications 5: Experience in Shell Scripting, Python (No FLEX), Oracle, SQL
Technical Experience: 1: Expert in Python (No FLEX). Strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy, deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred 2: Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX) 3: Proficiency with tools to automate AZDO CI/CD pipelines such as Control-M, GitHub, JIRA, Confluence 4: Open mindset, ability to quickly adapt to new technologies 5: Performance tuning of BigQuery SQL scripts 6: GCP certification preferred 7: Experience working in an agile environment
Professional Attributes: 1: Good communication skills 2: Ability to collaborate with different teams and suggest solutions 3: Ability to work independently with little supervision or as part of a team 4: Good analytical and problem-solving skills 5: Good team handling skills
Educational Qualification: 15 years of full time education
Additional Information: Candidate should be ready for Shift B and work as an individual contributor Qualification 15 years full time education
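
By way of illustration, here is a minimal sketch of the BigQuery-plus-Python (Pandas) work this role calls out, using the google-cloud-bigquery client with a parameterized query. The project, dataset, table, and column names are placeholders, not from the posting.

```python
# Illustrative sketch only: run a parameterized BigQuery query and return the
# result as a Pandas DataFrame. Requires the pandas extra of google-cloud-bigquery.
from google.cloud import bigquery

def high_value_customers(min_spend: float = 1000.0):
    client = bigquery.Client(project="example-project")

    sql = """
        SELECT customer_id, SUM(order_total) AS total_spend
        FROM `example-project.sales.orders`
        GROUP BY customer_id
        HAVING SUM(order_total) >= @min_spend
        ORDER BY total_spend DESC
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("min_spend", "FLOAT64", min_spend)
        ]
    )
    # to_dataframe() materializes the query result into Pandas for analysis.
    return client.query(sql, job_config=job_config).to_dataframe()

if __name__ == "__main__":
    print(high_value_customers())
```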

Posted 1 week ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Coimbatore

Work from Office

About The Role Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery Good to have skills: Microsoft SQL Server, Google Cloud Data Services Minimum 3 year(s) of experience is required Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making. Must have Skills: Google BigQuery Good to Have Skills: No Technology Specialization
Job Requirements - Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX) 1: Proven track record of delivering data integration and data warehousing solutions 2: Strong SQL and hands-on experience (No FLEX); experience with data integration and migration projects 3: Proficient in BigQuery SQL language (No FLEX) 4: Understanding of cloud-native services: bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes; experience in cloud solutions, mainly data platform services; GCP certifications 5: Experience in Shell Scripting, Python (No FLEX), Oracle, SQL
Technical Experience: 1: Expert in Python (No FLEX). Strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy, deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred 2: Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX) 3: Proficiency with tools to automate AZDO CI/CD pipelines such as Control-M, GitHub, JIRA, Confluence 4: Open mindset, ability to quickly adapt to new technologies 5: Performance tuning of BigQuery SQL scripts 6: GCP certification preferred 7: Experience working in an agile environment
Professional Attributes: 1: Good communication skills 2: Ability to collaborate with different teams and suggest solutions 3: Ability to work independently with little supervision or as part of a team 4: Good analytical and problem-solving skills 5: Good team handling skills
Educational Qualification: 15 years of full time education
Additional Information: Candidate should be ready for Shift B and work as an individual contributor Qualification 15 years full time education

Posted 1 week ago

Apply

10.0 - 15.0 years

25 - 35 Lacs

Chennai

Work from Office

We are looking for an experienced Senior Software Engineer with strong expertise in full-stack development and cloud technologies. The ideal candidate will be responsible for designing, developing, testing, and maintaining scalable software applications and products. You will be deeply involved in the entire software development lifecycle, right from architecture design to deployment, while collaborating with cross-functional teams and driving user-centric solutions. Key Responsibilities Engage with customers to understand use cases, pain points, and requirements, and advocate for user-focused solutions. Design, develop, and deliver software applications using various tools, frameworks, and methodologies (Agile). Assess business and technical requirements to determine the best technology stack, integration methods, and deployment strategy. Create high-level software architecture designs, including structure, components, and interfaces. Collaborate closely with product owners, designers, and architects to align on solutions. Define and implement software testing strategies, policies, and best practices. Continuously optimize application performance and adopt new technologies to enhance efficiency. Apply programming practices such as Test-Driven Development (TDD), Continuous Integration (CI), and Continuous Delivery (CD). Implement secure coding practices, including encryption and anonymization of user data. Develop user-friendly, interactive front-end interfaces and robust back-end services (APIs, microservices). Leverage cloud platforms and emerging technologies to build future-ready solutions. Skills Required Programming & Data Engineering: Python, PySpark, API Development, SQL/Postgres Cloud Platforms & Tools: Google Cloud Platform (BigQuery, Cloud Run, Dataflow, Dataproc, Data Fusion, Cloud SQL), IBM WebSphere Application Server Infrastructure & DevOps: Terraform, Tekton, Airflow Other Expertise: MDM (Master Data Management), application optimization, microservices Experience Required 10+ years of experience in IT, with 8+ years in software development. Strong practical experience in at least 2 programming languages OR advanced expertise in 1 language. Experience mentoring and guiding engineering teams.

Posted 1 week ago

Apply

4.0 - 7.0 years

15 - 18 Lacs

Pune

Hybrid

Job Title: GCP Data Engineer Location: Pune, India Experience: 3 to 7 Years Job Type: Full-Time Job Summary: We are looking for a highly skilled GCP Data Engineer with 3 to 7 years of experience to join our data engineering team in Pune. The ideal candidate should have strong experience working with Google Cloud Platform (GCP), including Dataproc and Cloud Composer (Apache Airflow), and must be proficient in Python, SQL, and Apache Spark. The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives. Key Responsibilities: Design and implement scalable and efficient data pipelines on GCP, leveraging Dataproc, BigQuery, Cloud Storage, and Pub/Sub. Develop and manage ETL/ELT workflows using Apache Spark, SQL, and Python. Orchestrate and automate data workflows using Cloud Composer (Apache Airflow). Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources. Optimize pipeline performance and ensure cost-effective data processing. Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions. Implement and monitor data quality checks, validation, and transformation logic. Required Skills: Strong hands-on experience with Google Cloud Platform (GCP) Proficiency with Dataproc for big data processing and Apache Spark Expertise in Python and SQL for data manipulation and scripting Experience with Cloud Composer / Apache Airflow for workflow orchestration Knowledge of data modeling, warehousing, and pipeline best practices Solid understanding of ETL/ELT architecture and implementation Strong troubleshooting and problem-solving skills Preferred Qualifications: GCP Data Engineer or Cloud Architect Certification. Familiarity with BigQuery, Dataflow, and Pub/Sub. Interested candidates can send their resume to pranitathapa@onixnet.com
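
As a hedged sketch of the Cloud Composer (Airflow) orchestration described above, the DAG below submits a PySpark job to an existing Dataproc cluster. The project, region, cluster, and file paths are placeholders, not values from the posting.

```python
# Illustrative sketch only: a minimal Airflow DAG that submits a PySpark job to
# a pre-existing Dataproc cluster once a day. All identifiers are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PYSPARK_JOB = {
    "reference": {"project_id": "example-project"},
    "placement": {"cluster_name": "example-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://example-bucket/jobs/transform.py"},
}

with DAG(
    dag_id="daily_dataproc_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_transform = DataprocSubmitJobOperator(
        task_id="run_transform",
        job=PYSPARK_JOB,
        region="us-central1",
        project_id="example-project",
    )
```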

Posted 1 week ago

Apply

6.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Job Summary Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). With your expertise, you'll enable data-driven decision-making, contribute to strategic business initiatives, and ensure robust data infrastructure. This position offers an opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth. Software Requirements Required: Proficiency in Data Engineering tools and frameworks such as Hive , Apache Spark , and Python (version 3.x) Extensive experience working with Google Cloud Platform (GCP) offerings including Dataflow, BigQuery, Cloud Storage, and Pub/Sub Familiarity with Git , Jira , and Confluence for version control and collaboration Preferred: Experience with additional GCP services like DataProc, Data Studio, or Cloud Composer Exposure to other programming languages such as Java or Scala Knowledge of data security best practices and tools Overall Responsibilities Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs Collaborate with cross-functional teams to translate business requirements into technical solutions Build and maintain data models, ensuring data quality, integrity, and security Participate actively in code reviews, adhering to best practices and standards Develop automated and efficient data workflows to improve system performance Stay updated with emerging data engineering trends and continuously improve technical skills Provide technical guidance and support to team members, fostering a collaborative environment Ensure timely delivery of deliverables aligned with project milestones Technical Skills (By Category) Programming Languages: Essential: Python (required) Preferred: Java, Scala Data Management & Databases: Experience with Hive, BigQuery, and relational databases Knowledge of data warehousing concepts and SQL proficiency Cloud Technologies: Extensive hands-on experience with GCP services including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Composer Ability to build and optimize data pipelines leveraging GCP offerings Frameworks & Libraries: Spark (PySpark preferred), Hadoop ecosystem experience is advantageous Development Tools & Methodologies: Agile/Scrum methodologies, version control with Git, project tracking via JIRA, documentation on Confluence Security Protocols: Understanding of data security, privacy, and compliance standards Experience Requirements Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development Proven experience in designing and implementing data solutions on cloud platforms, particularly GCP Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects Experience working with cross-disciplinary teams and understanding varied stakeholder requirements Exposure to industry best practices for data security, governance, and quality assurance is desired Day-to-Day Activities Attend daily stand-up meetings and contribute to project planning sessions Collaborate with business analysts, data scientists, and other stakeholders to understand data needs Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability Perform regular code reviews, provide constructive feedback, and uphold coding standards Document technical 
solutions and maintain clear records of data workflows Troubleshoot and resolve technical issues in data processing environments Participate in continuous learning initiatives to stay abreast of technological developments Support team members by sharing knowledge and resolving technical challenges Qualifications Bachelor's or Master's degree in Computer Science, Information Technology, or a related field Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory Demonstrable experience in data engineering and cloud technologies Professional Competencies Strong analytical and problem-solving skills, with a focus on outcome-driven solutions Excellent communication and interpersonal skills to effectively collaborate within teams and with stakeholders Ability to work independently with minimal supervision and manage multiple priorities effectively Adaptability to evolving technologies and project requirements Demonstrated initiative in driving tasks forward and continuous improvement mindset Strong organizational skills with a focus on quality and attention to detail

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Coimbatore

Work from Office

Project Role: AI / ML Engineer Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills: Google Cloud Machine Learning Services Good to have skills: GCP Dataflow, Google Pub/Sub, Google Dataproc Minimum 2 year(s) of experience is required Educational Qualification: 15 years full time education
Summary: We are seeking a skilled GCP Data Engineer to join our dynamic team. The ideal candidate will design, build, and maintain scalable data pipelines and solutions on Google Cloud Platform (GCP). This role requires expertise in cloud-based data engineering and hands-on experience with GCP tools and services, ensuring efficient data integration, transformation, and storage for various business use cases.
Roles & Responsibilities: Design, develop, and deploy data pipelines using GCP services such as Dataflow, BigQuery, Pub/Sub, and Cloud Storage. Optimize and monitor data workflows for performance, scalability, and reliability. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and implement solutions. Implement data security and governance measures, ensuring compliance with industry standards. Automate data workflows and processes for operational efficiency. Troubleshoot and resolve technical issues related to data pipelines and platforms. Document technical designs, processes, and best practices to ensure maintainability and knowledge sharing.
Professional & Technical Skills: a) Must Have: Proficiency in GCP tools such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, and Cloud Storage. Expertise in SQL and experience with data modeling and query optimization. Solid programming skills in Python for data processing and ETL development. Experience with CI/CD pipelines and version control systems (e.g., Git). Knowledge of data warehousing concepts, ELT/ETL processes, and real-time streaming. Strong understanding of data security, encryption, and IAM policies on GCP. b) Good to Have: Experience with Dialogflow or CCAI tools. Knowledge of machine learning pipelines and integration with AI/ML services on GCP. Certifications such as Google Professional Data Engineer or Google Cloud Architect.
Additional Information: - The candidate should have a minimum of 3 years of experience in Google Cloud Machine Learning Services and overall experience of 3-5 years - The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions. Qualifications 15 years full time education

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 16 Lacs

pune

Work from Office

Job Description:
Job Title: Lead Engineer
Location: Pune
Corporate Title: Director
As a lead engineer within the Transaction Monitoring department, you will lead and drive forward critical engineering initiatives and improvements to our application landscape, whilst supporting and leading the engineering teams to excel in their roles. You will be closely aligned to the architecture function and delivery leads, ensuring alignment with planning and that correct design and architecture governance is followed for all implementation work. You will lead by example and drive and contribute to automation and innovation initiatives with the engineering teams. Join the fight against financial crime with us!
Your key responsibilities
Experienced hands-on cloud and on-premise engineer, leading by example with engineering squads
Thinking analytically, with a systematic and logical approach to solving complex problems, and high attention to detail
Design and document complex technical solutions at varying levels in an inclusive and participatory manner with a range of stakeholders
Liaise directly with senior stakeholders in technology, business and modelling areas
Collaborate with application development teams to design and prototype solutions (both on-premises and on-cloud), supporting and presenting these via the Design Authority forum for approval and providing good practice and guidelines to the teams
Ensure engineering and architecture compliance with bank-standard processes for deploying new applications, working directly with central functions such as Group Architecture, Chief Security Office and Data Governance
Innovate and think creatively, showing willingness to apply new approaches to solving problems and to learn new methods, technologies and potentially out-of-the-box solutions
Your skills and experience
Proven hands-on engineering and design experience in a delivery-focused (preferably agile) environment
Solid technical/engineering background, preferably with at least two high-level languages and multiple relational databases or big-data technologies
Proven experience with cloud technologies, preferably GCP (GKE / Dataproc / Cloud SQL / BigQuery), GitHub and Terraform
Competence and expertise in technical skills across a wide range of technology platforms, and the ability to use and learn new frameworks, libraries and technologies
A deep understanding of the software development life cycle and the waterfall and agile methodologies
Experience leading complex engineering initiatives and engineering teams
Excellent communication skills, with demonstrable ability to interface and converse at both junior and senior levels and with non-IT staff
Line management experience, including working in a matrix management configuration
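As a hedged sketch of the hands-on Dataproc work the skills list points to, the snippet below submits a PySpark job to an existing cluster with the google-cloud-dataproc Python client; the project, region, cluster, and script URI are assumed placeholders rather than anything stated in the posting.

from google.cloud import dataproc_v1

def submit_pyspark_job(project_id: str, region: str, cluster_name: str, main_py_uri: str):
    # Regional endpoint is required when talking to a regional Dataproc cluster.
    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    job = {
        "placement": {"cluster_name": cluster_name},
        "pyspark_job": {"main_python_file_uri": main_py_uri},
    }
    operation = client.submit_job_as_operation(
        request={"project_id": project_id, "region": region, "job": job}
    )
    result = operation.result()  # blocks until the job completes
    print(f"Job finished with state: {result.status.state.name}")

# Hypothetical usage:
# submit_pyspark_job("my-project", "europe-west3", "tm-cluster", "gs://my-bucket/jobs/monitoring_etl.py")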

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

hyderabad, chennai, bengaluru

Work from Office

Roles and Responsibilities
Design, develop, and maintain large-scale data pipelines using BigQuery, Dataflow, Dataproc, and Airflow on Google Cloud Platform (GCP).
Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
Develop complex SQL queries to extract insights from massive datasets stored in BigQuery.
Troubleshoot issues related to pipeline failures or errors in the data processing workflow.
Ensure compliance with security standards by implementing proper access controls and encryption mechanisms.
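For orientation, a minimal sketch of how Airflow typically orchestrates a Dataproc step in pipelines like these, using the Google provider's DataprocSubmitJobOperator; the DAG id, project, region, cluster, and GCS path are hypothetical placeholders.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT_ID = "my-project"   # placeholder
REGION = "us-central1"      # placeholder
CLUSTER = "etl-cluster"     # placeholder

with DAG(
    dag_id="daily_spark_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_spark_etl = DataprocSubmitJobOperator(
        task_id="run_spark_etl",
        project_id=PROJECT_ID,
        region=REGION,
        job={
            "placement": {"cluster_name": CLUSTER},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
        },
    )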

Posted 1 week ago

Apply

15.0 - 25.0 years

13 - 18 Lacs

bengaluru

Work from Office

About The Role
Project Role : Data Architect
Project Role Description : Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must have skills : Google Cloud Platform Architecture
Good to have skills : NA
Minimum 15 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Architect, you will design and deliver end-to-end data architecture solutions for platforms, products, or engagements on Google Cloud. You will define architectures that meet performance, scalability, security, and compliance requirements while ensuring data integrity and accessibility. You will be responsible for the successful implementation of data solutions that align with business strategy.
Roles & Responsibilities:
Expected to be a Subject Matter Expert (SME) with deep expertise in Google Cloud data architecture.
Provide strategic guidance, influencing architectural decisions across multiple teams.
Collaborate with stakeholders to define data strategies, roadmaps, and governance models.
Design enterprise-grade data architectures supporting analytics, AI/ML, and operational workloads.
Ensure solutions adhere to best practices for security, performance, and cost optimization.
Lead the implementation of data architecture frameworks and reference models.
Guide teams on data migration, integration, and modernization initiatives.
Professional & Technical Skills:
Must Have Skills: Expertise in Google Cloud data services (BigQuery, Cloud Storage, Pub/Sub, Dataflow, Dataproc, etc.).
Strong knowledge of data architecture principles, data modeling, and data governance.
Proven experience in designing scalable, high-performance, and secure cloud-based data platforms.
Hands-on experience with data ingestion, ETL/ELT, streaming, and batch processing.
Familiarity with compliance frameworks and data security best practices in cloud environments.
Additional Information:
The candidate should have a minimum of 16 years of experience in data architecture, with a strong focus on Google Cloud.
This position is based Pan India.
Qualification: 15 years full time education

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

hyderabad, chennai, bengaluru

Hybrid

Location : Bangalore, Chennai, Hyderabad, and Pune - Hybrid
Experience : 5+ years
Please find the JD below:
We are looking for a talented GCP BigQuery Developer with strong SQL skills and basic proficiency in Python to join our data engineering team. The ideal candidate should have hands-on experience working with Google Cloud Platform (GCP) services, particularly BigQuery, and be capable of writing efficient SQL queries to support data transformation, analytics, and reporting tasks.
Key Responsibilities:
Design and implement data solutions using BigQuery and GCP-native services.
Develop, optimize, and troubleshoot complex SQL queries for data extraction and analysis.
Build and maintain data pipelines to support reporting and analytics use cases.
Perform data validation, profiling, and quality checks to ensure data integrity.
Collaborate with business analysts, data scientists, and engineers to understand data needs.
Write Python scripts for automation, orchestration, and lightweight data processing tasks.
Monitor and improve performance of BigQuery workloads and queries.
Maintain documentation of data flows, transformations, and processes.
Required Skills & Qualifications:
~5 years of experience in data engineering, data analytics, or BI development.
Proven hands-on experience with Google BigQuery and GCP data services (e.g., Cloud Storage, Cloud Functions; Dataflow optional).
Strong expertise in SQL, including analytical functions, joins, CTEs, and window functions.
Basic to intermediate knowledge of Python for scripting and automation.
Understanding of data warehouse concepts, data modeling, and normalization.
Familiarity with version control systems like Git.
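To illustrate the mix of SQL and light Python the role calls for, here is a minimal sketch that runs a window-function query through the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Hypothetical example: latest order per customer, using ROW_NUMBER over a window.
sql = """
    SELECT customer_id, order_id, order_ts
    FROM (
      SELECT
        customer_id,
        order_id,
        order_ts,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
      FROM `my-project.sales.orders`
    )
    WHERE rn = 1
"""

# client.query() submits the job; result() waits for it and returns an iterator of rows.
for row in client.query(sql).result():
    print(row.customer_id, row.order_id, row.order_ts)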

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 13 Lacs

pune

Work from Office

Role Description
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via >2k interfaces. From a technical perspective, we focus on the mainframe but also build solutions on-premise and in the cloud, with RESTful services and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.
Your key responsibilities
You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
You are responsible for supporting the migration of current functionalities to Google Cloud
You are responsible for the stability of the application landscape and support software releases
You also support L3 topics and application governance
You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot)
Your skills and experience
You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies
Strong understanding of the Data Mesh approach and integration patterns
Understanding of Party data and integration with Product data
Your architectural skills for big data solutions, especially interface architecture, allow a fast start
You have experience in at least: Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting
You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes
You work well both in teams and independently, and are constructive and target-oriented
Your English skills are good, and you can communicate professionally as well as informally with the team
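As a rough sketch of the Spark-on-Dataproc work described here, the snippet below reads a BigQuery table with the spark-bigquery connector, applies a simple transformation, and writes a curated table back; it assumes the connector is available on the cluster, and all project, table, and bucket names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes execution on a Dataproc cluster with the spark-bigquery connector installed.
spark = SparkSession.builder.appName("partner-data-migration").getOrCreate()

# Placeholder source table.
partners = (
    spark.read.format("bigquery")
    .option("table", "my-project.partner.partners_raw")
    .load()
)

# Keep only active partner records and stamp the load date.
cleaned = (
    partners
    .filter(F.col("status") == "ACTIVE")
    .withColumn("load_date", F.current_date())
)

# Placeholder target table; temporaryGcsBucket is needed for the indirect write path.
(
    cleaned.write.format("bigquery")
    .option("table", "my-project.partner.partners_curated")
    .option("temporaryGcsBucket", "my-temp-bucket")
    .mode("overwrite")
    .save()
)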

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies