4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at Aptiv, you will play a crucial role in designing, developing, and implementing a cost-effective, scalable, reusable, and secure ingestion framework. Your primary responsibility will be to work closely with business leaders, stakeholders, and source system Subject Matter Experts (SMEs) to understand and define the business needs, translate them into technical specifications, and ingest data into Google Cloud Platform, specifically BigQuery. You will design and implement processes for data ingestion, transformation, storage, analysis, modeling, reporting, monitoring, availability, governance, and security of high volumes of structured and unstructured data.

Your role will involve developing and deploying high-throughput data pipelines using the latest Google Cloud Platform (GCP) technologies, serving as a specialist in data engineering and GCP data technologies, and engaging with clients to understand their requirements and translate them into technical data solutions. You will also be responsible for analyzing business requirements, creating source-to-target mappings, enhancing ingestion frameworks, and transforming data according to business rules. Additionally, you will develop capabilities to support enterprise-wide data cataloging, design data solutions with a focus on security and privacy, and use Agile and DataOps methodologies in project delivery.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Data & Analytics, or a similar relevant field, along with at least 4 years of hands-on IT experience in a similar role. You should possess proven expertise in SQL, including subqueries, aggregations, functions, triggers, indexes, and database optimization, as well as deep experience working with various Google Data Products such as BigQuery, Dataproc, Data Catalog, Dataflow, and Cloud SQL, among others. Experience with tools like Qlik Replicate, Spark, and Kafka is also required. Strong communication skills, the ability to work with globally distributed teams, and knowledge of statistical methods and data modeling are essential for this role. Experience with designing and creating Tableau, Qlik, or Power BI dashboards, as well as knowledge of Alteryx and Informatica Data Quality, will be beneficial.

Aptiv provides an inclusive work environment where individuals can grow and develop, irrespective of gender, ethnicity, or beliefs. Safety is a core value at Aptiv, aiming for a world with zero fatalities, zero injuries, and zero accidents. The company offers a competitive health insurance package to support the physical and mental health of its employees. Additionally, Aptiv provides benefits such as personal holidays, healthcare, pension, a tax saver scheme, free onsite breakfast, discounted corporate gym membership, and access to transportation options at the Grand Canal Dock location.

If you are passionate about data engineering, GCP technologies, and driving value creation through data analytics, Aptiv offers a challenging and rewarding opportunity to grow and make a meaningful impact in a dynamic and innovative environment.
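To make the BigQuery ingestion work described above concrete, here is a minimal, hedged sketch of a batch load from Cloud Storage into BigQuery using the google-cloud-bigquery Python client; the project, bucket, dataset, and table names are illustrative placeholders, not anything specified by this posting.

```python
from google.cloud import bigquery

# Minimal sketch: batch-load CSV files from Cloud Storage into a BigQuery table.
# All resource names below are illustrative placeholders.
client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer schema here; a production framework would pin schemas
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/ingest/orders_*.csv",
    "my-project.raw_zone.orders",
    job_config=job_config,
)
load_job.result()  # block until the load job completes

table = client.get_table("my-project.raw_zone.orders")
print(f"Loaded table now has {table.num_rows} rows")
```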
Posted 2 weeks ago
8.0 - 13.0 years
12 - 15 Lacs
hyderabad
Work from Office
We are seeking a Technical Architect specializing in Healthcare Data Analytics with expertise in Google Cloud Platform (GCP). The role involves designing and implementing data solutions tailored to healthcare analytics requirements. The ideal candidate will have experience in GCP tools like BigQuery, Dataflow, Dataprep, and Healthcare APIs, and should stay up to date with GCP updates. Knowledge of healthcare data standards, compliance requirements (e.g., HIPAA), and healthcare interoperability is essential. The role requires experience in microservices, containerization (Docker, Kubernetes), and programming languages like Python and Spark. The candidate will lead the implementation of data analytics solutions and collaborate with cross-functional teams, data scientists, and engineers to deliver secure, scalable systems.
Posted 2 weeks ago
9.0 - 14.0 years
35 - 50 Lacs
noida, bengaluru, delhi / ncr
Work from Office
Design & optimize BigQuery, Dataflow, Dataproc, Pub/Sub, Composer; ETL/ELT, governance, security, migration, SQL/Python. 8+ years of experience in data engineering, data architecture, or analytics. At least 3 years in a data or solutions architect role.
Posted 2 weeks ago
7.0 - 12.0 years
5 - 15 Lacs
bengaluru
Work from Office
Role: GCP Staff Data Engineer
Experience: 8 - 13 years
Preferred: Data Engineering background
Location: Bangalore, Chennai, Hyderabad, Kolkata, Pune, Gurgaon

Job Requirement: Have implemented and architected solutions on Google Cloud Platform using GCP components. Experience with Apache Beam/Google Dataflow/Apache Spark in creating end-to-end data pipelines. Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, Machine Learning. Experience programming in Java, Python, etc. Expertise in at least two of these technologies: relational databases, analytical databases, NoSQL databases. Google Professional Data Engineer / Solution Architect certification is a major advantage.

Skills Required: 8+ years' experience in IT or professional services, in IT delivery or large-scale IT analytics projects. Candidates must have expert knowledge of Google Cloud Platform; other cloud platforms are nice to have. Expert knowledge in SQL development. Expertise in building data integration and preparation tools using cloud technologies (such as SnapLogic, Google Dataflow, Cloud Dataprep, Python). Ability to identify downstream implications of data loads/migration (e.g., data quality, regulatory). Implement data pipelines to automate the ingestion, transformation, and augmentation of data sources, and provide best practices for pipeline operations. Capability to work in a rapidly changing business environment and to enable simplified user access to massive data by building scalable data solutions. Advanced SQL writing and experience in data mining (SQL, ETL, data warehouse, etc.) and using databases in a business environment with complex datasets.

Required Skills: GCP DE experience, BigQuery, SQL, Cloud Composer/Python, Cloud Functions, Dataproc + PySpark, Python ingestion, Dataflow + Pub/Sub
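Since this listing repeatedly calls out Apache Beam/Google Dataflow pipelines into BigQuery, here is a minimal, hedged sketch of such an end-to-end batch pipeline; the bucket, project, dataset, and table names are illustrative assumptions, and on GCP the pipeline would be submitted with the DataflowRunner rather than the default local runner.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    # Turn a CSV line into a BigQuery row dict; column names are illustrative.
    order_id, amount = line.split(",")
    return {"order_id": order_id, "amount": float(amount)}


# Add --runner=DataflowRunner, --project, --region, --temp_location for Dataflow.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/orders/*.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:sales.orders",
            schema="order_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```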
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be responsible for applying Natural Language Processing (NLP) AI techniques, conducting machine learning, and developing high-quality prediction systems for data classification. Your main tasks will include presenting information through data visualization techniques, as well as collecting, preprocessing, and harmonizing data. As part of your role, you will develop applications in machine learning and artificial intelligence, while selecting features, building, and optimizing classifiers using machine learning techniques. Understanding business objectives and creating models to support them, along with relevant metrics to monitor progress, will be crucial. You will need to manage available resources efficiently, such as hardware, data, and personnel, to ensure project deadlines are met.

Your responsibilities will also involve analyzing various ML algorithms to identify the most suitable ones for solving a specific problem and ranking them by success probability. Exploring and visualizing data to comprehend it better, detecting variations in data distribution that could impact model performance in real-world deployment, and verifying data quality through cleaning processes will be essential. You will supervise the data acquisition process if additional data is required, search for relevant datasets online for training purposes, define validation strategies, and determine pre-processing or feature engineering on datasets. Furthermore, you will be involved in defining data augmentation pipelines, training models, tuning hyperparameters, analyzing model errors, and developing strategies to address them. Deployment of models to production environments will also be a key aspect of your role.

As a desired candidate, you should possess a strong understanding of machine learning (ML) and deep learning (DL) algorithms, as well as an architectural comprehension of CNN and RNN algorithms. Experience with NLP data models and libraries, entity extraction using NLP, and proficiency in tools like TensorFlow, scikit-learn, and spaCy is required. Additionally, familiarity with transfer learning, scripting and programming skills in Python and Streamlit, common data science toolkits such as NumPy and Fast.AI, and various machine learning techniques and algorithms like k-NN, Naive Bayes, and SVM is essential. Proficiency in query languages, applied statistics skills, data wrangling, and data exploration, as well as tools like Tableau and DataPrep, will be advantageous. The ideal candidate will be an immediate joiner with no restrictions on salary for the right individual.
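As a hedged illustration of the entity-extraction and text-classification work this posting describes, here is a minimal sketch using spaCy and scikit-learn; it assumes the pretrained en_core_web_sm model is installed, and the sample texts and labels are made up purely for demonstration.

```python
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Entity extraction with spaCy's pretrained small English model.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Aptiv opened a new engineering centre in Pune in March.")
print([(ent.text, ent.label_) for ent in doc.ents])  # e.g. ORG, GPE, DATE entities

# Toy text-classification pipeline (TF-IDF features + Naive Bayes), standing in
# for the "prediction systems for data classification" mentioned above.
texts = [
    "invoice overdue payment",
    "team lunch on friday",
    "payment reminder sent",
    "holiday party planning",
]
labels = ["finance", "social", "finance", "social"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["second payment reminder"]))  # expected: ['finance']
```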
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements
- BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and/or agile methodologies.
- Google Data Engineer certification.

What We Offer
- Culture of caring: We prioritize a culture of caring, where people come first, fostering an inclusive environment of acceptance and belonging.
- Learning and development: A commitment to continuous learning and growth, offering various programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting & meaningful work: Engage in impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: Embrace work-life balance with diverse career areas, roles, and work arrangements to support personal well-being.
- High-trust organization: Join a high-trust organization with a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.
Posted 1 month ago
4.0 - 9.0 years
5 - 14 Lacs
Pune, Chennai, Bengaluru
Work from Office
Dear Candidate, This is with reference to your profile on the job portal. Deloitte India Consulting has an immediate requirement for the following role.

Job Summary: We are looking for a skilled GCP Data Engineer to design, build, and maintain scalable data pipelines and solutions on Google Cloud Platform. The ideal candidate will have hands-on experience with GCP services, data warehousing, ETL processes, and big data technologies.

Key Responsibilities: Design and implement scalable data pipelines using Cloud Dataflow, Apache Beam, and Cloud Composer. Develop and maintain data models and data marts in BigQuery. Build ETL/ELT workflows to ingest, transform, and load data from various sources. Optimize data storage and query performance in BigQuery and other GCP services. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements. Ensure data quality, integrity, and security across all data solutions. Monitor and troubleshoot data pipeline issues and implement improvements.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3+ years of experience in data engineering, with at least 1-2 years on Google Cloud Platform. Proficiency in SQL, Python, and Apache Beam. Hands-on experience with GCP services like BigQuery, Cloud Storage, Cloud Pub/Sub, Cloud Dataflow, and Cloud Composer. Experience with data modeling, data warehousing, and ETL/ELT processes. Familiarity with CI/CD pipelines, Terraform, and Git. Strong problem-solving and communication skills.

Nice to Have: GCP certifications (e.g., Professional Data Engineer).

If you are interested, please share your updated resume along with the following details (mandatory) to smouni@deloitte.com: Candidate Name, Mobile No., Email ID, Skill, Total Experience, Education Details, Current Location, Requested Location, Current Firm, Current CTC, Expected CTC, Notice Period/LWD, Feedback.
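Because the role centers on Cloud Composer/Airflow orchestration and BigQuery data marts, here is a minimal, hedged sketch of such an ELT workflow; it assumes the Google provider package for Airflow is installed, and every bucket, project, dataset, and table name is an illustrative placeholder rather than anything specified in the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Sketch of a daily ELT DAG: land a file into a staging table, then build a
# simple data-mart table in BigQuery. Resource names are placeholders.
with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_staging = GCSToBigQueryOperator(
        task_id="load_staging",
        bucket="my-bucket",
        source_objects=["orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="my-project.staging.orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    build_mart = BigQueryInsertJobOperator(
        task_id="build_mart",
        configuration={
            "query": {
                "query": """
                    SELECT order_date, SUM(amount) AS revenue
                    FROM `my-project.staging.orders`
                    GROUP BY order_date
                """,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "marts",
                    "tableId": "daily_revenue",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )

    load_staging >> build_mart
```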
Posted 1 month ago
7.0 - 12.0 years
20 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Work Location: Bangalore/Pune/Hyderabad/NCR. Experience: 5-12 years.

Required Skills: Proven experience as a Data Engineer with expertise in GCP. Strong understanding of data warehousing concepts and ETL processes. Experience with BigQuery, Dataflow, and other GCP data services. Design, develop, and maintain data pipelines on GCP. Implement data storage solutions and optimize data processing workflows. Ensure data quality and integrity throughout the data lifecycle. Collaborate with data scientists and analysts to understand data requirements. Monitor and maintain the health of the data infrastructure. Troubleshoot and resolve data-related issues.

Thanks & Regards, Suganya R, Suganya@spstaffing.in
Posted 2 months ago
5.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Proven experience in business and data analytics. Mentor teammates on various business-critical projects. Solid experience in data analysis and reporting; exposure to BFSI customer and business data is a plus. Able to communicate with the various stakeholders, manage tasks and issues, and monitor progress to ensure the project is on track. Proficient in SQL (data prep, procedures, etc.) and advanced Excel (pivots, data models, advanced formulas, etc.). Experience working on MSSQL, Redshift, Databricks, and business intelligence tools (e.g. Tableau). Problem-solving skills with a methodical and logical approach. Willingness to learn and adapt to new technologies. Excellent written and verbal communication skills.

Roles and Responsibilities: Effective data crunching and data analysis. Analyse all complex data, business logic, and processes, and help the business take data-driven decisions. Serve as a liaison between various teams and stakeholders, ensuring the project runs smoothly and is completed and delivered within the stipulated time. Work alongside teams to establish business needs. Provide recommendations to optimize current systems and processes.
Posted 2 months ago
8.0 - 13.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Skill: Extensive experience with Google Data Products (Cloud Data Fusion, BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Dataprep, etc.). Expertise in Cloud Data Fusion, BigQuery & Dataproc. Experience in MDM, Metadata Management, Data Quality and Data Lineage tools. E2E Data Engineering and Lifecycle (including non-functional requirements and operations) management. Experience with SQL and NoSQL modern data stores. E2E Solution Design skills - prototyping, usability testing and data visualization literacy. Excellent knowledge of the software development life cycle.
Posted 2 months ago
4.0 - 8.0 years
6 - 16 Lacs
Hyderabad, Chennai
Hybrid
Role & responsibilities: Bachelor's degree or four or more years of work experience. Four or more years of work experience. Experience with Data Warehouse concepts and the Data Management life cycle. Experience in any DBMS. Experience in shell scripting, Spark, Scala. Experience in GCP/BigQuery, Composer, Airflow. Experience in real-time streaming. Experience in DevOps.
Posted 3 months ago
5 - 10 years
9 - 19 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
Google BigQuery. Location: Pan India.

Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.

Key Responsibilities: Analyze and model client market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
1: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
2: Proven track record of delivering data integration and data warehousing solutions
3: Strong SQL and hands-on proficiency in the BigQuery SQL language; experience in shell scripting and Python (No FLEX)
4: Experience with data integration and migration projects; Oracle SQL

Technical Experience: Google BigQuery
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX); Python programming using Pandas and NumPy; deep understanding of various data structures (dictionary, array, list, tree, etc.); experience with pytest and code-coverage skills
2: Experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer and Kubernetes (No FLEX)
3: Proficiency with tools to automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, Confluence

Professional Attributes:
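Given the emphasis above on BigQuery SQL together with Python, Pandas, and NumPy, here is a minimal, hedged sketch of running a BigQuery query into a DataFrame; the project and table names are illustrative assumptions, and to_dataframe() requires the client library's pandas/pyarrow extras.

```python
from google.cloud import bigquery

# Minimal sketch of the BigQuery SQL + Python/Pandas combination listed above.
# Project and table names are illustrative placeholders.
client = bigquery.Client(project="my-project")

sql = """
SELECT region, SUM(sales) AS total_sales
FROM `my-project.analytics.market_data`
WHERE snapshot_date = '2024-01-31'
GROUP BY region
ORDER BY total_sales DESC
"""

df = client.query(sql).to_dataframe()  # needs the pandas/pyarrow extras installed
print(df.head())

# Simple Pandas post-processing, e.g. each region's share of total sales.
df["share"] = df["total_sales"] / df["total_sales"].sum()
print(df[["region", "share"]])
```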
Posted 4 months ago
3.0 - 5.0 years
5 - 15 Lacs
hyderabad
Work from Office
About the Role: We are looking for a skilled GCP Data Engineer to design, build, and optimize scalable data pipelines and platforms on Google Cloud. The ideal candidate will have hands-on experience with BigQuery, Dataflow, Composer, and Cloud Storage, along with strong SQL and programming skills.

Key Responsibilities: Design, build, and maintain ETL/ELT pipelines on GCP. Develop scalable data models using BigQuery and optimize query performance. Orchestrate workflows using Cloud Composer (Airflow). Work with both structured and unstructured data from diverse sources. Implement data quality checks, monitoring, and governance frameworks. Collaborate with Data Scientists, Analysts, and Business teams to deliver reliable datasets. Ensure data security, compliance, and cost optimization on GCP. Debug, monitor, and improve existing pipelines for reliability and efficiency.

Required Skills & Experience: Strong experience in GCP services: BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Composer. Expertise in SQL (BigQuery SQL / Presto SQL) and performance tuning. Hands-on experience in Python/Java/Scala for data processing. Experience with workflow orchestration tools (Airflow, Composer, or similar). Familiarity with CI/CD pipelines, GitHub, and deployments. Knowledge of data warehouse design, dimensional modeling, and best practices. Strong problem-solving and analytical skills.

Nice-to-Have Skills: Experience with other cloud platforms (AWS/Azure) is a plus. Exposure to Machine Learning pipelines on GCP (Vertex AI). Knowledge of Terraform/Infrastructure as Code. Understanding of real-time streaming solutions (Kafka, Pub/Sub).

Education: Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
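To illustrate the "develop scalable data models in BigQuery and optimize query performance" responsibility above, here is a minimal, hedged sketch of creating a partitioned, clustered table and querying it through the Python client; the project, dataset, table, and column names are all illustrative assumptions.

```python
from google.cloud import bigquery

# Minimal sketch of BigQuery data modeling and query-cost optimization.
# All resource names are illustrative placeholders.
client = bigquery.Client(project="my-project")

# Partitioning on event date and clustering on common filter columns keeps
# scanned bytes (and therefore cost) down for typical analytical queries.
ddl = """
CREATE TABLE IF NOT EXISTS `my-project.analytics.events`
(
  event_date DATE,
  user_id STRING,
  event_name STRING,
  payload JSON
)
PARTITION BY event_date
CLUSTER BY user_id, event_name
"""
client.query(ddl).result()

# Queries that filter on the partition column prune partitions automatically.
query = """
SELECT event_name, COUNT(*) AS events
FROM `my-project.analytics.events`
WHERE event_date BETWEEN '2024-01-01' AND '2024-01-31'
GROUP BY event_name
ORDER BY events DESC
"""
for row in client.query(query).result():
    print(row.event_name, row.events)
```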
Posted Date not available
8.0 - 13.0 years
35 - 50 Lacs
bengaluru, delhi / ncr
Work from Office
Exp: 8+ years. Skill: Data Solutions Architect - Google Cloud Platform. Architect and design end-to-end data solutions leveraging a wide array of GCP services, including BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage, Cloud Composer, and Data Catalog.
Posted Date not available
6.0 - 10.0 years
2 - 6 Lacs
pune
Work from Office
We need someone with 6+ years of experience and hands-on experience in migrating Google Analytics UA360 data to BigQuery. Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.). Experience with IoT architectures and building real-time data streaming pipelines. Experience operationalizing machine learning models on large datasets. Demonstrated leadership and self-direction, with a willingness to teach others and learn new techniques. Demonstrated skills in selecting the right statistical tools given a data analysis problem. Understanding of Chaos Engineering. Understanding of PCI, SOC 2, and HIPAA compliance standards. Understanding of the principle of least privilege and security best practices. Experience working with Google Support. Understanding of cryptocurrency and blockchain technology.
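For the real-time streaming pipelines this posting mentions, here is a minimal, hedged sketch of publishing JSON events to a Pub/Sub topic that a downstream Dataflow job or subscriber could stream into BigQuery; the project and topic names, and the event fields, are illustrative assumptions.

```python
import json

from google.cloud import pubsub_v1

# Minimal sketch: publish one analytics-style event to a Pub/Sub topic.
# Project, topic, and event fields are illustrative placeholders.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "ga-events")

event = {
    "client_id": "123.456",
    "event_name": "page_view",
    "ts": "2024-01-01T00:00:00Z",
}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("Published message id:", future.result())  # blocks until the publish completes
```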
Posted Date not available