Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 - 7.0 years
6 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required technical and professional expertise
- 6 - 7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
- Good to excellent SQL skills.

Preferred technical and professional experience
- Certification in Azure; Databricks- or Cloudera Spark-certified developers.
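The ingest-transform-load pattern this role centers on can be sketched in plain Python. The record schema and field names below are hypothetical; in a real PySpark job each step would be a DataFrame transformation or UDF rather than a list comprehension, but the validation logic is the same:

```python
from datetime import datetime

# Hypothetical raw events, as they might arrive from a file or stream ingest.
RAW_EVENTS = [
    {"user_id": "42", "amount": "19.99", "ts": "2024-05-01T10:00:00"},
    {"user_id": "43", "amount": "bad-value", "ts": "2024-05-01T10:05:00"},
    {"user_id": None, "amount": "5.00", "ts": "2024-05-01T10:06:00"},
]

def transform(record):
    """Cast and validate one raw record; return None if it is unusable.
    In a PySpark job this logic would live in a UDF or cast expression."""
    if not record.get("user_id"):
        return None
    try:
        return {
            "user_id": int(record["user_id"]),
            "amount": float(record["amount"]),
            "ts": datetime.fromisoformat(record["ts"]),
        }
    except (ValueError, TypeError):
        return None

def run_pipeline(raw):
    """Ingest -> transform -> filter: keep only records that cast cleanly."""
    return [out for r in raw if (out := transform(r)) is not None]

clean = run_pipeline(RAW_EVENTS)  # only the first record survives validation
```

The same shape scales to Spark by swapping the list for a DataFrame and the loop for distributed transformations.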
Posted 1 week ago
5.0 - 12.0 years
5 - 6 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Description
We are seeking an experienced AWS Glue Engineer to join our team in India. The ideal candidate will have a strong background in ETL processes and AWS services, with the ability to design and implement efficient data pipelines.

Responsibilities
- Design, develop, and maintain ETL processes using AWS Glue.
- Collaborate with data architects and data scientists to optimize data pipelines.
- Implement data transformation processes to ensure data integrity and accessibility.
- Monitor and troubleshoot ETL jobs to ensure performance and reliability.
- Work with AWS services such as S3, Redshift, and RDS to support data workflows.

Skills and Qualifications
- 5-12 years of experience in data engineering or ETL development.
- Strong proficiency in AWS Glue and AWS ecosystem services.
- Experience with Python or Scala for scripting and data transformation.
- Knowledge of data modeling and database design principles.
- Familiarity with data warehousing concepts and tools.
- Understanding of data governance and security best practices.
- Experience with version control systems such as Git.
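The "transformation processes to ensure data integrity" responsibility usually means joining sources and splitting out records that violate referential integrity. The sketch below uses plain dicts (hypothetical data) so it stays self-contained; in a Glue job the inputs would be DynamicFrames read from S3 or RDS, but the transform logic is the same:

```python
# Hypothetical data: a Glue job would read these from S3/RDS as DynamicFrames;
# plain dicts keep the sketch self-contained.
CUSTOMERS = {1: "alice", 2: "bob"}
ORDERS = [
    {"order_id": 100, "customer_id": 1, "total": 25.0},
    {"order_id": 101, "customer_id": 9, "total": 10.0},  # no matching customer
]

def enrich_and_validate(orders, customers):
    """Join orders to customers and split out integrity violations,
    mirroring a typical Glue transform plus data-quality step."""
    enriched, orphans = [], []
    for o in orders:
        name = customers.get(o["customer_id"])
        if name is None:
            orphans.append(o)  # fails referential integrity
        else:
            enriched.append({**o, "customer_name": name})
    return enriched, orphans

good, bad = enrich_and_validate(ORDERS, CUSTOMERS)
```

Routing the `bad` records to a quarantine location rather than dropping them silently is the usual design choice, since it keeps the integrity failures auditable.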
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
- 5 - 7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Exposure to streaming solutions and message brokers such as Kafka.
- Experience in AWS EMR, AWS Glue, or Databricks; AWS Redshift; DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience
- Certification in AWS; Databricks- or Cloudera Spark-certified developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
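The streaming-pipeline experience asked for above usually means windowed aggregations over an event stream. The sketch below computes a per-minute tumbling-window count in plain Python over hypothetical click events; a real pipeline would consume the events from Kafka and express the same aggregation in Spark Structured Streaming as `groupBy(window(ts, "1 minute"), key).count()`:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical click events; a real job would read these from a Kafka topic.
EVENTS = [
    ("2024-05-01T10:00:10", "page_a"),
    ("2024-05-01T10:00:50", "page_a"),
    ("2024-05-01T10:01:20", "page_b"),
]

def tumbling_window_counts(events):
    """Count events per (minute-window start, key) -- the aggregation a
    Spark Structured Streaming job would express with window() + count()."""
    counts = defaultdict(int)
    for ts_str, key in events:
        # Truncate the timestamp to the start of its one-minute window.
        window_start = datetime.fromisoformat(ts_str).replace(second=0, microsecond=0)
        counts[(window_start, key)] += 1
    return dict(counts)

result = tumbling_window_counts(EVENTS)
# Two windows: page_a counted twice in the 10:00 window, page_b once in 10:01.
```

In the streaming version, watermarking decides how long each window stays open for late events before its count is finalized.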
Posted 2 weeks ago
5 - 8 years
22 - 27 Lacs
Mumbai, Bengaluru, Noida
Work from Office
We are looking for a SQL Database Developer with good experience in ETL.
- Hands-on experience in SQL database design and development.
- Well versed in writing queries, stored procedures, and query optimization.
- Minimum 2 years of experience in ETL programming; working knowledge of tools such as Informatica or Ab Initio is preferred.
- Good data modelling skills and good knowledge of database design.
- Experience with Java/Scala/Python and the Spark framework preferred; Spark working experience is a big plus.
- Knowledge of shell scripting required; Unix and shell scripting experience would be a huge plus.
- Working experience in Python is good to have; Perl knowledge would be a plus.
- Designing, developing, and implementing quality database solutions; able to create technical design documents.
- Good experience in optimizing SQL queries and writing stored procedures, functions, views, and triggers.
- Ability to independently deliver complex development projects.
- Experience in monitoring SQL Server logs and the recovery model, and ensuring that backup operations, batch commands, and other scripts and processes have completed successfully.
- Good written and verbal communication skills.
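The views-and-parameterized-queries side of this role can be sketched with Python's stdlib sqlite3 module. SQLite stands in for SQL Server here so the example is self-contained, and the table, index, and view names are hypothetical; SQL Server stored procedures have no direct SQLite equivalent, so a view encapsulates the reporting query instead:

```python
import sqlite3

# In-memory SQLite stands in for SQL Server; schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total    REAL NOT NULL
    );
    -- An index supports the filtered lookup below (query optimization).
    CREATE INDEX idx_orders_customer ON orders (customer);
    -- A view encapsulates the reporting query, much as a stored procedure
    -- or view would on SQL Server.
    CREATE VIEW customer_totals AS
        SELECT customer, SUM(total) AS grand_total, COUNT(*) AS n_orders
        FROM orders
        GROUP BY customer;
""")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("alice", 25.0), ("bob", 10.0), ("alice", 5.0)],
)
# Parameterized query against the view: totals for one customer.
row = conn.execute(
    "SELECT grand_total, n_orders FROM customer_totals WHERE customer = ?",
    ("alice",),
).fetchone()
```

Parameterized placeholders (`?`) rather than string formatting are the standard guard against SQL injection in any of these engines.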
Posted 2 months ago