4 - 6 years
INR 1.0 - 5.5 Lacs P.A.
Hyderabad, Chennai, Bengaluru
Posted: 3 weeks ago
Work from Office
Full Time
Role & responsibilities

- Design, develop, and implement scalable big data solutions using technologies such as Hadoop, Spark, Hive, and Kafka.
- Build data pipelines for batch and real-time data ingestion, transformation, and loading.
- Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver clean, reliable datasets.
- Optimize existing data workflows and troubleshoot data-related issues.
- Implement best practices for data management, security, and performance.
- Work with cloud platforms (e.g., AWS, Azure, GCP) for scalable data solutions.

Preferred candidate profile

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in big data development.
- Strong programming skills in Java, Scala, or Python.
- Hands-on experience with Hadoop ecosystem tools (e.g., HDFS, Hive, Pig, MapReduce).
- Experience with Apache Spark and distributed computing frameworks.
- Experience in real-time data streaming using Apache Kafka or similar technologies.
- Proficiency in SQL and working with large datasets.
- Experience with cloud platforms like AWS EMR, Azure HDInsight, or GCP Dataproc is a plus.