Posted: 2 weeks ago
Platform: Hybrid
Full Time
Roles and Responsibilities
- Design, develop, test, deploy, and maintain large-scale data processing pipelines in Scala.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time.
- Troubleshoot complex issues in big data processing across the Hadoop ecosystem (HDFS, MapReduce, Hive) and Spark.
- Ensure scalability, performance, and reliability of big data systems by implementing efficient algorithms and optimizing system resources.
- Participate in code reviews and contribute to technical documentation.

Desired Candidate Profile
- 5-10 years of experience in Big Data development with expertise in Scala.
- Bachelor's degree (B.Tech/B.E.) or Master's degree (M.Tech) from a reputed institution.
- Strong understanding of distributed computing concepts such as HDFS, MapReduce, and Hive.
- Experience with relational databases (MySQL, PostgreSQL, Oracle), NoSQL databases (MongoDB, Cassandra), and cloud platforms (AWS, Azure, GCP).
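For illustration, below is a minimal sketch of the kind of Scala/Spark batch pipeline the role describes: reading records from HDFS, aggregating them, and writing results back for downstream use. The paths, column names (userId, amount), and job name are hypothetical placeholders, not details from the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal Spark batch job: read events from HDFS, aggregate per user, write results back.
// All paths and column names here are illustrative assumptions.
object DailyAggregationJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-aggregation")
      .getOrCreate()

    // Source data assumed to be Parquet files on HDFS.
    val events = spark.read.parquet("hdfs:///data/events/2024-01-01")

    // Aggregate the total amount per user.
    val totals = events
      .groupBy("userId")
      .agg(sum("amount").as("totalAmount"))

    // Persist results for downstream consumers (e.g. a Hive-backed table).
    totals.write.mode("overwrite").parquet("hdfs:///data/aggregates/2024-01-01")

    spark.stop()
  }
}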
Lancesoft
Location: Pune
Salary: 15.0 - 30.0 Lacs P.A.