Posted: 3 months ago
Work from Office
Full Time
Responsibilities:
- Design and optimize distributed data pipelines using Java and Apache Spark/Flink
- Build and manage scalable Data Lake solutions (AWS S3, HDFS, etc.)
- Implement cloud-based data processing solutions on AWS, Azure, or GCP
- Collaborate with teams to integrate and improve data workflows

What We're Looking For:
- 5+ years of experience in Java development with expertise in distributed systems
- Strong hands-on experience with Apache Spark or Apache Flink
- Experience working with Data Lake technologies (e.g., AWS S3, HDFS)
- Familiarity with cloud platforms (AWS, Azure, GCP) and data formats (Parquet, Avro)
- Strong knowledge of NoSQL databases and CI/CD practices

Nice-to-Have:
- Experience with Docker, Kubernetes, and Apache Kafka
- Knowledge of data governance and security best practices
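For illustration, below is a minimal sketch of the kind of Java/Spark pipeline work this role involves: reading Parquet events from an S3-backed data lake path, applying a simple filter, and writing partitioned output back to the lake. The bucket paths, column names, and the EventPipeline class are placeholders for illustration only, not part of any Invimatic codebase.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class EventPipeline {
        public static void main(String[] args) {
            // Start a Spark session for a batch pipeline job.
            SparkSession spark = SparkSession.builder()
                    .appName("EventPipeline")
                    .getOrCreate();

            // Read raw events stored as Parquet in an S3-backed data lake path
            // (placeholder bucket and prefix).
            Dataset<Row> events = spark.read()
                    .parquet("s3a://example-bucket/raw/events/");

            // Keep only valid records (placeholder 'status' column) and write the
            // curated output partitioned by date for efficient downstream reads.
            events.filter("status = 'VALID'")
                    .write()
                    .mode("overwrite")
                    .partitionBy("event_date")
                    .parquet("s3a://example-bucket/curated/events/");

            spark.stop();
        }
    }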
Invimatic