Bengaluru
INR 20.0 - 35.0 Lacs P.A.
Work from Office
Full Time
REQUIRED QUALIFICATIONS:
• 5+ years of experience in software development, including Spring Boot, Spring Cloud, and web application development using REST
• Expert-level knowledge of a JavaScript framework such as AngularJS, React, Ember, Backbone, Dojo, or Ext JS
• Strong knowledge of object-oriented JavaScript, HTML5, and CSS3
• Deep understanding of software engineering principles and process, along with the ability to apply this knowledge to execute projects and optimize development strategies
• Strong skills in critical thinking, decision making, problem solving, and attention to detail
• Experience building cloud-vendor-agnostic SaaS products
• Experience in Java and Spring Boot microservices, deployed as containers in a Kubernetes ecosystem
• In-depth understanding of microservices architectures; technological familiarity with public/private/hybrid cloud, OpenStack, GCE, Kubernetes, and AWS
• Deep understanding of building APIs/services:
• That are built on top of message queues (MQs) such as RabbitMQ, Kafka, or NATS
• That use caches such as Redis or Memcached to improve the performance of the platform
• That scale to millions of users in a cloud environment such as private cloud, GCP, AWS, Azure, etc.
• Good to have: experience with OAuth-, OpenID-, or SAML-based authentication
• Strong written and oral communication and interpersonal skills
• In general, the successful candidate needs to be multi-faceted and a clear communicator, identifying risks and communicating them clearly inwards as well as upwards. You must be able to work with geographically distributed teams.
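The caching requirement above (Redis or Memcached in front of a slower data source) is usually implemented as the cache-aside pattern. A minimal sketch, assuming a plain in-memory map in place of the real cache and a function in place of the backing store; both names are hypothetical illustrations, not part of this posting's stack:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Cache-aside sketch: check the cache first, fall back to the slow
// backing store on a miss, and populate the cache with the result.
class CachedLookup {
    private final Function<String, Optional<String>> backingStore;
    private final Map<String, String> cache = new HashMap<>();
    int misses = 0; // trips to the backing store, for illustration

    CachedLookup(Function<String, Optional<String>> backingStore) {
        this.backingStore = backingStore;
    }

    Optional<String> get(String key) {
        String cached = cache.get(key);
        if (cached != null) return Optional.of(cached);
        misses++;
        Optional<String> fetched = backingStore.apply(key); // e.g. a DB query
        fetched.ifPresent(v -> cache.put(key, v));          // populate on miss
        return fetched;
    }
}
```

In a real deployment the map would be replaced by a Redis client call, and an eviction/TTL policy would bound staleness; the control flow stays the same.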
Hyderabad
INR 20.0 - 35.0 Lacs P.A.
Remote
Full Time
Job Title: Senior Big Data Engineer (Scala-Focused)
Location: Hybrid: Hyderabad / Chennai (2–3 days in office per week)
Remote: Open to candidates from other cities across India
Experience Required: 5 to 10 years
About the Role: We are seeking a highly skilled Big Data Engineer with deep expertise in Scala to join our fast-paced data engineering team. The ideal candidate will have hands-on experience designing, developing, and optimizing large-scale data processing systems and cloud-based data solutions. This role is central to our mission of building reliable, scalable, and high-performance data platforms.
Key Responsibilities:
• Design and implement scalable batch and near-real-time data pipelines using Scala
• Build cloud-native data solutions leveraging AWS services such as S3, Glue, Lambda, Redshift, Athena, and Kinesis
• Write robust, maintainable code with a focus on performance, scalability, and testability
• Design and implement CI/CD pipelines to automate build, test, and deployment processes
• Collaborate with cross-functional teams including data science, analytics, and product to deliver data-driven solutions
• Ensure high standards of data quality, security, and governance across all stages of the data lifecycle
• Continuously monitor and improve pipeline efficiency and cost optimization in the cloud
Required Skills and Experience:
• 5–10 years of experience in Big Data Engineering
• Strong proficiency in Scala (mandatory), with clean coding practices and experience in functional programming
• Proven experience with AWS cloud services, particularly in building and deploying data-centric applications
• Hands-on experience building CI/CD pipelines using tools such as Jenkins, GitHub Actions, GitLab CI/CD, etc.
• Proficient in SQL and working with large-scale datasets
• Good understanding of data lakes, ETL, data modeling, and data warehousing concepts
• Ability to work independently in a remote or hybrid setup, with strong problem-solving and communication skills
Work Model:
• Hybrid: Openings in Hyderabad and Chennai (3 days per week in office)
• Remote: Open to candidates across other Indian cities
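The batch-pipeline responsibilities above reduce to extract/transform/aggregate steps. A minimal sketch of that shape (written in Java here; the role itself calls for Scala, and a production pipeline would run on Spark or AWS Glue rather than the in-memory collections assumed below; the record shape and field names are illustrative, not taken from the job description):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

// Hypothetical input record for the sketch: one spend event per user.
record Event(String userId, double amount) {}

class Pipeline {
    // Extract: parse raw "userId,amount" lines, dropping malformed rows
    // (the kind of validation a Glue/Spark ingest job performs).
    static List<Event> parse(List<String> lines) {
        return lines.stream()
                .map(Pipeline::parseLine)
                .flatMap(Optional::stream)
                .collect(Collectors.toList());
    }

    private static Optional<Event> parseLine(String line) {
        String[] parts = line.split(",");
        if (parts.length != 2) return Optional.empty();
        try {
            return Optional.of(new Event(parts[0].trim(), Double.parseDouble(parts[1])));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    // Transform/aggregate: total spend per user, the step a Spark job
    // would express as groupBy("userId").agg(sum("amount")).
    static Map<String, Double> totalsByUser(List<Event> events) {
        return events.stream().collect(Collectors.groupingBy(
                Event::userId, Collectors.summingDouble(Event::amount)));
    }
}
```

The same groupBy/aggregate logic translates directly to Scala collections or to a Spark DataFrame; only the execution engine changes.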