Experience: 4.0 - 8.0 years
Salary: 0 Lacs
Location: Karnataka
Work mode: On-site
As a Big Data Engineer, you will be responsible for expanding and optimizing the data and database architecture, and for optimizing data flow and collection for cross-functional teams. You should be an experienced data pipeline builder and data wrangler who enjoys building and optimizing data systems. You will support software developers, database architects, data analysts, and data scientists on data initiatives, ensuring that the data delivery architecture remains optimal and consistent across ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Technical knowledge required:
- Sound knowledge of Spark architecture and distributed computing, including Spark Streaming.
- Proficiency in Spark, including core RDD and DataFrame functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience with Scala/Java/Kotlin, with strong programming logic and technique.
- Experience with both functional programming and OOP concepts in Scala/Java/Kotlin.

Responsibilities:
- Manage a team of Associates and Senior Associates, ensure proper utilization across projects, and mentor new members during project onboarding.
- Understand client requirements, then design, develop, and deliver solutions from scratch.
- Analyze, re-architect, and re-platform on-premises data warehouses to cloud data platforms; AWS cloud experience is preferable.
- Lead client calls to address delays, blockers, escalations, and requirements collation; manage project timelines and client expectations and meet deadlines.
- Take on project and team management duties, facilitate regular team meetings, understand business requirements, analyze alternative approaches, and plan deliverables and milestones.
- Optimize, maintain, and support pipelines.
- Bring strong analytical and logical skills, and be comfortable tackling new challenges and learning.

Experience: 4 to 7 years of relevant experience.

Must-have skills:
- Scala/Java/Kotlin
- Spark and Spark Streaming
- SQL (intermediate to advanced)
- Any cloud platform (AWS preferred)
- Kafka, Kinesis, or any other streaming service
- Object-Oriented Programming
- Hive
- ETL/ELT design experience
- CI/CD experience for ETL pipeline deployment

Good-to-have skills:
- Git or a similar version control tool
- CI/CD knowledge
- Microservices
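The posting itself contains no code, but as a rough illustration of the kind of Spark and streaming work it describes, the following is a minimal Spark Structured Streaming sketch in Scala that reads JSON events from a Kafka topic and aggregates them over event-time windows. The broker address, topic name, event schema, and checkpoint path are hypothetical placeholders, not details from the posting.

```scala
// Minimal illustrative sketch only. Requires the spark-sql-kafka-0-10 package
// on the classpath; all names below (topic, schema, paths) are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ClickStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("click-stream-aggregation")
      .master("local[*]") // local master for a self-contained run; omit when using spark-submit
      .getOrCreate()
    import spark.implicits._

    // Hypothetical schema for the JSON payloads on the Kafka topic.
    val eventSchema = new StructType()
      .add("userId", StringType)
      .add("page", StringType)
      .add("eventTime", TimestampType)

    // Read the raw stream from Kafka (broker and topic are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "click-events")
      .load()

    // Parse the JSON value column and count page views per 10-minute event-time window.
    val counts = raw
      .select(from_json($"value".cast("string"), eventSchema).as("event"))
      .select("event.*")
      .withWatermark("eventTime", "15 minutes")
      .groupBy(window($"eventTime", "10 minutes"), $"page")
      .count()

    // Write the running aggregates to the console; a checkpoint directory
    // gives the query fault tolerance across restarts.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/click-stream")
      .start()
      .awaitTermination()
  }
}
```

The watermark bounds how late events may arrive before their window is finalized, which is the usual trade-off between result latency and state size in this kind of streaming aggregation.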