5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an AWS Big Data Engineer working remotely, you will be a crucial part of a company that provides enterprise-grade platforms designed to accelerate the adoption of Kubernetes and data. Our flagship platform, Gravity, offers developers a simplified Kubernetes experience by handling the underlying complexities. You will use tailor-made workflows to deploy Microservices, Workers, Data, and MLOps workloads across multiple cloud providers. Gravity takes care of Kubernetes-related orchestration tasks including cluster provisioning, workload deployments, configuration management, secret management, scaling, and provisioning of cloud services. It also provides out-of-the-box observability for workloads, enabling developers to move quickly into Day 2 operations.

You will also work with Dark Matter, a unified data platform that enables enterprises to extract value from their data lakes. Within this platform, Data Engineers and Data Analysts can easily discover datasets through an Augmented Data Catalog. Data Profile, Data Quality, and Data Privacy functionality is deeply integrated into the catalog, offering an immediate snapshot of datasets in data lakes. Organizations can maintain data quality by defining rules that automatically monitor the accuracy, validity, and consistency of data against their governance standards. The built-in Data Privacy engine can identify sensitive data in data lakes and take automated actions, such as redaction, through an integrated Policy and Governance engine.

You should have at least 5 years of experience working with high-volume data infrastructure, along with proficiency in AWS and/or Databricks, Kubernetes, ETL, and job orchestration tooling. Extensive programming experience in Python or Java is required, as are skills in data modeling, SQL query optimization, and system performance tuning.
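To illustrate the kind of rule-based data-quality monitoring described above, here is a minimal Python sketch. The rule names and structure are hypothetical for illustration only, not Dark Matter's actual API:

```python
# Hypothetical sketch of rule-based data-quality checks.
# Each rule is a predicate over a record; run_rules reports the
# fraction of records passing each named rule.

def not_null(field):
    """Validity: the field must be present and non-empty."""
    return lambda row: row.get(field) not in (None, "")

def in_range(field, lo, hi):
    """Accuracy: a numeric field must fall within expected bounds."""
    return lambda row: field in row and lo <= row[field] <= hi

def run_rules(rows, rules):
    """Return the pass rate (0.0-1.0) for each named rule."""
    return {
        name: sum(rule(r) for r in rows) / len(rows)
        for name, rule in rules.items()
    }

rows = [
    {"id": 1, "age": 34},
    {"id": 2, "age": -5},   # fails the range (accuracy) rule
    {"id": 3},              # fails the presence (validity) rule
]
scores = run_rules(rows, {
    "age_present": not_null("age"),
    "age_in_range": in_range("age", 0, 120),
})
```

In a production platform these pass rates would feed dashboards or alerting thresholds; the same pattern scales naturally to Spark DataFrames by expressing each rule as a column predicate.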
You should be proficient in the latest open-source data frameworks and modern data platform stacks, including SQL, AWS, databases, Apache Spark, Spark Streaming, EMR, Kubernetes, and Kinesis/Kafka. You should be passionate about tackling messy unstructured data and transforming it into clean, usable data, and committed to continuous learning in a rapidly evolving data landscape. Strong communication skills, the ability to work independently, and a degree in Computer Science, Software Engineering, Mathematics, or equivalent experience are required. The role includes the benefit of working from home.
Posted 4 days ago