Posted: 2 days ago
Work from Office | Full Time
We enable #HumanFirstDigital

Job Summary:
We are looking for a highly experienced and strategic Data Engineer to drive the design, development, and optimization of our enterprise data platform. This role requires deep technical expertise in AWS, StreamSets, and Snowflake, along with solid experience in Kubernetes, Apache Airflow, and unit testing. The ideal candidate will lead a team of data engineers and play a key role in delivering scalable, secure, and high-performance data solutions for both historical and incremental data loads.

Key Responsibilities:
- Lead the architecture, design, and implementation of end-to-end data pipelines using StreamSets and Snowflake.
- Oversee the development of scalable ETL/ELT processes for historical data migration and incremental data ingestion.
- Guide the team in leveraging AWS services (S3, Lambda, Glue, IAM, etc.) to build cloud-native data solutions.
- Provide technical leadership in deploying and managing containerized applications using Kubernetes.
- Define and implement workflow orchestration strategies using Apache Airflow.
- Establish best practices for unit testing, code quality, and data validation.
- Collaborate with data architects, analysts, and business stakeholders to align data solutions with business goals.
- Mentor junior engineers and foster a culture of continuous improvement and innovation.
- Monitor and optimize data workflows for performance, scalability, and cost-efficiency.

Required Skills & Qualifications:
- High proficiency in AWS, including hands-on experience with core services (S3, Lambda, Glue, IAM, CloudWatch).
- Expert-level experience with StreamSets, including Data Collector, Transformer, and Control Hub.
- Strong Snowflake expertise, including data modeling, SnowSQL, and performance tuning.
- Intermediate experience with Kubernetes, including container orchestration and deployment.
- Working knowledge of Apache Airflow for workflow scheduling and monitoring.
- Experience with unit testing frameworks and practices in data engineering.
- Proven experience in building and managing ETL pipelines for both batch and real-time data.
- Strong command of SQL and scripting languages such as Python or Shell.
- Experience with CI/CD pipelines and version control tools (e.g., Git, Jenkins).

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with data governance, security, and compliance frameworks.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Prior experience in a leadership or mentoring role within a data engineering team.

Our Perks and Benefits:
Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning and upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness

Job Location: Bengaluru, India