Posted: 8 hours ago
Remote | Contract
Are you passionate about building and supporting modern data platforms in the cloud? We’re looking for a Sr. Data Platform Engineer who thrives in a hybrid role (60% administration, 40% development/support) to help us scale our data and DataOps infrastructure. You’ll work with cutting-edge technologies like Databricks, Apache Spark, Delta Lake, and AWS cloud operations and security while supporting mission-critical data pipelines and integrations. If you’re a hands-on engineer with strong Python skills, deep AWS experience, and a knack for solving complex data challenges, we want to hear from you.
Responsibilities:
• Design, develop, and maintain scalable ETL pipelines and integration frameworks (a minimal sketch of such a pipeline follows this list).
• Administer and optimize Databricks and Apache Spark environments for data engineering workloads.
• Build and manage data workflows using AWS services such as Lambda, Glue, Redshift, SageMaker, and S3.
• Support and troubleshoot DataOps pipelines, ensuring reliability and performance across environments.
• Automate platform operations using Python, PySpark, and infrastructure-as-code tools.
• Collaborate with cross-functional teams to support data ingestion, transformation, and deployment.
• Provide technical leadership and mentorship to junior developers and third-party teams.
• Create and maintain technical documentation and training materials.
• Troubleshoot recurring issues and implement long-term resolutions.
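
To illustrate the kind of pipeline work this role involves, here is a minimal PySpark sketch of an ETL job that reads raw JSON from S3, cleanses it, and writes a partitioned Delta Lake table. All paths, column names, and the schema are hypothetical placeholders, and the Delta write assumes a Databricks or delta-spark-enabled environment.

from pyspark.sql import SparkSession, functions as F

# Hypothetical example: paths, table names, and schema are illustrative only.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events landed in S3 (bucket name is a placeholder).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: basic cleansing and type normalization.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Load: append to a Delta Lake table, partitioned by date for efficient reads.
(orders.write
       .format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("s3://example-curated-bucket/orders_delta/"))

On Databricks, a job like this would typically run as a scheduled workflow, with the same code promoted across dev and prod environments.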
Qualifications:
• Bachelor’s or Master’s degree in Computer Science or a related field.
• 5+ years of experience in data engineering or platform administration.
• 3+ years of experience in integration framework development with a strong emphasis on Databricks, AWS, and ETL.
• Strong programming skills in Python and PySpark.
• Expertise in Databricks, Apache Spark, and Delta Lake.
• Proficiency in AWS cloud operations and security, including configuration, deployment, and monitoring.
• Strong SQL skills and hands-on experience with Amazon Redshift.
• Experience with ETL development, data transformation, and orchestration tools.
• Experience with Kafka for real-time data streaming and integration (a minimal consumer sketch follows this list).
• Experience with Fivetran and dbt for data ingestion and transformation.
• Familiarity with DataOps practices and open-source data tooling.
• Experience with integration tools such as Apache Camel and MuleSoft.
• Understanding of RESTful APIs, message queuing, and event-driven architectures.
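
As a flavor of the streaming and event-driven integration work above, here is a minimal Python sketch using the kafka-python client; the topic name, broker address, and consumer group are illustrative placeholders.

from kafka import KafkaConsumer  # kafka-python client, assumed installed
import json

# Hypothetical topic and broker address for illustration only.
consumer = KafkaConsumer(
    "orders-events",
    bootstrap_servers="localhost:9092",
    group_id="data-platform-demo",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Each message value is a deserialized JSON event; route it downstream.
    print(message.topic, message.offset, message.value)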
If interested, share your resume at sadiya.mankar@leanitcorp.com
Lean IT Inc.
Salary: Not disclosed