Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
About The Company: At ReKnew, our mission is to empower enterprises to revitalize their core business and organization by positioning themselves for the new world of AI. We're a startup founded by seasoned practitioners, supported by expert advisors, and built on decades of experience in enterprise technology, data, analytics, AI, digital, and automation across diverse industries. We're actively seeking top talent to join us in this mission.

What You'll Do: As a DevOps Engineer, you will focus on building and maintaining robust, scalable, and secure continuous delivery platforms. This hands-on role involves designing and implementing infrastructure and application solutions across hybrid cloud environments, with a strong emphasis on automation, reliability, and continuous improvement. You will contribute to the deployment and operational excellence of critical applications, including those within the big data and AI/ML ecosystems.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for automated code deployments.
- Provision and manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform and CloudFormation.
- Administer and optimize container orchestration platforms, primarily Kubernetes and Amazon EKS.
- Develop automation scripts and tools in Python (see the sketch after this posting).
- Implement comprehensive monitoring, logging, and alerting for application and infrastructure health.
- Integrate security best practices throughout the development and deployment lifecycle.
- Support and optimize environments for big data and AI/ML workloads, including Airflow.

Qualifications:
- 4+ years of professional experience in a DevOps role.
- Solid experience with Kubernetes and EKS.
- Proficiency in Docker and containerization best practices.
- Hands-on experience with IaC tools (Terraform, CloudFormation).
- Strong Python programming skills for automation.
- Demonstrated experience implementing robust monitoring, alerting, and logging solutions.
- Familiarity with CI/CD principles and tools for code deployments.
- Understanding of security principles in cloud environments.
- Experience with, or knowledge of, the big data ecosystem and AI/ML infrastructure.
- Experience with workflow orchestration tools, specifically Airflow.
- Exposure to, or understanding of, the three major cloud platforms (AWS, Azure, GCP).

Who You Are:
- A passionate problem-solver with a strong bias for automation.
- Committed to building reliable, scalable, and secure systems.
- An excellent communicator with a collaborative mindset.
- Proactive in identifying and resolving operational challenges.
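For illustration only, here is a minimal sketch of the kind of small Python automation the responsibilities above describe: checking cluster health by flagging pods that are not in a healthy phase. The use of the `kubernetes` client library, the namespace, and the function name are assumptions for the example, not requirements from the posting.

```python
# Hypothetical sketch: list pods that are not Running/Succeeded in a namespace.
# Assumes the `kubernetes` Python client and a reachable cluster via kubeconfig.
from kubernetes import client, config


def unhealthy_pods(namespace: str = "default") -> list[str]:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.name}: {pod.status.phase}")
    return bad


if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```

A script like this would typically feed an alerting or reporting step rather than stand alone.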
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Company Overview
At ReKnew, our mission is to empower enterprises to revitalize their core business and organization by positioning themselves for the new world of AI. We're a startup founded by seasoned practitioners, supported by expert advisors, and built on decades of experience in enterprise technology, data, analytics, AI, digital, and automation across diverse industries. We're actively seeking top talent to join us in this mission.

Job Description
We're seeking a highly skilled Senior Data Engineer with deep expertise in AWS-based data solutions. In this role, you'll be responsible for designing, building, and optimizing large-scale data pipelines and frameworks that power analytics and machine learning workloads. You'll lead the modernization of legacy systems by migrating workloads from platforms like Teradata to AWS-native big data environments such as EMR, Glue, and Redshift. A strong emphasis is placed on reusability, automation, observability, performance optimization, and managing schema evolution in dynamic data lake environments.

Key Responsibilities
- Migration & Modernization: Build reusable accelerators and frameworks to migrate data from legacy platforms (e.g., Teradata) to AWS-native architectures such as EMR, Glue, and Redshift.
- Data Pipeline Development: Design and implement robust ETL/ELT pipelines using Python, PySpark, and SQL on AWS big data platforms (see the sketch after this posting).
- Code Quality & Testing: Drive development standards with test-driven development (TDD), unit testing, and automated validation of data pipelines.
- Monitoring & Observability: Build operational tooling and dashboards for pipeline observability, tracking key metrics such as latency, throughput, data quality, and cost.
- Cloud-Native Engineering: Architect scalable, secure data workflows using AWS services such as Glue, Lambda, Step Functions, S3, and Athena.
- Collaboration: Partner with internal product teams, data scientists, and external stakeholders to clarify requirements and drive solutions aligned with business goals.
- Architecture & Integration: Work with enterprise architects to evolve the data architecture while securely integrating AWS systems with on-premise or hybrid environments, including strategic adoption of data lake table formats such as Delta Lake, Apache Iceberg, or Apache Hudi for schema management and ACID capabilities.
- ML Support & Experimentation: Enable data scientists to operationalize machine learning models by providing clean, well-governed datasets at scale.
- Documentation & Enablement: Document solutions thoroughly and provide technical guidance and knowledge sharing to internal engineering teams.
- Team Training & Mentoring: Act as a subject matter expert, providing guidance, training, and mentorship to junior and mid-level data engineers, and foster a culture of continuous learning and best practices within the team.

Qualifications
- Experience: 7+ years in technology roles, with at least 5 years specifically in data engineering, software development, and distributed systems.
- Programming: Expert in Python and PySpark (Scala is a plus), with a deep understanding of software engineering best practices.
- AWS Expertise: 3+ years of hands-on experience in the AWS data ecosystem; proficient in AWS Glue, S3, Redshift, EMR, Athena, Step Functions, and Lambda. Experience with AWS Lake Formation and data cataloging tools is a plus; an AWS Data Analytics or Solutions Architect certification is a strong plus.
- Big Data & MPP Systems: Strong grasp of distributed data processing. Experience with MPP data warehouses such as Redshift, Snowflake, or Databricks on AWS. Hands-on experience with Delta Lake, Apache Iceberg, or Apache Hudi for building reliable data lakes with schema evolution, ACID transactions, and time-travel capabilities.
- DevOps & Tooling: Experience with version control (e.g., GitHub/CodeCommit) and CI/CD tools (e.g., CodePipeline, Jenkins). Familiarity with containerization and deployment on Kubernetes or ECS.
- Data Quality & Governance: Experience with data profiling, data lineage, and related tools; understanding of metadata management and data security best practices.
- Bonus: Experience supporting machine learning or data science workflows; familiarity with BI tools such as QuickSight, Power BI, or Tableau.
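For illustration only, a minimal PySpark sketch of the ETL pattern the responsibilities above describe: read raw data from S3, apply simple cleaning, and write a partitioned curated dataset. The bucket names, column names, and file format are assumptions for the example, not details from the posting.

```python
# Hypothetical sketch: raw-to-curated batch job on an AWS Spark runtime (e.g., EMR or Glue).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read raw CSV data from an illustrative S3 location.
raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# Basic cleaning: deduplicate, derive a partition column, drop invalid rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write a partitioned Parquet dataset to the curated zone.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```

In practice the same write step could target a table format such as Delta Lake, Iceberg, or Hudi, as mentioned in the architecture responsibilities, to gain schema evolution and ACID guarantees.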