Key Responsibilities:
- Perform installation, configuration, and administration of RHEL servers and OpenShift clusters.
- Monitor system performance and troubleshoot issues across OCP and RHEL environments.
- Work closely with DevOps and application teams to ensure containerized applications run efficiently and securely.
- Apply patches, upgrades, and security updates as required.
- Manage user access, roles, and permissions securely.
- Assist in capacity planning, system optimization, and performance tuning.
- Document operational procedures and system configurations.
- Collaborate with L1 teams to escalate and resolve complex issues.
- Maintain compliance with security and audit standards.
What we're looking for:
- 4+ years of experience in Data Science
- 1-2 years of proven work in Machine Learning
- Minimum 2 years of experience in Deep Learning
- Proficiency in Python, Linux, and containers (Docker/K8s)
- Hands-on with GenAI frameworks such as LangChain, LlamaIndex, and Hugging Face Transformers
- Experience deploying models using Vertex AI or Azure AI
- Currently contributing to a live project (public GitHub profile preferred)

Key Responsibilities:
- Deploy and manage AI workloads in hybrid cloud environments using RHEL AI and OpenShift AI, with guidance and training provided.
- Collaborate with teams to fine-tune and operationalize AI models, including large language models (LLMs), using tools like InstructLab.
- Build and maintain containerized applications with Kubernetes or similar platforms, adapting to OpenShift as needed.
- Support the development, training, and deployment of AI/ML models, leveraging frameworks like PyTorch or TensorFlow.
- Assist in implementing MLOps practices for model lifecycle management, with exposure to CI/CD pipelines.
- Troubleshoot and optimize Linux-based systems to ensure reliable AI performance.
- Learn and apply Red Hat-specific tools and best practices through on-the-job training and resources.
- Document workflows and contribute to Reve Cloud's team knowledge sharing.
Required Skills & Experience

Data Engineering & ETL:
- 7+ years in data architecture, pipelines, or ETL frameworks (Airflow, dbt, Spark, Kafka, etc.)
- Proven experience building pipelines that integrate multiple external APIs
- Strong skills in data validation, mapping, and schema design

Search Engines & Marketplace Tech:
- Hands-on experience with Typesense, Elasticsearch, or Solr
- Background in marketplace or price aggregation systems (e.g., e-commerce, travel, fintech, or cloud pricing)

AI & NLP:
- Experience with NLP / semantic search (e.g., vector embeddings, Pinecone, Weaviate, FAISS, OpenAI/LLM APIs)
- Knowledge of relevance ranking, recommendation systems, and personalization

Programming & Cloud:
- Strong coding skills in Python, Go, or Node.js
- Deep understanding of cloud platforms (AWS/Azure/GCP) and their pricing models
- Experience with containerized and serverless environments (Docker, Kubernetes, Lambda)

Bonus:
- Prior experience in FinOps, cloud cost management, or SaaS marketplaces
- Exposure to graph databases for relationship-driven recommendations

Location: Remote (WFH), permanent position. Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, or Pune.