Job Title: Home Chef (Multicuisine) – Bavdhan, Pune
Location: Bavdhan, Pune
Experience Required: Minimum 5 years
Job Type: Part-Time
Timings: Daily, 2 shifts (Morning: 3 hours; Evening: 2–3 hours)
Total Working Hours: 5–6 hours per day

Job Description:
We are looking to hire an experienced and passionate Home Chef who is proficient in multicuisine cooking and can manage day-to-day meal preparation for a household in Bavdhan, Pune.

Key Responsibilities:
- Prepare daily meals as per the household's dietary preferences and requirements.
- Cook a variety of cuisines including Indian, Continental, Chinese, and more.
- Maintain hygiene and cleanliness in the kitchen at all times.
- Manage grocery inventory and inform when stocks are low.

Requirements:
- Minimum 5 years of proven experience as a home chef or in a similar culinary role.
- Must be skilled in multicuisine cooking.
- Should be reliable, punctual, and hygienic.
- Must be located in Bavdhan or willing to commute/work in the Bavdhan area.
- Ability to adjust recipes and cooking styles based on preferences.
- Basic understanding of nutrition and healthy cooking.

Preferred Qualifications:
- Formal culinary training or certifications (optional but preferred).
- Previous experience in private households or as a personal/home chef.
About Pivotchain Solutions: https://pivotchain.com/

We are seeking a skilled Senior DevOps Engineer to join our team and streamline our software development and deployment processes. The ideal candidate will have experience in Kubernetes, CI/CD pipelines, cloud infrastructure management, automation, and monitoring tools.

Role: Senior DevOps Engineer
Location: Senapati Bapat Road, Pune

General Summary of the Role:
- Design, implement, and maintain CI/CD pipelines for automated software delivery.
- Manage and optimize infrastructure across on-premise environments as well as cloud platforms (AWS, Azure, GCP).
- Monitor system performance, troubleshoot issues, and ensure high availability and reliability.
- Implement security best practices across infrastructure and deployment processes.
- Collaborate with software engineers to integrate DevOps methodologies into the development lifecycle.
- Develop and maintain scripts for automation, system maintenance, and monitoring.
- Manage containerized applications using Docker and Kubernetes.
- Stay updated with industry trends and best practices to continuously improve DevOps processes.

Requirements:
- Bachelor's degree in Computer Science, IT, or a related field, or equivalent experience, with at least two years of relevant experience.
- Experience managing a team, or similar leadership experience.
- Experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python scripting.
- Strong experience with on-premises infrastructure and cluster management.
- Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with containerization and orchestration (Docker, Kubernetes).
- Strong understanding of Linux system administration and networking concepts.
- Strong problem-solving skills and ability to work in a fast-paced environment.
Role: Data Scientist (Deep Learning Engineer / Machine Learning Engineer)
Location: Pune

Role Overview:
We are seeking a highly skilled AI/ML Engineer with expertise in Computer Vision, Deep Learning, backend engineering, and MLOps. The ideal candidate will be responsible for developing real-time vision pipelines, OCR systems, and backend services, and for deploying optimized ML models across cloud, on-premise, and edge platforms. This role focuses on core AI engineering, production deployment, and system optimization, without front-end or annotation work.

Key Responsibilities:

Computer Vision & Deep Learning
- Develop, train, and optimize deep learning models using PyTorch, TensorFlow, YOLO, or similar frameworks.
- Build and maintain image and video processing pipelines using OpenCV and classical CV techniques.
- Implement and optimize OCR pipelines and text extraction systems.
- Perform model optimization (quantization, pruning, ONNX/TensorRT conversions) for real-time performance.
- Develop complete data preprocessing workflows and scalable training pipelines.

Backend Engineering
- Develop backend services using Python (FastAPI/Flask).
- Build REST APIs for ML inference, data processing, and workflow management.
- Integrate backend applications with databases such as MongoDB and other SQL/NoSQL stores.
- Architect scalable microservices for ML and computer vision workloads.
- Work on logging, monitoring, and system stability for backend services.

MLOps, DevOps & Deployment
- Containerize and orchestrate applications using Docker and Kubernetes.
- Implement CI/CD pipelines for automated testing, deployment, and model updates.
- Deploy AI/ML workloads on cloud, on-premise, and edge computing platforms.
- Optimize infrastructure for high throughput and low-latency ML processing.

Required Skills:

Core Technical Competencies
- Expert-level Python (mandatory).
- Strong experience with:
  - PyTorch / TensorFlow / YOLO
  - OpenCV & image processing
  - OCR systems
  - Docker, Kubernetes
  - Git, GitLab, CI/CD tools
  - MongoDB, SQL, NoSQL databases
  - FastAPI or Flask
  - Linux & Bash scripting

Preferred Competencies (Optional):
- Experience deploying ML/DL models on edge devices.
- Knowledge of ONNX, TensorRT, or hardware-level optimizations.
- Familiarity with real-time video/image streaming pipelines.
- Understanding of distributed systems or messaging technologies (Kafka, RabbitMQ, etc.).

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 1–2+ years of hands-on experience in AI/ML, Computer Vision, or MLOps.
- Proven experience building production-grade ML systems.

Soft Skills:
- Strong analytical and debugging skills.
- Self-driven, with the ability to own and deliver end-to-end solutions.
- Good communication and a collaborative mindset.
- Passion for exploring new technologies in CV, ML, and systems architecture.