Company Description

VOPAIS architects transformative digital solutions that bridge the gap between advanced technology and human experience. We create intuitive applications that solve complex challenges and bring smiles to users' faces. Our company values integrity, transparency, and transformation, and we offer premium team deployment solutions that integrate seamlessly into your existing infrastructure. We are committed to crafting digital experiences that empower organizations and their people to thrive in an evolving technological landscape.

Role Description

This is a full-time remote role for an MLOps Engineer. The MLOps Engineer will be responsible for designing, implementing, and managing machine learning infrastructure and pipelines. Day-to-day tasks include optimizing ML models, automating deployments, monitoring and maintaining ML systems, and collaborating with data scientists and software engineers. The role also involves ensuring the scalability, reliability, and security of ML systems.

Key Responsibilities

• ML Pipeline Design & Automation
  o Build and maintain CI/CD and CT (Continuous Training) pipelines for ML models using Azure DevOps and Databricks Asset Bundles.
  o Automate data preprocessing, training, inference, and retraining workflows for large-scale ML deployments.
  o Implement incremental backfills and rolling-window retraining for time-series forecasting.
• Deployment & Infrastructure
  o Design job clusters and compute policies in Databricks for optimal cost-performance trade-offs.
  o Implement multi-environment deployment flows (Dev → QA/Stage → Prod) with approvals and rollback strategies.
  o Deploy ML models to production with monitoring hooks for performance and drift detection.
• Data & Model Governance
  o Integrate with Unity Catalog for secure, compliant data and model storage.
  o Set up model versioning, lineage tracking, and reproducibility using MLflow (see the MLflow sketch below).
  o Establish dataset and feature versioning using tools like Databricks Feature Store.
• Monitoring & Observability
  o Implement structured logging for model metrics, system performance, and data quality checks.
  o Integrate monitoring tools (e.g., Azure Application Insights) for alerting and dashboards.
  o Develop automated retraining triggers based on performance degradation.

Required Skills & Experience

Core MLOps Skills:
• ML pipeline automation (Azure DevOps, GitHub Actions).
• Databricks (Asset Bundles, Unity Catalog, Feature Store).
• Model registry and experiment tracking (MLflow, Weights & Biases, or similar).
• Cloud platforms (Azure mandatory).

Programming & Tools:
• Python (pandas, PySpark, scikit-learn, Prophet, ML/DL frameworks).
• Bash/PowerShell scripting.
• Git and branching strategies for ML projects.

Testing & Quality:
• Data validation, schema enforcement, and model testing frameworks.
• CI/CD quality gates for model performance and bias/fairness checks.

Soft Skills:
• Strong communication and stakeholder management.
• Experience guiding data scientists through productionization.
• Ability to work on multiple concurrent projects in a fast-paced environment.

Good to Have:
• Experience with time-series forecasting at scale (e.g., Prophet, SARIMA, XGBoost).
• Experience in retail demand forecasting and/or energy sector analytics.
• Knowledge of feature engineering at scale with distributed systems.
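To illustrate the model versioning and lineage tracking named in the governance responsibilities, here is a minimal MLflow sketch. The experiment path, run name, model, and registry name are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch of MLflow experiment tracking and model registration.
# Experiment path, params, metrics, and model name are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

mlflow.set_experiment("/Shared/demand-forecasting")  # hypothetical experiment

X, y = make_regression(n_samples=500, n_features=8, random_state=42)

with mlflow.start_run(run_name="rf-baseline") as run:
    model = RandomForestRegressor(n_estimators=100, random_state=42)
    model.fit(X, y)

    # Log parameters and metrics for lineage and reproducibility.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_r2", model.score(X, y))

    # Log the model artifact and register it in the model registry,
    # creating a new version that downstream jobs can promote.
    mlflow.sklearn.log_model(model, artifact_path="model")
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "demand_forecaster")
```

On Databricks, the same calls would typically target a Unity Catalog-backed registry; the pattern of run-scoped logging plus registration is what gives each model version a traceable lineage.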
Qualifications

• Experience with MLOps, machine learning, and DevOps
• Skills in Python, TensorFlow, PyTorch, or other ML frameworks
• Experience with cloud platforms such as AWS, Azure, or Google Cloud
• Proficiency in CI/CD, containerization, and orchestration tools (e.g., Docker, Kubernetes)
• Strong collaboration and communication skills
• Ability to work independently and remotely
• Experience with monitoring and logging tools (e.g., Grafana, Prometheus); see the Prometheus sketch below
• Bachelor's degree in Computer Science, Engineering, or a related field
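As a small illustration of the monitoring and logging tooling listed above, the sketch below exposes model-serving metrics for Prometheus scraping via the prometheus_client library; the metric names, dummy model, and port are assumptions made for the example.

```python
# Minimal sketch: exposing model-serving metrics for Prometheus to scrape.
# Metric names, the dummy predict(), and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency in seconds")

def predict(features):
    """Stand-in for a real model call; replace with actual inference."""
    with LATENCY.time():          # records how long the block takes
        time.sleep(random.uniform(0.01, 0.05))
        PREDICTIONS.inc()         # count each served prediction
        return sum(features)      # dummy output

if __name__ == "__main__":
    start_http_server(8000)       # metrics at http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(4)])
```

A Grafana dashboard would then chart these series from Prometheus, and alert rules on latency or prediction volume could feed the automated retraining triggers described in the role.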
Company Description

VOPAIS is a pioneer in crafting transformative digital solutions that bridge advanced technology and human experience. We focus on developing intuitive software applications that solve complex challenges while enhancing user experience. Our company operates with integrity, transparency, and accountability, offering not only core development services but also premium team deployment solutions. We provide organizations with highly skilled technical teams that integrate seamlessly into existing infrastructures and cultures, and we are committed to empowering organizations to thrive in evolving technological landscapes.

Role Description

This is a full-time on-site role for a Principal Engineer, located in Jaipur. The Principal Engineer will take the lead in designing, developing, and maintaining cutting-edge software solutions. Responsibilities include leading technical teams, overseeing project architecture, ensuring best practices in coding and design, and collaborating with stakeholders to meet project goals. The Principal Engineer will mentor and guide junior engineers, contribute to technical discussions, and drive innovation within the team.

Qualifications

Extensive experience in software development, including full-stack development and BIAN architecture design.

Key Responsibilities:
• Drive high-level solution architecture and design flows.
• Architect and develop scalable microservices and BIAN-based solutions.
• Lead development using .NET Core, ReactJS, TypeScript, Next.js, Node.js, and Python.
• Build and scale APIs (REST, GraphQL) and optimize integrations (see the API sketch after this posting).
• Manage source control (GitHub) and CI/CD pipelines (GitHub Actions).
• Design and maintain databases (Oracle, PostgreSQL, AWS Aurora).
• Ensure cloud-native, secure deployments (AWS, TDD, secure coding practices).
• Set up monitoring/observability (Observe, Prometheus, Grafana).
• Champion engineering excellence with AI-powered tools (Cline, GitHub Copilot).
• Collaborate on DevOps/DevSecOps practices and enforce coding standards.

Requirements:
• 14+ years of experience in software development (18+ preferred).
• Strong background in solution architecture and enterprise-grade system design.
• Deep knowledge of cloud ecosystems and modern engineering practices.
• Excellent communication and presentation skills for cross-functional collaboration.
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Contract Details:
• Contract-to-Hire (6 months)
• CTC: Negotiable
• 3–4 interview rounds (including a client round)

How to Apply:
Send your CV along with a cover letter to: careers@vopais.com
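For the API-building responsibility above, here is a minimal REST sketch using FastAPI; Python is used only to keep all examples in one language (the posting's stack also spans .NET Core and Node.js). The Account resource, its fields, and the in-memory store are hypothetical stand-ins.

```python
# Minimal sketch of a REST endpoint with FastAPI.
# The Account resource and its fields are hypothetical placeholders,
# loosely in the spirit of BIAN-style, domain-oriented service boundaries.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="accounts-service")

class Account(BaseModel):
    account_id: str
    customer_name: str
    balance: float

# In-memory store standing in for Oracle/PostgreSQL/Aurora.
ACCOUNTS: dict[str, Account] = {}

@app.post("/accounts", response_model=Account)
def create_account(account: Account) -> Account:
    ACCOUNTS[account.account_id] = account
    return account

@app.get("/accounts/{account_id}", response_model=Account)
def get_account(account_id: str) -> Account:
    account = ACCOUNTS.get(account_id)
    if account is None:
        raise HTTPException(status_code=404, detail="Account not found")
    return account

# Run locally with: uvicorn main:app --reload
```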
Company Description

VOPAIS is dedicated to driving technological innovation that enhances the human experience. Our expertise lies in designing transformative digital solutions and creating intuitive applications that address complex challenges while delivering joy to users. We empower our clients through integrity, transparency, and accountability, offering both comprehensive software development services and premium team deployment solutions. VOPAIS seamlessly integrates skilled technical teams into organizations, enabling them to thrive in an evolving technological landscape. Our mission is to connect advanced technology with meaningful outcomes for businesses and their people.

Role Description

We are seeking a Senior Data Engineer for a full-time, remote position to architect, implement, and optimize scalable data solutions. In this role, you will design and maintain data pipelines, infrastructure, and systems to support data workflows, analytics, and reporting. You will collaborate with cross-functional teams to ensure data quality and accessibility, implement best practices for data governance and security, and support the strategic use of data insights. As a key part of our engineering team, you will help foster innovation at the intersection of data and technology.

Responsibilities:
• Design, develop, and maintain scalable AWS data lakes/pipelines with Databricks
• Integrate, transform, and centralize large-scale data from varied sources
• Implement and manage Delta Lake architecture (Databricks Delta or Apache Hudi); see the PySpark sketch after this posting
• Build end-to-end data workflows with PySpark, Databricks Notebooks, and Python
• Develop data warehouses/marts (Snowflake, Redshift, etc.)
• Optimize data storage, queries, and cost across Databricks/AWS
• Build CI/CD for Databricks/Python and maintain version control (Git)
• Collaborate with cross-functional teams on high-performance data solutions
• Drive technical/architectural decisions; troubleshoot clusters and ETL jobs

Core Skills and Requirements:
• 5+ years building/managing AWS data lake architectures
• 3+ years with AWS services (S3, Glue, Redshift, etc.)
• 3+ years with Databricks, Delta Lake, PySpark, and ETL
• Hands-on experience with Python, data automation, and API integration
• CI/CD and Git best practices (Terraform/CloudFormation a plus)
• Bachelor's degree in Computer Science, IT, Data Science, or a related field
• Experience in Agile environments
• Strong SQL, RDBMS, data modeling, and governance (Unity Catalog/DLT/MLflow a plus)
• AWS/Databricks certifications and security/compliance knowledge are valued

How to Apply:
Send your CV and cover letter to careers@vopais.com. Kindly review the detailed job description before applying.

#DataEngineer #Databricks #AWS #Python #BigData #DataLake #DataJobs #CICD #ETL #Redshift #Snowflake #VopaisCareers #ApplyNow
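As a minimal sketch of the Delta Lake work the responsibilities describe, the snippet below lands raw data as a Delta table and applies an incremental MERGE upsert with PySpark. The S3 paths, schema, and join key are hypothetical, and it assumes a Spark session with Delta Lake configured (as on a Databricks cluster).

```python
# Minimal sketch: landing raw data into a Delta Lake table and upserting
# incremental changes. Paths and keys are hypothetical; assumes a Spark
# session with Delta Lake available (e.g., a Databricks cluster).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

RAW_PATH = "s3://example-bucket/raw/orders/"      # hypothetical source
DELTA_PATH = "s3://example-bucket/delta/orders/"  # hypothetical target

# Initial load: write raw files out as a Delta table.
raw_df = spark.read.json(RAW_PATH)
raw_df.write.format("delta").mode("overwrite").save(DELTA_PATH)

# Incremental upsert (MERGE): update matched rows, insert new ones.
updates_df = spark.read.json("s3://example-bucket/raw/orders_updates/")
target = DeltaTable.forPath(spark, DELTA_PATH)
(
    target.alias("t")
    .merge(updates_df.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE pattern is what makes incremental pipelines idempotent: re-running the job on the same update batch converges to the same table state instead of duplicating rows.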