
MindWise Technologies

3 job openings at MindWise Technologies
Data Engineer
Mumbai Metropolitan Region | Experience: 5 years | Salary: Not disclosed | On-site | Full Time

We are seeking a highly skilled and experienced Data Engineer based in India to join our growing Data Science team. The ideal candidate will play a critical role in designing, building, and optimizing our data pipelines and infrastructure to support advanced analytics and machine learning initiatives. This individual will work closely with data scientists, analysts, and other engineers to ensure reliable, scalable, and high-quality data solutions.

Responsibilities
- Design, develop, and maintain robust data pipelines to ingest, process, and transform large datasets from diverse sources (a minimal pipeline sketch follows this posting).
- Optimize SQL queries and database performance for efficient data storage and retrieval.
- Collaborate with data scientists to support data preparation and model deployment workflows.
- Develop and maintain ETL processes to ensure data integrity, consistency, and availability.
- Monitor data workflows and troubleshoot issues in data pipelines and infrastructure.
- Contribute to data architecture planning and documentation.
- Ensure compliance with data governance and security standards.

Requirements
- Minimum 5 years of professional experience in data engineering or a related role.
- Proficiency in Microsoft SQL Server (MS SQL), including writing complex queries, stored procedures, and performance tuning.
- Strong programming skills in Python, particularly with libraries relevant to data engineering (e.g., Pandas, SQLAlchemy, Airflow).
- Bachelor's degree in Computer Engineering, Computer Science, or a related field, or equivalent practical experience.
- Experience working with structured and unstructured data, APIs, and cloud data storage.
- Familiarity with data warehouse concepts and data modeling techniques.
- Proven ability to work independently and as part of a distributed team.

Preferred Qualifications
- Experience in the healthcare industry or with healthcare-related data systems.
- Experience with cloud platforms (Azure, AWS, or GCP).
- Familiarity with data orchestration tools such as Apache Airflow.
- Exposure to version control tools (e.g., Git) and DevOps practices.
- Knowledge of basic data science workflows and model operationalization.

This job was posted by Akshata Alekar from MindWise Technologies.
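For orientation on the pipeline stack named in the requirements (Pandas, SQLAlchemy, Airflow, MS SQL), here is a minimal sketch of a daily Airflow DAG. The connection strings, table names, and DAG name are hypothetical placeholders, not part of the posting.

    # Illustrative sketch only; all names and connection details are hypothetical.
    from datetime import datetime

    import pandas as pd
    from airflow.decorators import dag, task
    from sqlalchemy import create_engine

    # Hypothetical MS SQL connection strings (assumes the pyodbc driver).
    SOURCE_URI = "mssql+pyodbc://user:password@source-db/claims?driver=ODBC+Driver+17+for+SQL+Server"
    TARGET_URI = "mssql+pyodbc://user:password@warehouse-db/analytics?driver=ODBC+Driver+17+for+SQL+Server"


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_claims_pipeline():
        @task
        def extract() -> list[dict]:
            # Pull the previous day's rows from the source system.
            engine = create_engine(SOURCE_URI)
            df = pd.read_sql(
                "SELECT * FROM dbo.claims WHERE load_date = CAST(GETDATE() - 1 AS DATE)",
                engine,
            )
            return df.to_dict(orient="records")

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Basic cleaning: drop duplicates and normalise column names.
            df = pd.DataFrame(rows).drop_duplicates()
            df.columns = [c.lower() for c in df.columns]
            return df.to_dict(orient="records")

        @task
        def load(rows: list[dict]) -> None:
            # Append the cleaned batch to a warehouse staging table.
            engine = create_engine(TARGET_URI)
            pd.DataFrame(rows).to_sql("stg_claims", engine, if_exists="append", index=False)

        load(transform(extract()))


    daily_claims_pipeline()

In practice the extract/transform/load boundaries, incremental-load logic, and credential handling would follow the team's warehouse design and the governance standards mentioned above.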

MLOps Engineer
India | Experience: 5 years | Salary: Not disclosed | Remote | Full Time

Company Description
At MindWise, we are dedicated to revolutionizing the US healthcare industry through cutting-edge IT solutions. We provide tailored technology services that empower healthcare organizations to deliver better patient outcomes, enhance operational efficiency, and drive innovation. Our offerings include custom software development, healthcare IT consulting, and advanced healthcare analytics. Our team comprises experienced professionals who intimately understand the challenges and nuances of the US healthcare sector, and we believe in forging strong partnerships with our clients to deliver solutions that exceed expectations.

Role Description
This is a full-time remote role for an MLOps Engineer. The MLOps Engineer will be responsible for implementing and managing machine learning pipelines and infrastructure. Day-to-day tasks will include developing and maintaining scalable architecture for data processing and model deployment, collaborating with data scientists to optimize model performance, and ensuring the reliability and efficiency of machine learning solutions. The role also involves managing cloud-based resources and ensuring compliance with security and data protection standards.

Key Responsibilities

Cloud Infrastructure Management
- Design, deploy, and maintain AWS resources (EC2, ECS, Elastic Beanstalk, Lambda, VPC, VPN).
- Implement infrastructure-as-code using Terraform and Docker for consistent and reproducible deployments.
- Optimize cost, performance, and security of compute and storage solutions.

Database & Server Architecture
- Manage production-grade RDS MySQL instances with high availability, security, and backups.
- Design scalable server-side infrastructure and ensure tight integration with Django-based services.

Job Scheduling & Data Pipelines
- Build and monitor asynchronous task workflows with Celery, SQS, and SNS (see the sketch after this posting).
- Manage data processing pipelines, ensuring timely and accurate job execution and messaging.

Monitoring & Logging
- Set up and maintain CloudWatch dashboards, alarms, and centralized logging for proactive incident detection and resolution.

Machine Learning & NLP Infrastructure
- Support deployment of NLP models on SageMaker and Bedrock, and manage interaction with vector databases and LLMs.
- Assist in productionizing model endpoints, workflows, and monitoring pipelines.

CI/CD & Automation
- Maintain and improve CI/CD pipelines using CircleCI.
- Ensure automated testing, deployment, and rollback strategies are reliable and efficient.

Healthcare Data Integration
- Support ingestion and transformation of clinical data using HL7 standards, Mirth Connect, and Java-based parsing tools.
- Enforce data security and compliance best practices in handling PHI and other sensitive healthcare data.

Qualifications
- 5+ years of experience in cloud infrastructure (preferably AWS)
- Strong command of Python/Django and container orchestration using Docker
- Proficiency with Terraform and infrastructure-as-code best practices
- Experience in setting up and managing messaging systems (Celery, SQS, SNS)
- Understanding of NLP or ML model operations in production environments
- Familiarity with LLM frameworks, vector databases, and SageMaker workflows
- Strong CI/CD skills (CircleCI preferred)
- Ability to work independently and collaboratively across engineering and data science teams

Nice to Have
- Exposure to HIPAA compliance, SOC2, or healthcare regulatory requirements
- Experience scaling systems in a startup or early-growth environment
- Contributions to open-source or community infrastructure projects
- Hands-on experience with HL7, Mirth Connect, and Java for healthcare interoperability is a big plus
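To ground the asynchronous-workflow and model-serving duties listed above, here is a small illustrative sketch: a Celery application configured with an SQS broker, with one task that calls a SageMaker inference endpoint via boto3. The region, queue options, endpoint name, and task name are hypothetical; a production setup would add result storage, structured logging, and PHI-safe payload handling.

    # Minimal sketch of the Celery + SQS + SageMaker pieces named above.
    # Queue, endpoint, and task names are hypothetical.
    import json

    import boto3
    from celery import Celery

    # Celery app using Amazon SQS as the message broker (requires celery[sqs]);
    # credentials come from the standard AWS environment/IAM role.
    app = Celery(
        "mlops_tasks",
        broker="sqs://",
        broker_transport_options={"region": "us-east-1", "visibility_timeout": 3600},
    )

    runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")


    @app.task(bind=True, max_retries=3, default_retry_delay=30)
    def score_document(self, document_text: str) -> dict:
        """Send one document to a hypothetical NLP model endpoint and return its prediction."""
        try:
            response = runtime.invoke_endpoint(
                EndpointName="clinical-nlp-endpoint",  # hypothetical SageMaker endpoint
                ContentType="application/json",
                Body=json.dumps({"text": document_text}),
            )
            return json.loads(response["Body"].read())
        except Exception as exc:
            # Retry transient failures so the SQS message is not silently lost.
            raise self.retry(exc=exc)

A caller (for example, a Django view) would enqueue work with score_document.delay(text), and CloudWatch alarms on queue depth and task failure rates would cover the monitoring duties described above.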

Senior Python/Django Engineer (with MLOps Exposure)
India | Experience: 5 years | Salary: Not disclosed | On-site | Full Time

We are looking for a Senior Python/Django Engineer to join our team, with exposure to MLOps and cloud infrastructure. This role is primarily focused on backend server development, where you will design, build, and maintain scalable services in a Django-based environment. In addition, you will take ownership of MLOps responsibilities, including deploying and supporting machine learning models, as well as managing cloud infrastructure on AWS. This is a hands-on engineering role that requires strong coding skills in Python/Django along with practical experience in cloud and MLOps workflows.

Key Responsibilities

Backend Development (Primary)
- Design, implement, and maintain Python/Django applications and APIs (a short Django sketch follows this posting).
- Contribute directly to the server repository by developing new features and improving existing functionality.
- Ensure scalability, security, and performance of backend services.

MLOps & Model Integration
- Deploy and manage ML/NLP models in production environments (e.g., SageMaker, Bedrock).
- Integrate ML endpoints with Django services and monitor performance.
- Support vector databases, LLM workflows, and related pipelines.

Cloud Infrastructure (30–40% of role)
- Manage and optimize AWS resources (EC2, ECS, RDS, Lambda, VPC).
- Implement Infrastructure-as-Code using Terraform, Docker, and Ansible.
- Ensure infrastructure security, performance, and cost efficiency.

Job Scheduling & Messaging
- Build and maintain asynchronous workflows using Celery, SQS, and SNS.
- Ensure timely and reliable data processing across pipelines.

CI/CD & Automation
- Maintain and improve CI/CD pipelines (CircleCI preferred).
- Automate testing, deployments, and rollback strategies for backend and ML services.

Monitoring & Logging
- Implement dashboards, alerts, and centralized logging (CloudWatch, Prometheus, Grafana, ELK).
- Proactively identify and resolve incidents in production.

Qualifications
- 5+ years of experience in Python/Django backend development
- Strong understanding of REST APIs, relational databases, and server-side architecture
- Hands-on experience with AWS services and infrastructure-as-code (Terraform, Docker, Ansible)
- Familiarity with MLOps concepts and experience deploying ML models in production
- Experience with job scheduling/messaging tools (Celery, SQS, SNS)
- Proficiency in CI/CD pipelines and modern DevOps practices
- Ability to work independently and collaborate with cross-functional teams

Nice to Have
- Exposure to NLP, vector databases, or LLM frameworks
- Knowledge of healthcare standards (HL7, Mirth Connect) and compliance requirements (HIPAA, SOC2)
- Experience scaling systems in a startup or high-growth environment
- Open-source contributions or community involvement
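As a concrete anchor for the primary backend responsibility, below is a short sketch of a Django model, serializer, and CRUD endpoint, assuming Django REST Framework (implied by the REST API requirement but not named in the posting). The Patient model, app label, and route names are hypothetical placeholders.

    # One-file sketch assuming Django REST Framework; a real project would split
    # this across models.py, serializers.py, views.py, and urls.py.
    from django.db import models
    from rest_framework import serializers, viewsets
    from rest_framework.routers import DefaultRouter


    class Patient(models.Model):
        # Hypothetical domain model; the real schema is product-specific.
        external_id = models.CharField(max_length=64, unique=True)
        created_at = models.DateTimeField(auto_now_add=True)

        class Meta:
            app_label = "records"  # hypothetical Django app


    class PatientSerializer(serializers.ModelSerializer):
        class Meta:
            model = Patient
            fields = ["id", "external_id", "created_at"]


    class PatientViewSet(viewsets.ModelViewSet):
        # Standard CRUD endpoint; authentication, permissions, and pagination
        # would be configured via REST_FRAMEWORK settings in production.
        queryset = Patient.objects.order_by("-created_at")
        serializer_class = PatientSerializer


    router = DefaultRouter()
    router.register("patients", PatientViewSet)
    urlpatterns = router.urls  # include this in the project's root urls.py

The MLOps side of the role would then hang off views or Celery tasks like the ones sketched under the MLOps Engineer posting above, keeping model invocation behind the Django service layer.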