
1847 Mlflow Jobs - Page 14


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: AI Developer

Are you passionate about building intelligent systems that make a real-world impact? Do you enjoy working in a fast-paced and dynamic start-up environment? If so, we are looking for a talented AI Developer to join our team! We are a data and AI consultancy start-up with a global client base, headquartered in London, UK, and we are looking for someone to join us full time on-site in our vibrant office in Gurugram.

About Uptitude
Uptitude is a forward-thinking consultancy that specialises in providing exceptional AI, data, and business intelligence solutions to clients worldwide. Our team is passionate about delivering data-driven transformation and intelligent automation, enabling our clients to make smarter decisions and achieve remarkable results. We embrace a vibrant and inclusive culture where innovation, excellence, and collaboration thrive.

As an AI Developer at Uptitude, you will be responsible for designing, developing, and deploying AI models and solutions across a wide range of use cases. You will collaborate closely with data engineers, analysts, and business teams to ensure models are well-integrated, explainable, and scalable. We are looking for a candidate who is not only technically skilled but also creative, curious, and excited about pushing the boundaries of AI in real-world business environments.

Requirements
- 1–3 years of hands-on experience in developing AI/ML models in production or research settings.
- Proficiency in Python and libraries such as scikit-learn, Pandas, TensorFlow, PyTorch.
- Experience working with structured and unstructured data.
- Familiarity with model lifecycle management, MLOps, and version control (MLflow, DVC).
- Ability to communicate technical ideas to cross-functional teams.
- Experience with data cleaning, EDA, and feature selection.
- Creativity in applying AI to real-world business problems.
- Awareness of ISO 27001, ISO 42001, and data governance best practices is a plus.
- Role based in Gurugram, India. Head office in London, UK.

Company Values
At Uptitude, we embrace a set of core values that guide our work and define our culture:
- Be Awesome: Strive for excellence and keep levelling up.
- Step Up: Take ownership and go beyond the expected.
- Make a Difference: Innovate with impact.
- Have Fun: Celebrate wins and build meaningful connections.

Benefits
Uptitude values its employees and offers a competitive benefits package, including:
- Competitive salary based on experience and qualifications.
- Private health insurance.
- Offsite trips for team building and knowledge sharing.
- Quarterly outings to celebrate milestones.
- Corporate English lessons with a UK-based instructor.

If you’re ready to develop cutting-edge AI solutions and work on meaningful challenges with a global impact, we’d love to hear from you.
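As a rough illustration of the MLflow-based model lifecycle management this listing asks for, the sketch below logs hyperparameters, a metric, and a model artifact for one training run. The experiment name, parameters, and the scikit-learn classifier are hypothetical stand-ins, not details taken from the posting.

```python
# Minimal, illustrative MLflow tracking sketch (experiment and parameter names are hypothetical).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")          # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                     # record hyperparameters
    mlflow.log_metric("accuracy", acc)            # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")      # version the fitted model artifact
```

Each run then appears in the MLflow UI with its parameters, metric, and stored model, which is the basic workflow behind the "model lifecycle management" requirement.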

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: AI Architect
Location: Hyderabad
Experience: 15+ Years (with 3–5 years in AI/ML-focused roles)
Employment Type: Full-Time

About the Role: We are looking for a visionary AI Architect to lead the design and deployment of advanced AI solutions. You’ll work closely with data scientists, engineers, and product teams to translate business needs into scalable, intelligent systems.

Key Responsibilities:
- Design end-to-end AI architectures including data pipelines, ML/DL models, APIs, and deployment frameworks
- Evaluate AI technologies, frameworks, and platforms to meet business and technical needs
- Collaborate with cross-functional teams to gather requirements and translate them into AI use cases
- Build scalable AI systems with a focus on performance, robustness, and cost-efficiency
- Implement MLOps pipelines to streamline the model lifecycle
- Ensure AI governance, data privacy, fairness, and explainability across deployments
- Mentor and guide engineering and data science teams

Required Skills:
- Expertise in Machine Learning / Deep Learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn)
- Strong proficiency in Python and one or more of: R, Java, Scala
- Deep understanding of cloud platforms (AWS, Azure, GCP) and tools like SageMaker, Bedrock, Vertex AI, Azure ML
- Familiarity with MLOps tools: MLflow, Kubeflow, Airflow, Docker, Kubernetes
- Solid understanding of data architecture, APIs, and microservices
- Knowledge of NLP, computer vision, and generative AI is a plus
- Excellent communication and leadership skills

Preferred Qualifications:
- Bachelor’s or Master’s in Computer Science, Data Science, AI, or a related field
- AI/ML certifications (AWS, Azure, Coursera, etc.) are a bonus
- Experience working with LLMs, RAG architectures, or GenAI platforms is a plus

Posted 1 week ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients.
3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations.
4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise.
6. Ensuring Quality: Ensure that systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project Management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks and DevOps practices.
9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge or experience working with self-hosted or managed LLMs.
3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge or experience in CI/CD, IaC and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7–12 years of experience
Salary 13-17 Lpa
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Experience: total work: 1 year (Preferred)
Work Location: In person
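Point 11 above mentions deploying models with Flask or FastAPI. As a hedged sketch only, the snippet below shows the common FastAPI pattern of wrapping a pre-trained scikit-learn model behind a REST endpoint; the model file name and feature layout are assumptions for illustration.

```python
# Illustrative FastAPI model-serving endpoint (model path and feature schema are hypothetical).
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")        # assumed pre-trained scikit-learn model on disk

class Features(BaseModel):
    values: list[float]                 # flat feature vector for a single prediction

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    return {"prediction": float(model.predict(X)[0])}
```

Such a service would typically be run with an ASGI server such as `uvicorn`, then containerised and fronted by the API gateway and rate-limiting layers described in points 14–15.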

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 11 Lacs

India

On-site

Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment. Key Responsibilities Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction. Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams. Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations. System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads. Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution. Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards. Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value. Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems. Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions. Proficiency in using AI-copilots for Coding: Adaptation to emerging tools and knowledge of prompt engineering to effectively use AI for day-to-day coding needs. Technical Skills Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras. Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints. Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines. Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL). Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management. Development and optimization of data storage solutions, using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached. Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads. 
Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards. Exposure to big data tools like Apache Spark, and Kafka for processing datasets. Familiarity with ETL workflows and cloud data warehouse, using tools such as Airflow, dbt, BigQuery, or Snowflake. Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure). Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects. Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker. Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts. Personal Attributes Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework with regular communication with stakeholders Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams. Responsibility: Owns code quality and reliability, especially in production systems. Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning. Key: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus. Package: 6-11 LPA Job Types: Full-time, Permanent Pay: ₹600,000.00 - ₹1,100,000.00 per year Work Location: In person
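Since the posting above calls out caching tools like Redis and Memcached for query optimisation, here is a small, hedged sketch of the cache-aside pattern with the `redis` Python client; the key scheme, TTL, and the placeholder "expensive query" are hypothetical.

```python
# Hypothetical cache-aside sketch using Redis (connection details and keys are assumptions).
import json
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)   # assumed local Redis instance

def get_product_stats(product_id: str) -> dict:
    key = f"product_stats:{product_id}"        # hypothetical cache key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: skip the expensive lookup

    stats = {"product_id": product_id, "views": 123}   # placeholder for a real DB/API query
    cache.setex(key, 300, json.dumps(stats))   # cache the result for 5 minutes
    return stats
```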

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter

Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary!

Technical Lead - AI/ML Expert
Country/Region: IN
Location: Hyderabad, TG, IN, 500032
Req ID 81032 | Hyderabad, India, ZF India Pvt. Ltd.

Job Description

About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems.

What you can look forward to as an AI/ML expert:
- Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines)
- Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines
- Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI
- Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow)
- Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints)
- Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments
- Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs)
- Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models

Your profile as a Technical Lead:
- Bachelor’s or Master’s in Computer Science/Engineering, Ph.D. in Data Science, or a related field
- Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery
- Track record of production deployments using Python, Azure ML, Databricks, and GenAI frameworks
- Hands-on data engineering experience: designing and operating robust pipelines for both structured and unstructured data (text, embeddings)
- Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties
- Experience working in Agile/Scrum environments and collaborating with cross-functional teams.

Why you should choose ZF in India
- Innovative Environment: ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth.
- Diverse and Inclusive Culture: ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support.
- Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs and career development opportunities.
- Global Presence: As part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide.
- Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint.
- Employee Well-being: ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance.

Be part of our ZF team as Technical Lead - AI/ML Expert and apply now!
Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. Find out how we work at ZF: Job Segment: R&D Engineer, Cloud, R&D, Computer Science, Engineer, Engineering, Technology, Research

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter

Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary!

Specialist - AI/ML Expert
Country/Region: IN
Location: Hyderabad, TG, IN, 500032
Req ID 81033 | Hyderabad, India, ZF India Pvt. Ltd.

Job Description

About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems.

What you can look forward to as an AI/ML expert:
- Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines)
- Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines
- Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI
- Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow)
- Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints)
- Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments
- Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs)
- Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models

Your profile as a Specialist:
- Bachelor’s or Master’s in Computer Science/Engineering, Ph.D. in Data Science, or a related field
- Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery
- Track record of production deployments using Python, Azure ML, Databricks, and GenAI frameworks
- Hands-on data engineering experience: designing and operating robust pipelines for both structured and unstructured data (text, embeddings)
- Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties
- Experience working in Agile/Scrum environments and collaborating with cross-functional teams.

Why you should choose ZF in India
- Innovative Environment: ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth.
- Diverse and Inclusive Culture: ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support.
- Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs and career development opportunities.
- Global Presence: As part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide.
- Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint.
- Employee Well-being: ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance.

Be part of our ZF team as Specialist - AI/ML Expert and apply now!
Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. Find out how we work at ZF: Job Segment: R&D Engineer, R&D, Cloud, Computer Science, Engineer, Engineering, Research, Technology

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Designation: AI/ML Developer
Location: Ahmedabad
Department: Technical

Job Summary
We are looking for an enthusiastic AI/ML Developer with 2 to 3 years of relevant experience in machine learning and artificial intelligence. The candidate should be well-versed in designing and developing intelligent systems and have a solid grasp of data handling and model deployment.

Key Responsibilities
- Develop and implement machine learning models tailored to business needs.
- Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models, VAEs) using platforms like Hugging Face, LangChain, or OpenAI.
- Conduct data collection, cleaning, and pre-processing for model readiness.
- Train, test, and optimize models to improve accuracy and performance.
- Work closely with cross-functional teams to deploy AI models in production environments.
- Perform data exploration, visualization, and feature selection.
- Stay up to date with the latest trends in AI/ML and experiment with new approaches.
- Design and implement Multi-Agent Systems (MAS) for distributed intelligence, autonomous collaboration, or decision-making.
- Integrate and orchestrate agentic workflows using tools like Agno, CrewAI, or LangGraph.
- Ensure scalability and efficiency of deployed solutions.
- Monitor model performance and perform necessary updates or retraining.

Requirements
- Strong programming skills in Python and experience with libraries like TensorFlow, PyTorch, Scikit-learn, and Keras.
- Experience working with vector databases (Pinecone, Weaviate, Chroma) for RAG systems.
- Good understanding of machine learning concepts, including classification, regression, clustering, and deep learning.
- Knowledge of knowledge graphs, semantic search, or symbolic reasoning.
- Proficiency in working with tools such as Pandas, NumPy, and data visualization libraries.
- Hands-on experience deploying models using REST APIs with frameworks like Flask or FastAPI.
- Familiarity with cloud platforms (AWS, Google Cloud, or Azure) for ML deployment.
- Knowledge of version control systems like Git.
- Experience with Natural Language Processing (NLP), computer vision, or predictive analytics.
- Exposure to MLOps tools and workflows (e.g., MLflow, Kubeflow, Airflow).
- Basic familiarity with big data frameworks like Apache Spark or Hadoop.
- Understanding of data pipelines and ETL processes.

What We Offer
- Opportunity to work on live projects and client interactions.
- A vibrant and learning-driven work culture.
- 5 days a week and flexible work timings.
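The RAG systems mentioned above hinge on a retrieval step: embed documents, then return the ones most similar to a query. As a simplified, hedged stand-in that avoids a vector database dependency, the sketch below uses TF-IDF vectors and cosine similarity; the documents and query are invented for illustration, and a production setup would instead use dense embeddings stored in Pinecone, Weaviate, or Chroma.

```python
# Simplified stand-in for the retrieval step of a RAG system: TF-IDF instead of a vector DB.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "MLflow tracks experiments, parameters, and model artifacts.",
    "FastAPI serves machine learning models over REST endpoints.",
    "Chroma and Pinecone store dense embeddings for similarity search.",
]  # hypothetical document store

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]          # indices of the k most similar documents
    return [documents[i] for i in top]

print(retrieve("how do I track model experiments?"))
```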

Posted 1 week ago

Apply

3.0 - 6.0 years

4 Lacs

India

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences, empowering retailers, partners and employees to accelerate commerce in a sustainable manner.

Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer with a specialization in Computer Vision and Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices.

Key Responsibilities:

AI/ML Development & Computer Vision
- Design, train, and evaluate models for: face detection and recognition; object/person detection and tracking; intrusion and anomaly detection; human activity or pose recognition/estimation.
- Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster-RCNN, and InsightFace.
- Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines.

Surveillance System Integration
- Integrate computer vision models with live CCTV/RTSP streams for real-time analytics.
- Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination.
- Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU).

Model Optimization & Deployment
- Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference.
- Build and deploy APIs using FastAPI, Flask, or TorchServe.
- Package applications using Docker and orchestrate deployments with Kubernetes.
- Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins).
- Monitor model performance in production using Prometheus, Grafana, and log management tools.
- Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC.
- As an AI/ML Engineer, you should be well-versed in AI agent development and have fine-tuning experience.

Collaboration & Documentation
- Work closely with backend developers, hardware engineers, and DevOps teams.
- Maintain clear documentation of ML pipelines, training results, and deployment practices.
- Stay current with emerging research and innovations in AI vision and MLOps.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3–6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning.
- Hands-on experience with: deep learning frameworks (PyTorch, TensorFlow); image/video processing (OpenCV, NumPy); detection and tracking frameworks (YOLOv8, DeepSORT, RetinaNet).
- Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks).
- Proven experience with real-time model deployment on cloud or edge environments.
- Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools.

Preferred Qualifications:
- Experience with multi-camera synchronization and NVR/DVR systems.
- Familiarity with ONVIF protocols and camera SDKs.
- Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU.
- Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib).
- Understanding of security protocols and compliance in surveillance systems.

Tools & Technologies:
- Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn
- Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
- Model Optimization: ONNX, TensorRT, OpenVINO, Pruning, Quantization
- Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
- CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
- Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
- Monitoring: Prometheus, Grafana, ELK Stack, Sentry
- Annotation Tools: LabelImg, CVAT, Supervisely

Benefits:
- Competitive compensation and performance-linked incentives.
- Work on cutting-edge surveillance and AI projects.
- Friendly and innovative work culture.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Monday to Friday, Morning shift, Night shift, Rotational shift, US shift, Weekend availability
Supplemental Pay: Performance bonus, Quarterly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
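The ONNX conversion step this posting describes usually looks like the hedged sketch below: export a PyTorch model to ONNX, then run it with ONNX Runtime. A classification backbone stands in for a real detector, and the file name and input shape are assumptions, not details from the listing.

```python
# Hedged sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime.
import numpy as np
import onnxruntime as ort
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()   # placeholder; a detector like YOLOv8 would differ
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(model, dummy, "resnet18.onnx",
                  input_names=["input"], output_names=["logits"])

session = ort.InferenceSession("resnet18.onnx")
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stands in for a preprocessed video frame
logits = session.run(None, {"input": frame})[0]
print(logits.shape)
```

Further optimisation with TensorRT or OpenVINO would start from this exported ONNX graph before deployment to edge devices such as Jetson boards.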

Posted 1 week ago

Apply

3.0 years

4 Lacs

Vadodara

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences, empowering retailers, partners and employees to accelerate commerce in a sustainable manner.

Job Summary:
We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs), who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms like Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic’s Claude.

Key Responsibilities:

LLM Customization & Fine-Tuning
- Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data.
- Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning.
- Evaluate and optimize the performance, factual accuracy, and tone of LLM responses.

AI Assistant Development
- Build and integrate AI assistants/chatbots for internal tools or customer-facing applications.
- Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or the OpenAI Assistants API.
- Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services.
- Must have experience fine-tuning and maintaining microservices or LLM-driven databases.

Cloud Integration
- Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or the OpenAI API.
- Optimize API usage, performance, latency, and cost.
- Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring.

Evaluation, Guardrails & Compliance
- Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs.
- Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity).
- Ensure compliance with privacy, IP, and data governance requirements.

Collaboration & Documentation
- Work closely with product, engineering, and data teams to scope and build AI-based solutions.
- Document custom model behaviors, API usage patterns, prompts, and datasets.
- Stay up to date with the latest LLM research and tooling advancements.

Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related fields.
- 3–6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems.
- Strong Python programming skills and experience with Hugging Face Transformers, LangChain, LlamaIndex.
- Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc.
- Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, Adapters.
- Familiarity with vector databases and document embedding pipelines.
- Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services.

Preferred Skills:
- Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc.
- Knowledge of AutoGPT, CrewAI, agentic workflows, or multi-agent LLM orchestration.
- Experience with multi-turn conversation modeling and dialogue state tracking.
- Understanding of model quantization, distillation, or fine-tuning in low-resource environments.
- Familiarity with ethical AI practices, hallucination mitigation, and user alignment.

Tools & Technologies:
- LLM Frameworks: Hugging Face Transformers, PEFT, LangChain, LlamaIndex, Haystack
- LLMs & APIs: OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI
- Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
- Serving & DevOps: Docker, FastAPI, Flask, GitHub Actions, Kubernetes
- Deployment Platforms: AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit
- Monitoring: Prometheus, MLflow, Langfuse, Weights & Biases

Benefits:
- Competitive salary with performance incentives.
- Work with cutting-edge GenAI and LLM technologies.
- Build real-world products using state-of-the-art AI research.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Monday to Friday, Morning shift, Night shift, Rotational shift, US shift, Weekend availability
Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
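To make the LoRA/PEFT requirement concrete, here is a hedged sketch of attaching low-rank adapters to a causal language model with Hugging Face `peft`. The base checkpoint name and the target attention modules are assumptions (they vary by architecture), and the actual training loop and dataset handling are omitted.

```python
# Illustrative LoRA setup with Hugging Face PEFT; model name and target modules are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"      # assumed base checkpoint; needs substantial GPU memory
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; names differ per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # only the low-rank adapter weights are trainable
```

QLoRA follows the same adapter idea but loads the base model in 4-bit precision first, which is how fine-tuning stays feasible in the low-resource environments mentioned under Preferred Skills.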

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 10 Lacs

Calcutta

Remote

Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL)
Location: Kolkata (Must)
Employment Type: Full-Time
Experience Level: 1–3 Years

About Us: We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration.

Key Responsibilities:
- Build and maintain scalable machine learning pipelines using Python.
- Deploy and monitor models using MLflow and MLOps stacks.
- Design and implement data workflows using Python libraries such as PySpark.
- Leverage standard data science libraries (scikit-learn, pandas, numpy, matplotlib, etc.) for model development and evaluation.
- Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications.
- Collaborate closely with cross-functional teams to meet business objectives.
- Handle multiple ML projects simultaneously with robust branching expertise.

Must-Have Qualifications:
- Expertise in Python for data science and backend development.
- Solid experience with PostgreSQL and MSSQL databases.
- Hands-on experience with standard data science packages such as Scikit-Learn, Pandas, NumPy, Matplotlib.
- Experience working with Databricks, MLflow, and Azure.
- Strong understanding of MLOps frameworks and deployment automation.
- Prior exposure to FastAPI and GenAI tools like LangChain or Azure OpenAI is a big plus.

Preferred Qualifications:
- Experience in the Finance, Legal or Regulatory domain.
- Working knowledge of clustering algorithms and forecasting techniques.
- Previous experience in developing reusable AI frameworks or productized ML solutions.

Education: B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field.

Why Join Us?
- Work on cutting-edge ML and GenAI projects.
- Be part of a collaborative and forward-thinking team.
- Opportunity for rapid growth and technical leadership.

Job Type: Full-time
Pay: ₹344,590.33 - ₹1,050,111.38 per year
Benefits: Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home
Education: Bachelor's (Required)
Experience: Python: 3 years (Required); ML: 2 years (Required)
Location: Kolkata, West Bengal (Required)
Work Location: In person
Application Deadline: 02/08/2025
Expected Start Date: 04/08/2025
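As a minimal sketch of the PySpark-style data workflow this role mentions, the snippet below reads a raw file, aggregates it by account and day, and writes a summary for downstream modelling. The input path, column names, and output location are hypothetical.

```python
# Minimal PySpark aggregation sketch (file paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo-etl").getOrCreate()

df = spark.read.csv("transactions.csv", header=True, inferSchema=True)   # assumed raw input
daily = (
    df.groupBy("account_id", F.to_date("created_at").alias("day"))
      .agg(F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"))
)
daily.write.mode("overwrite").parquet("daily_summary.parquet")           # hand-off to model training
```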

Posted 1 week ago

Apply

8.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Software Development (Tech Lead)
Location: Bangalore, India
Experience: 8 years / WFO

Company Overview:
Omnicom Global Solutions (OGS) is an integral part of Omnicom Group, a leading global marketing and corporate communications company. Omnicom's branded networks and numerous specialty firms provide advertising and communications services to over 5,000 clients in more than 70 countries. Let us build this together!

Flywheel operates a leading cloud-based digital commerce platform across the world's major digital marketplaces. It enables our clients to access near real-time performance measurement and improve sales, share, and profit. Through our expertise, scale, global reach, and highly sophisticated AI and data-powered solutions, we provide differentiated value for both the world's largest consumer product companies and fast-growing brands.

Job Description:
We are seeking an experienced and dynamic Software Development Lead to drive end-to-end development, architecture, and team leadership in a fast-paced ecommerce environment. The ideal candidate combines deep technical expertise with strong people leadership and a strategic mindset. You’ll lead a cross-functional team building scalable, performant, and reliable systems that impact millions of users globally.

Roles and Responsibilities:

Technical Leadership & Architecture
- Make high-impact architectural decisions and lead the design of large-scale systems.
- Guide the team in leveraging AWS/cloud infrastructure and scalable platform components.
- Lead implementation of performance, scalability, and security non-functional requirements (NFRs).
- Design and implement engineering metrics that demonstrate improvement in team velocity and delivery.
- Oversee AI/ML system integrations and support production deployment of machine learning models.

Engineering Execution
- Own release quality and delivery timelines; unblock teams and anticipate risks.
- Balance technical debt with roadmap delivery and foster a culture of ownership and excellence.
- Support the CI/CD framework and define operational readiness including alerts, monitoring, and rollback plans.
- Collaborate with Data Science/ML teams to deploy, monitor, and scale intelligent features (e.g., personalization, predictions, anomaly detection).

People Leadership & Mentorship
- Mentor and grow a high-performing engineering team through feedback, coaching, and hands-on guidance.
- Drive onboarding and succession planning in alignment with long-term team strategy.
- Evaluate performance and create career growth plans for direct reports.

Cross-Functional Collaboration
- Represent engineering in product reviews and planning forums with PMs, QA, and Design.
- Communicate technical vision, delivery risks, and trade-offs with business and technical stakeholders.
- Work with Product and Business Leaders to align team output with organizational goals.

Project Management & Delivery
- Lead planning, estimation, and execution of complex product features or platform initiatives.
- Manage competing priorities, refine team capacity, and ensure timely and reliable feature rollout.
- Provide visibility into team performance through clear reporting and delivery metrics.

Culture & Continuous Improvement
- Lead by example in fostering inclusion, feedback, and a growth-oriented team culture.
- Promote a DevOps mindset: reliability, ownership, automation, and self-service.
- Identify AI/ML opportunities within the platform and work with Product to operationalize them.

This may be the right role for you if you have:
- 8+ years of experience in software engineering, with at least 3 years in technical leadership or management roles.
- Strong backend expertise in Java or Python; hands-on experience with Spring Boot, Django, or Flask.
- Deep understanding of cloud architectures (AWS, GCP) and system design for scale.
- Strong knowledge of frontend frameworks (React/AngularJS) and building web-based SaaS products.
- Proven ability to guide large systems design, service decomposition, and integration strategies.
- Experience in applying ML/AI algorithms in production settings (recommendation engines, ranking models, NLP).
- Familiarity with ML lifecycle tooling such as MLflow, Vertex AI, or SageMaker is a plus.
- Proficiency in CI/CD practices, infrastructure-as-code, Git workflows, and monitoring tools.
- Comfortable with Agile development practices and project management tools like JIRA.
- Excellent analytical and problem-solving skills; capable of navigating ambiguity.
- Proven leadership in mentoring and team culture development.

Desired Skills
- Experience in ecommerce, digital advertising, or performance marketing domains.
- Exposure to data engineering pipelines or real-time data processing (Kafka, Spark, Airflow).
- Agile or Scrum certification.
- Demonstrated success in delivering high-scale, distributed software platforms.

What Will Set You Apart:
- You’re a system thinker who can break down complex challenges and design for resilience.
- You proactively support cross-team success and remove friction for others.
- You build high-performing teams through mentoring, clear expectations, and shared ownership.
- You champion technical quality and foster a team that thrives on accountability and continuous learning.

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Machine Learning Engineer Location: Bengaluru (Hybrid) Experience: 2-5 years About Wissen Infotech Wissen Infotech has been a trusted leader in the IT Services industry for over 25 years, delivering high-quality solutions to a global clientele. Within Wissen, the AI Center of Excellence (AI-CoE) was conceptualized to drive cutting-edge research and innovation, enabling us to build our own products and intellectual property. This team focuses on solving complex business challenges using AI while setting new benchmarks for reliable and scalable AI solutions. Position Overview We are seeking a passionate Machine Learning Engineer to join our AI-CoE team. This is a unique opportunity for individuals who are software engineers at heart and are driven to design, develop, and deploy robust AI systems. You will work on innovative projects, including building agentic systems, leveraging state-of-the-art technologies to create scalable and reliable distributed systems. Key Responsibilities Design and develop scalable machine learning models and deploy them in production environments. Build and implement agentic systems that can autonomously analyze tasks, process large volumes of unstructured data, and provide actionable insights. Collaborate with data scientists, software engineers, and domain experts to integrate AI capabilities into cutting-edge products and solutions. Develop deterministic and reliable AI systems to address real-world challenges. Create and optimize scalable, distributed ML pipelines. Perform data preprocessing, feature engineering, and model evaluation to ensure high performance and reliability. Stay abreast of advancements in AI technologies and incorporate them into business solutions. Participate in code reviews, contribute to system architecture discussions, and continuously enhance project workflows. Required Skills and Qualifications Software Engineering Fundamentals: Strong foundation in algorithms, data structures, and scalable system design. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields with a solid academic track record. Experience: 2-5 years of hands-on experience in building AI systems or machine learning applications. Agentic Systems: Proven experience in developing systems that utilize AI agents for automating complex workflows, analyzing unstructured data, and generating actionable outcomes. Programming: Proficiency in programming languages such as Python, Java, or Scala. AI Expertise: Experience with machine learning frameworks like TensorFlow, PyTorch, or Hugging Face libraries (e.g., for working with transformer-based models and LLMs). MLOps Knowledge (preferred): Familiarity with tools like MLflow, Kubeflow, Airflow, Docker, or Kubernetes. Cloud Platforms: Hands-on experience with AWS, Azure, or Google Cloud for deploying machine learning models. Big Data: Experience with data processing tools and platforms such as Apache Spark or Hadoop. Problem-Solving: Strong analytical and problem-solving skills, with the ability to create robust solutions for complex challenges. Collaboration and Communication: Excellent communication skills to articulate technical ideas effectively to both technical and non-technical stakeholders. What We Offer An opportunity to work with cutting-edge AI technologies and solve challenging business problems. A collaborative, innovative, and inclusive work culture. Continuous learning opportunities and access to advanced research. Competitive salary and comprehensive benefits.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description Quick Heal Technologies Limited is a leading provider of IT Security and Data Protection Solutions with a strong presence in India and a growing global footprint. Founded in 1995, we cater to B2B, B2G, and B2C segments, offering solutions across endpoints, network, data, and mobility. Our state-of-the-art R&D center and deep threat intelligence enable us to deliver top-tier protection against advanced cyber threats. Known for our renowned brands 'Quick Heal' and 'Seqrite', we are committed to our employees' development, and societal progress through cybersecurity education and awareness initiatives. Quick Heal is the only IT Security product company listed on both BSE and NSE. Role Description We are seeking a Data Science Manager to lead a high-performing team of data scientists and ML engineers focused on building scalable, intelligent cybersecurity products. You will work at the intersection of data science, threat detection, and real-time analytics to identify cyber threats, automate detection, and enhance risk modelling. Responsibilities Lead and mentor a team of data scientists, analysts, and machine learning engineers. Define and execute data science strategies aligned with cybersecurity use cases (e.g., anomaly detection, threat classification, behavioral analytics). Collaborate with product, threat research, and engineering teams to build end-to-end ML pipelines. Oversee development of models for intrusion detection, malware classification, phishing detection, and insider threat analysis. Manage project roadmaps, deliverables, and performance metrics (precision, recall, F1 score, etc.). Establish MLOps best practices and ensure robust model deployment, versioning, and monitoring. Drive exploratory data analysis on large-scale security datasets (e.g., endpoint logs, network flows, SIEM events). Stay current on adversarial ML, model robustness, and explainable AI in security contexts. Required Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or a related field. Ph.D. is a plus. 7+ years of experience in data science or ML roles, with at least 2+ years in a leadership role. Strong hands-on experience with Python, SQL, and ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). Experience working with security datasets: EDR logs, threat intel feeds, SIEM events, etc. Familiarity with cybersecurity frameworks (MITRE ATT&CK, NIST, etc.). Deep understanding of statistical modelling, classification, clustering, and time-series forecasting. Proven experience managing cross-functional data projects from conception to production. Preferred Skills Experience with anomaly detection, graph-based modelling, or NLP applied to security logs. Understanding of data privacy, encryption, and secure data handling. Exposure to cloud security (AWS, Azure, GCP) and tools like Splunk, Elastic, etc. Experience with MLOps tools like MLflow, Kubeflow, or SageMaker.
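The anomaly-detection work this role describes often starts from an unsupervised baseline. Below is a hedged sketch using scikit-learn's IsolationForest on synthetic features meant to stand in for aggregated endpoint-log statistics; the feature construction and contamination rate are illustrative assumptions, not Quick Heal's actual pipeline.

```python
# Hedged anomaly-detection sketch on synthetic "log" features (not a real security dataset).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # e.g., counts/rates derived from endpoint logs
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 4))   # injected anomalous behaviour
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)                               # -1 = anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()))
```

In practice such a detector would be evaluated against the precision/recall/F1 metrics the posting lists before feeding alerts into threat-triage workflows.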

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI/ML Engineer – Core Algorithm and Model Expert

1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.

2. Key Responsibilities:
2.1. Design and train models using Naval-specific data and deliver them in the form of end products.
2.2. Fine-tune open-source LLMs (e.g. LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential.
2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech / M.Tech in Computer Science, AI/ML, Data Science, Statistics or a related field with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch Professional Certification.

4. Core Skills & Tools:
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.

5. Core AI/ML Competencies:
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.

6. Programming & Frameworks:
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.

7. Model Development & Optimization:
7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases, etc.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.

8. Generative AI & NLP Applications:
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.

9. Experience Requirements
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contribution to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.

10. Product Development Skills
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.

11. Cross-Compatibility Requirements:
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
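Quantization (items 2.4 and 7.2 above) is one of the listed compression techniques for resource-constrained deployment. As a hedged illustration only, the sketch below applies PyTorch dynamic quantization to a toy network; a real naval model and its layer set would differ.

```python
# Illustrative PyTorch dynamic quantization of a toy model's Linear layers.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8        # quantize Linear weights to int8
)

x = torch.randn(1, 512)
print(quantized(x).shape)                         # same inference path, smaller weights
```

Static quantization, pruning, or distillation would follow a similar "compress, then re-validate accuracy and latency" loop before export to ONNX or TorchScript, as described in item 2.8.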

Posted 1 week ago

Apply

0 years

0 Lacs

Tamil Nadu, India

On-site

We are looking for a seasoned Senior MLOps Engineer to join our Data Science team. The ideal candidate will have a strong background in Python development, machine learning operations, and cloud technologies. You will be responsible for operationalizing ML/DL models and managing the end-to-end machine learning lifecycle from model development to deployment and monitoring while ensuring high-quality and scalable solutions. Mandatory Skills: Python Programming: Expert in OOPs concepts and testing frameworks (e.g., PyTest) Strong experience with ML/DL libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Prophet, NumPy, Pandas) MLOps & DevOps: Proven experience in executing data science projects with MLOps implementation CI/CD pipeline design and implementation Docker (Mandatory) Experience with ML lifecycle tracking tools such as MLflow, Weights & Biases (W&B), or cloud-based ML monitoring tools Experience in version control (Git) and infrastructure-as-code (Terraform or CloudFormation) Familiarity with code linting, test coverage, and quality tools such as SonarQube Cloud & Orchestration: Hands-on experience with AWS SageMaker or GCP Vertex AI Proficiency with orchestration tools like Apache Airflow or Astronomer Strong understanding of cloud technologies (AWS or GCP) Software Engineering: Experience in building backend APIs using Flask, FastAPI, or Django Familiarity with distributed systems for model training and inference Experience working with Feature Stores Deep understanding of the ML/DL lifecycle from ideation, experimentation, deployment to model sunsetting Understanding of software development best practices, including automated testing and CI/CD integration Agile Practices: Proficient in working within a Scrum/Agile environment using tools like JIRA Cross-Functional Collaboration: Ability to collaborate effectively with product managers, domain experts, and business stakeholders to align ML initiatives with business goals Preferred Skills: Experience building ML solutions for: (Any One) Sales Forecasting Marketing Mix Modelling Demand Forecasting Certified in machine learning or cloud platforms (e.g., AWS or GCP) Strong communication and documentation skills
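For the Airflow orchestration skill listed above, here is a minimal, hedged sketch of a two-task retraining DAG. The DAG id, schedule, and task bodies are hypothetical, and the `schedule_interval` argument is named `schedule` in newer Airflow releases.

```python
# Hedged Airflow sketch of a weekly retraining DAG (names and task bodies are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull training data")       # placeholder for a real extraction step

def retrain_model():
    print("fit and log model")        # placeholder for training plus MLflow logging

with DAG(
    dag_id="weekly_model_retraining",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    retrain = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
    extract >> retrain                # retrain only after extraction succeeds
```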

Posted 1 week ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Role Title: AI Platform Engineer
Location: Bangalore (in person in office when required)
Part of the GenAI COE Team
Key Responsibilities
Platform Development and Evangelism: Build scalable, customer-facing AI platforms. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM (see the sketch after this posting).
Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.
Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLMOps: MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data warehouse: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus
Cloud Certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, JavaScript
Email: diksha.singh@aptita.com
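Because the role names vLLM as a serving framework for large language models, here is a minimal offline-inference sketch; the model ID and sampling settings are assumptions, and a production deployment would more likely run vLLM's OpenAI-compatible API server behind the platform's inference API.

```python
from vllm import LLM, SamplingParams

# Hypothetical Hugging Face model ID; any causal LM that fits the available GPUs works.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=1)
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = ["Explain A/B testing of ML models in two sentences."]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```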

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities: Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have: Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. 
Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
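For the observability standards mentioned in this posting (OpenTelemetry-based tracing feeding Grafana/Graylog), a minimal Python sketch might look like the following; the span names are hypothetical, and the console exporter only keeps the example self-contained, since a real setup would export to an OTLP collector.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Console exporter keeps the sketch self-contained; production would use an OTLP exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("deployment-checks")  # hypothetical instrumentation name

with tracer.start_as_current_span("provision-environment"):
    with tracer.start_as_current_span("run-smoke-tests"):
        pass  # placeholder for the actual deployment validation step
```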

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

Job Title: AI/ML Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
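To illustrate the pipeline-orchestration requirement (Airflow is one of the named tools), below is a minimal weekly retraining DAG; the DAG ID, schedule, and training stub are assumptions, not a prescribed design.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Placeholder for the real job: pull data, fit, evaluate, register the model.
    print("retraining model...")

with DAG(
    dag_id="weekly_model_retraining",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",                # Airflow 2.4+ style schedule argument
    catchup=False,
    tags=["mlops"],
) as dag:
    retrain = PythonOperator(task_id="retrain", python_callable=retrain_model)
```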

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Location: Ahmedabad, India
Experience: 2-3 years
Job Type: Full Time
Job Description
Designation: AI/ML Developer
Location: Ahmedabad
Department: Technical
Job Summary: We are looking for an enthusiastic AI/ML Developer with 2 to 3 years of relevant experience in machine learning and artificial intelligence. The candidate should be well-versed in designing and developing intelligent systems and have a solid grasp of data handling and model deployment.
Key Responsibilities:
Develop and implement machine learning models tailored to business needs.
Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models, VAEs) using platforms like Hugging Face, LangChain, or OpenAI.
Conduct data collection, cleaning, and pre-processing for model readiness.
Train, test, and optimize models to improve accuracy and performance.
Work closely with cross-functional teams to deploy AI models in production environments.
Perform data exploration, visualization, and feature selection.
Stay up-to-date with the latest trends in AI/ML and experiment with new approaches.
Design and implement Multi-Agent Systems (MAS) for distributed intelligence, autonomous collaboration, or decision-making.
Integrate and orchestrate agentic workflows using tools like Agno, CrewAI, or LangGraph.
Ensure scalability and efficiency of deployed solutions.
Monitor model performance and perform necessary updates or retraining.
Requirements:
Strong programming skills in Python and experience with libraries like TensorFlow, PyTorch, Scikit-learn, and Keras.
Experience working with vector databases (Pinecone, Weaviate, Chroma) for RAG systems.
Good understanding of machine learning concepts, including classification, regression, clustering, and deep learning.
Knowledge of knowledge graphs, semantic search, or symbolic reasoning.
Proficiency in working with tools such as Pandas, NumPy, and data visualization libraries.
Hands-on experience deploying models using REST APIs with frameworks like Flask or FastAPI (see the sketch after this posting).
Familiarity with cloud platforms (AWS, Google Cloud, or Azure) for ML deployment.
Knowledge of version control systems like Git.
Experience with Natural Language Processing (NLP), computer vision, or predictive analytics.
Exposure to MLOps tools and workflows (e.g., MLflow, Kubeflow, Airflow).
Basic familiarity with big data frameworks like Apache Spark or Hadoop.
Understanding of data pipelines and ETL processes.
What We Offer:
Opportunity to work on live projects and client interactions.
A vibrant and learning-driven work culture.
Five-day work week and flexible work timings.
About Company
Techify is a fast-growing tech company with a talented, passionate, and learning team. Techify's DNA is about solutions and technologies, and we are here to help our customers grow their business. Our vision is to become one of the best product engineering companies in India. We put client relationships first, hence our mission is to build software solutions that help clients transform their business by unleashing hidden potential with technology. Our success mantra is Customer first, Team second, and We are the third. Our main focus is our customers' and partners' success. Our visionary and experienced team turns innovative ideas into efficient products and software, and our well-defined processes ensure on-time delivery to our partners, giving us an edge over our competitors. The most important pillar in achieving our goals is our dedicated team, and to encourage them and keep them motivated, we have set up a culture that rewards self-development and innovation.
Our cutting-edge services include intensive research and analysis to identify the appropriate technology to achieve the best performance at the least possible cost. We take a studied approach towards cost, performance, and feature trade-offs to help companies surmount the challenges of delivering high-quality, timely products and services to the marketplace. We have the ability to take up any product, be it at the stage of defining, designing, verifying, or realizing. Our recognitions include winning the Grand Challenge at the Vibrant Gujarat Summit 2018, receiving the prestigious "Trend Setter" award from the Gujarat Innovation Society, and being featured in the Times Coffee Table Book's "Gujarat the Inspiring Edge" edition. We are also an Amazon Web Services consulting and networking partner.
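Since the requirements above call for deploying models behind REST APIs with Flask or FastAPI, here is a minimal FastAPI sketch; the artifact path and feature schema are hypothetical placeholders.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Model Inference API")
model = joblib.load("model.joblib")  # hypothetical path to a saved scikit-learn model

class Features(BaseModel):
    values: list[float]  # flat feature vector; adapt to the real input schema

@app.post("/predict")
def predict(payload: Features):
    prediction = model.predict([payload.values])[0]
    return {"prediction": float(prediction)}
```

Assuming the file is saved as app.py, it can be served locally with `uvicorn app:app --reload`.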

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Department : Technology / AI Innovation Reports To : AI/ML Lead or Head of Data Science Location : Pune Role Summary We are looking for an experienced AI/ML & Generative AI Developer to join our growing AI innovation team. You will play a critical role in building advanced machine learning models, Generative AI applications, and LLM-powered solutions. This role demands deep technical expertise, creative problem-solving, and a strong understanding of AI workflows and scalable cloud-based deployments. Key Responsibilities Design, develop, and deploy AI/ML models and Generative AI applications for diverse enterprise use cases. Implement, fine-tune, and integrate Large Language Models (LLMs) using frameworks like LangChain, LlamaIndex, and RAG pipelines. Build Agentic AI systems with multi-step reasoning and autonomous decision-making capabilities. Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval. Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to operationalize AI solutions. Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices. Leverage Azure and Databricks for training, serving, and monitoring AI models at scale. Required Qualifications & Skills (Mandatory) 4+ years of hands-on experience in AI/ML development, including Generative AI applications. Expertise in RAG, LLMs, and Agentic AI implementations. Strong knowledge of LangChain, LlamaIndex, or similar LLM orchestration frameworks. Proficient in Python and key ML/DL libraries : TensorFlow, PyTorch, Scikit-learn. Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures. Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise scenarios. Hands-on experience with Azure cloud services and Databricks. Proven experience designing CI/CD pipelines and working with MLOps tools like MLflow, DVC, or Kubeflow. Soft Skills Strong problem-solving and critical thinking ability. Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders. Strong collaboration and teamwork in agile, cross-functional environments. Growth mindset with curiosity to explore and learn emerging technologies. Preferred Qualifications Familiarity with vector databases : FAISS, Pinecone, Weaviate. Experience with AutoGPT, CrewAI, or similar agent frameworks. Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools. Understanding of AI security, responsible AI, and model governance. Key Relationships Internal : Data Scientists, Data Engineers, DevOps Engineers, Product Managers, Solution Architects. External : AI/ML platform vendors, cloud service providers (Microsoft Azure), third-party data providers. Role Dimensions Contribute to AI strategy, architecture, and reusable AI components. Support multiple projects simultaneously in a fast-paced agile environment. Mentor junior engineers and contribute to best practices and standards. Success Measures (KPIs) % reduction in model development time using reusable pipelines. Successful deployment of GenAI/LLM features in production. Accuracy, latency, and relevance improvements in AI search and retrieval. Uptime and scalability of deployed AI models. Integration of responsible AI and compliance practices. 
Competency Framework Alignment:
Technical Excellence in AI/ML/GenAI
Cloud Engineering & DevOps Enablement
Innovation & Continuous Improvement
Business Value Orientation
Agile Execution & Ownership
Cross-functional Collaboration
(ref:hirist.tech)
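As a rough sketch of the retrieval side of the RAG pipelines described above (shown here without LangChain or LlamaIndex for brevity), the snippet below embeds a tiny corpus with sentence-transformers and searches it with FAISS; the embedding model and documents are assumptions.

```python
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "MLflow tracks experiments, parameters, and model artifacts.",
    "Databricks provides managed Spark and ML tooling on Azure.",
    "RAG pipelines retrieve supporting context before calling an LLM.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedding model choice
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product equals cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = encoder.encode(["How do I track ML experiments?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
print([docs[i] for i in ids[0]])  # top-2 passages that would be placed in the LLM prompt
```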

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Overview
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Skills & Qualifications:
Bachelor's or Master's in Computer Science, AI, Machine Learning, or a related field.
4+ years of hands-on experience in AI/ML solution development.
Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT family) using techniques like LoRA, QLoRA, and PEFT (see the sketch after this posting).
Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
Proficient in key AI libraries and frameworks: LLMs & GenAI (Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers); NLP (spaCy, NLTK); Vision (OpenCV, MMDetection, YOLOv5/v8, Detectron2); MLOps (MLflow, FastAPI, Docker, Git).
Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
Strong communication skills and the ability to convert business problems into technical solutions.
Preferred Qualifications:
Experience building multimodal systems (text + image, etc.).
Practical experience with agent frameworks for autonomous or goal-directed AI.
Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Responsibilities:
Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
Optimize models for performance, cost-efficiency, and low latency in production.
Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
Mentor junior AI engineers and contribute to internal AI best practices and documentation.
(ref:hirist.tech)
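The LoRA/QLoRA fine-tuning expertise requested above can be sketched with Hugging Face PEFT as follows; the base model ID and target modules are assumptions and must match the actual architecture being tuned.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-specific assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```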

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Team Lead – DevOps to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. 
Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
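For the PyTest and API-testing skills listed above, a minimal smoke-test sketch is shown below; the base URL and endpoints are hypothetical and would come from the service actually deployed by the pipeline.

```python
# test_smoke.py -- run with `pytest -q`
import pytest
import requests

BASE_URL = "http://localhost:8000"  # hypothetical service under test

@pytest.mark.parametrize("path", ["/health", "/ready"])
def test_service_is_up(path):
    response = requests.get(f"{BASE_URL}{path}", timeout=5)
    assert response.status_code == 200

def test_prediction_contract():
    response = requests.post(f"{BASE_URL}/predict", json={"values": [1.0, 2.0, 3.0]}, timeout=5)
    assert response.status_code == 200
    assert "prediction" in response.json()
```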

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ
About The Role
We are seeking a highly skilled AI/ML Engineer to join our dynamic team to build the next generation of applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.
Responsibilities
Implement and deploy machine learning solutions to solve complex problems and deliver real business value, i.e., revenue, engagement, and customer satisfaction.
Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency.
Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation.
Stay up-to-date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life sciences industry.
Requirements
2-4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
Strong programming skills in Python and Rust.
Experience with Pandas, NumPy, SciPy, and OpenCV for image processing (a minimal OpenCV sketch follows this posting).
Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
Experience with one or more graph DBs: Neo4j, ArangoDB.
Experience with MLOps platforms such as Kubeflow or MLflow.
Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices.
Excellent written and verbal communication skills and interpersonal skills.
Advanced degree in Computer Science, Machine Learning, or a related field.
We’re on a Mission
In 2005, we disrupted the life sciences industry by introducing the world’s first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.
The Team You’ll Join
Our customers’ success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity’s quality of life, and we honor that mission.
We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done. We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward. We’re in it to win it. We’re on a path to becoming the number one intelligent validation platform in the market, and we won’t settle for anything less than being a market leader. How We Work Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration fosters creativity, and a sense of community, and is critical to our future success as a company. ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
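To illustrate the OpenCV image-processing experience mentioned in the requirements, here is a minimal edge-detection sketch; the input path and thresholds are placeholders.

```python
import cv2

image = cv2.imread("sample_scan.png")           # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # OpenCV loads images in BGR order
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # reduce noise before edge detection
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("sample_scan_edges.png", edges)
```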

Posted 1 week ago

Apply

8.0 - 12.0 years

14 - 18 Lacs

Namakkal

Work from Office

We are looking for candidates with 8+ years of experience for this role.
Job Description
A minimum of 8 years of professional experience, with at least 6 years in a data science role.
Strong knowledge of statistical modeling, machine learning, deep learning, and GenAI.
Proficiency in Python and hands-on experience optimizing code for performance.
Experience with data preprocessing, feature engineering, data visualization, and hyperparameter tuning.
Solid understanding of database concepts and experience working with large datasets.
Experience deploying and scaling machine learning models in a production environment.
Familiarity with machine learning operations (MLOps) and related tools.
Good understanding of Generative AI concepts and LLM fine-tuning.
Excellent communication and collaboration skills.
Responsibilities include:
Lead a high-performance team; guide and mentor them on the latest technology landscape, patterns, and design standards, and prepare them to take on new roles and responsibilities.
Provide strategic direction and technical leadership for AI initiatives, guiding the team in designing and implementing state-of-the-art AI solutions.
Lead the design and architecture of complex AI systems, ensuring scalability, reliability, and performance.
Lead the development and deployment of machine learning/deep learning models to address key business challenges.
Apply statistical modeling, data preprocessing, feature engineering, machine learning, and deep learning techniques to build and improve models.
Utilize expertise in at least two of the following areas: computer vision, predictive analytics, natural language processing, time series analysis, recommendation systems.
Design, implement, and optimize data pipelines for model training and deployment.
Experience with model serving frameworks (e.g., TensorFlow Serving, TorchServe, KServe, or similar); a minimal model-export sketch follows this posting.
Design and implement APIs for model serving and integration with other systems.
Collaborate with cross-functional teams to define project requirements, develop solutions, and communicate results.
Mentor junior data scientists, providing guidance on technical skills and project execution.
Stay up-to-date with the latest advancements in data science and machine learning, particularly in generative AI, and evaluate their potential applications.
Communicate complex technical concepts and analytical findings to both technical and non-technical audiences.
Serve as a primary point of contact for client managers and liaise frequently with internal stakeholders to gather data or inputs needed for project work.
Certifications: Bachelor's or Master's degree in a quantitative field such as statistics, mathematics, computer science, or a related area.
Primary Skills:
Python; data science concepts; Pandas, NumPy, Matplotlib
Artificial intelligence; statistical modeling
Machine learning, Natural Language Processing (NLP), deep learning
Model serving frameworks (e.g., TensorFlow Serving, TorchServe)
MLOps (e.g., MLflow, TensorBoard, Kubeflow, etc.)
At least two of: computer vision, predictive analytics, time series analysis, anomaly detection, recommendation systems
Generative AI, RAG, fine-tuning (LoRA, QLoRA)
Proficiency in any of the cloud computing platforms (e.g., AWS, Azure, GCP)
Secondary Skills:
Expertise in designing scalable and efficient model architectures is crucial for developing robust AI solutions.
Ability to assess and forecast the financial requirements of data science projects ensures alignment with budgetary constraints and organizational goals.
Strong communication skills are vital for conveying complex technical concepts to both technical and non-technical stakeholders.
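The model-serving frameworks named above (TensorFlow Serving, TorchServe, KServe) all start from a serialized model artifact; below is a minimal sketch of exporting a PyTorch model to TorchScript, with the toy network and input shape as assumptions.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Hypothetical stand-in model used only for the export example."""

    def __init__(self, in_features: int = 16, classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, 32), nn.ReLU(), nn.Linear(32, classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
example = torch.randn(1, 16)                # example input matching the assumed feature size
scripted = torch.jit.trace(model, example)  # TorchScript artifact that serving tools can package
scripted.save("tiny_classifier.pt")
```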

Posted 1 week ago

Apply

8.0 - 12.0 years

14 - 18 Lacs

Ramanathapuram

Work from Office

We are looking for candidates with 8+ years of experience for this role.
Job Description
A minimum of 8 years of professional experience, with at least 6 years in a data science role.
Strong knowledge of statistical modeling, machine learning, deep learning, and GenAI.
Proficiency in Python and hands-on experience optimizing code for performance.
Experience with data preprocessing, feature engineering, data visualization, and hyperparameter tuning.
Solid understanding of database concepts and experience working with large datasets.
Experience deploying and scaling machine learning models in a production environment.
Familiarity with machine learning operations (MLOps) and related tools.
Good understanding of Generative AI concepts and LLM fine-tuning.
Excellent communication and collaboration skills.
Responsibilities include:
Lead a high-performance team; guide and mentor them on the latest technology landscape, patterns, and design standards, and prepare them to take on new roles and responsibilities.
Provide strategic direction and technical leadership for AI initiatives, guiding the team in designing and implementing state-of-the-art AI solutions.
Lead the design and architecture of complex AI systems, ensuring scalability, reliability, and performance.
Lead the development and deployment of machine learning/deep learning models to address key business challenges.
Apply statistical modeling, data preprocessing, feature engineering, machine learning, and deep learning techniques to build and improve models.
Utilize expertise in at least two of the following areas: computer vision, predictive analytics, natural language processing, time series analysis, recommendation systems.
Design, implement, and optimize data pipelines for model training and deployment.
Experience with model serving frameworks (e.g., TensorFlow Serving, TorchServe, KServe, or similar).
Design and implement APIs for model serving and integration with other systems.
Collaborate with cross-functional teams to define project requirements, develop solutions, and communicate results.
Mentor junior data scientists, providing guidance on technical skills and project execution.
Stay up-to-date with the latest advancements in data science and machine learning, particularly in generative AI, and evaluate their potential applications.
Communicate complex technical concepts and analytical findings to both technical and non-technical audiences.
Serve as a primary point of contact for client managers and liaise frequently with internal stakeholders to gather data or inputs needed for project work.
Certifications: Bachelor's or Master's degree in a quantitative field such as statistics, mathematics, computer science, or a related area.
Primary Skills:
Python; data science concepts; Pandas, NumPy, Matplotlib
Artificial intelligence; statistical modeling
Machine learning, Natural Language Processing (NLP), deep learning
Model serving frameworks (e.g., TensorFlow Serving, TorchServe)
MLOps (e.g., MLflow, TensorBoard, Kubeflow, etc.)
At least two of: computer vision, predictive analytics, time series analysis, anomaly detection, recommendation systems (an anomaly-detection sketch follows this posting)
Generative AI, RAG, fine-tuning (LoRA, QLoRA)
Proficiency in any of the cloud computing platforms (e.g., AWS, Azure, GCP)
Secondary Skills:
Expertise in designing scalable and efficient model architectures is crucial for developing robust AI solutions.
Ability to assess and forecast the financial requirements of data science projects ensures alignment with budgetary constraints and organizational goals.
Strong communication skills are vital for conveying complex technical concepts to both technical and non-technical stakeholders.
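As a small illustration of the anomaly-detection competency listed in this posting, here is a scikit-learn IsolationForest sketch on synthetic data; the data distribution and contamination rate are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # synthetic "normal" readings
outliers = rng.uniform(low=-6, high=6, size=(10, 2))    # injected anomalies
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)  # -1 flags an anomaly, 1 flags normal
print("Flagged anomalies:", int((labels == -1).sum()))
```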

Posted 1 week ago

Apply