5.0 years
0 Lacs
Greater Kolkata Area
Remote
ML Ops Engineer (Remote). Are you passionate about scaling machine learning models in the cloud? We're on the hunt for an experienced ML Ops Engineer to help us build scalable, automated, and production-ready ML infrastructure across multi-cloud environments. Location: Remote. Experience: 5+ years. What You'll Do: Design and manage scalable ML pipelines and deployment frameworks. Own the full ML lifecycle: training, versioning, deployment, and monitoring. Build cloud-native infrastructure on AWS, GCP, or Azure. Automate deployment using CI/CD tools like Jenkins and GitLab. Containerize and orchestrate ML apps with Docker and Kubernetes. Use tools like MLflow, TensorFlow Serving, and Kubeflow. Partner with Data Scientists and DevOps to ship robust ML solutions. Set up monitoring systems for model drift and performance. We're Looking For: 5+ years of experience in MLOps or DevOps for ML systems. Hands-on with at least two cloud platforms: AWS, GCP, or Azure. Proficient in Python and ML libraries (TensorFlow, Scikit-learn, etc.). Strong skills in Docker, Kubernetes, and cloud infrastructure automation. Experience building CI/CD pipelines (Jenkins, GitLab CI/CD, etc.). Familiarity with tools like MLflow, TensorFlow Serving, and Kubeflow. Must-have skills: Strong experience in any two cloud technologies (Azure, AWS, GCP). (ref:hirist.tech)
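The posting above centers on owning the ML lifecycle with tools like MLflow; below is a minimal sketch of experiment tracking and model logging for one training run, assuming a scikit-learn model. The experiment name, hyperparameters, and synthetic data are illustrative assumptions, not details from the posting.

```python
# Minimal MLflow tracking sketch: log parameters, metrics, and the model artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model")          # assumed experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)     # track hyperparameters
    mlflow.log_metric("test_accuracy", acc)   # track evaluation metrics
    mlflow.sklearn.log_model(model, "model")  # version the artifact for later deployment
```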
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
hyderabad, telangana
On-site
House of Shipping is seeking a high-caliber Data Science Lead to join their team in Hyderabad. With a background of 15-18 years in data science, including at least 5 years in leadership roles, the ideal candidate will have a proven track record in building and scaling data science teams in logistics, e-commerce, or manufacturing. Strong expertise in statistical learning, ML architecture, productionizing models, and impact tracking is essential for this role. As the Data Science Lead, you will be responsible for leading enterprise-scale data science initiatives in supply chain optimization, forecasting, network analytics, and predictive maintenance. This position requires a blend of technical leadership and strategic alignment with various business units to deliver measurable business impact. Key responsibilities include defining and driving the data science roadmap across forecasting, route optimization, warehouse simulation, inventory management, and fraud detection. You will work closely with engineering teams to architect end-to-end pipelines, from data ingestion to API deployment. Proficiency in Python and MLOps tools like Scikit-Learn, XGBoost, PyTorch, MLflow, Vertex AI, or AWS SageMaker is crucial for success in this role. Collaboration with operations, product, and technology teams to prioritize AI use cases and define business metrics will be a key aspect of the job. Additionally, you will be responsible for managing experimentation frameworks, mentoring team members, ensuring model validation, and contributing to organizational data maturity. The ideal candidate will possess a Bachelor's, Master's, or Ph.D. degree in Computer Science, Mathematics, Statistics, or Operations Research. Certifications in Cloud ML stacks, MLOps, or Applied AI are preferred. To excel in this role, you should have a strategic vision in AI applications across the supply chain, strong team mentorship skills, expertise in statistical and ML frameworks, and experience in MLOps pipeline management. Excellent business alignment and executive communication skills are also essential for this position. If you are a data science leader looking to make a significant impact in the logistics industry, we encourage you to apply for this exciting opportunity with House of Shipping.
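The role above spans demand forecasting and end-to-end pipelines built with Python tooling such as Scikit-Learn or XGBoost; here is a minimal sketch of a lag-feature, one-step-ahead forecast with a gradient-boosted regressor (scikit-learn here, though the posting also names XGBoost). The synthetic series and column names are assumptions for illustration only.

```python
# Minimal lag-feature demand forecasting sketch with a gradient-boosted regressor.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=365, freq="D")
demand = 100 + 10 * np.sin(2 * np.pi * days.dayofyear / 7) + rng.normal(0, 5, len(days))
df = pd.DataFrame({"date": days, "demand": demand})

# Build simple lag features (yesterday, same day last week).
for lag in (1, 7):
    df[f"lag_{lag}"] = df["demand"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-30], df.iloc[-30:]            # hold out the last 30 days
features = ["lag_1", "lag_7"]
model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["demand"])
mae = mean_absolute_error(test["demand"], model.predict(test[features]))
print(f"30-day holdout MAE: {mae:.2f}")
```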
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Title: Senior AI Developer Years of Experience: 8+ years Location: The selected candidate is required to work onsite for the initial 1 to 3-month project training and execution period at either our Kovilpatti or Chennai location, which will be confirmed during the onboarding process. After the initial period, remote work opportunities will be offered. Job Description: The Senior AI Developer will be responsible for designing, building, training, and deploying advanced artificial intelligence and machine learning models to solve complex business challenges across industries. This role demands a strategic thinker and hands-on practitioner who can work at the intersection of data science, software engineering, and innovation. The candidate will contribute to scalable production-grade AI pipelines and mentor junior AI engineers within the Center of Excellence (CoE). Key responsibilities · Design, train, and fine-tune deep learning models (NLP, CV, LLMs, GANs) for high-value applications · Architect AI model pipelines and implement scalable inference engines in cloud-native environments · Collaborate with data scientists, engineers, and solution architects to productionize ML prototypes · Evaluate and integrate pre-trained models like GPT-4o, Gemini, Claude, and fine-tune based on domain needs · Optimize algorithms for real-time performance, efficiency, and fairness · Write modular, maintainable code and perform rigorous unit testing and validation · Contribute to AI codebase management, CI/CD, and automated retraining infrastructure · Research emerging AI trends and propose innovative applications aligned with business objectives Technical Skills · Expert in Python, PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers · LLM deployment & tuning: OpenAI (GPT), Google Gemini, Claude, Falcon, Mistral · Experience with RESTful APIs, Flask/FastAPI for AI service exposure · Proficient in Azure Machine Learning, Databricks, MLflow, Docker, Kubernetes · Hands-on experience with vector databases, prompt engineering, and retrieval-augmented generation (RAG) · Knowledge of Responsible AI frameworks (bias detection, fairness, explainability) Qualification · Master’s in Artificial Intelligence, Machine Learning, Data Science, or Computer Engineering · Certifications in AI/ML (e.g., Microsoft Azure AI Engineer, Google Professional ML Engineer) preferred · Demonstrated success in building scalable AI applications in production environments · Publications or contributions to open-source AI/ML projects are a plus
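The posting above lists Flask/FastAPI experience for exposing AI services; here is a minimal sketch of serving a pre-trained Hugging Face model behind a FastAPI endpoint. The default sentiment model, route name, and file name are assumptions for illustration, not details from the posting.

```python
# Minimal FastAPI service exposing a Hugging Face pipeline for inference.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="ai-service")            # assumed service name
classifier = pipeline("sentiment-analysis")  # loads a small default model

class TextIn(BaseModel):
    text: str

@app.post("/predict")
def predict(payload: TextIn):
    # Returns e.g. {"label": "POSITIVE", "score": 0.99}
    return classifier(payload.text)[0]

# Run locally with: uvicorn service:app --reload   (assuming this file is service.py)
```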
Posted 1 week ago
6.0 years
0 Lacs
India
On-site
About Us At Artisan, we're creating AI Employees, called Artisans, and software which is sleek, easy to use, and replaces the endless stack of point solutions. We're starting with outbound sales and our AI BDR, Ava. Our platform contains every tool needed for outbound sales - B2B data, AI email sequences, deliverability optimization tools and so much more. We're growing very rapidly and recently raised a $25M Series A round from top investors. We are looking for superstars to join us on our rocketship growth as we relentlessly work towards building a multi-billion dollar company. About The Role We’re building the next generation of autonomous software employees—agents who don’t just assist but own workflows end to end. As our Head of AI , you’ll lead the charge in architecting, scaling, and evolving the core AI systems powering Ava and future Artisans. This is a high-impact leadership role where you’ll combine hands-on engineering, strategic technical vision, and team building to push the limits of what agentic AI can do. You won’t just manage an AI roadmap—you’ll help invent the future of work. From shipping fully autonomous agents to building adaptive learning loops across users and industries, this role is central to making Ava smarter, faster, and more valuable every single day. What You'll Do Define and lead the technical roadmap for all AI initiatives—LLM pipelines, reasoning systems, agent architecture, and adaptive feedback loops. Build and scale a world-class team of ML, applied AI, and agentic system engineers. Architect and oversee the development of end-to-end agentic workflows—from prompt design to tool orchestration to behavior modeling. Collaborate closely with product, engineering, and design to embed intelligent behaviors throughout our user experience. Stay on the edge of LLM, RLHF, RAG, and agentic research—and drive rapid implementation of relevant innovations. Establish safety, performance, and observability standards for all AI systems in production. What You Bring 6+ years of experience in ML/AI, with 2+ years in technical leadership roles (staff engineer, team lead, or higher). Deep experience working with LLMs in production (e.g., fine-tuning, prompt chaining, agent design, vector databases). Strong backend engineering skills in Python and fluency in modern MLOps and orchestration tools (MLflow, LangChain, LangGraph, etc). Proven success building and scaling real-world AI applications and systems with measurable impact. Strategic thinker with strong execution skills—comfortable making architectural decisions while staying close to the code when needed. Excellent communicator and cross-functional collaborator; comfortable aligning stakeholders and mentoring technical talent. Why Join Us Build the future of work by pioneering a new AI-native product category. Collaborate with a mission-driven, ambitious, and high-caliber team. Competitive salary, generous equity, and full benefits. Regular company off-sites and team events. Fast-moving culture where you'll ship meaningful work every week.
Posted 1 week ago
3.0 - 6.0 years
5 - 9 Lacs
Ahmedabad, Vadodara
Work from Office
We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs), who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms like Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic's Claude. Key Responsibilities: LLM Customization & Fine-Tuning Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data. Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning. Evaluate and optimize the performance, factual accuracy, and tone of LLM responses. AI Assistant Development Build and integrate AI assistants/chatbots for internal tools or customer-facing applications. Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or OpenAI Assistants API. Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services. Must have experience fine-tuning and maintaining microservices or LLM-driven databases. Cloud Integration Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or OpenAI API. Optimize API usage, performance, latency, and cost. Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring. Evaluation, Guardrails & Compliance Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs. Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity). Ensure compliance with privacy, IP, and data governance requirements. Collaboration & Documentation Work closely with product, engineering, and data teams to scope and build AI-based solutions. Document custom model behaviors, API usage patterns, prompts, and datasets. Stay up-to-date with the latest LLM research and tooling advancements. Required Skills & Qualifications: Bachelor's or Master's in Computer Science, AI/ML, Data Science, or related fields. 3-6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems. Strong Python programming skills and experience with Hugging Face Transformers, LangChain, LlamaIndex. Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc. Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, Adapters. Familiarity with vector databases and document embedding pipelines. Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services. Preferred Skills: Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc. Knowledge of AutoGPT, CrewAI, Agentic workflows, or multi-agent LLM orchestration. Experience with multi-turn conversation modeling, dialogue state tracking. Understanding of model quantization, distillation, or fine-tuning in low-resource environments. Familiarity with ethical AI practices, hallucination mitigation, and user alignment.
Tools & Technologies:
LLM Frameworks - Hugging Face, Transformers, PEFT, LangChain, LlamaIndex, Haystack
LLMs & APIs - OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI
Vector Databases - FAISS, Pinecone, Weaviate, ChromaDB
Serving & DevOps - Docker, FastAPI, Flask, GitHub Actions, Kubernetes
Deployment Platforms - AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit
Monitoring - Prometheus, MLflow, Langfuse, Weights & Biases
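The role above centers on RAG pipelines built from embedding models and vector databases; here is a minimal sketch of the retrieval step using sentence-transformers and FAISS. The encoder name and toy documents are assumptions for illustration only.

```python
# Minimal RAG retrieval sketch: embed documents, index with FAISS, fetch top matches.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Orders ship from the Vadodara warehouse.",
    "Support is available 24/7 via chat.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")      # small, commonly used encoder
doc_vecs = model.encode(docs, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vecs)                         # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = model.encode(["How long do refunds take?"], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, k=2)
context = "\n".join(docs[i] for i in ids[0])         # would be inserted into the LLM prompt
print(context)
```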
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs to have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions. Location: Chennai & Hyderabad Key Responsibilities: Design, develop, and deploy machine learning models to solve complex business problems Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems Conduct research and stay up-to-date with the latest ML/AI advancements, applying them where appropriate Optimize models for performance, scalability, and robustness Document methodologies, experiments, and findings clearly for both technical and non-technical audiences Mentor junior ML engineers or data scientists as needed Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or related field Minimum of 3 hands-on ML/AI projects, preferably in production or with real-world datasets Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, XGBoost Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc. Experience with model deployment using APIs, containers (Docker), cloud platforms (AWS/GCP/Azure) Strong data manipulation and analysis skills using Pandas, NumPy, and SQL Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing Preferred Skills: Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.) Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks Experience working in Fintech environments would be a plus Strong problem-solving mindset with excellent communication skills Experience working with vector databases. Understanding of RAG vs Fine-tuning vs Prompt Engineering Why Join Us: Work on impactful, real-world AI challenges Collaborate with a passionate and innovative team Opportunities for career advancement and learning Flexible work environment (remote/hybrid options) Competitive compensation and benefits To Apply: Please send your resume, portfolio (if applicable), and a brief summary of your ML/AI projects to ranjana.g@zenardy.com
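The role above covers the full ML lifecycle from preprocessing through evaluation; here is a minimal sketch of a scikit-learn Pipeline evaluated with cross-validation, using synthetic data purely for illustration.

```python
# Minimal train/evaluate sketch: preprocessing and model bundled into one Pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=15, random_state=7)

pipe = Pipeline([
    ("scale", StandardScaler()),               # preprocessing travels with the model
    ("clf", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```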
Posted 1 week ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries. About The Role We are hiring a Product Growth Manager to lead initiatives at the intersection of retail, AI, and product execution. This role requires someone who can conceptualize and build AI-native features, scale them effectively, and drive adoption through experimentation, data insights, and structured execution. You will work closely with engineering, machine learning, product, and business teams to deliver AI-powered capabilities that solve real-world retail challenges. The ideal candidate has strong technical foundations, sharp product instincts, and a proven ability to operate with speed and ownership in high-growth environments. What will you do at Fynd? Build and launch AI-native product features in collaboration with ML and engineering teams. Drive product-led growth initiatives focused on activation, retention, and adoption. Translate AI/ML capabilities into scalable and intuitive product experiences. Work hands-on with model deployments, inference systems, and API integrations. Own end-to-end product execution—from problem discovery to roadmap delivery and post-launch iteration. Contribute to platform strategies that scale AI features from MVPs to production-grade adoption. Understand and influence product flows specific to retail, catalog systems, and commerce automation. Some Specific Requirements AI and Technical Foundations Strong grasp of LLMs, embeddings, vector databases, RAG, and fine-tuning methods like LoRA and QLoRA. Hands-on experience with OpenAI, Hugging Face, LangChain, LlamaIndex, and production-grade AI tooling. Familiar with AI workflows using FastAPI, Docker, MLflow, and model serving via Ray or TorchServe. Comfortable working with GitHub, Git, and tools for managing models, code, and experiments. Good understanding of microservices, API design (REST/GraphQL), and scalable backend systems. Experienced in CI/CD setup for training and deploying ML models to production. Data and Analytics Proficient in SQL, BigQuery, and Looker Studio for data exploration and dashboards. Able to design KPIs, success metrics, and user journey insights for product analytics. Knowledge of tools like dbt, Airflow, and event-based platforms like PostHog or Mixpanel. Experience with A/B testing, funnel analysis, and behavioral cohort tracking. Retail and Product Execution Solid understanding of retail workflows, product catalogs, and commerce ecosystems. Experience building and scaling digital products with real-world user adoption. Strong product judgment and ability to balance business, user, and technical priorities. Stakeholder Management and Execution Leadership Ability to lead and influence cross-functional teams to drive high-quality execution. Strong communication skills to present AI-driven concepts to both technical and non-technical audiences. Experience with Agile methodologies and tools such as Jira, Confluence, and Asana for planning and tracking. Preferred Experience 2 to 5 years in product, growth, or AI-focused roles. 
Experience in building and scaling AI-powered or technology-driven platforms. Exposure to retail, e-commerce, or SaaS environments. Track record of delivering outcomes in fast-paced, cross-functional teams. Why Join Us: Be part of a team shaping the future of AI-native platforms in digital commerce. Work closely with leading AI engineers, product teams, and business stakeholders. Own and execute high-impact initiatives with autonomy and accountability. Operate in a culture that values speed, clarity, and innovation. If you're someone who thrives on execution, loves solving complex problems, and wants to build the future of AI-native platforms, we'd love to have you on board. Growth: Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially. Flex University: We help you upskill by organising in-house courses on important subjects. Learning Wallet: You can also do an external course to upskill and grow, and we reimburse it for you. Culture: Community and team-building activities; weekly, quarterly, and annual events/parties. Wellness: Mediclaim policy for you, your parents, spouse, and kids, plus an experienced therapist for better mental health, improved productivity, and work-life balance. We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment.
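The Fynd role above calls for A/B testing and funnel analysis; here is a minimal sketch of checking whether a variant lifted activation using a two-proportion z-test from statsmodels. The counts are made-up illustrative numbers.

```python
# Minimal A/B test sketch: two-proportion z-test on activation rates.
from statsmodels.stats.proportion import proportions_ztest

activated = [620, 700]   # users who activated in control vs. variant (assumed)
exposed = [5000, 5000]   # users exposed to each arm (assumed)

stat, p_value = proportions_ztest(count=activated, nobs=exposed)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```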
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: AI Developer Are you passionate about building intelligent systems that make a real-world impact? Do you enjoy working in a fast-paced and dynamic start-up environment? If so, we are looking for a talented AI Developer to join our team! We are a data and AI consultancy start-up with a global client base, headquartered in London, UK, and we are looking for someone to join us full time on-site in our vibrant office in Gurugram. About Uptitude Uptitude is a forward-thinking consultancy that specialises in providing exceptional AI, data, and business intelligence solutions to clients worldwide. Our team is passionate about delivering data-driven transformation and intelligent automation, enabling our clients to make smarter decisions and achieve remarkable results. We embrace a vibrant and inclusive culture where innovation, excellence, and collaboration thrive. As an AI Developer at Uptitude, you will be responsible for designing, developing, and deploying AI models and solutions across a wide range of use cases. You will collaborate closely with data engineers, analysts, and business teams to ensure models are well-integrated, explainable, and scalable. We are looking for a candidate who is not only technically skilled but also creative, curious, and excited about pushing the boundaries of AI in real-world business environments. Requirements 1–3 years of hands-on experience in developing AI/ML models in production or research settings. Proficiency in Python and libraries such as scikit-learn, Pandas, TensorFlow, PyTorch. Experience working with structured and unstructured data. Familiarity with model lifecycle management, MLOps, and version control (MLflow, DVC). Ability to communicate technical ideas to cross-functional teams. Experience with data cleaning, EDA, and feature selection. Creativity in applying AI to real-world business problems. Awareness of ISO:27001, ISO:42001 and data governance best practices is a plus. Role based in Gurugram, India. Head office in London, UK. Company Values At Uptitude, we embrace a set of core values that guide our work and define our culture: Be Awesome: Strive for excellence and keep levelling up. Step Up: Take ownership and go beyond the expected. Make a Difference: Innovate with impact. Have Fun: Celebrate wins and build meaningful connections. Benefits Uptitude values its employees and offers a competitive benefits package, including: Competitive salary based on experience and qualifications. Private health insurance. Offsite trips for team building and knowledge sharing. Quarterly outings to celebrate milestones. Corporate English lessons with a UK-based instructor. If you’re ready to develop cutting-edge AI solutions and work on meaningful challenges with a global impact, we’d love to hear from you.
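The requirements above include data cleaning, EDA, and feature selection; here is a minimal sketch of imputing missing values and keeping the most informative features with scikit-learn. The synthetic data, injected missingness, and k=5 are assumptions for illustration.

```python
# Minimal cleaning + feature-selection sketch: impute, then keep the top-k features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer

X, y = make_classification(n_samples=300, n_features=12, n_informative=4, random_state=1)
X[np.random.default_rng(1).random(X.shape) < 0.05] = np.nan   # inject 5% missing values

X_clean = SimpleImputer(strategy="median").fit_transform(X)   # basic cleaning step
selector = SelectKBest(score_func=f_classif, k=5).fit(X_clean, y)
print("Selected feature indices:", selector.get_support(indices=True))
```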
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: AI Architect Location: Hyderabad Experience: 15+ Years (with 3–5 years in AI/ML-focused roles) Employment Type: Full-Time About the Role: We are looking for a visionary AI Architect to lead the design and deployment of advanced AI solutions. You’ll work closely with data scientists, engineers, and product teams to translate business needs into scalable, intelligent systems. Key Responsibilities: Design end-to-end AI architectures including data pipelines, ML/DL models, APIs, and deployment frameworks Evaluate AI technologies, frameworks, and platforms to meet business and technical needs Collaborate with cross-functional teams to gather requirements and translate them into AI use cases Build scalable AI systems with focus on performance, robustness, and cost-efficiency Implement MLOps pipelines to streamline model lifecycle Ensure AI governance, data privacy, fairness, and explainability across deployments Mentor and guide engineering and data science teams Required Skills: Expertise in Machine Learning / Deep Learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) Strong proficiency in Python and one or more of: R, Java, Scala Deep understanding of cloud platforms (AWS, Azure, GCP) and tools like SageMaker, Bedrock, Vertex AI, Azure ML Familiarity with MLOps tools: MLflow, Kubeflow, Airflow, Docker, Kubernetes Solid understanding of data architecture, APIs, and microservices Knowledge of NLP, computer vision, and generative AI is a plus Excellent communication and leadership skills Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, AI, or a related field AI/ML certifications (AWS, Azure, Coursera, etc.) are a bonus Experience working with LLMs, RAG architectures, or GenAI platforms is a plus
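The architect role above lists fairness and explainability among its responsibilities; here is a minimal sketch of one common governance check, the demographic parity gap, computed on model decisions. The data, group labels, and tolerance are illustrative assumptions.

```python
# Minimal fairness check sketch: demographic parity gap across groups.
import pandas as pd

preds = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],   # assumed protected attribute
    "approved": [1, 0, 1, 0, 0, 1, 0],              # assumed model decisions
})

rates = preds.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()
print(rates.to_dict(), f"demographic parity gap = {parity_gap:.2f}")
if parity_gap > 0.1:                                # assumed tolerance
    print("Gap exceeds tolerance - flag for review before deployment.")
```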
Posted 1 week ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), Web App development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux). 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13.
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 13-17 Lpa Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Experience: total work: 1 year (Preferred) Work Location: In person
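The skills list above names design patterns such as Factory, Singleton, and Observer; here is a minimal Python sketch of Observer, with the event name and payload invented purely for illustration.

```python
# Minimal Observer pattern sketch: subscribers register callbacks per event name.
from collections import defaultdict
from typing import Callable, DefaultDict, List

class EventBus:
    """Tiny Observer implementation used to decouple publishers from subscribers."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[event].append(callback)

    def publish(self, event: str, payload: dict) -> None:
        for callback in self._subscribers[event]:
            callback(payload)

bus = EventBus()
bus.subscribe("model_deployed", lambda p: print(f"notify ops: {p['model']} is live"))
bus.publish("model_deployed", {"model": "fraud-detector-v2"})   # assumed event payload
```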
Posted 1 week ago
3.0 - 5.0 years
6 - 11 Lacs
India
On-site
Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment. Key Responsibilities Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction. Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams. Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations. System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads. Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution. Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards. Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value. Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems. Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions. Proficiency in using AI-copilots for Coding: Adaptation to emerging tools and knowledge of prompt engineering to effectively use AI for day-to-day coding needs. Technical Skills Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras. Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints. Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines. Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL). Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management. Development and optimization of data storage solutions, using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached. Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads. 
Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards. Exposure to big data tools like Apache Spark, and Kafka for processing datasets. Familiarity with ETL workflows and cloud data warehouse, using tools such as Airflow, dbt, BigQuery, or Snowflake. Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure). Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects. Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker. Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts. Personal Attributes Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework with regular communication with stakeholders Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams. Responsibility: Owns code quality and reliability, especially in production systems. Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning. Key: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus. Package: 6-11 LPA Job Types: Full-time, Permanent Pay: ₹600,000.00 - ₹1,100,000.00 per year Work Location: In person
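The role above includes configuring caching tools like Redis to optimize queries; here is a minimal sketch of a cache-aside decorator with redis-py. The host, TTL, and the example query function are assumptions for illustration.

```python
# Minimal cache-aside sketch: check Redis first, fall back to the expensive call.
import functools
import json
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)   # assumed local Redis instance

def cached(ttl_seconds: int = 300):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = f"{fn.__name__}:{args}"
            hit = cache.get(key)
            if hit is not None:
                return json.loads(hit)                    # serve from cache
            result = fn(*args)
            cache.setex(key, ttl_seconds, json.dumps(result))
            return result
        return wrapper
    return decorator

@cached(ttl_seconds=60)
def product_stats(product_id: int):
    # Placeholder for an expensive SQL aggregation (hypothetical).
    return {"product_id": product_id, "views": 1234}
```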
Posted 1 week ago
2.0 years
3 - 6 Lacs
Hyderābād
On-site
Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Technical Lead - AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81032 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models Your profile as a Technical Lead: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery Track record of production deployments using Python, Azure ML, Databricks, and GenAI frameworks Hands-on data engineering experience designing and operating robust pipelines for both structured data and unstructured data (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross-functional teams. Why you should choose ZF in India Innovative Environment: ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture: ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs and career development opportunities. Global Presence: As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being: ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Technical Lead - AI/ML Expert and apply now!
Contact: Veerabrahmam Darukumalli. What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility.
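The profile above calls for experience with embedding stores and vector databases; here is a minimal sketch of the underlying idea, cosine-similarity search over an in-memory embedding matrix, using random vectors as stand-ins for real text embeddings.

```python
# Minimal vector-search sketch: normalized dot products give cosine similarity.
import numpy as np

rng = np.random.default_rng(42)
doc_embeddings = rng.normal(size=(1000, 384))                # assumed 384-dim embeddings
doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

query = rng.normal(size=384)
query /= np.linalg.norm(query)

scores = doc_embeddings @ query                              # cosine similarity after normalization
top_k = np.argsort(scores)[::-1][:5]                         # indices of the 5 closest documents
print(top_k, scores[top_k])
```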
Posted 1 week ago
2.0 years
3 - 6 Lacs
Hyderābād
On-site
Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Specialist - AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81033 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models Your profile as a Specialist: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery Track record of production deployments using Python, Azure ML, Databricks, and GenAI frameworks Hands-on data engineering experience designing and operating robust pipelines for both structured data and unstructured data (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross-functional teams. Why you should choose ZF in India Innovative Environment: ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture: ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs and career development opportunities. Global Presence: As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being: ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Specialist - AI/ML Expert and apply now!
Contact: Veerabrahmam Darukumalli. What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility.
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Designation: AI/ML Developer Location: Ahmedabad Department: Technical Job Summary We are looking for an enthusiastic AI/ML Developer with 2 to 3 years of relevant experience in machine learning and artificial intelligence. The candidate should be well-versed in designing and developing intelligent systems and have a solid grasp of data handling and model deployment. Key Responsibilities Develop and implement machine learning models tailored to business needs. Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models, VAEs) using platforms like Hugging Face, LangChain, or OpenAI. Conduct data collection, cleaning, and pre-processing for model readiness. Train, test, and optimize models to improve accuracy and performance. Work closely with cross-functional teams to deploy AI models in production environments. Perform data exploration, visualization, and feature selection. Stay up-to-date with the latest trends in AI/ML and experiment with new approaches. Design and implement Multi-Agent Systems (MAS) for distributed intelligence, autonomous collaboration, or decision-making. Integrate and orchestrate agentic workflows using tools like Agno, CrewAI, or LangGraph. Ensure scalability and efficiency of deployed solutions. Monitor model performance and perform necessary updates or retraining. Requirements Strong programming skills in Python and experience with libraries like TensorFlow, PyTorch, Scikit-learn, and Keras. Experience working with vector databases (Pinecone, Weaviate, Chroma) for RAG systems. Good understanding of machine learning concepts, including classification, regression, clustering, and deep learning. Knowledge of knowledge graphs, semantic search, or symbolic reasoning. Proficiency in working with tools such as Pandas, NumPy, and data visualization libraries. Hands-on experience deploying models using REST APIs with frameworks like Flask or FastAPI. Familiarity with cloud platforms (AWS, Google Cloud, or Azure) for ML deployment. Knowledge of version control systems like Git. Experience with Natural Language Processing (NLP), computer vision, or predictive analytics. Exposure to MLOps tools and workflows (e.g., MLflow, Kubeflow, Airflow). Basic familiarity with big data frameworks like Apache Spark or Hadoop. Understanding of data pipelines and ETL processes. What We Offer Opportunity to work on live projects and client interactions. A vibrant and learning-driven work culture. 5 days a week & flexible work timings.
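The requirements above include core ML concepts such as clustering; here is a minimal sketch of standardizing features and fitting K-Means with scikit-learn. The synthetic blobs and choice of three clusters are illustrative assumptions.

```python
# Minimal clustering sketch: scale features, fit K-Means, inspect cluster sizes.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=300, centers=3, random_state=5)
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=5).fit(X_scaled)
print("cluster sizes:", Counter(kmeans.labels_))
print("inertia:", round(kmeans.inertia_, 2))
```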
Posted 1 week ago
3.0 - 6.0 years
4 Lacs
India
On-site
About MostEdge MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners, and employees to accelerate commerce in a sustainable manner. Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with a specialization in Computer Vision & Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices. Key Responsibilities: AI/ML Development & Computer Vision Design, train, and evaluate models for: Face detection and recognition Object/person detection and tracking Intrusion and anomaly detection Human activity or pose recognition/estimation Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster-RCNN, and InsightFace. Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines. Surveillance System Integration Integrate computer vision models with live CCTV/RTSP streams for real-time analytics. Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination. Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU). Model Optimization & Deployment Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference. Build and deploy APIs using FastAPI, Flask, or TorchServe. Package applications using Docker and orchestrate deployments with Kubernetes. Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins). Monitor model performance in production using Prometheus, Grafana, and log management tools. Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC. As an AI/ML Engineer, you should be well-versed in AI agent development and fine-tuning. Collaboration & Documentation Work closely with backend developers, hardware engineers, and DevOps teams. Maintain clear documentation of ML pipelines, training results, and deployment practices. Stay current with emerging research and innovations in AI vision and MLOps. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3–6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning. Hands-on experience with: Deep learning frameworks: PyTorch, TensorFlow Image/video processing: OpenCV, NumPy Detection and tracking frameworks: YOLOv8, DeepSORT, RetinaNet Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks). Proven experience with real-time model deployment on cloud or edge environments. Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools. Preferred Qualifications: Experience with multi-camera synchronization and NVR/DVR systems.
Familiarity with ONVIF protocols and camera SDKs. Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU. Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib). Understanding of security protocols and compliance in surveillance systems. Tools & Technologies: Languages & AI - Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn Model Serving - FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs Model Optimization - ONNX, TensorRT, OpenVINO, Pruning, Quantization Deployment - Docker, Kubernetes, Gunicorn, MLflow, DVC CI/CD & DevOps - GitHub Actions, Jenkins, GitLab CI Cloud & Edge - AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU Monitoring - Prometheus, Grafana, ELK Stack, Sentry Annotation Tools - LabelImg, CVAT, Supervisely Benefits: Competitive compensation and performance-linked incentives. Work on cutting-edge surveillance and AI projects. Friendly and innovative work culture. Job Types: Full-time, Permanent Pay: From ₹400,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Evening shift Monday to Friday Morning shift Night shift Rotational shift US shift Weekend availability Supplemental Pay: Performance bonus Quarterly bonus Work Location: In person Application Deadline: 25/07/2025 Expected Start Date: 01/08/2025
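The responsibilities above include integrating models with live CCTV/RTSP streams and building motion and zone-based alerts; here is a minimal sketch of frame-differencing motion detection over an RTSP stream with OpenCV. The stream URL and thresholds are assumptions for illustration.

```python
# Minimal motion-detection sketch: flag frames that differ strongly from the previous frame.
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream1")   # hypothetical camera URL
prev_gray = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)           # suppress sensor noise
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:                # assumed motion threshold
            print("motion detected - raise zone alert")
    prev_gray = gray

cap.release()
```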
Posted 1 week ago
3.0 years
4 Lacs
Vadodara
On-site
About MostEdge MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners, and employees to accelerate commerce in a sustainable manner. Job Summary: We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs), who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms like Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic’s Claude. Key Responsibilities: LLM Customization & Fine-Tuning Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data. Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning. Evaluate and optimize the performance, factual accuracy, and tone of LLM responses. AI Assistant Development Build and integrate AI assistants/chatbots for internal tools or customer-facing applications. Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or OpenAI Assistants API. Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services. Must have experience fine-tuning and maintaining microservices or LLM-driven databases. Cloud Integration Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or OpenAI API. Optimize API usage, performance, latency, and cost. Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring. Evaluation, Guardrails & Compliance Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs. Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity). Ensure compliance with privacy, IP, and data governance requirements. Collaboration & Documentation Work closely with product, engineering, and data teams to scope and build AI-based solutions. Document custom model behaviors, API usage patterns, prompts, and datasets. Stay up-to-date with the latest LLM research and tooling advancements. Required Skills & Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related fields. 3–6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems. Strong Python programming skills and experience with Hugging Face Transformers, LangChain, LlamaIndex. Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc. Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, Adapters. Familiarity with vector databases and document embedding pipelines. Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services. Preferred Skills: Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc.
Knowledge of AutoGPT, CrewAI, Agentic workflows, or multi-agent LLM orchestration. Experience with multi-turn conversation modeling, dialogue state tracking. Understanding of model quantization, distillation, or fine-tuning in low-resource environments. Familiarity with ethical AI practices, hallucination mitigation, and user alignment. Tools & Technologies: LLM Frameworks - Hugging Face, Transformers, PEFT, LangChain, LlamaIndex, Haystack LLMs & APIs - OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI Vector Databases - FAISS, Pinecone, Weaviate, ChromaDB Serving & DevOps - Docker, FastAPI, Flask, GitHub Actions, Kubernetes Deployment Platforms - AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit Monitoring - Prometheus, MLflow, Langfuse, Weights & Biases Benefits: Competitive salary with performance incentives. Work with cutting-edge GenAI and LLM technologies. Build real-world products using state-of-the-art AI research. Job Types: Full-time, Permanent Pay: From ₹400,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Evening shift Monday to Friday Morning shift Night shift Rotational shift US shift Weekend availability Supplemental Pay: Performance bonus Quarterly bonus Yearly bonus Work Location: In person Application Deadline: 25/07/2025 Expected Start Date: 01/08/2025
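The role above emphasizes parameter-efficient fine-tuning with LoRA/QLoRA via PEFT; here is a minimal sketch of wrapping a causal LM with LoRA adapters so that only a small fraction of weights is trained. The base model name and target modules are assumptions that vary by architecture.

```python
# Minimal LoRA (PEFT) sketch: attach low-rank adapters to a causal language model.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed base model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent assumption
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # typically well under 1% of total parameters
# The wrapped model can then be trained with the usual Trainer / supervised fine-tuning loop.
```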
Posted 1 week ago
1.0 - 3.0 years
3 - 10 Lacs
Calcutta
Remote
Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL) Location: Kolkata (Must) Employment Type: Full-Time Experience Level: 1–3 Years About Us: We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration. Key Responsibilities: Build and maintain scalable machine learning pipelines using Python. Deploy and monitor models using MLflow and MLOps stacks. Design and implement data workflows using Python libraries such as PySpark. Leverage standard data science libraries (scikit-learn, pandas, numpy, matplotlib, etc.) for model development and evaluation. Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications. Collaborate closely with cross-functional teams to meet business objectives. Handle multiple ML projects simultaneously with robust branching expertise. Must-Have Qualifications: Expertise in Python for data science and backend development. Solid experience with PostgreSQL and MSSQL databases. Hands-on experience with standard data science packages such as Scikit-Learn, Pandas, Numpy, Matplotlib. Experience working with Databricks, MLflow, and Azure. Strong understanding of MLOps frameworks and deployment automation. Prior exposure to FastAPI and GenAI tools like LangChain or Azure OpenAI is a big plus. Preferred Qualifications: Experience in the Finance, Legal, or Regulatory domain. Working knowledge of clustering algorithms and forecasting techniques. Previous experience in developing reusable AI frameworks or productized ML solutions. Education: B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field. Why Join Us? Work on cutting-edge ML and GenAI projects. Be part of a collaborative and forward-thinking team. Opportunity for rapid growth and technical leadership. Job Type: Full-time Pay: ₹344,590.33 - ₹1,050,111.38 per year Benefits: Leave encashment Paid sick time Paid time off Provident Fund Work from home Education: Bachelor's (Required) Experience: Python: 3 years (Required) ML: 2 years (Required) Location: Kolkata, West Bengal (Required) Work Location: In person Application Deadline: 02/08/2025 Expected Start Date: 04/08/2025
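The role above combines PostgreSQL data work with forecasting techniques; here is a minimal sketch of pulling a daily series via SQLAlchemy/pandas and producing a short Holt-Winters forecast with statsmodels. The connection string, table, and column names are hypothetical placeholders.

```python
# Minimal database-to-forecast sketch: read a daily series and forecast two weeks ahead.
import pandas as pd
from sqlalchemy import create_engine
from statsmodels.tsa.holtwinters import ExponentialSmoothing

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")
sales = pd.read_sql(
    "SELECT order_date, revenue FROM daily_sales ORDER BY order_date",
    engine,
    parse_dates=["order_date"],
    index_col="order_date",
)["revenue"].asfreq("D").ffill()            # regular daily index, forward-fill gaps

model = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=7).fit()
print(model.forecast(14))                   # two-week-ahead forecast
```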
Posted 1 week ago
8.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Software Development (Tech Lead) Location: Bangalore, India Experience: 8 years/ WFO Company Overview: Omnicom Global Solutions (OGS) is an integral part of Omnicom Group, a leading global marketing and corporate communications company. Omnicom's branded networks and numerous specialty firms provide advertising and communications services to over 5,000 clients in more than 70 countries. Let us build this together! Flywheel operates a leading cloud-based digital commerce platform across the world's major digital marketplaces. It enables our clients to access near real-time performance measurement and improve sales, share, and profit. Through our expertise, scale, global reach, and highly sophisticated AI and data-powered solutions, we provide differentiated value for both the world's largest consumer product companies and fast-growing brands. Job Description: We are seeking an experienced and dynamic Software Development Lead to drive end-to-end development, architecture, and team leadership in a fast-paced ecommerce environment. The ideal candidate combines deep technical expertise with strong people leadership and a strategic mindset. You’ll lead a cross-functional team building scalable, performant, and reliable systems that impact millions of users globally. Roles and Responsibilities: Technical Leadership & Architecture Make high-impact architectural decisions and lead the design of large-scale systems. Guide the team in leveraging AWS/cloud infrastructure and scalable platform components. Lead implementation of performance, scalability, and security non-functional requirements (NFRs). Design and implement engineering metrics that demonstrate improvement in team velocity and delivery. Oversee AI/ML system integrations and support production deployment of machine learning models. Engineering Execution Own release quality and delivery timelines; unblock teams and anticipate risks. Balance technical debt with roadmap delivery and foster a culture of ownership and excellence. Support the CI/CD framework and define operational readiness including alerts, monitoring, and rollback plans. Collaborate with Data Science/ML teams to deploy, monitor, and scale intelligent features (e.g., personalization, predictions, anomaly detection). People Leadership & Mentorship Mentor and grow a high-performing engineering team through feedback, coaching, and hands-on guidance. Drive onboarding and succession planning in alignment with long-term team strategy. Evaluate performance and create career growth plans for direct reports. Cross-Functional Collaboration Represent engineering in product reviews and planning forums with PMs, QA, and Design. Communicate technical vision, delivery risks, and trade-offs with business and technical stakeholders. Work with Product and Business Leaders to align team output with organizational goals. Project Management & Delivery Lead planning, estimation, and execution of complex product features or platform initiatives. Manage competing priorities, refine team capacity, and ensure timely and reliable feature rollout. Provide visibility into team performance through clear reporting and delivery metrics. Culture & Continuous Improvement Lead by example in fostering inclusion, feedback, and a growth-oriented team culture. Promote a DevOps mindset: reliability, ownership, automation, and self-service. Identify AI/ML opportunities within the platform and work with products to operationalize them. This may be the right role for you if you have:
8+ years of experience in software engineering, with at least 3 years in technical leadership or management roles. Strong backend expertise in Java or Python; hands-on experience with Spring Boot, Django, or Flask. Deep understanding of cloud architectures (AWS, GCP) and system design for scale. Strong knowledge of frontend frameworks (React/AngularJS) and building web-based SaaS products. Proven ability to guide large systems design, service decomposition, and integration strategies. Experience in applying ML/AI algorithms in production settings (recommendation engines, ranking models, NLP). Familiarity with ML lifecycle tooling such as MLflow, Vertex AI, or SageMaker is a plus. Proficiency in CI/CD practices, infrastructure-as-code, Git workflows, and monitoring tools. Comfortable with Agile development practices and project management tools like JIRA. Excellent analytical and problem-solving skills; capable of navigating ambiguity. Proven leadership in mentoring and team culture development. Desired Skills Experience in ecommerce, digital advertising, or performance marketing domains Exposure to data engineering pipelines or real-time data processing (Kafka, Spark, Airflow). Agile or Scrum certification. Demonstrated success in delivering high-scale, distributed software platforms. What Will Set You Apart: You’re a system thinker who can break down complex challenges and design for resilience. You proactively support cross-team success and remove friction for others. You build high-performing teams through mentoring, clear expectations, and shared ownership. You champion technical quality and foster a team that thrives on accountability and continuous learning.
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Machine Learning Engineer Location: Bengaluru (Hybrid) Experience: 2-5 years About Wissen Infotech Wissen Infotech has been a trusted leader in the IT Services industry for over 25 years, delivering high-quality solutions to a global clientele. Within Wissen, the AI Center of Excellence (AI-CoE) was conceptualized to drive cutting-edge research and innovation, enabling us to build our own products and intellectual property. This team focuses on solving complex business challenges using AI while setting new benchmarks for reliable and scalable AI solutions. Position Overview We are seeking a passionate Machine Learning Engineer to join our AI-CoE team. This is a unique opportunity for individuals who are software engineers at heart and are driven to design, develop, and deploy robust AI systems. You will work on innovative projects, including building agentic systems, leveraging state-of-the-art technologies to create scalable and reliable distributed systems. Key Responsibilities Design and develop scalable machine learning models and deploy them in production environments. Build and implement agentic systems that can autonomously analyze tasks, process large volumes of unstructured data, and provide actionable insights. Collaborate with data scientists, software engineers, and domain experts to integrate AI capabilities into cutting-edge products and solutions. Develop deterministic and reliable AI systems to address real-world challenges. Create and optimize scalable, distributed ML pipelines. Perform data preprocessing, feature engineering, and model evaluation to ensure high performance and reliability. Stay abreast of advancements in AI technologies and incorporate them into business solutions. Participate in code reviews, contribute to system architecture discussions, and continuously enhance project workflows. Required Skills and Qualifications Software Engineering Fundamentals: Strong foundation in algorithms, data structures, and scalable system design. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields with a solid academic track record. Experience: 2-5 years of hands-on experience in building AI systems or machine learning applications. Agentic Systems: Proven experience in developing systems that utilize AI agents for automating complex workflows, analyzing unstructured data, and generating actionable outcomes. Programming: Proficiency in programming languages such as Python, Java, or Scala. AI Expertise: Experience with machine learning frameworks like TensorFlow, PyTorch, or Hugging Face libraries (e.g., for working with transformer-based models and LLMs). MLOps Knowledge (preferred): Familiarity with tools like MLflow, Kubeflow, Airflow, Docker, or Kubernetes. Cloud Platforms: Hands-on experience with AWS, Azure, or Google Cloud for deploying machine learning models. Big Data: Experience with data processing tools and platforms such as Apache Spark or Hadoop. Problem-Solving: Strong analytical and problem-solving skills, with the ability to create robust solutions for complex challenges. Collaboration and Communication: Excellent communication skills to articulate technical ideas effectively to both technical and non-technical stakeholders. What We Offer An opportunity to work with cutting-edge AI technologies and solve challenging business problems. A collaborative, innovative, and inclusive work culture. Continuous learning opportunities and access to advanced research. Competitive salary and comprehensive benefits.
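As a concrete illustration of the preprocessing, feature engineering, and model evaluation work described above, here is a minimal scikit-learn sketch that bundles preprocessing and a model into one pipeline and scores it with cross-validation; the dataset and hyperparameters are illustrative assumptions, not project specifics.

```python
# Minimal sketch: preprocessing + model as one reproducible pipeline,
# evaluated with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                                        # preprocessing step
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),  # model step
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the preprocessing inside the pipeline is what makes the same object safe to serialize and deploy without training/serving skew.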
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description Quick Heal Technologies Limited is a leading provider of IT Security and Data Protection Solutions with a strong presence in India and a growing global footprint. Founded in 1995, we cater to B2B, B2G, and B2C segments, offering solutions across endpoints, network, data, and mobility. Our state-of-the-art R&D center and deep threat intelligence enable us to deliver top-tier protection against advanced cyber threats. Known for our renowned brands 'Quick Heal' and 'Seqrite', we are committed to our employees' development, and societal progress through cybersecurity education and awareness initiatives. Quick Heal is the only IT Security product company listed on both BSE and NSE. Role Description We are seeking a Data Science Manager to lead a high-performing team of data scientists and ML engineers focused on building scalable, intelligent cybersecurity products. You will work at the intersection of data science, threat detection, and real-time analytics to identify cyber threats, automate detection, and enhance risk modelling. Responsibilities Lead and mentor a team of data scientists, analysts, and machine learning engineers. Define and execute data science strategies aligned with cybersecurity use cases (e.g., anomaly detection, threat classification, behavioral analytics). Collaborate with product, threat research, and engineering teams to build end-to-end ML pipelines. Oversee development of models for intrusion detection, malware classification, phishing detection, and insider threat analysis. Manage project roadmaps, deliverables, and performance metrics (precision, recall, F1 score, etc.). Establish MLOps best practices and ensure robust model deployment, versioning, and monitoring. Drive exploratory data analysis on large-scale security datasets (e.g., endpoint logs, network flows, SIEM events). Stay current on adversarial ML, model robustness, and explainable AI in security contexts. Required Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or a related field. Ph.D. is a plus. 7+ years of experience in data science or ML roles, with at least 2+ years in a leadership role. Strong hands-on experience with Python, SQL, and ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). Experience working with security datasets: EDR logs, threat intel feeds, SIEM events, etc. Familiarity with cybersecurity frameworks (MITRE ATT&CK, NIST, etc.). Deep understanding of statistical modelling, classification, clustering, and time-series forecasting. Proven experience managing cross-functional data projects from conception to production. Preferred Skills Experience with anomaly detection, graph-based modelling, or NLP applied to security logs. Understanding of data privacy, encryption, and secure data handling. Exposure to cloud security (AWS, Azure, GCP) and tools like Splunk, Elastic, etc. Experience with MLOps tools like MLflow, Kubeflow, or SageMaker.
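To make the anomaly-detection responsibility concrete, here is a minimal, illustrative sketch of unsupervised scoring with scikit-learn's IsolationForest; the per-host features and synthetic data are assumptions for the example, not actual security telemetry.

```python
# Minimal sketch: flagging anomalous hosts from simple numeric features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Assumed features per host: [logins_per_hour, bytes_out_mb, failed_auths]
normal = rng.normal(loc=[5, 50, 1], scale=[2, 15, 1], size=(1000, 3))
suspect = rng.normal(loc=[40, 500, 20], scale=[5, 50, 5], size=(10, 3))
X = np.vstack([normal, suspect])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)                    # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(flags == -1)} of {len(X)} hosts")
```

In practice the features would come from EDR logs or SIEM events, and the flagged hosts would feed the precision/recall tracking mentioned in the role.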
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – Core Algorithm and Model Expert 1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analytics, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments. 2. Key Responsibilities: 2.1. Design and train models using Naval-specific data and deliver them in the form of end products. 2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks. 2.3. Preprocess, label, and augment datasets. 2.4. Implement quantization, pruning, and compression for deployment-ready AI applications. 2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters). 2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential. 2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules. 2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript). 2.9. Generate reusable inference modules that can be plugged into microservices or edge devices. 2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases). 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record. 3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines. Desired Specialized Certifications: 3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA. 3.4. Deep Learning Specialization. 3.5. Computer Vision or NLP specialization certificates. 3.6. TensorFlow/PyTorch Professional Certification. 4. Core Skills & Tools: 4.1. Languages: Python (must), C++/Rust. 4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers. 4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT (a minimal LoRA sketch follows this listing). 4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript. 4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV. 4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking. 5. Core AI/ML Competencies: 5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models. 5.2. 
Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection. 5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development. 5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, Quantization, RAG systems, multimodal AI, stable diffusion models. 5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning 6. Programming & Frameworks: 6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization. 6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy. 6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly 6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning 7. Model Development & Optimization: 7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases etc. 7.2. Model compression techniques (quantization, pruning, distillation). 7.3. ONNX model conversion and optimization. 8. Generative AI & NLP Applications: 8.1. Intelligence report analysis and summarization. 8.2. Multilingual radio communication translation. 8.3. Voice command systems for naval equipment. 8.4. Automated documentation and report generation. 8.5. Synthetic data generation for training simulations. 8.6. Scenario generation for naval training exercises. 8.7. Maritime intelligence synthesis and briefing generation. 9. Experience Requirements 9.1. Hands-on experience with at least 2 major AI domains. 9.2. Experience deploying models in production environments. 9.3. Contribution to open-source AI projects. 9.4. Led development of multiple end-to-end AI products. 9.5. Experience scaling AI solutions for large user bases. 9.6. Track record of optimizing models for real-time applications. 9.7. Experience mentoring technical teams 10. Product Development Skills 10.1. End-to-end ML pipeline development (data ingestion to model serving). 10.2. User feedback integration for model improvement. 10.3. Cross-platform model deployment (cloud, edge, mobile) 10.4. API design for ML model integration 11. Cross-Compatibility Requirements: 11.1. Define model interfaces (input/output schema) for frontend/backend use. 11.2. Build CLI and REST-compatible inference tools. 11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call. 11.4. Joint debugging and model-in-the-loop testing with UI and backend teams
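As referenced in item 4.3 above, here is a minimal, illustrative sketch of attaching LoRA adapters to a causal LM with Hugging Face PEFT; the base model name and target module names are assumptions and would need to match the architecture actually being fine-tuned.

```python
# Minimal sketch: LoRA fine-tuning setup with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"                      # assumed small base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # assumed attention projections for OPT
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()              # only the adapter weights are trainable
```

From here the wrapped model trains with the usual Transformers training loop, and only the small adapter weights need to be stored, merged, or shipped to resource-constrained targets.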
Posted 1 week ago
0 years
0 Lacs
Tamil Nadu, India
On-site
We are looking for a seasoned Senior MLOps Engineer to join our Data Science team. The ideal candidate will have a strong background in Python development, machine learning operations, and cloud technologies. You will be responsible for operationalizing ML/DL models and managing the end-to-end machine learning lifecycle from model development to deployment and monitoring while ensuring high-quality and scalable solutions. Mandatory Skills: Python Programming: Expert in OOPs concepts and testing frameworks (e.g., PyTest) Strong experience with ML/DL libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Prophet, NumPy, Pandas) MLOps & DevOps: Proven experience in executing data science projects with MLOps implementation CI/CD pipeline design and implementation Docker (Mandatory) Experience with ML lifecycle tracking tools such as MLflow, Weights & Biases (W&B), or cloud-based ML monitoring tools Experience in version control (Git) and infrastructure-as-code (Terraform or CloudFormation) Familiarity with code linting, test coverage, and quality tools such as SonarQube Cloud & Orchestration: Hands-on experience with AWS SageMaker or GCP Vertex AI Proficiency with orchestration tools like Apache Airflow or Astronomer Strong understanding of cloud technologies (AWS or GCP) Software Engineering: Experience in building backend APIs using Flask, FastAPI, or Django Familiarity with distributed systems for model training and inference Experience working with Feature Stores Deep understanding of the ML/DL lifecycle from ideation, experimentation, deployment to model sunsetting Understanding of software development best practices, including automated testing and CI/CD integration Agile Practices: Proficient in working within a Scrum/Agile environment using tools like JIRA Cross-Functional Collaboration: Ability to collaborate effectively with product managers, domain experts, and business stakeholders to align ML initiatives with business goals Preferred Skills: Experience building ML solutions for: (Any One) Sales Forecasting Marketing Mix Modelling Demand Forecasting Certified in machine learning or cloud platforms (e.g., AWS or GCP) Strong communication and documentation skills
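For the backend-API requirement above (Flask, FastAPI, or Django), here is a minimal, illustrative FastAPI sketch that serves a pre-trained model; the artifact path, feature schema, and endpoint name are assumptions for the example.

```python
# Minimal sketch: serving a trained model behind a FastAPI endpoint.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="forecast-service")
model = joblib.load("model.joblib")              # assumed pre-trained artifact

class Features(BaseModel):
    values: list[float]                          # assumed flat numeric feature vector

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])
    return {"prediction": float(prediction[0])}

# Local run (assuming the file is saved as main.py):
#   uvicorn main:app --reload
```

Packaged in a Docker image, the same service slots into the CI/CD and monitoring workflow the role describes.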
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Role Title: AI Platform Engineer Location: Bangalore (In Person in office when required) Part of the GenAI COE Team Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving And GPU Architecture Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. Model Fine-Tuning And Optimization Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models And Use Cases Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps And LLMOps Proficiency Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Data Warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP Dev Ops (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, JavaScript Email: diksha.singh@aptita.com
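One simple way to realise the "A/B testing of models" responsibility above is deterministic traffic splitting; the sketch below is illustrative only, and the hash-bucket scheme, variant names, and 10% canary share are assumptions.

```python
# Minimal sketch: stable A/B assignment of users to model variants.
import hashlib

CANARY_SHARE = 0.10   # assumed fraction of traffic sent to the candidate model

def route_model(user_id: str) -> str:
    """Deterministic routing: the same user always gets the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "model-b-candidate" if bucket < CANARY_SHARE * 100 else "model-a-stable"

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, "->", route_model(uid))
```

Hash-based bucketing keeps assignments stable across requests without shared state, which is why it is a common starting point before moving to a feature-flag or experimentation service.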
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities: Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have: Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. 
Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
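To illustrate the kind of operational readiness gate described above, here is a minimal Python sketch a CI job could run after a deploy and before promoting a release; the health-check URL, retry count, and exit-code convention are assumptions.

```python
# Minimal sketch: post-deploy readiness gate for a CI/CD pipeline.
import sys
import time
import urllib.request

HEALTH_URL = "http://localhost:8000/health"   # assumed service health endpoint
RETRIES, DELAY_S = 10, 3

def service_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

for _ in range(RETRIES):
    if service_healthy(HEALTH_URL):
        print("service healthy; safe to promote the release")
        sys.exit(0)
    time.sleep(DELAY_S)

print("readiness gate failed; trigger rollback", file=sys.stderr)
sys.exit(1)
```

The non-zero exit code is what lets a GitLab CI stage fail the pipeline and hand control to the rollback plan instead of promoting a broken release.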
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
Posted 1 week ago