
713 MLflow Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

4 - 6 Lacs

Thiruvananthapuram

On-site

5 - 7 Years | 1 Opening | Trivandrum

Role description

Job Title: AI Engineer
Location: Kochi / Trivandrum
Experience: 3-7 Years

About the Role:
We are seeking a talented and experienced AI Engineer to join our growing team and play a pivotal role in the development and deployment of innovative AI solutions. This individual will be a key contributor to our AI transformation, working closely with AI Architects, Data Scientists, and delivery teams to bring cutting-edge AI concepts to life.

Key Responsibilities:
- Model Development & Implementation: Design, develop, and implement machine learning models and AI algorithms, from initial prototyping to production deployment.
- Data Engineering: Work with large and complex datasets, performing data cleaning, feature engineering, and data pipeline development to prepare data for AI model training.
- Solution Integration: Integrate AI models and solutions into existing enterprise systems and applications, ensuring seamless functionality and performance.
- Model Optimization & Performance: Optimize AI models for performance, scalability, and efficiency, and monitor their effectiveness in production environments.
- Collaboration & Communication: Collaborate effectively with cross-functional teams, including product managers, data scientists, and software engineers, to understand requirements and deliver impactful AI solutions.
- Code Quality & Best Practices: Write clean, maintainable, and well-documented code, adhering to best practices for software development and MLOps.
- Research & Evaluation: Stay updated with the latest advancements in AI/ML research and technologies, evaluating their potential application to business challenges.
- Troubleshooting & Support: Provide technical support and troubleshooting for deployed AI systems, identifying and resolving issues promptly.

Key Requirements:
- 3-7 years of experience in developing and deploying AI/ML solutions.
- Strong programming skills in Python (or similar languages) with extensive experience in AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Solid understanding of machine learning algorithms, deep learning concepts, and statistical modelling.
- Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
- Experience with version control systems (e.g., Git) and collaborative development workflows.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.

Good to Have:
- Experience with MLOps practices and tools (e.g., MLflow, Kubeflow).
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Experience with big data technologies (e.g., Spark, Hadoop).
- Prior experience in an IT services or product development environment.
- Knowledge of specific AI domains such as NLP, computer vision, or time series analysis.

Key Skills: Machine Learning, Deep Learning, Python, TensorFlow, PyTorch, Data Preprocessing, Model Deployment, MLOps, Cloud AI Services, Software Development, Problem-solving.

Skills: Machine Learning, Data Science, Artificial Intelligence

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.

Posted 1 hour ago



2.0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site


We are seeking a dynamic and experienced Technical Trainer to join our engineering department. The ideal candidate will be responsible for designing and delivering technical training sessions to B.Tech students across various domains, ensuring they are industry-ready and equipped with practical, job-oriented skills.

Role & Responsibility
Train students in new-age technology (Computer Science Engineering) to bridge the industry-academia gap and increase student employability.

Knowledge
- Proven experience devising technical training programs for UG/PG engineering students in higher-education institutions
- Up to date with the latest industry-standard software, with knowledge of modern training techniques and tools for delivering technical subjects
- Prepare training material (presentations, worksheets, etc.)
- Execute training sessions, webinars, and workshops for students
- Determine the overall effectiveness of programs and make improvements

Technical Skills (subject areas for training delivery with a practical approach)
1. Core Programming Skills: C, Python, Java, C++, JavaScript
2. Web Development: Frontend (HTML, CSS, JavaScript, React.js/Next.js); Backend (Node.js, Express, Django, or Spring Boot); Full-Stack (MERN: MongoDB, Express, React, Node.js)
3. Data Science & Machine Learning: Python (NumPy, pandas, scikit-learn, TensorFlow/PyTorch); Tools (Jupyter Notebook, Google Colab, MLflow)
4. AI & Generative AI: LLMs (how GPT, BERT, and Llama models work); Prompt Engineering; Fine-tuning & RAG (Retrieval-Augmented Generation); Hugging Face Transformers, LangChain, OpenAI APIs
5. Cloud Computing & DevOps: Cloud platforms (AWS, Microsoft Azure, Google Cloud Platform); DevOps tools (Docker, Kubernetes, GitHub Actions, Jenkins, Terraform); CI/CD pipelines (automated testing and deployment)
6. Cybersecurity: Basics (OWASP Top 10, network security, encryption, firewalls); Tools (Wireshark, Metasploit, Burp Suite)
7. Mobile App Development: Native (Kotlin for Android, Swift for iOS); Cross-platform (Flutter, React Native)
8. Blockchain & Web3: Technologies (Ethereum, Solidity, smart contracts); Frameworks (Hardhat, Truffle)
9. Database & Big Data: Databases (SQL: MySQL, PostgreSQL; NoSQL: MongoDB, Redis); Big data tools (Apache Hadoop, Spark, Kafka)

Qualification & Experience (as per norms)
- B.Tech./MCA/M.Tech (IT/CSE) from top-tier institutes and reputed universities
- Industry experience is desirable
- Candidate must have a minimum of 2 years of training experience in the same domain

Posted 2 hours ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Req ID: 327890

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Python Developer - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Python Data Engineer
- Exposure to retrieval-augmented generation (RAG) systems and vector databases
- Strong programming skills in Python (and optionally Scala or Java)
- Hands-on experience with data storage solutions (e.g., Delta Lake, Parquet, S3, BigQuery)
- Experience with data preparation for transformer-based models or LLMs
- Expertise in working with large-scale data frameworks (e.g., Spark, Kafka, Dask)
- Familiarity with MLOps tools (e.g., MLflow, Weights & Biases, SageMaker Pipelines)

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 3 hours ago


8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Role name: Automation Test Lead
Years of exp: 5 - 8 yrs

About Dailoqa
Dailoqa's mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry.

Working at Dailoqa will be hard work; our environment is fluid and fast-moving, and you'll be part of a community that values innovation, collaboration, and relentless curiosity.

We're looking for people who:
- Are proactive, curious, adaptable, and patient
- Will shape the company's vision and have a direct impact on its success
- Want the opportunity for fast career growth
- Want the opportunity to participate in the upside of an ultra-growth venture
- Have fun 🙂

Don't apply if:
- You want to work on a single layer of the application.
- You prefer to work on well-defined problems.
- You need clear, pre-defined processes.
- You prefer a relaxed and slow-paced environment.

Role Overview
As an Automation Test Lead at Dailoqa, you'll architect and implement robust testing frameworks for both software and AI/ML systems. You'll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance.

Key Responsibilities

Test Automation Strategy & Framework Design
- Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions.
- Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy.
- Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization).

Continuous Testing & CI/CD Integration
- Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD.
- Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing).

AI/ML Model Validation
- Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow.
- Validate model drift and retraining pipelines to ensure consistent performance in production.

Quality Metrics & Reporting
- Define and track KPIs: test coverage (code, data, scenarios), defect leakage rate, automation ROI (time saved vs. maintenance effort), model accuracy thresholds.
- Report risks and quality trends to stakeholders in sprint reviews.
- Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation).

Technical Requirements

Must-Have
- 5-8 years in test automation, with 2+ years validating AI/ML systems.
- Expertise in automation tools (Selenium, Playwright, Cypress, REST Assured, Locust/JMeter); CI/CD (Jenkins, GitHub Actions, GitLab); AI/ML testing (model validation, drift detection, GenAI output evaluation); languages (Python, Java, or JavaScript).
- Certifications: ISTQB Advanced, CAST, or equivalent.
- Experience with MLOps tools: MLflow, Kubeflow, TFX.
- Familiarity with vector databases (Pinecone, Milvus) and RAG workflows.
- Strong programming/scripting experience in JavaScript, Python, Java, or similar.
- Experience with API testing, UI testing, and automated pipelines.
- Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation.
- Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias.
- Familiarity with MLOps pipelines and automated validation of model performance in production.
- Exposure to Agile/Scrum methodology and tools like Azure Boards.

Soft Skills
- Strong problem-solving skills for balancing speed and quality in fast-paced AI development.
- Ability to communicate technical risks to non-technical stakeholders.
- Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
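Validating non-deterministic LLM output, as this listing describes, usually means asserting on structure and constraints rather than exact strings. A minimal stdlib-only sketch; the JSON payload shape and field names (`answer`, `sources`) are illustrative assumptions, not from the posting:

```python
import json

def validate_llm_answer(raw: str, max_words: int = 120) -> list[str]:
    """Check structural properties of a non-deterministic LLM reply.

    Returns a list of violations; an empty list means the reply passes.
    Exact-match assertions are deliberately avoided because wording
    varies from run to run.
    """
    problems = []
    try:
        payload = json.loads(raw)  # hypothetical shape: {"answer": ..., "sources": [...]}
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    if not isinstance(payload.get("answer"), str) or not payload["answer"].strip():
        problems.append("missing or empty 'answer' field")
    if not isinstance(payload.get("sources"), list) or not payload["sources"]:
        problems.append("'sources' must be a non-empty list (RAG grounding check)")
    if isinstance(payload.get("answer"), str) and len(payload["answer"].split()) > max_words:
        problems.append(f"answer exceeds {max_words} words")
    return problems

# Two differently worded replies can both pass; a malformed one fails.
ok = '{"answer": "MLflow tracks runs.", "sources": ["docs/mlflow.md"]}'
bad = '{"answer": "", "sources": []}'
print(validate_llm_answer(ok))   # []
print(validate_llm_answer(bad))
```

In a real suite each violation string would become a named assertion, and the same checks would run against many sampled generations rather than one.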

Posted 5 hours ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: MLOps Engineer (Urgent, High-Priority Requirement)
1. Location: Hyderabad / Pune
2. Interview Rounds: 4
3. Contract: 12 months

About Client:
We are a fast-growing boutique data engineering firm that empowers enterprises to manage and harness their data landscape efficiently, leveraging advanced machine learning (ML) methodologies.

Job Overview:
We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.

Required Skills:
● Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
● Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
● Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
● Solid understanding of microservices architecture, service discovery, and load balancing.
● Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
● Proficient in Docker and container-based application deployments.
● Experience with CI/CD tools such as Jenkins or GitLab CI.
● Basic working knowledge of Kubernetes for container orchestration.
● Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
● Competency in Linux shell scripting and command-line operations.
● Proficiency with Git and version control best practices.
● Foundational knowledge of machine learning principles and typical ML workflow patterns.

Good-to-Have Skills:
● Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
● Experience with scripting languages like Bash or PowerShell for automation tasks.
● Exposure to database scripting and data integration pipelines.

Experience & Qualifications:
● 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
● At least 2+ years of hands-on experience working on ML/AI systems in production settings.
● Deep understanding of cloud-native architectures, containerization, and the end-to-end ML lifecycle.
● Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
● Relevant certifications such as AWS Certified DevOps Engineer – Professional are a strong plus.
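The monitoring-and-alerting side of a role like this reduces to a small core idea: track a rolling window of inference latencies and flag when a percentile crosses a threshold. A stdlib-only sketch of that idea; the window size and threshold are illustrative assumptions, and in production the same rule would live in Prometheus/Grafana rather than application code:

```python
import math
from collections import deque

class LatencyMonitor:
    """Rolling p95 latency check, the kind of signal a Prometheus
    alert rule would express declaratively."""

    def __init__(self, window: int = 100, p95_threshold_ms: float = 250.0):
        self.samples = deque(maxlen=window)   # only the most recent `window` samples
        self.p95_threshold_ms = p95_threshold_ms

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def p95(self) -> float:
        ordered = sorted(self.samples)
        # nearest-rank percentile: index ceil(0.95 * n) - 1
        idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
        return ordered[idx]

    def should_alert(self) -> bool:
        return bool(self.samples) and self.p95() > self.p95_threshold_ms

monitor = LatencyMonitor(window=50, p95_threshold_ms=200.0)
for ms in [90, 110, 95, 120, 100] * 9:      # healthy traffic
    monitor.record(ms)
print(monitor.should_alert())                # False
for ms in [900, 950, 1000, 980, 940]:        # latency spike
    monitor.record(ms)
print(monitor.should_alert())                # True
```

Using p95 rather than the mean keeps a few slow outliers from hiding behind many fast requests, which is why alerting frameworks default to quantiles.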

Posted 5 hours ago


8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Summary:
We are seeking an experienced and highly motivated Senior Data Scientist with a strong background in Generative AI, agentic systems, and end-to-end deployment of AI solutions. This role demands proficiency not only in building intelligent systems but also in operationalizing them using MLOps best practices and cloud-based deployment strategies. The ideal candidate will thrive at the intersection of data science, software engineering, and infrastructure.

Key Responsibilities:
- Design, develop, and deploy advanced AI-powered chatbots and agent-based solutions using frameworks like LangChain and LangGraph.
- Build and maintain backend APIs and user-facing interfaces using FastAPI and Streamlit.
- Leverage Snowflake for data integration, transformation, and analytics.
- Implement and manage MLOps pipelines for model training, validation, versioning, and monitoring.
- Deploy models and applications on AWS and GCP, ensuring scalability, reliability, and security.
- Use Redis for caching and real-time data access in production-grade applications.
- Collaborate with cross-functional teams to translate business needs into technical solutions.
- Prepare and deliver effective presentations to stakeholders and leadership teams.
- Ensure code quality, testing, and documentation in Python environments.

Required Skills:
- 5-8 years of hands-on experience in Data Science, with a focus on deployment.
- Proven experience in chatbot development, agentic AI, and LLM-based frameworks.
- Strong Python programming skills, including use of PyTorch, LangChain, and LangGraph.
- Proficiency in MLOps practices: CI/CD for ML, model monitoring, version control, and automated retraining.
- Experience deploying solutions on cloud platforms (AWS, GCP).
- Backend and app development experience with FastAPI, Streamlit, and Redis.
- Strong working knowledge of Snowflake and data engineering workflows.
- Excellent communication and presentation skills, with the ability to convey technical topics to non-technical audiences.
- Experience with containerization tools like Docker and orchestration with Kubernetes.
- Exposure to prompt engineering and LLM fine-tuning.

Nice to Have:
- Familiarity with tools like MLflow, DVC, Vertex AI.
- Experience working in agile product teams or startup environments.

Posted 6 hours ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
Jarvis Business Solutions is a leading eCommerce and CRM company specializing in implementing and delivering solutions for small to large enterprises. With expertise in SAP Hybris Commerce, Salesforce CRM, and Commerce, Jarvis serves clients globally by providing innovative solutions tailored to their needs.

Role Description
This is a full-time on-site role for an AI/ML Technical Lead located in Hyderabad. The Technical Lead will be responsible for leading AI/ML projects, developing algorithms, implementing machine learning models, and providing technical guidance to the team. They will collaborate with stakeholders to understand business requirements and ensure successful project delivery.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- 8+ years of overall software development experience with 3+ years in AI/ML roles.
- Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, scikit-learn, etc.
- Hands-on experience with Python (preferred), R, or similar languages.
- Experience with cloud platforms (AWS/GCP/Azure) and MLOps tools (MLflow, SageMaker, Kubeflow).
- Proven track record of delivering AI/ML solutions at scale.
- Strong knowledge of data preprocessing, model evaluation, and deployment strategies.
- Excellent problem-solving, analytical, and communication skills.

Posted 6 hours ago


5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.

Role Overview
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities
- Design and manage CI/CD pipelines for both software applications and machine learning workflows.
- Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
- Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
- Build robust monitoring, logging, and alerting systems for AI applications.
- Manage containerized services with Docker and orchestration platforms like Kubernetes.
- Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
- Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
- Ensure system reliability, scalability, and security across all environments.

Requirements
- 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
- Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
- Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
- Experience with ML pipelines, model versioning, and ML monitoring tools.
- Scripting skills in Python, Bash, or similar for automation tasks.
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications
- Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
- Exposure to data versioning, feature stores, and model registries.
- Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
- Background in software engineering, data engineering, or ML research is a bonus.

What We Offer
- Work on cutting-edge AI platforms and infrastructure.
- Cross-functional collaboration with top ML, research, and product teams.
- Competitive compensation package, with no constraints for the right candidate.

(ref:hirist.tech)

Posted 12 hours ago


7.0 - 10.0 years

0 Lacs

Greater Kolkata Area

Remote


Job Title: Senior Data Scientist (Contract | Remote)
Location: Remote
Experience Required: 7 - 10 Years

About The Role
We are seeking a highly experienced Senior Data Scientist to join our team on a contract basis. This role is ideal for someone who excels in predictive analytics and has strong hands-on experience with Databricks and PySpark. You will play a key role in building and deploying scalable machine learning models, with a focus on regression, classification, and time-series forecasting.

Key Responsibilities
- Design, build, and deploy predictive models using regression, classification, and time-series techniques.
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Leverage MLflow for experiment tracking and model versioning.
- Utilize Delta Lake for efficient data storage and version control.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Implement and manage CI/CD pipelines for model deployment.
- Work with cloud platforms such as Azure or AWS to develop and deploy ML solutions.

Required Skills & Qualifications
- Minimum 7 years of experience in predictive analytics and machine learning.
- Strong expertise in Databricks, PySpark, MLflow, and Delta Lake.
- Proficiency in Python, Spark MLlib, and AutoML frameworks.
- Experience working with CI/CD pipelines for model deployment.
- Familiarity with Azure or AWS cloud services.
- Excellent problem-solving skills and the ability to work independently.

Preferred
- Prior experience in the Life Insurance or Property & Casualty (P&C) insurance domain.

(ref:hirist.tech)
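As a flavor of the time-series forecasting work this listing mentions, simple exponential smoothing is a common first baseline that such a pipeline would log (for example to MLflow) before trying heavier models. A plain-Python sketch; the series values and smoothing factor are made-up illustrations:

```python
def exp_smooth_forecast(series, alpha=0.5, horizon=3):
    """Simple exponential smoothing (SES).

    Level update: level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
    The h-step-ahead SES forecast is flat at the last smoothed level.
    """
    if not series:
        raise ValueError("series must be non-empty")
    level = series[0]                       # initialize level at the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

history = [10.0, 12.0, 11.0, 13.0, 12.0]
print(exp_smooth_forecast(history, alpha=0.5, horizon=2))  # [12.0, 12.0]
```

Low alpha favors the long-run average; alpha near 1 tracks the most recent observation, which is the usual knob a backtest would tune.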

Posted 12 hours ago


5.0 - 8.0 years

0 Lacs

Greater Lucknow Area

On-site


Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform growth.

Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Technical Skills
- Languages: Expert in Python; good knowledge of R or Java is a plus.
- ML/DL Frameworks: Proficient with PyTorch, TensorFlow, scikit-learn, ONNX.
- Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
- Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
- Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
- NLP/LLMs: Working knowledge of Transformers, BERT/Llama, and the Hugging Face ecosystem is preferred.
- Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
- Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
- CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
- Strong analytical and systems thinking; able to break down business problems into ML components.
- Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
- Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
- Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.

(ref:hirist.tech)

Posted 12 hours ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description At Blend, we are award-winning experts who transform businesses by delivering valuable insights that make a difference. From crafting a data strategy that focuses resources on what will make the biggest difference to your company, to standing up infrastructure, and turning raw data into value through data science and visualization: we do it all. We believe that data that doesn't drive value is lost opportunity, and we are passionate about helping our clients drive better outcome through applied analytics. We are obsessed with delivering world class solutions to our customers through our network of industry leading partners. If this sounds like your kind of challenge, we would love to hear from you. For more information, visit www.blend360.com Job Description We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best in class. However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact. Our AI Engineer is someone who feels the most comfortable around solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated. What can you expect from the role? 
- Contribute to the design, development, deployment, and maintenance of AI solutions
- Use a variety of AI engineering tools and methods to deliver
- Own parts of projects end-to-end
- Contribute to solution design and proposal submissions
- Support the development of the AI engineering team within Blend
- Maintain in-depth knowledge of AI ecosystems and trends
- Mentor junior colleagues

Qualifications
- Contribute to the design, development, testing, deployment, maintenance, and improvement of robust, scalable, and reliable software systems, adhering to best practices.
- Apply Python programming skills for both software development and AI/ML tasks.
- Utilize analytical and problem-solving skills to debug complex software, infrastructure, and AI integration issues.
- Proficiently use version control systems, especially Git, and ML/LLMOps model-versioning protocols.
- Assist in analysing complex or ambiguous AI problems, breaking them down into manageable tasks, and contributing to conceptual solution design within the rapidly evolving field of generative AI.
- Work effectively within a standard software development lifecycle (e.g., Agile, Scrum).
- Contribute to the design and utilization of scalable systems using cloud services (AWS, Azure, GCP), including compute, storage, and ML/AI services. (Preferred: Azure)
- Participate in designing and building scalable and reliable infrastructure to support AI inference workloads, including implementing APIs, microservices, and orchestration layers.
- Contribute to designing, building, or working with event-driven architectures and relevant technologies (e.g., Kafka, RabbitMQ, cloud event services) for asynchronous processing and system integration.
- Experience with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes, Airflow, Kubeflow, Databricks Jobs).
- Assist in implementing CI/CD pipelines and optionally using IaC principles/tools for deploying and managing infrastructure and ML/LLM models.
- Contribute to developing and deploying LLM-powered features into production systems, translating experimental outputs into robust services with clear APIs.
- Demonstrate familiarity with transformer model architectures and a practical understanding of LLM specifics like context handling.
- Assist in designing, implementing, and optimising prompt strategies (e.g., chaining, templates, dynamic inputs); practical understanding of output post-processing.
- Experience integrating with third-party LLM providers, managing API usage, rate limits, and token efficiency, and applying best practices for versioning, retries, and failover.
- Contribute to coordinating multi-step AI workflows, potentially involving multiple models or services, and optimising for latency and cost (sequential vs. parallel execution).
- Assist in monitoring, evaluating, and optimising AI/LLM solutions for performance (latency, throughput, reliability), accuracy, and cost in production environments.

Additional Information
- Experience specifically with the Databricks MLOps platform.
- Familiarity with fine-tuning classical LLM models.
- Experience ensuring security and observability for AI services.
- Contribution to relevant open-source projects.
- Familiarity with building agentic GenAI modules or systems.
- Hands-on experience implementing and automating MLOps/LLMOps practices, including model tracking, versioning, deployment, monitoring (latency, cost, throughput, reliability), logging, and retraining workflows.
- Experience working with MLOps/experiment-tracking and operational tools (e.g., MLflow, Weights & Biases).
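The retry-and-failover best practices mentioned above can be sketched with plain Python; the provider callables here are fakes standing in for real SDK calls, and `TimeoutError` stands in for whatever transient error a real provider raises:

```python
import time

def call_with_failover(providers, prompt, max_retries=3, base_delay=0.01):
    """Try providers in order; retry transient errors with exponential backoff."""
    last_error = None
    for call in providers:
        for attempt in range(max_retries):
            try:
                return call(prompt)
            except TimeoutError as exc:  # stand-in for a provider's transient error
                last_error = exc
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("all providers failed") from last_error

# Fake providers standing in for real SDK calls
attempts = []
def flaky_provider(prompt):
    attempts.append(prompt)
    raise TimeoutError("rate limited")

def stable_provider(prompt):
    return f"echo: {prompt}"

print(call_with_failover([flaky_provider, stable_provider], "hello"))
```

A production version would distinguish retryable errors (rate limits, timeouts) from fatal ones (bad requests) and add jitter to the backoff.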

Posted 12 hours ago

Apply

0 years

0 Lacs

India

On-site


Key Responsibilities
- Design, develop, and deploy machine learning models for prediction, recommendation, anomaly detection, NLP, or image-processing tasks.
- Work with large, complex datasets to extract insights and build scalable solutions.
- Collaborate with data engineers to create efficient data pipelines and feature-engineering workflows.
- Evaluate model performance using appropriate metrics and improve models through iterative testing and tuning.
- Communicate findings, insights, and model outputs clearly to non-technical stakeholders.
- Stay up to date with the latest machine learning research, frameworks, and technologies.

Required Skills
- Strong programming skills in Python (Pandas, NumPy, Scikit-learn, etc.).
- Hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM.
- Experience building, deploying, and maintaining end-to-end ML models in production.
- Solid understanding of statistics, probability, and mathematical modeling.
- Proficiency with SQL and data manipulation in large-scale databases.
- Familiarity with version control (Git), CI/CD workflows, and model-tracking tools (MLflow, DVC, etc.).

Preferred Skills
- Experience with cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, Vertex AI).
- Knowledge of MLOps practices and tools for scalable ML deployments.
- Exposure to real-time data processing or streaming (Kafka, Spark).
- Experience with NLP, Computer Vision, or Time Series Forecasting.
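"Evaluate model performance using appropriate metrics" above usually means precision, recall, and F1 for classifiers. A minimal sketch, equivalent to what `sklearn.metrics` computes for the binary case:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 from parallel binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

For imbalanced problems (fraud, anomaly detection) these are far more informative than raw accuracy.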

Posted 21 hours ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Appnext offers end-to-end discovery solutions covering all the touchpoints users have with their devices. Thanks to Appnext’s direct partnerships with top OEM brands and carriers, user engagement is achieved from the moment users personalize their device for the first time and throughout their daily mobile journey. Appnext ‘Timeline’, a patented behavioral analytics technology, is uniquely capable of predicting the apps users are likely to need next. This innovative solution means app developers and marketers can seamlessly engage with users directly on their smartphones through personalized, contextual recommendations. Established in 2012 and now with 12 offices globally, Appnext is the fastest-growing and largest independent mobile discovery platform in emerging markets.

As a Machine Learning Engineer, you will be in charge of building end-to-end machine learning pipelines that operate at huge scale, from data investigation, ingestion, and model training to deployment, monitoring, and continuous optimization. You will ensure that each pipeline delivers measurable impact through experimentation, high-throughput inference, and seamless integration with business-critical systems. This job combines 70% machine learning engineering and 30% algorithm engineering and data science. We're seeking an AdTech pro who thrives in a team environment, possesses exceptional communication and analytical skills, and can navigate the high-pressure demands of delivering results, taking ownership, and leveraging sales opportunities.

Responsibilities:
- Build ML pipelines that train on real big data and perform at massive scale.
- Handle massive responsibility: advertising on lucrative placements (Samsung app store, Xiaomi phones, TrueCaller).
- Train models that will make billions of daily predictions and affect hundreds of millions of users.
- Optimize and discover the best algorithmic solutions to data problems, from implementing exotic losses to efficient grid search.
- Validate and test everything.
- Every step should be measured and chosen via A/B testing.
- Use observability tools.
- Own your experiments and your pipelines.
- Be frugal: optimize the business solution at minimal cost.
- Advocate for AI: be the voice of data science and machine learning, answering business needs.
- Build future products involving agentic AI and data science.
- Affect millions of users every instant and handle massive scale.

Requirements:
- MSc in CS/EE/STEM with at least 5 years of proven experience (or BSc with equivalent experience) as a Machine Learning Engineer, with a strong focus on MLOps, data analytics, software engineering, and applied data science (must).
- Hyper-communicator: ability to work with minimal supervision and maximal transparency; must understand requirements rigorously while frequently giving an efficient, honest picture of their work progress and results. Flawless verbal English (must).
- Strong problem-solving skills; drives projects from concept to production, working incrementally and smart.
- Ability to own features end-to-end: theory, implementation, and measurement. Articulate, data-driven communication is also a must.
- Deep understanding of machine learning, including the internals of all important ML models and ML methodologies.
- Strong real-world experience in Python and at least one other programming language (C#, C++, Java, Go…). Ability to write efficient, clear, and resilient production-grade code. Flawless in SQL.
- Strong background in probability and statistics.
- Experience with tools and ML models.
- Experience with conducting A/B tests.
- Experience using cloud providers and services (AWS) and Python frameworks: TensorFlow/PyTorch, NumPy, Pandas, scikit-learn (Airflow, MLflow, Transformers, ONNX, Kafka are a plus).
- AI/LLM assistance: candidates must hold all skills independently, without AI assistance. With that, candidates are expected to use AI effectively, safely, and transparently.
Preferred:
- Deep knowledge of ML aspects including ML theory, optimization, deep learning tinkering, RL, uncertainty quantification, NLP, classical machine learning, and performance measurement.
- Prompt engineering and agentic workflow experience.
- Web development skills.
- Publications in leading machine learning conferences and/or Medium blogs.
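The A/B-testing requirement above often comes down to a two-proportion z-test on conversion counts. A stdlib-only sketch (the counts below are made-up illustrative numbers):

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 2.0% vs 2.6% conversion on 10k impressions each
z = ab_z_score(200, 10000, 260, 10000)
print(round(z, 2), "significant" if abs(z) > 1.96 else "not significant")  # 1.96 ≈ 5% two-sided
```

At the ad-serving scales described here, sequential-testing corrections and minimum-effect-size guards matter as much as the raw z-score.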

Posted 22 hours ago

Apply

3.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Responsibilities
- Build and fine-tune models for NLP, computer vision, predictions, and more.
- Engineer intelligent pipelines that are used in production.
- Collaborate across teams to bring AI solutions to life (not just in Jupyter Notebooks).
- Embrace MLOps with tools like MLflow, Docker, and Kubernetes.
- Stay on the AI cutting edge and share what you learn; a mentorship mindset is a big plus.
- Champion code quality and contribute to a future-focused dev culture.

Requirements
- 3-4 years in hardcore AI/ML or applied data science.
- Pro-level Python skills (R is cool too, but Python is king here).
- Mastery over ML frameworks: scikit-learn, XGBoost, LightGBM, TensorFlow/Keras, PyTorch.
- Hands-on with real-world data wrangling, feature engineering, and model deployment.
- DevOps-savvy: Docker, REST APIs, Git, and maybe even some MLOps sparkle.
- Cloud comfort: AWS, GCP, or Azure - take your pick.
- Solid grasp of Agile, good debugging instincts, and a hunger for optimization.

This job was posted by Sampurna Pal from AmpleLogic.

Posted 23 hours ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Position: Solution Architect
Location: Chennai / Bangalore / Kuala Lumpur
Experience: 8+ years
Employment Type: Full-time

Job Overview
Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX. With a focus on automating and enhancing media transactions, you’ll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising.

Why Join Us?
● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more.
● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions.
● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful.

What You’ll Do:
● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs.
● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs.
● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets.
● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping.
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance.

What You Bring:
● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics.
● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results.
● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments.
● Excellent problem-solving skills, creativity, and a customer-centric mindset.

Key Responsibilities
1. Solution Design:
○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack.
○ Translate business requirements into scalable and reliable technical solutions.
2. Agile POD-Based Execution:
○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions.
○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution.
○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals.
3. Collaboration and Stakeholder Management:
○ Work closely with product, engineering, and business teams to define technical requirements.
○ Lead technical discussions with internal and external stakeholders.
4. Technical Expertise:
○ Provide architectural guidance and best practices for system integrations, APIs, and microservices.
○ Ensure solutions meet non-functional requirements like scalability, reliability, and security.
5. Documentation:
○ Prepare and maintain architectural documentation, including solution blueprints and workflows.
○ Create technical roadmaps and detailed design documentation.
6. Mentorship:
○ Guide and mentor engineering teams during development and deployment phases.
○ Review code and provide technical insights to improve quality and performance.
7.
Innovation and Optimization:
○ Identify areas for technical improvement and drive innovation in solutions.
○ Evaluate emerging technologies to recommend the best tools and frameworks.

Required Skills and Qualifications
● Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
● Proven experience as a Solution Architect or a similar role.
● Expertise in programming languages and frameworks: Java, Angular, Python, C++.
● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras.
● Experience in deploying AI models in production, including optimizing for performance and scalability.
● Understanding of deep learning, NLP, computer vision, or generative AI techniques.
● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization.
● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.).
● Expertise in distributed systems, microservices, and cloud-native architectures.
● Experience in API design, data pipelines, and integration of AI services within existing systems.
● Strong knowledge of databases: MongoDB, SQL, NoSQL.
● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines.
● Hands-on experience with CI/CD pipelines for AI development.
● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC.
● Proven track record of leading AI-driven projects from ideation to deployment.
● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions.
● Familiarity with Agile methodologies, especially POD-based execution models.
● Strong problem-solving skills and ability to design scalable solutions.
● Excellent communication skills to articulate technical solutions to stakeholders.

Preferred Qualifications
● Experience in e-commerce, AdTech, or OOH (Out-of-Home) advertising technology.
● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban.
● Certification in cloud technologies (e.g., AWS Solutions Architect).

Tech Stack
● Programming Languages: Java, Python, or C++
● Frontend Framework: Angular
● Database Technologies: MongoDB, SQL, NoSQL
● Cloud Platform: AWS
● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark).
● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI).
● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes.

Share your profile to kushpu@movingwalls.com

Posted 1 day ago

Apply

2.0 - 6.0 years

5 - 11 Lacs

India

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning from Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.
Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proofs-of-concept (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2-6 years of experience
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
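The model-drift tracking mentioned above is often quantified with a drift statistic computed alongside whatever Prometheus exports. One common choice is the Population Stability Index (PSI); a stdlib-only sketch with made-up feature values:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index: compare live feature values against a baseline."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the baseline max

    def bin_fracs(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # values below the baseline min
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]   # training-time feature distribution
drifted = [x + 0.5 for x in baseline]      # live traffic, shifted upward
print(round(psi(baseline, baseline), 4), round(psi(baseline, drifted), 4))
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as a major shift worth alerting on.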

Posted 1 day ago

Apply

3.0 years

0 - 0 Lacs

Calicut

On-site

- Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- 3+ years of experience in AI/ML development and deployment.
- Proficient in Python and familiar with libraries like TensorFlow, PyTorch, Scikit-learn, Pandas, and NumPy.
- Strong understanding of machine learning algorithms (supervised, unsupervised, deep learning).
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools (MLflow, Airflow, Docker, Kubernetes).
- Solid understanding of data structures, algorithms, and software engineering principles.
- Experience with RESTful APIs and integrating AI models into production environments.

Job Type: Full-time
Pay: ₹35,000.00 - ₹60,000.00 per month
Benefits: Internet reimbursement, Paid sick time, Provident Fund
Schedule: Fixed shift
Work Location: In person
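The "RESTful APIs and integrating AI models" requirement above usually reduces to a request handler that validates a JSON payload and wraps a model call. A framework-free sketch (the `features` field and the averaging "model" are hypothetical placeholders):

```python
import json

def handle_predict(body, model):
    """Validate a JSON request body and build a JSON response for a /predict route."""
    try:
        payload = json.loads(body)
        features = payload["features"]  # expected: a list of numbers
    except (json.JSONDecodeError, KeyError, TypeError):
        return 400, json.dumps({"error": "body must be JSON with a 'features' list"})
    score = model(features)
    return 200, json.dumps({"score": score})

# A stand-in model: any callable taking a feature list works here
toy_model = lambda feats: sum(feats) / len(feats)
status, resp = handle_predict('{"features": [1, 2, 3]}', toy_model)
print(status, resp)
```

In practice the same handler body sits behind FastAPI or Flask routing, which adds serialization, auth, and error middleware around it.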

Posted 1 day ago

Apply

3.0 - 7.0 years

7 - 16 Lacs

Hyderābād

On-site

AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid-Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities
- Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
- Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
- Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
- Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
- Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
- Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
- Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
- Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.
Tools & Technologies
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX, FastAPI, LangChain, LlamaIndex
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications
- Bachelor’s or Master’s in Computer Science, AI, Data Science, or a related discipline
- 3-7 years of experience in machine learning or applied AI
- Hands-on experience deploying ML models to production environments
- Familiarity with LLM prompt engineering and fine-tuning
- Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications
- Contributions to open-source AI projects or academic publications
- Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
- Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
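The RAG pipelines named in the responsibilities above reduce, at their core, to similarity search over embeddings. A dependency-free sketch of the retrieval step, with toy three-dimensional "embeddings" standing in for what a real embedding model and vector database (Pinecone, Qdrant, etc.) would provide:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, index, k=2):
    """Rank (doc, embedding) pairs by similarity to the query; return top-k docs."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

index = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.1]),
    ("warranty terms", [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.1, 0.0], index))  # the most refund-like docs rank first
```

The retrieved documents are then interpolated into the LLM prompt; vector databases replace the linear scan here with approximate nearest-neighbor indexes.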

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site


This posting is for one of our international clients.

About the Role
We’re creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you’ll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google’s multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
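The agent state and memory management this listing asks about is often implemented as a rolling window of chat turns that bounds prompt length. A minimal sketch (the class and its sizing policy are illustrative, not any SDK's actual API):

```python
class ConversationMemory:
    """Keep a rolling window of chat turns so the prompt stays within budget."""
    def __init__(self, max_turns=3):
        self.max_turns = max_turns
        self.turns = []  # list of (role, text) tuples

    def add(self, role, text):
        self.turns.append((role, text))
        self.turns = self.turns[-2 * self.max_turns:]  # keep the last N exchanges

    def as_prompt(self):
        """Render the retained history as plain text for the next model call."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

mem = ConversationMemory(max_turns=1)
mem.add("user", "Hi")
mem.add("assistant", "Hello!")
mem.add("user", "What is RAG?")
print(mem.as_prompt())  # only the most recent exchange survives the window
```

Production agents typically add persistence (Firestore, a database row per session) and summarize evicted turns instead of dropping them outright.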

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company: Indian / Global Engineering & Manufacturing Organization

Key Skills: Machine Learning, Artificial Intelligence, TensorFlow, Python, PyTorch

Roles and Responsibilities:
- Design, build, and rigorously optimize the complete stack necessary for large-scale model training, fine-tuning, and inference (including dataloading, distributed training, and model deployment) to maximize Model FLOPs Utilization (MFU) on compute clusters.
- Collaborate closely with research scientists to translate state-of-the-art models and algorithms into production-grade, high-performance code and scalable infrastructure.
- Implement, integrate, and test advancements from recent research publications and open-source contributions into enterprise-grade systems.
- Profile training workflows to identify and resolve bottlenecks across all layers of the training stack, from input pipelines to inference, enhancing speed and resource efficiency.
- Contribute to evaluations and selections of hardware, software, and cloud platforms defining the future of the AI infrastructure stack.
- Use MLOps tools (e.g., MLflow, Weights & Biases) to establish best practices across the entire AI model lifecycle, including development, validation, deployment, and monitoring.
- Maintain extensive documentation of infrastructure architecture, pipelines, and training processes to ensure reproducibility and smooth knowledge transfer.
- Continuously research and implement improvements in large-scale training strategies and data engineering workflows to keep the organization at the cutting edge.
- Demonstrate initiative and ownership in developing rapid prototypes and production-scale systems for AI applications in the energy sector.

Experience Requirement:
- 5-9 years of experience building and optimizing large-scale machine learning infrastructure, including distributed training and data pipelines.
Proven hands-on expertise with deep learning frameworks such as PyTorch, JAX, or PyTorch Lightning in multi-node GPU environments.
Experience in scaling models trained on large datasets across distributed computing systems.
Familiarity with writing and optimizing CUDA, Triton, or CUTLASS kernels for performance enhancement is preferred.
Hands-on experience with AI/ML lifecycle management using MLOps frameworks and performance profiling tools.
Demonstrated collaboration with AI researchers and data scientists to integrate models into production environments.
Track record of open-source contributions in AI infrastructure or data engineering is a significant plus.

Education: M.E., B.Tech M.Tech (Dual), BCA, B.E., B.Tech, M. Tech, MCA.
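The Model FLOP Utilization (MFU) target this role optimizes for reduces to a simple ratio: achieved training FLOPs divided by the hardware's peak throughput. A minimal sketch follows; the common ~6N FLOPs-per-token estimate for transformer training and the example figures (7B parameters, 3,000 tokens/s, 312 TFLOP/s peak) are illustrative assumptions, not this role's actual workloads:

```python
def mfu(n_params, tokens_per_sec, peak_flops, flops_per_token_factor=6):
    """Model FLOP Utilization: achieved training FLOPs / hardware peak.

    Uses the common ~6N FLOPs-per-token approximation for transformer
    training (forward + backward pass); this is an estimate, not a
    measurement of a specific model.
    """
    achieved = flops_per_token_factor * n_params * tokens_per_sec
    return achieved / peak_flops

# e.g. a 7B-parameter model at 3,000 tokens/s on a 312 TFLOP/s accelerator
print(round(mfu(7e9, 3000, 312e12), 3))  # → 0.404
```

In practice, profiling work like that described above is about closing the gap between this ratio and 1.0 by removing input-pipeline and communication bottlenecks.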

Posted 1 day ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of:
1. Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood
2. Claim Denials Prediction - identifying high-risk claims before submission
3. Payment Amount Prediction - forecasting expected reimbursement amounts
4. Cash Flow Forecasting - predicting revenue timing and patterns
5. Patient-Related Models - enhancing patient financial experience and outcomes
6. Claim Processing Time Prediction - optimizing workflow and resource allocation

Additionally, we will work on emerging areas and integration opportunities, for example denial prediction + appeal success probability, or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care.

VHT Technical Environment
1. Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
2. Development Tools: Jupyter Notebooks, Git, Docker
3. Programming: Python, SQL, R (optional)
4. ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
5. Data Processing: Spark, Pandas, NumPy
6. Visualization: Matplotlib, Seaborn, Plotly, Tableau
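As an illustration of the claim-denial prediction work described above, here is a toy logistic-regression sketch in plain NumPy. Everything in it is invented for illustration: the features, the synthetic labels, and the learned weights bear no relation to VHT's actual data or models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic claims: [normalized claim amount, days-to-submission, prior denial rate]
X = rng.random((200, 3))
# Toy ground truth: slow submissions with a high prior denial rate get denied
y = (0.6 * X[:, 2] + 0.4 * X[:, 1] > 0.5).astype(float)

# Logistic regression fit by plain gradient descent
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted denial probability
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

risk = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # per-claim denial risk score
print(risk[:5].round(2))
```

A real pipeline would swap this for scikit-learn or gradient-boosted models from the stack listed above, with proper train/test splits and calibration; the core idea (a per-claim risk score consumed before submission) is the same.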

Posted 1 day ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a highly skilled Senior Technical Architect with expertise in Databricks, Apache Spark, and modern data engineering architectures. The ideal candidate will have a strong grasp of Generative AI and RAG pipelines and a keen interest (or working knowledge) in Agentic AI systems. This individual will lead the architecture, design, and implementation of scalable data platforms and AI-powered applications for our global clients. This high-impact role requires technical leadership, cross-functional collaboration, and a passion for solving complex business challenges with data and AI.

Responsibilities
Lead architecture, design, and deployment of scalable data solutions using Databricks and the medallion architecture.
Guide technical teams in building batch and streaming data pipelines using Spark, Delta Lake, and MLflow.
Collaborate with clients and internal stakeholders to understand business needs and translate them into robust data and AI architectures.
Design and prototype Generative AI applications using LLMs, RAG pipelines, and vector stores.
Provide thought leadership on the adoption of Agentic AI systems in enterprise environments.
Mentor data engineers and solution architects across multiple projects.
Ensure adherence to security, governance, performance, and reliability best practices.
Stay current with emerging trends in data engineering, MLOps, GenAI, and agent-based systems.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or related technical discipline.
10+ years of experience in data architecture, data engineering, or software architecture roles.
5+ years of hands-on experience with Databricks, including Spark SQL, Delta Lake, Unity Catalog, and MLflow.
Proven experience in designing and delivering production-grade data platforms and pipelines.
Exposure to LLM frameworks (OpenAI, Hugging Face, LangChain, etc.) and vector databases (FAISS, Weaviate, etc.).
Strong understanding of cloud platforms (Azure, AWS, or GCP), particularly in the context of Databricks deployment.
Knowledge or interest in Agentic AI frameworks and multi-agent system design is highly desirable.

Technical Skills
Databricks (incl. Spark, Delta Lake, MLflow, Unity Catalog)
Python, SQL, PySpark
GenAI tools and libraries (LangChain, OpenAI, etc.)
CI/CD and DevOps for data
REST APIs, JSON, data serialization formats
Cloud services (Azure/AWS/GCP)

Soft Skills
Strong communication and stakeholder management skills
Ability to lead and mentor diverse technical teams
Strategic thinking with a bias for action
Comfortable with ambiguity and iterative development
Client-first mindset and consultative approach
Excellent problem-solving and analytical skills

Preferred Certifications
Databricks Certified Data Engineer / Architect
Cloud certifications (Azure/AWS/GCP)
Any certifications in AI/ML, NLP, or GenAI frameworks are a plus
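The RAG pipelines and vector stores this role prototypes all center on one operation: retrieving the nearest document embeddings to a query by cosine similarity. A minimal NumPy sketch of that retrieval step follows; the toy 2-D vectors stand in for real embedding-model output, and production systems would delegate this to FAISS or Weaviate as listed above:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most cosine-similar to the query.

    This is the retrieval core of a RAG pipeline: the retrieved documents
    are then passed to an LLM as grounding context.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # highest-scoring indices first

# Toy 2-D "embeddings"; real ones have hundreds of dimensions
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(top_k(np.array([1.0, 0.05]), docs))  # → [0 1]
```

Vector databases add approximate-nearest-neighbor indexes so this scales past brute force, but the similarity math is the same.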

Posted 1 day ago

Apply


2.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Technical Expertise (minimum 2 years of relevant experience):
● Solid understanding of Generative AI models and Natural Language Processing (NLP) techniques, including Retrieval-Augmented Generation (RAG) systems, text generation, and embedding models.
● Exposure to Agentic AI concepts, multi-agent systems, and agent development using open-source frameworks like LangGraph and LangChain.
● Hands-on experience with modality-specific encoder models (text, image, audio) for multi-modal AI applications.
● Proficiency in model fine-tuning and prompt engineering, using both open-source and proprietary LLMs.
● Experience with model quantization, optimization, and conversion techniques (FP32 to INT8, ONNX, TorchScript) for efficient deployment, including on edge devices.
● Deep understanding of inference pipelines, batch processing, and real-time AI deployment on both CPU and GPU.
● Strong MLOps knowledge with experience in version control, reproducible pipelines, continuous training, and model monitoring using tools like MLflow, DVC, and Kubeflow.
● Practical experience with scikit-learn, TensorFlow, and PyTorch for experimentation and production-ready AI solutions.
● Familiarity with data preprocessing, standardization, and knowledge graphs (nice to have).
● Strong analytical mindset with a passion for building robust, scalable AI solutions.
● Skilled in Python, writing clean, modular, and efficient code.
● Proficient in RESTful API development using Flask, FastAPI, etc., with integrated AI/ML inference logic.
● Experience with MySQL, MongoDB, and vector databases like FAISS, Pinecone, or Weaviate for semantic search.
● Exposure to Neo4j and graph databases for relationship-driven insights.
● Hands-on with Docker and containerization to build scalable, reproducible, and portable AI services.
● Up-to-date with the latest in GenAI, LLMs, Agentic AI, and deployment strategies.
● Strong communication and collaboration skills, able to contribute in cross-functional and fast-paced environments.

Bonus Skills
● Experience with cloud deployments on AWS, GCP, or Azure, including model deployment and model inferencing.
● Working knowledge of Computer Vision and real-time analytics using OpenCV, YOLO, and similar tools.
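The FP32-to-INT8 quantization listed among the expectations above can be sketched in a few lines. This is a simplified symmetric per-tensor scheme for illustration only; the calibrated per-channel flows that ONNX or TorchScript tooling applies in practice are more involved:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map FP32 weights to INT8.

    scale is chosen so the largest-magnitude weight maps to +/-127;
    each weight then rounds to the nearest representable step.
    """
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 weights from INT8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
print(q.dtype, err <= s)  # reconstruction error bounded by one step
```

The payoff named in the bullet above is the 4x size reduction (1 byte vs. 4 per weight) and faster integer arithmetic on edge hardware, traded against this bounded rounding error.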

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies