
524 MLOps Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Machine Learning Engineer based in Coimbatore, Tamil Nadu, India, your primary responsibility will be to design, implement, and deploy machine learning models. You will work closely with data scientists to transition prototypes into production-ready systems, and managing and automating the end-to-end machine learning lifecycle will be a key part of your role.

Your duties will include implementing continuous integration and continuous deployment (CI/CD) pipelines customized for machine learning workflows. Monitoring the real-time performance of deployed models, managing model drift, and executing retraining strategies will also be crucial. Ensuring reproducibility, traceability, and versioning for machine learning models and datasets will be essential. Additionally, you will optimize the machine learning infrastructure for performance, scalability, and cost-efficiency, and stay abreast of the latest trends and tools in machine learning and MLOps to continuously improve our practices.

To excel in this role, you will need strong programming skills, particularly in Python, along with familiarity with ML frameworks such as TensorFlow and PyTorch. Experience with data processing tools like Pandas and Scikit-learn, as well as CI/CD tools like Jenkins or GitLab CI, will be beneficial. Proficiency in containerization technologies like Docker, orchestration tools like Kubernetes, and ML model versioning tools like MLflow or DVC is also desired. Knowledge of cloud platforms such as AWS, Google Cloud, or Azure and their ML deployment services will be advantageous.

The ideal candidate will have previous experience in a machine learning or data science role, along with expertise in advanced monitoring and logging tools tailored for ML models. Certification or training in MLOps, machine learning, or related fields, as well as a background in software development or software engineering, will be beneficial. Familiarity with CRM and ERP systems and their data structures, along with a strong understanding of the latest machine learning trends and techniques, is highly valued.

In this role, you will work with a team of smart individuals in a friendly and open culture, without cumbersome managers, unnecessary tools, or rigid working hours. You will have real responsibilities and the chance to expand your knowledge across various business sectors, facing real challenges in a rapidly evolving company environment.
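The drift-monitoring and retraining duties above follow a common MLOps pattern. As a rough illustration only (not this employer's actual stack), here is a minimal sketch that flags feature drift with a two-sample Kolmogorov-Smirnov test; the significance threshold, column layout, and retrain() hook are all assumptions:

```python
# Hypothetical sketch: monitor per-feature drift with a KS test and flag retraining.
# The threshold and the retrain() hook are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed significance threshold

def detect_drift(train_col: np.ndarray, live_col: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test between training and live data."""
    _, p_value = ks_2samp(train_col, live_col)
    return p_value < DRIFT_P_VALUE

def check_and_retrain(train_df, live_df, retrain):
    """Compare each feature column of two DataFrames; retrain if any drifted."""
    drifted = [c for c in train_df.columns
               if detect_drift(train_df[c].to_numpy(), live_df[c].to_numpy())]
    if drifted:
        print(f"Drift detected in {drifted}; triggering retraining.")
        retrain()  # placeholder for the team's actual retraining pipeline
```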

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

We are looking for exceptional individuals to join our team at ScalePad as Head of AI Engineering. ScalePad is a prominent software-as-a-service (SaaS) company operating globally to provide Managed Service Providers (MSPs) with the tools and support needed to enhance client value in the ever-evolving IT landscape.

As a member of our tech-management team, you will lead AI development initiatives, shape our AI strategy, and guide teams in creating impactful AI applications. This hands-on leadership role involves mentoring teams, improving developer productivity, and ensuring best practices in AI development, software engineering, and system design. Your responsibilities will include designing state-of-the-art AI applications, leveraging advanced techniques such as Machine Learning (ML), Large Language Models (LLMs), Graph Neural Networks (GNNs), and Retrieval-Augmented Generation (RAG). You will also focus on fostering an environment of responsible AI practices, governance, and ethics, advocating for AI-first product thinking, and collaborating with various teams to align AI solutions with business objectives.

To excel in this role, you should possess strong technical expertise in AI, ML, and software architecture principles, and have a proven track record of integrating AI advancements into engineering execution. Experience in AI governance, ethics, and managing globally distributed teams will be essential. We are seeking a curious, hands-on leader who is passionate about developing talent, driving innovation, and ensuring AI excellence within our organization.

Joining ScalePad will offer you the opportunity to lead the evolution of AI-driven products, work with cutting-edge technologies, and make a global impact by influencing AI-powered decision-making at an enterprise level. As a Rocketeer, you will enjoy ownership through our Employee Stock Option Plan (ESOP), benefit from annual training and development opportunities, and work in a dynamic, entrepreneurial setting that promotes growth and stability. If you are ready to contribute to a culture of innovation, collaboration, and success, we invite you to apply for this role. Please note that only candidates eligible to work in Canada will be considered.

At ScalePad, we are committed to fostering Diversity, Equity, Inclusion, and Belonging (DEIB) to create a workplace where every individual's unique experiences and perspectives are valued. Join us in building a stronger, more inclusive future where everyone has the opportunity to thrive and grow.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As an AI/ML Engineer at Neutrinos, you will play a crucial role in designing, developing, and deploying machine learning models to drive business outcomes. With 5 to 8 years of hands-on experience, you will utilize your expertise in Python, AutoML frameworks, image processing, and cloud-based AI services to create innovative solutions. Collaborating with cross-functional teams, you will focus on model development, integration, and optimization to deliver high-quality results aligned with business objectives.

Your responsibilities will include designing, training, and deploying complex machine learning models for predictive analytics, object detection, and classification. Leveraging AI frameworks like TensorFlow and PyTorch in Python, you will build and optimize machine learning pipelines. You will also implement and customize AutoML frameworks to automate model building and enhance efficiency. Utilizing cloud-based AI platforms such as Google Vision, Azure Document Vision, and AWS Textract, you will handle tasks like image recognition and document processing.

In addition, you will develop and integrate AI models with REST APIs using NodeJS and Python to ensure efficient communication between systems and scalable deployment. Your role will involve implementing advanced object detection algorithms for various image-based applications and continuously optimizing model performance for accuracy, latency, and scalability in production environments. Utilizing Git for version control and CI/CD practices, you will ensure seamless updates and rollouts of AI solutions.

Preferred skills for this role include familiarity with cloud infrastructure such as AWS, Azure, or Google Cloud, experience with Docker and Kubernetes for containerized AI solutions, and knowledge of Machine Learning Operations (MLOps) for automating ML lifecycle processes. A Bachelor's or Master's degree in Computer Science, Machine Learning, AI, or a related field, along with 5 to 8 years of experience in AI/ML engineering, will be essential qualifications for this position.

Join Neutrinos to be part of a dynamic team at the forefront of AI and machine learning innovation. You will work on challenging projects with real-world impact, receive a competitive salary and benefits, and experience a collaborative and growth-focused environment in Bangalore.
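The posting asks for serving models behind REST APIs in Python. A minimal hedged sketch with FastAPI follows; the model file, feature schema, and route are illustrative assumptions, not Neutrinos' actual service:

```python
# Minimal sketch of exposing a trained model over REST with FastAPI.
# "model.joblib" and the flat feature list are illustrative assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed pre-trained scikit-learn model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    X = np.asarray(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}
```

Run locally with `uvicorn app:app`; a production deployment would add validation, auth, and container packaging.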

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

About us

Bain & Company is a global management consulting firm that assists the world's most ambitious change makers in defining the future. With offices in 65 locations across 40 countries, we collaborate with our clients as a unified team, striving to achieve extraordinary results, surpass the competition, and reshape industries. Since our establishment in 1973, we have gauged our prosperity by the achievements of our clients and uphold the highest level of client advocacy in the industry. In 2004, we inaugurated our presence in the Indian market by launching the Bain Capability Center (BCC) in New Delhi, now recognized as BCN (Bain Capability Network) with nodes spread across various geographies. BCN is the core and largest unit of Expert Client Delivery (ECD), playing a vital role in enhancing Bain's case teams globally by providing analytics and research solutions across all industries, specific domains for corporate cases, client development, private equity diligence, and Bain intellectual property. The BCN encompasses Consulting Services, Knowledge Services, and Shared Services.

Who you will work with

Bain & Company stands as the foremost consulting partner to the private equity industry and its stakeholders. The Private Equity Group (PEG) at Bain offers comprehensive advisory services to investors throughout the entire investment lifecycle, including pre-investment strategy, due diligence, post-acquisition value creation, and portfolio management. The PEG Innovation Team (PIT) is a specialized division devoted to supporting the private equity sector and its stakeholders. An essential focus area involves harnessing Generative AI to develop innovative solutions that streamline due diligence, automate processes, and enable data-driven decision-making. By integrating expertise in due diligence, operational enhancement, and cutting-edge technologies like Generative AI, PIT empowers private equity clients to achieve superior returns and maintain a competitive edge in the market.

What you'll do

This role presents an opportunity to join the expanding data science capability area of the BCN Data business within the PIT (PEG Innovation Team).

Product Support: The primary responsibility of the team will be to support the Beta production of AI applications for PEG (Private Equity Group) as part of Bain's DD2030 (Due Diligence 2030) initiative.
Generative AI Focus: The role involves a significant focus on Generative AI applications, aiming to push the boundaries of innovation in the private equity domain.
Broader Automation: Additionally, the role may entail contributing to broader PIT automation initiatives aimed at streamlining processes and enhancing efficiency across various stages of the investment lifecycle.

The person in this role will need to:
- Translate business objectives into data and analytics solutions, and translate results into business insights using appropriate data engineering, analytics, visualization, and Gen AI applications.
- Utilize Gen AI skills to design and create repeatable analytical solutions to enhance data quality.
- Build and deploy machine learning models using Scikit-Learn for various predictive analytics tasks.
- Implement and fine-tune NLP models with Hugging Face to tackle complex language processing challenges.
- Collaborate with engineering team members on design requirements to transform PoC methods into repeatable data pipelines; collaborate with the Practice team to develop scalable products.
- Aid in creating and documenting standard operating procedures for recurring data processes, and establish a knowledge base of data methods.
- Stay updated on advancements in AI/ML technologies and best practices in data science, particularly in LLMs and generative AI.

About you

A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Applied Mathematics, Econometrics, Statistics, Physics, Market Research, or a related field is preferred. 3-5 years of experience in data science and data engineering, with hands-on experience in AI/GenAI tools such as Scikit-Learn, LangChain, and Hugging Face. Experience in designing and developing RESTful and GraphQL APIs to facilitate data access and integration. Proficiency in data wrangling in either R or Python is mandatory. Proficiency in SQL is essential, and familiarity with NoSQL data stores is a plus. Familiarity with MLOps practices for model lifecycle management. Experience with Git and modern software development workflows is advantageous. Experience with containerization such as Docker/Kubernetes is beneficial. Proficiency with the Agile way of working and its tools (Jira, Confluence, Miro). Strong interpersonal and communication skills are imperative. Experience with the Microsoft Office suite (Word, Excel, PowerPoint, Teams) is preferred. Ability to explain and discuss Gen AI and Data Engineering technicalities to a business audience.

What makes us a great place to work

We take pride in consistently being acknowledged as one of the world's best places to work, a proponent of diversity, and a model of social responsibility. Currently ranked as the #1 consulting firm on Glassdoor's Best Places to Work list, we have maintained a position in the top four for the past 12 years. We believe that diversity, inclusion, and collaboration are pivotal in constructing exceptional teams. We recruit individuals with exceptional talents, abilities, and potential, fostering an environment where you can realize your full potential and thrive both professionally and personally. Our commitment to diversity and inclusion has been externally recognized by Fortune, Vault, Mogul, Working Mother, Glassdoor, and the Human Rights Campaign.
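For orientation, here is a small hedged sketch of the two toolchains the listing names (Scikit-Learn for predictive models, Hugging Face for NLP); the dataset and default pipeline model are placeholder choices, not Bain's:

```python
# Illustrative sketch of the two toolchains named above: a scikit-learn
# classifier and a Hugging Face pipeline. Model and data choices are assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import pipeline

# Scikit-learn: train and score a simple classifier on a toy dataset.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Hugging Face: an off-the-shelf NLP pipeline (downloads a default model).
nlp = pipeline("sentiment-analysis")
print(nlp("The due diligence report was remarkably thorough."))
```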

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Hybrid

Role & responsibilities

Job Title: Senior Machine Learning Engineer - Anomaly Detection & Time Series Forecasting
Location: Bangalore
Department: Engineering - AI/ML

Job Overview: Resolve Tech Solutions is seeking an experienced Senior Machine Learning Engineer to drive the development of AI-driven anomaly detection, time series forecasting, and predictive analytics models for our next-generation observability platform. This role will focus on designing, building, and deploying ML models that provide real-time insights, predictive alerts, and intelligent recommendations powered by LLMs. As a key individual contributor, you will work on solving complex challenges in large-scale cloud environments while collaborating with cross-functional teams.

Key Responsibilities:
- Develop Advanced ML Models: Design and implement machine learning models for anomaly detection, time series forecasting, and predictive analytics, ensuring high accuracy and scalability.
- Anomaly Detection & Root Cause Analysis: Build robust models to detect abnormal patterns in metric data, leveraging statistical methods, deep learning, and AI-driven techniques.
- Time Series Forecasting: Implement predictive models to forecast metric trends, proactively identifying threshold breaches and alerting users.
- LLM-Driven Insights: Utilize Large Language Models (LLMs) to analyze historical incidents, correlate anomalies, and provide recommendations by integrating with ITSM platforms like ServiceNow.
- Cloud & Big Data Integration: Work with large-scale data pipelines, integrating ML models with cloud platforms such as AWS, Azure, and GCP.
- Feature Engineering & Data Processing: Design and optimize feature extraction and data preprocessing pipelines for real-time and batch processing.
- Model Deployment & Optimization: Deploy ML models in production environments using MLOps best practices, ensuring efficiency, scalability, and reliability.
- Performance Monitoring & Continuous Improvement: Establish key performance metrics, monitor model drift, and implement retraining mechanisms for continuous model improvement.
- Collaboration & Knowledge Sharing: Work closely with product managers, data engineers, and DevOps teams to align ML solutions with business objectives and platform goals.

Requirements:
- Experience: 5+ years of hands-on experience in machine learning, with a strong focus on anomaly detection, time series forecasting, and deep learning.
- ML & AI Expertise: Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, Scikit-Learn, and XGBoost.
- Anomaly Detection: Experience with statistical techniques, autoencoders, GANs, or isolation forests for anomaly detection in time series data.
- Time Series Forecasting: Strong background in models such as ARIMA, Prophet, LSTMs, or Transformers for predictive analytics.
- LLMs & NLP: Hands-on experience in fine-tuning and integrating LLMs for intelligent insights and automated issue resolution.
- Cloud & Data Engineering: Familiarity with cloud ML services (AWS SageMaker, Azure ML, GCP Vertex AI) and distributed computing frameworks like Spark.
- MLOps & Deployment: Experience with CI/CD pipelines, Docker, Kubernetes, and model monitoring in production.
- Problem-Solving & Analytical Skills: Ability to analyze large datasets, derive insights, and build scalable ML solutions for enterprise applications.
- Communication & Collaboration: Strong verbal and written communication skills, with the ability to explain ML concepts to non-technical stakeholders.
- Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field.

Preferred Qualifications:
- Experience with AIOps, observability, or IT operations analytics.
- Hands-on experience with reinforcement learning or graph neural networks.
- Familiarity with Apache Kafka, Flink, or other real-time data processing frameworks.
- Contributions to open-source ML projects or research publications in anomaly detection and time series analysis.

Why Join Us?
- Opportunity to work on cutting-edge AI/ML solutions for enterprise observability.
- Collaborative and innovative work environment with top AI/ML talent.
- Competitive salary, benefits, and career growth opportunities.
- Exposure to large-scale, cloud-native, and AI-driven technologies.

If you are passionate about AI-driven anomaly detection, time series forecasting, and leveraging LLMs for real-world enterprise solutions, we'd love to hear from you!
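As context for one anomaly detection technique the posting names (isolation forests over metric data), here is a minimal hedged sketch; the window size, contamination rate, and synthetic series are assumptions:

```python
# Rough sketch: an Isolation Forest over sliding windows of a metric series.
# Window size, contamination rate, and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def window_features(series: np.ndarray, w: int = 12) -> np.ndarray:
    """Turn a 1-D metric series into overlapping windows (rows = samples)."""
    return np.lib.stride_tricks.sliding_window_view(series, w)

rng = np.random.default_rng(0)
metric = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
metric[300:305] += 3.0  # injected anomaly for demonstration

X = window_features(metric)
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
print("anomalous windows:", np.where(labels == -1)[0])  # -1 marks outliers
```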

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office

We are seeking a skilled Data Scientist with expertise in AI orchestration and embedded systems to support a sprint-based Agile implementation focused on integrating generative AI capabilities into enterprise platforms such as Slack, Looker, and Confluence. The ideal candidate will have hands-on experience with Gemini and a strong understanding of prompt engineering, vector databases, and orchestration infrastructure.

Key Responsibilities:
- Develop and deploy Slack-based AI assistants leveraging Gemini models.
- Design and implement prompt templates tailored to enterprise data use cases (Looker and Confluence).
- Establish and manage an embedding pipeline for Confluence documentation.
- Build and maintain orchestration logic for prompt execution and data retrieval.
- Set up API authentication and role-based access controls for integrated systems.
- Connect and validate vector store operations (e.g., Pinecone, Weaviate, or Snowflake vector extension).
- Contribute to documentation, internal walkthroughs, and user acceptance testing planning.
- Participate in Agile ceremonies including daily standups and sprint demos.

Required Qualifications:
- Proven experience with Gemini and large language model deployment in production environments.
- Proficiency in Python, orchestration tools, and prompt engineering techniques.
- Familiarity with vector database technologies and embedding workflows.
- Experience integrating APIs for data platforms such as Looker and Confluence.
- Strong understanding of access control frameworks and enterprise-grade authentication.
- Demonstrated success in Agile, sprint-based project environments.

Preferred Qualifications:
- Experience working with Slack app development and deployment.
- Background in MLOps, LLMOps, or AI system orchestration at scale.
- Excellent communication skills and ability to work in cross-functional teams.
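As a hedged illustration of the embedding pipeline and vector retrieval described above: the posting targets Gemini and a managed vector store, but this sketch substitutes a sentence-transformers model and an in-memory cosine search purely for demonstration:

```python
# Hedged sketch of an embed-and-retrieve pipeline. A production system would
# use the Gemini embedding API and a managed vector database; here a small
# sentence-transformers model and numpy stand in for both.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model

docs = ["How to build a Looker dashboard.",
        "Confluence page on API authentication.",
        "Slack bot deployment runbook."]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, k: int = 2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity on normalized vectors
    return [(docs[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

print(search("authenticate against the API"))
```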

Posted 3 weeks ago

Apply

4.0 - 9.0 years

15 - 32 Lacs

Noida, Hyderabad, Bengaluru

Work from Office

Responsibilities:
- Design and implement robust MLOps pipelines for model training, evaluation, deployment, and monitoring using industry-standard tools and frameworks.
- Collaborate with data scientists to streamline the model development process and ensure seamless integration with MLOps pipelines.
- Optimize and scale machine learning infrastructure to support high-performance model training and inference.
- Contribute to the development of MLOps standards, processes, and documentation within the organization.
- Mentor and support junior team members in MLOps practices and technologies.
- Stay up-to-date with the latest trends and best practices in MLOps, and explore opportunities for continuous improvement.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Statistics, or a related field.
- 5+ years of experience in software engineering, with 2+ years of experience in ML.
- Proficient in Python and at least one other programming language (e.g., Java, Go, C++).
- Extensive experience with containerization technologies (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Familiarity with machine learning frameworks and MLOps tools.
- Experience with big data technologies.
- Strong understanding of CI/CD principles and practices.
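For orientation on the experiment-tracking side of such MLOps pipelines, a minimal sketch using MLflow (one common toolchain, not necessarily this employer's); the dataset, model, and run name are placeholders:

```python
# Illustrative MLOps sketch: track a training run and log the model artifact
# with MLflow. Dataset, hyperparameters, and run name are placeholder choices.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # artifact for later deployment
```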

Posted 3 weeks ago

Apply

5.0 - 10.0 years

14 - 22 Lacs

Chennai

Work from Office

Roles and Responsibilities

1. MLOps Strategy & Implementation
- Design and implement scalable MLOps pipelines for the end-to-end lifecycle of machine learning models (from data ingestion to model deployment and monitoring); a sketch of one such automation follows this listing.
- Automate model training, testing, validation, and deployment using CI/CD practices.
- Collaborate with data scientists to productize ML models.

2. Infrastructure Management
- Build and maintain cloud-native infrastructure (e.g., AWS/GCP/Azure) for training, deploying, and monitoring ML models.
- Optimize compute and storage resources for ML workloads.
- Containerize ML applications using Docker and orchestrate them with Kubernetes.

3. Model Monitoring & Governance
- Set up monitoring for ML model performance (drift detection, accuracy drop, latency).
- Ensure compliance with ML governance policies, versioning, and auditing.

4. Collaboration & Communication
- Work with cross-functional teams (Data Engineering, DevOps, and Product) to ensure smooth ML model deployment and maintenance.
- Provide mentorship and technical guidance to junior engineers.

5. Automation & Optimization
- Automate feature extraction, model retraining, and deployment processes.
- Improve latency, throughput, and efficiency of deployed models in production.

Technical Skills / Tech Stack

1. Programming Languages: Python (primary for ML/AI and scripting), Bash/Shell, Go or Java (optional but valuable for performance-critical components)
2. ML Frameworks & Libraries: TensorFlow, PyTorch, Scikit-learn; MLflow, Kubeflow, or SageMaker; ONNX (for model conversion)
3. Data & Pipeline Tools: Apache Airflow, Luigi; Kafka, Apache Beam, Spark (for streaming/batch data); Pandas, Dask, NumPy
4. DevOps & MLOps Tools: Docker, Kubernetes, Helm; Terraform, Pulumi (for infrastructure as code); Jenkins, GitHub Actions, Argo Workflows; MLflow, DVC, Tecton, Feast
5. Cloud Platforms: AWS (S3, EKS, SageMaker, Lambda, CloudWatch); GCP (GKE, AI Platform, BigQuery, Dataflow); Azure (Azure ML, AKS, Blob Storage)
6. Monitoring & Logging: Prometheus, Grafana; ELK Stack, Datadog, cloud-native monitoring tools
7. CI/CD & Versioning: Git, GitOps, CI/CD pipelines for model and data versioning

Preferred Experience
- 5+ years in AI/ML engineering roles.
- Experience building MLOps pipelines in production.
- Familiarity with regulatory and ethical considerations in ML (e.g., fairness, bias detection, explainability).
- Strong debugging and performance tuning skills in distributed environments.
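As a rough sketch of the retraining automation described under sections 1 and 5 above, a minimal Airflow DAG (Airflow 2.x assumed); the task bodies are placeholders and the weekly schedule is an assumption:

```python
# Minimal sketch of a retraining pipeline as an Airflow DAG.
# Task bodies are placeholders; DAG id, schedule, and names are assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():  # placeholder: pull and validate training data
    ...

def train_model():       # placeholder: fit and evaluate the model
    ...

def deploy_model():      # placeholder: push the approved model to serving
    ...

with DAG(
    dag_id="ml_retraining",
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",  # assumed cadence; requires Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_features)
    train = PythonOperator(task_id="train", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)
    extract >> train >> deploy  # linear dependency chain
```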

Posted 3 weeks ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Kochi, Hyderabad, Thiruvananthapuram

Hybrid

Hi All,

Job Title: MLOps Engineer
Exp: 5+ Years
Location: Hyderabad, Trivandrum, Kochi
Mandatory Skillset: MLOps, Python, AWS, CI/CD pipelines

Posted 3 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Please find the JD below and send me your updated resume.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 3+ years of experience in MLOps, DevOps, or ML Engineering roles.
- Strong experience with containerization (Docker) and orchestration (Kubernetes).
- Proficiency in Python and experience working with ML libraries like TensorFlow, PyTorch, or scikit-learn.
- Familiarity with ML pipeline tools such as MLflow, Kubeflow, TFX, Airflow, or SageMaker Pipelines.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools (Terraform, CloudFormation).
- Solid understanding of CI/CD principles, especially as applied to machine learning workflows.

Nice-to-Have:
- Experience with feature stores, model registries, and metadata tracking.
- Familiarity with data versioning tools like DVC or LakeFS.
- Exposure to data observability and monitoring tools.
- Knowledge of responsible AI practices including fairness, bias detection, and explainability.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Gurugram

Work from Office

What we're looking for:
- 4+ years of experience in Data Science
- 1-2 years of proven work in Machine Learning
- Minimum 2 years of experience in Deep Learning
- Proficiency in Python, Linux, and containers (Docker/K8s)
- Hands-on with GenAI frameworks like LangChain, LlamaIndex, Hugging Face Transformers
- Experience deploying models using Vertex AI or Azure AI
- Currently contributing to a live project (public GitHub profile preferred)

Key Responsibilities:
- Deploy and manage AI workloads in hybrid cloud environments using RHEL AI and OpenShift AI, with guidance and training provided.
- Collaborate with teams to fine-tune and operationalize AI models, including large language models (LLMs), using tools like InstructLab.
- Build and maintain containerized applications with Kubernetes or similar platforms, adapting to OpenShift as needed.
- Support the development, training, and deployment of AI/ML models, leveraging frameworks like PyTorch or TensorFlow.
- Assist in implementing MLOps practices for model lifecycle management, with exposure to CI/CD pipelines.
- Troubleshoot and optimize Linux-based systems to ensure reliable AI performance.
- Learn and apply Red Hat-specific tools and best practices through on-the-job training and resources.
- Document workflows and contribute to the Reve Cloud team's knowledge sharing.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Hyderabad

Remote

Staff/Sr. Staff Engineer

Experience: 6 - 15 Years
Salary: Competitive
Preferred Notice Period: Within 60 Days
Shift: 10:00 AM to 6:00 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Airflow OR LLMs OR MLOps OR Generative AI, and Python

Netskope (one of Uplers' clients) is looking for a Staff/Sr. Staff Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job Summary: Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead development, validation, publishing, and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's in it for you:
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What you will be doing:
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (see the sketch after this listing).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required skills and experience:
- AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
- Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
- Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
- Software Engineering: Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications.
- Leadership and Collaboration: Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders.
- Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances to get shortlisted and meet the client for the interview!

About Our Client: Netskope, a global SASE leader, helps organizations apply zero trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity. Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organizational and network changes, and new regulatory requirements.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this one on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
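For context on the RAG systems this role centers on, a heavily hedged sketch of the pattern: retrieve context by vector similarity, then assemble a grounded prompt. The embed() function here is a deterministic stub and the LLM call is omitted; a production system would use a real embedding model, a managed vector database, and a hosted LLM endpoint:

```python
# Hedged sketch of the Retrieval-Augmented Generation pattern.
# embed() is a stub standing in for a real embedding model or API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub embedding: a deterministic random vector seeded by the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

corpus = ["Alert runbook for DNS anomalies.",
          "Threat model for cloud storage exfiltration."]
index = np.stack([embed(d) for d in corpus])  # in-memory "vector store"

def build_prompt(question: str, k: int = 1) -> str:
    q = embed(question)
    top = np.argsort(-(index @ q))[:k]          # nearest docs by cosine
    context = "\n".join(corpus[i] for i in top)
    # The grounded prompt would then be passed to a hosted LLM endpoint.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How do we respond to unusual DNS traffic?"))
```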

Posted 3 weeks ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Pune

Remote

Staff/Sr. Staff Engineer at Netskope (via Uplers). This remote listing duplicates the posting above verbatim; see that entry for the full role description, requirements, and application steps.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Bengaluru

Remote

Staff/Sr. Staff Engineer at Netskope (via Uplers). This remote listing duplicates the first Netskope posting above verbatim; see that entry for the full role description, requirements, and application steps.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Title: Senior Software Development Engineer
Location: Pune/Bangalore

About the Company: Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

Why Gruve: At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you're passionate about technology and eager to make an impact, we'd love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

Position summary: We are seeking a talented engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.

Key Roles & Responsibilities:
- Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, focusing on Large Language Models (LLMs) and other deep learning applications.
- End-to-End Model Development: Lead the entire lifecycle of AI models, from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
- Fine-Tuning & Customization: Leverage techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications (see the sketch after this listing).
- Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1, exploring their applications in enterprise AI workflows.
- Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
- Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
- MLOps & CI/CD Pipelines: Implement best practices for MLOps, ensuring automated training, deployment, monitoring, and continuous improvement of AI models.
- Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
- API Development & Microservices: Develop RESTful APIs and microservices to integrate AI models seamlessly into enterprise applications.
- Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
- Collaboration & Stakeholder Engagement: Work closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
- Mentorship & Technical Leadership: Guide and mentor junior engineers, fostering best practices in AI/ML development, model fine-tuning, and software engineering.
- Research & Innovation: Stay updated on emerging AI trends, conduct experiments with cutting-edge architectures and fine-tuning techniques, and drive innovation within the team.

Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 5-8 years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus

Preferred Qualifications:
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
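As a hedged illustration of the LoRA fine-tuning technique named above, a minimal sketch using the peft library; the GPT-2 base model and hyperparameters are stand-in assumptions, and the actual training loop is omitted:

```python
# Hedged sketch: attach LoRA adapters to a base model with the peft library.
# Base model and hyperparameters are stand-ins; the training loop is omitted.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,      # scaling factor for the adapter updates
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small LoRA adapters train
```

The appeal of LoRA is visible in the last line: the frozen base weights stay untouched, so only a small fraction of parameters needs gradients, memory, and storage.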

Posted 3 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As a skilled AI expert, you will partner with Product Managers, Engineers, and other key stakeholders to deeply understand business requirements and translate them into actionable technical roadmaps. You will identify and prioritize AI use cases aligned with organizational goals, develop a scalable and sustainable implementation roadmap, and conduct ROI analysis for on-prem LLM deployments.

Your role will involve creating sophisticated software designs driven by AI-powered experiences, focusing on performance, scalability, security, reliability, and ease of maintenance. You will define and develop complex enterprise applications through AI agentic frameworks, ensuring responsiveness, responsibility, traceability, and reasoning. Utilizing modeling techniques like UML and Domain-Driven Design, you will visualize intricate relationships between components and ensure seamless integrations. Leading large-scale platform projects, you will deliver no-code workflow management, HRMS, collaboration, search engine, document management, and other services for employees. Championing automated testing, continuous integration/delivery pipelines, MLOps, and agile methodologies across multiple teams will also be a key aspect of your role.

To excel in this position, you should hold a BTech/MTech/PhD in Computer Science with specialization in AI/ML. A proven track record of leading large-scale digital transformation projects is required, along with a minimum of 3+ years of hands-on experience in building AI-based applications using agentic frameworks. With a minimum of 12-14 years of experience in software design and development, you should have expertise in designing and developing applications and workflows using the ACE framework.

Your skillset should include hands-on experience in developing AI agents based on heterogeneous frameworks such as LangGraph, AutoGen, Crew AI, and others. You should also be proficient in selecting and fine-tuning LLMs for enterprise needs and designing efficient inference pipelines for system integration. Expertise in Python programming, developing agents/tools in an AI agentic framework, and building data pipelines for structured and unstructured data is essential. Additionally, experience in leveraging technologies like RAG (Retrieval Augmented Generation), vector databases, and other tools to enhance AI models is crucial.

Your ability to quickly learn and adapt to the changing technology landscape, combined with past experience in the .NET Core ecosystem and front-end development featuring Angular/React/JavaScript/HTML/CSS, will be beneficial. Hands-on experience managing full-stack web applications built upon Graph/RESTful APIs and microservice-oriented architectures, plus familiarity with large-scale data ecosystems, is also required. Additionally, you should be skilled in platform telemetry capture, ingestion, and intelligence derivation, with a track record of effectively mentoring peers and maintaining exceptional attention to detail throughout the SDLC. Excellent verbal and written communication abilities, as well as outstanding presentation and public speaking talents, are necessary to excel in this role.

Please note: Beware of recruitment scams.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Inviting applications for the role of Principal Consultant - ML Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities:
• Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these).
• Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform, Azure or AWS.
• Package and deploy AI/GenAI models (SageMaker, Lambda, API Gateway); a hedged sketch of this pattern follows the listing.
• Write Python scripts for automation, deployment, and monitoring.
• Engage in the design, development, and maintenance of data pipelines for various AI use cases.
• Actively contribute to key deliverables as part of an agile development team.
• Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
• Ensure model governance, versioning, and traceability across environments.
• Collaborate with others to source, analyse, test, and deploy data processes.
• Bring experience from GenAI projects.

Qualifications we seek in you!

Minimum Qualifications:
• Experience with MLOps practices.
• Degree/qualification in Computer Science or a related field, or equivalent work experience.
• Experience developing, testing, and deploying data pipelines.
• Strong Python programming skills.
• Hands-on experience deploying 2-3 AI/GenAI models in AWS.
• Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
• Clear and effective communication skills to interact with team members, stakeholders, and end users.

Preferred Qualifications/Skills:
• Experience with Docker-based deployments.
• Exposure to model monitoring tools (Evidently, CloudWatch).
• Familiarity with RAG stacks or fine-tuning LLMs.
• Understanding of GitOps practices.
• Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact?
• Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
• Make an impact: drive change for global enterprises and solve business challenges that matter.
• Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
• Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
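As a hedged sketch of the "GenAI model behind Lambda and API Gateway" pattern referenced above, here is one possible Lambda handler fronting a Bedrock-hosted LLM via boto3; the model ID and request schema vary by provider and are assumptions here:

```python
# Hedged sketch of a Lambda handler calling a Bedrock-hosted LLM.
# The model ID and request body schema are assumptions (they differ per model).
import json
import boto3

bedrock = boto3.client("bedrock-runtime")  # reused across warm invocations

def lambda_handler(event, context):
    prompt = json.loads(event["body"])["prompt"]
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```

In practice, the IAM role, API Gateway integration, and provisioning of the function would be defined in IaC (Terraform or similar), per the responsibilities above.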

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Skills: Python, GPU compute platforms, MLOps platforms, AI models, software applications

As a computer vision support engineer, you are part of "run operations", the innovation hub of the Colruyt Group. You maintain, support and improve the applications and ICT systems that Colruyt Group business partners and the outside world purchase from Smart Technics. As such, you will get to know many diverse, innovative, cutting-edge products, providing a fantastic learning experience and growth path.

Your tasks
You are part of the support group that takes care of second-line technical support for the Smart Technics products that rely on computer vision technology.
You support products end-to-end - including camera systems, GPU compute platforms, MLOps platforms, AI models and software applications - interacting with other experts and support groups within Colruyt, as well as with suppliers.
You solve incidents by analysing root causes, providing workarounds and fixes, and getting the systems operational again as soon as possible.
You maintain the applications and monitor the stability and continuity of the ICT systems.
You take care of the application life cycle management (ALM) of our products. To this end, you work with ITSM processes such as IT change, knowledge, incident and problem management.
You interact with the Service Delivery Manager on application lifecycle management and follow up on SLAs.
You keep documentation up to date and build a knowledge base that leads to better support by first-line customer services, product teams and suppliers.
Besides technical support, you will also contribute to feature development and the continuous improvement of our innovative products: you implement new features and change requests, maintain and optimize AI models, and create and implement ideas for improving our products and AI models for better scalability, maintainability and supportability.
You proactively take care of the quality of our products. You give advice and follow up with the product teams to ensure that applications can be scaled, monitored and supported in the right way; in this way, you help ensure the satisfaction of our partners. You help with the transition from MVP to robust and supportable solutions. Together with the Service Delivery Manager, you are responsible for assuring the quality and viability of the products.

Profile: knowledge and experience
You have a computer science or engineering background.
You have experience with computer vision technologies.
You preferably already have experience with application lifecycle management (ALM).
Knowledge of ITIL methodology and/or IT4IT is a plus.
Problem-solving ability - You see a problem as a challenge for which you like to roll up your sleeves.
Customer-oriented - You are helpful and stand for good service.
Communicative - You can communicate your ideas, suggestions and thoughts clearly to colleagues, suppliers and partners.
Analytical - You are able to efficiently analyze problems and improvements for both the short term and the long term.
Languages - Knowledge of English is a must.
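As a flavour of the day-to-day monitoring such a support role involves, the snippet below polls GPU utilisation via nvidia-smi and flags hot GPUs. It is a simplified sketch, not Colruyt tooling; the alert threshold is an assumption, and the print statement stands in for a real ITSM alert integration.

```python
import subprocess

UTIL_ALERT_THRESHOLD = 95  # hypothetical alert threshold, in percent

def gpu_stats():
    """Return (utilisation %, memory MiB) per GPU, parsed from nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [tuple(int(v) for v in line.split(",")) for line in out.strip().splitlines()]

for idx, (util, mem_mib) in enumerate(gpu_stats()):
    if util >= UTIL_ALERT_THRESHOLD:
        # In a real setup this would raise an incident in the ITSM tool.
        print(f"ALERT: GPU {idx} at {util}% utilisation ({mem_mib} MiB used)")
```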

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Kochi, Bengaluru

Work from Office

Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.

Key Responsibilities:
• Design and implement machine learning models and pipelines using AWS SageMaker and related services.
• Develop and maintain robust data pipelines for training and inference workflows.
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
• Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies.
• Optimize model performance and ensure scalability and reliability in production environments.
• Monitor deployed models for drift, performance degradation, and anomalies.
• Document processes, architectures, and workflows for reproducibility and compliance.

Required Skills & Qualifications:
• Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of machine learning algorithms, model evaluation, and tuning.
• Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
• Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
• Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
• Familiarity with monitoring tools and logging frameworks for ML systems.
• Excellent problem-solving and communication skills.

Preferred Qualifications:
• AWS Certification (e.g., AWS Certified Machine Learning - Specialty).
• Experience with real-time inference and streaming data.
• Knowledge of data governance, security, and compliance in ML systems.
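To make the drift-monitoring responsibility concrete, here is a small, self-contained sketch of a Population Stability Index (PSI) check, one common way to quantify drift between training data and live traffic. The bucket count, the rule-of-thumb threshold, and the synthetic data are illustrative assumptions, not requirements from the posting.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Population Stability Index between a reference sample and live traffic."""
    # Bucket edges come from the reference (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, buckets + 1))
    # Clip live values into the reference range so every point lands in a bucket.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids log(0) when a bucket is empty.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # scores seen at training time
live_scores = rng.normal(0.3, 1.1, 10_000)   # slightly shifted live traffic

print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.2 is a common rule of thumb for action
```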

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

Position Summary
Drives the execution of multiple business plans and projects by identifying customer and operational needs; developing and communicating business plans and priorities; removing barriers and obstacles that impact performance; providing resources; identifying performance standards; measuring progress and adjusting performance accordingly; developing contingency plans; and demonstrating adaptability and supporting continuous learning. Provides supervision and development opportunities for associates by selecting and training; mentoring; assigning duties; building a team-based work environment; establishing performance expectations and conducting regular performance evaluations; providing recognition and rewards; coaching for success and improvement; and ensuring diversity awareness. Promotes and supports company policies, procedures, mission, values, and standards of ethics and integrity by training and providing direction to others in their use and application; ensuring compliance with them; and utilizing and supporting the Open Door Policy. Ensures business needs are being met by evaluating the ongoing effectiveness of current plans, programs, and initiatives; consulting with business partners, managers, co-workers, or other key stakeholders; soliciting, evaluating, and applying suggestions for improving efficiency and cost-effectiveness; and participating in and supporting community outreach events.

What you'll do
About Team: Ever wondered what a convergence of online and offline advertising systems looks like? Ever wondered how we can bridge the gap between sponsored search, display, and video ad formats? Ever thought about how we can write our own ad servers that serve billions of requests in near real time? Our Advertising Technology team is building an end-to-end advertising platform that is key to Walmart's overall growth strategy. We use cutting-edge machine learning, data mining and optimization algorithms to ingest, model and analyze Walmart's proprietary online and in-store data, encompassing 95% of American households. Importantly, we build smart data systems that deliver relevant retail ads and experiences that connect our customers with the brands and products they love.

Your Opportunity: We are looking for a versatile principal data scientist with strong expertise in machine learning and deep learning, good software engineering skills, and significant exposure to building ML solutions - including building Gen AI solutions from scratch and leading data science engagements. The opportunities that will come with this role:
As a seasoned SME in MLE, you will get to work on, and take the lead in, scaling and deploying the most challenging of our data science solutions (including, but definitely not limited to, Gen-AI solutions) across a broad spectrum of the advertising domain.
Influence the best practices that we should follow as we scale and deploy our solutions across a diverse set of products.
Train and mentor our pool of data scientists in data science and MLE skills.
Contribute to the Tech org via patents, publications and open source contributions.

What You Will Do
Design large-scale AI/ML products/systems impacting millions of customers.
Develop highly scalable, timely, highly performant, instrumented, and accurate data pipelines.
Drive and ensure that MLOps practices are being followed in solutions.
Enable data governance practices and processes by being a passionate adopter and ambassador.
Drive data pipeline efficiency, data quality, efficient feature engineering, and maintenance of different DBs (such as vector DBs, graph DBs, feature stores and caching mechanisms).
Lead and inspire a team of scientists and engineers solving AI/ML problems through R&D while pushing the state of the art.
Lead the team to develop production-level code for the implementation of AI/ML solutions using best practices to handle high-scale and low-latency requirements.
Deploy batch and real-time ML solutions, model results consumption and integration pipelines.
Work with a customer-centric mindset to deliver high-quality, business-driven analytic solutions.
Drive proactive optimisation of code and deployments, improving efficiency, cost and resource optimisation.
Design model architecture, the optimal tech stack and model choices, and integration with the larger engineering ecosystem; drive best practices for model integrations, working closely with Software Engineering leaders.
Consult with business stakeholders regarding algorithm-based recommendations and be a thought leader in deploying these and driving business actions.
Closely partner with the Senior Managers & Director of Data Science, Engineering and product counterparts to drive data science adoption in the domain.
Collaborate with multiple stakeholders to drive innovation at scale.
Build a strong external presence by publishing your team's work in top-tier AI/ML conferences and developing partnerships with academic institutions.
Adhere to Walmart's policies, procedures, mission, values, standards of ethics and integrity.
Adopt Walmart's quality standards; develop/recommend process standards and best practices across the retail industry.

What You Will Bring
Bachelor's with more than 13 years, Master's with more than 12 years, or Ph.D. with more than 10 years of relevant experience. Educational qualifications should be in Engineering / Data Science.
Strong experience working with state-of-the-art supervised and unsupervised machine learning algorithms on real-world problems.
Experience architecting solutions with Continuous Integration and Continuous Delivery in mind.
Strong experience in real-time ML solution deployment, and experience with deployment patterns for distributed systems.
Strong Python coding and package development skills.
Experience with Big Data and analytics in general, leveraging technologies like Hadoop, Spark, and MapReduce.
Ability to work in a big data ecosystem - expert in SQL/Hive/Spark.

About Walmart Global Tech: Imagine working in an environment where one line of code can make life easier for hundreds of millions of people and put a smile on their face. That's what we do at Walmart Global Tech. We're a team of 15,000+ software engineers, data scientists and service professionals within Walmart, the world's largest retailer, delivering innovations that improve how our customers shop and empower our 2.3 million associates. To others, innovation looks like an app, service, or some code, but Walmart has always been about people. People are why we innovate, and people power our innovations. Being human-led is our true disruption.

Flexible, hybrid work: We use a hybrid way of working that is primarily in-office coupled with virtual when not onsite. Our campuses serve as a hub to enhance collaboration, bring us together for purpose and deliver on business needs. This approach helps us make quicker decisions, remove location barriers across our global team and be more flexible in our personal lives.

Benefits: Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits - maternity and parental leave, PTO, health benefits, and much more.

Equal Opportunity Employer: Walmart, Inc. is an Equal Opportunity Employer - By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing diversity - unique styles, experiences, identities, ideas and opinions - while being inclusive of all people.

Minimum Qualifications
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 5 years' experience in an analytics-related field.
Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 3 years' experience in an analytics-related field.
Option 3: 7 years' experience in an analytics or related field.

Preferred Qualifications
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.

Primary Location: G, 1, 3, 4, 5 Floor, Building 11, SEZ, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India R-1925040
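For readers unfamiliar with the feature-store and caching work mentioned above, here is a rough cache-aside lookup sketch using the redis-py client: serve features from Redis when the cache is warm, and backfill from the slower store when it is cold. It is purely illustrative, not Walmart's implementation; the key scheme, TTL, and the stand-in feature-store read are hypothetical.

```python
import json
import redis  # assumes the redis-py client is installed

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_features_from_store(user_id: str) -> dict:
    """Stand-in for a slower read from the offline feature store."""
    return {"clicks_7d": 12, "avg_basket": 43.5}  # hypothetical features

def get_features(user_id: str, ttl_s: int = 300) -> dict:
    """Cache-aside lookup: hit Redis first, backfill on a miss."""
    key = f"features:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    features = load_features_from_store(user_id)
    cache.set(key, json.dumps(features), ex=ttl_s)
    return features

print(get_features("u-123"))
```

The cache-aside pattern trades a bounded staleness window (the TTL) for the low tail latency that real-time ad serving demands.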

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

punjab

On-site

Responsibilities: Responsible for selling one or more of the following Digital Engineering Services/SaaS offerings for the US market.
Customer industries: Automotive, e-commerce, media, logistics, energy, hi-tech, industrial SaaS companies, ISVs.
Service lines: Cloud services/solutions, application development, full-stack development, DevOps, mobile app development, SRE, ServiceNow, workflow automation; data analytics, data engineering, governance and pipelining, data lake development and re-architecture, DataOps, MLOps; industrial IoT.
Prospecting: Identify and research potential clients as part of daily outreach activities.
Outbound calling: Reach out to prospects via phone and email to introduce our IT services & solutions and build initial interest.
Sales pipeline management: Maintain accurate and up-to-date records of leads, opportunities, and client interactions in the CRM system.
Achieve targets: Meet or exceed monthly and quarterly lead quotas and targets.

Must-Have Qualifications:
Education: B.Tech / B.E. in Engineering.
Experience: 3 to 5 years of experience with a proven track record of success in inside sales, including 3+ years selling Digital Engineering Services to U.S. and European customers (Germany/Ireland preferred).
Domain knowledge: Domain knowledge in Digital Engineering Services (must have); vertical knowledge/experience as described above.
Excellent communication and interpersonal skills.
Familiarity with CRM software and sales automation tools.

Job Types: Full-time, Permanent
Benefits: Food provided, health insurance, leave encashment, Provident Fund
Schedule: Evening shift, fixed shift, Monday to Friday, night shift, US shift
Experience: total work: 2 years (Required)
Work Location: In person
Speak with the employer: +91 9034340735

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Scientist at Amdocs in Pune, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. Your role will involve hands-on work on GenAI use cases and developing Databricks jobs for data ingestion for learning. You will create partnerships with project stakeholders to provide technical assistance for important decisions, and work on the development and implementation of GenAI use cases in live production as per business/user requirements.

Your technical skills should include mandatory expertise in deep learning engineering (mostly MLOps), strong NLP/LLM experience, and processing text using LLMs. You should be proficient in PySpark/Databricks and Python programming, building backend applications using Python and deep learning frameworks, and deploying models while building APIs (FastAPI, Flask). Experience working with GPUs, vector databases such as Milvus, Azure Cognitive Search, and Qdrant, as well as transformers and Hugging Face models such as Llama and Mixtral, plus embedding models, is essential. It would be good to have knowledge and experience of Kubernetes and Docker, cloud experience working with VMs and Azure storage, and sound data engineering experience.

In this role, you will be challenged to design and develop new software applications, providing you with opportunities for personal growth in a growing organization. The job involves minimal travel and is located in Pune. Join Amdocs and help build the future to make it amazing by unlocking innovative potential for next-generation communication and media experiences for end users and enterprise customers.
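As an illustration of the "deploying models while building APIs" skill this posting asks for, here is a minimal FastAPI sketch that exposes a toy embedding endpoint. It is a hedged example, not Amdocs code; the embed function is a stand-in for a real Hugging Face encoder, and the /embed route name is hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model (e.g. a Hugging Face encoder)."""
    return [float(len(text)), float(text.count(" "))]  # toy features only

@app.post("/embed")
def embed_endpoint(query: Query) -> dict:
    # In production, this vector would be written to or searched against
    # a vector database such as Milvus or Qdrant.
    return {"embedding": embed(query.text)}

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```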

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

As an AI/ML Engineering Lead, you will join our team in Pune, India, to spearhead our AI initiatives. With 8 to 10 years of programming experience, including 2 to 3 years in AI/ML, you will lead a dynamic team of AI engineers. Your expertise in Python or similar languages, coupled with a strong background in data science and MLOps, will drive the development and optimization of AI/ML models. Your responsibilities will include team leadership, model development, collaboration with cross-functional teams, data management, deployment and monitoring of AI/ML models, MLOps implementation, infrastructure management, innovation and research, stakeholder communication, and strategic alignment.

Key Responsibilities:
- Lead, mentor, and manage a team of AI engineers, providing technical direction, coaching, and performance evaluations.
- Design, develop, and optimize AI/ML models using Python or equivalent programming languages.
- Partner with cross-functional teams to translate business requirements into AI/ML solutions that align with organizational objectives.
- Oversee the preprocessing and management of large datasets to ensure data integrity and readiness for model development.
- Lead the deployment of AI/ML models in production, ensuring robustness, scalability, and high performance.
- Develop and manage MLOps pipelines to streamline model training, testing, and deployment.
- Set up and maintain AI-related infrastructure, including cloud services, databases, and computational resources.
- Stay updated on the latest AI/ML technologies, tools, and best practices to foster continuous learning and innovation.
- Articulate complex AI/ML concepts to stakeholders at all levels, providing clear updates on project progress and technical insights.
- Drive the AI strategy to ensure alignment with broader business goals.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- 8 to 10 years of programming experience focusing on Python or similar languages.
- 2 to 3 years of hands-on experience in AI/ML, including leading engineering teams.
- Proficiency in data science principles and data management best practices within AI/ML projects.
- Expertise in MLOps, AI infrastructure setup, and cloud platforms (AWS, Azure, GCP).
- Strong problem-solving abilities and excellent communication skills for technical and non-technical audiences.

Preferred Qualifications:
- Experience with TensorFlow, PyTorch, scikit-learn, big data technologies, and databases.
- Certification in AI/ML or cloud platforms, such as AWS Certified Machine Learning or Azure AI Engineer.

Join us to lead passionate AI engineers in transformative projects, accelerate your career growth, and thrive in a collaborative work culture that values innovation and creativity. If you are ready to apply your AI/ML expertise and leadership skills, we invite you to explore this exciting opportunity in Pune.
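To ground the MLOps-pipeline responsibilities in something concrete, here is a short sketch of experiment tracking with MLflow and scikit-learn, one common way to get the model versioning and traceability this kind of role calls for. The experiment name and model choice are illustrative assumptions, not requirements from the posting.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    # Parameters, metrics and the model artifact are versioned together,
    # so any production model can be traced back to its training run.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```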

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

Armada is an edge computing startup that provides computing infrastructure to remote areas with limited connectivity and cloud infrastructure, and that focuses on processing data locally for real-time analytics and AI at the edge. Armada is dedicated to bridging the digital divide by rapidly deploying advanced technology infrastructure. As they continue to grow, they are seeking talented individuals to join them in achieving their mission.

As a DevOps Lead at Armada, you will play a crucial role in integrating AI-driven operations into the company's DevOps practices. Your responsibilities will include leading a DevOps team, designing scalable systems, and implementing intelligent monitoring, alerting, and self-healing infrastructure. The role requires a strategic mindset and hands-on experience, with a focus on AIOps. This position is based at the Armada office in Trivandrum, Kerala.

As the DevOps Lead, you will lead the DevOps strategy with a strong emphasis on AI-enabled operational efficiency. You will architect and implement CI/CD pipelines integrated with machine learning models and analytics, and develop and manage infrastructure as code using tools like Terraform, Ansible, or CloudFormation. Collaboration is key in this role, as you will work closely with data scientists, developers, and operations teams to deploy and manage AI-powered applications. You will also enhance system observability through intelligent dashboards and real-time metrics analysis, mentor DevOps engineers, and promote best practices in automation, security, and performance.

To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and at least 7 years of DevOps experience with a minimum of 2 years in a leadership role. Proficiency in cloud infrastructure management and automation is essential, along with experience in AIOps platforms and tools. Strong scripting abilities, familiarity with CI/CD tools, and expertise in containerization and orchestration are also required. Preferred qualifications include knowledge of MLOps, experience with serverless architectures, and certification in cloud platforms. Demonstrable experience in building and integrating software and hardware for autonomous or robotic systems is a plus. Strong analytical skills, time-management abilities, and effective communication are highly valued for this role.

In return, Armada offers a competitive base salary along with equity options for India-based candidates. If you are a proactive individual with a growth mindset, strong problem-solving skills, and the ability to thrive in a fast-paced environment, you may be a great fit for this position at Armada. Join the team and contribute to the success and growth of the company while working collaboratively towards achieving common goals.
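As a taste of the "self-healing infrastructure" theme, the sketch below shows a naive Python watchdog that probes a health endpoint and restarts a service when it stops responding. It is a toy illustration, not Armada's stack; the URL, systemd unit name, and poll interval are hypothetical, and production AIOps tooling would replace the print-based alert with proper paging.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical service endpoint
SERVICE = "edge-inference.service"             # hypothetical systemd unit

def healthy() -> bool:
    """Probe the service's HTTP health endpoint with a short timeout."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not healthy():
        # Self-healing step: restart the unit, then escalate if that fails too.
        result = subprocess.run(["systemctl", "restart", SERVICE])
        if result.returncode != 0:
            print(f"ALERT: failed to restart {SERVICE}")  # page on-call here
    time.sleep(30)  # hypothetical poll interval
```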

Posted 3 weeks ago

Apply

2.0 - 7.0 years

30 - 40 Lacs

Hyderabad, Gurugram

Work from Office

We are seeking a skilled MLOps/ML Engineer to serve as our subject matter expert for Dataiku DSS. In this pivotal role, you will manage and scale our end-to-end machine learning operations, all of which are built on the Dataiku platform. Key responsibilities include designing automated data pipelines, deploying models as production APIs, ensuring the reliability of scheduled jobs, and championing platform best practices. Extensive, proven experience with Dataiku is mandatory.

Gurgaon - Work from office. 30 to 40 LPA max. Immediate joiners or candidates serving notice (2 weeks).

Data Pipeline Development: Design and implement Extract, Transform, Load (ETL) processes to collect, process, and analyze data from diverse sources.
Workflow Optimization: Develop, configure, and optimize Dataiku DSS workflows to streamline data processing and machine learning operations.
Integration: Integrate Dataiku DSS with cloud platforms (e.g., AWS, Azure, Google Cloud Platform) and big data technologies such as Snowflake, Hadoop, and Spark.
AI/ML Model Development & Implementation: Implement and optimize machine learning models within Dataiku for predictive analytics and AI-driven solutions.
MLOps & DataOps: Deploy data pipelines and AI/ML models within the Dataiku platform.
Dataiku Platform Management: Build, manage and support the Dataiku platform.
Automation: Automate data workflows, monitor job performance, and ensure scalable execution.
Customization: Develop and maintain custom Python/R scripts within Dataiku to enhance analytics capabilities.

Required Skills and Qualifications:
Experience level: 2 to 6 years of hands-on experience with the Dataiku DSS platform and data engineering.
Educational background: Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field.
Technical proficiency: Experience with the Dataiku DSS platform; strong programming skills in Python and SQL; familiarity with cloud services (AWS, Azure, GCP) and big data technologies (Hadoop, Spark).
Analytical skills: Ability to analyze complex data sets and provide actionable insights.
Problem-solving: Strong troubleshooting skills to address and resolve issues in data workflows and models.
Communication: Effective verbal and written communication skills to collaborate with team members and stakeholders.
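For candidates new to Dataiku, here is a minimal sketch of what a custom Python recipe in DSS typically looks like, reading one managed dataset and writing another via the dataiku package that the platform exposes inside recipes. The dataset and column names are hypothetical placeholders and must match datasets that actually exist in the flow.

```python
# Inside a Dataiku DSS Python recipe, the platform provides the `dataiku`
# package for reading and writing managed datasets.
import dataiku

# Hypothetical input dataset name -- must exist in the DSS flow.
orders = dataiku.Dataset("orders_raw").get_dataframe()

# Simple transform step: aggregate order value per customer.
features = (
    orders.groupby("customer_id", as_index=False)
          .agg(total_spend=("order_value", "sum"),
               order_count=("order_id", "count"))
)

# Write the result back to a hypothetical managed output dataset.
dataiku.Dataset("customer_features").write_with_schema(features)
```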

Posted 3 weeks ago

Apply