515 MLOps Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Haryana

On-site

We are looking for a highly skilled Computer Vision & AI Engineer to join our dynamic team. You should have expertise in advanced computer vision and AI development and a strong passion for creating innovative, scalable software products and systems. You will design, develop, and deploy algorithms within software products tailored for the energy sector.

Key Responsibilities:
- Design, implement, and deploy computer vision and AI solutions within software products and solutions.
- Expand both personal and team expertise in 2D and 3D data analytics.
- Stay up to date with the latest trends in computer vision, machine learning, and data science, and identify practical applications of these technologies in the energy sector.
- Develop and maintain the data pipelines and infrastructure needed for AI applications.
- Perform model evaluation, tuning, and validation to improve accuracy and efficiency.
- Write clean, maintainable, and efficient code in accordance with industry best practices.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- At least 3 years of experience in computer vision / AI solution development.
- Proficiency in Python, OpenCV, and machine learning frameworks such as TensorFlow or PyTorch.
- Strong analytical skills and extensive expertise in modern computer vision techniques, including object detection, semantic and instance segmentation, feature extraction, and 3D reconstruction.
- Ability to design, implement, and evaluate algorithms based on specific problem descriptions.
- Familiarity with computer vision and machine learning for large datasets; knowledge of DevOps tools and CI/CD pipelines.
- Excellent problem-solving abilities, effective communication skills in English, and the ability to work independently and collaboratively within a team. Enthusiasm, creativity in problem-solving, critical thinking, and effective communication in a distributed team environment are highly valued.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud; MLOps; and model deployment in production environments.
- Experience with large datasets, 3D data processing, and deep learning on 3D data.
- Familiarity with containerization technologies like Docker and Kubernetes.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Hyderabad, Bengaluru

Work from Office

Responsibilities:
- Design, implement, and optimize infrastructure using ML techniques.
- Ensure data security and compliance standards are met.
- Collaborate with cross-functional teams on project delivery.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

17 - 20 Lacs

Gurugram

Work from Office

Job Summary: We are seeking a skilled Data Scientist with expertise in AI orchestration and embedded systems to support a sprint-based Agile implementation focused on integrating generative AI capabilities into enterprise platforms such as Slack, Looker, and Confluence. The ideal candidate will have hands-on experience with Gemini and a strong understanding of prompt engineering, vector databases, and orchestration infrastructure.

Key Responsibilities:
- Develop and deploy Slack-based AI assistants leveraging Gemini models.
- Design and implement prompt templates tailored to enterprise data use cases (Looker and Confluence).
- Establish and manage an embedding pipeline for Confluence documentation.
- Build and maintain orchestration logic for prompt execution and data retrieval.
- Set up API authentication and role-based access controls for integrated systems.
- Connect and validate vector store operations (e.g., Pinecone, Weaviate, or the Snowflake vector extension).
- Contribute to documentation, internal walkthroughs, and user acceptance testing planning.
- Participate in Agile ceremonies, including daily standups and sprint demos.

Required Qualifications:
- Proven experience with Gemini and large language model deployment in production environments.
- Proficiency in Python, orchestration tools, and prompt engineering techniques.
- Familiarity with vector database technologies and embedding workflows.
- Experience integrating APIs for data platforms such as Looker and Confluence.
- Strong understanding of access control frameworks and enterprise-grade authentication.
- Demonstrated success in Agile, sprint-based project environments.

Preferred Qualifications:
- Experience with Slack app development and deployment.
- Background in MLOps, LLMOps, or AI system orchestration at scale.
- Excellent communication skills and the ability to work in cross-functional teams.
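The embedding and retrieval responsibilities above come down to one loop: embed documents, embed the query, rank by similarity. Here is a hedged, stdlib-only sketch of that ranking step; the `embed` function is a toy bag-of-words stand-in for a real embedding model (such as a Gemini embedding endpoint), and the in-memory dict stands in for a managed vector store like Pinecone or Weaviate.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query: str, docs: dict, k: int = 2) -> list:
    # Rank document ids by similarity to the query; return the best k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)
    return ranked[:k]


docs = {
    "deploy": "how to deploy the looker dashboard",
    "auth": "configuring api authentication and role based access",
    "onboard": "new hire onboarding checklist",
}
print(top_k("api authentication setup", docs, k=1))  # → ['auth']
```

A production pipeline would chunk Confluence pages, batch-embed them, and upsert the vectors with metadata into the vector store; the retrieval step itself is exactly this cosine top-k.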

Posted 2 weeks ago

Apply

5.0 - 7.0 years

17 - 20 Lacs

Noida

Work from Office

Job Summary: We are seeking a skilled Data Scientist with expertise in AI orchestration and embedded systems to support a sprint-based Agile implementation focused on integrating generative AI capabilities into enterprise platforms such as Slack, Looker, and Confluence. The ideal candidate will have hands-on experience with Gemini and a strong understanding of prompt engineering, vector databases, and orchestration infrastructure.

Key Responsibilities:
- Develop and deploy Slack-based AI assistants leveraging Gemini models.
- Design and implement prompt templates tailored to enterprise data use cases (Looker and Confluence).
- Establish and manage an embedding pipeline for Confluence documentation.
- Build and maintain orchestration logic for prompt execution and data retrieval.
- Set up API authentication and role-based access controls for integrated systems.
- Connect and validate vector store operations (e.g., Pinecone, Weaviate, or the Snowflake vector extension).
- Contribute to documentation, internal walkthroughs, and user acceptance testing planning.
- Participate in Agile ceremonies, including daily standups and sprint demos.

Required Qualifications:
- Proven experience with Gemini and large language model deployment in production environments.
- Proficiency in Python, orchestration tools, and prompt engineering techniques.
- Familiarity with vector database technologies and embedding workflows.
- Experience integrating APIs for data platforms such as Looker and Confluence.
- Strong understanding of access control frameworks and enterprise-grade authentication.
- Demonstrated success in Agile, sprint-based project environments.

Preferred Qualifications:
- Experience with Slack app development and deployment.
- Background in MLOps, LLMOps, or AI system orchestration at scale.
- Excellent communication skills and the ability to work in cross-functional teams.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Noida, Kolkata, Pune

Hybrid

Inviting applications for the role of ML/CV Ops Engineer! We are seeking a highly skilled ML/CV Ops Engineer to join our AI Engineering team. This role focuses on operationalizing computer vision models, ensuring they are efficiently trained, deployed, monitored, and retrained across scalable infrastructure or edge environments. The ideal candidate has deep technical knowledge of ML infrastructure and DevOps practices, and hands-on experience with CV pipelines in production. You'll work closely with data scientists, DevOps, and software engineers to ensure computer vision models are always robust, secure, and production-ready.

Key Responsibilities:
- End-to-End Pipeline Automation: Build and maintain ML pipelines for computer vision tasks (data ingestion, preprocessing, model training, evaluation, inference). Use tools like MLflow, Kubeflow, DVC, and Airflow to automate workflows.
- Model Deployment & Serving: Package and deploy CV models using Docker and orchestration platforms like Kubernetes. Use model-serving frameworks (TensorFlow Serving, TorchServe, Triton Inference Server) to enable real-time and batch inference.
- Monitoring & Observability: Set up model monitoring to detect drift, latency spikes, and performance degradation. Integrate custom metrics and dashboards using Prometheus, Grafana, and similar tools.
- Model Optimization: Convert and optimize models using ONNX, TensorRT, or OpenVINO for performance and edge deployment. Implement quantization, pruning, and benchmarking pipelines.
- Edge AI Enablement (optional but valuable): Deploy models on edge devices (e.g., NVIDIA Jetson, Coral, Raspberry Pi) and manage updates and logs remotely.
- Collaboration & Support: Partner with data scientists to productionize experiments and guide model selection based on deployment constraints. Work with DevOps to integrate ML models into CI/CD pipelines and cloud-native architecture.

Required Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- Sound experience in ML engineering, with significant work in computer vision and model operations.
- Strong coding skills in Python and familiarity with scripting for automation.
- Hands-on experience with PyTorch, TensorFlow, OpenCV, and model lifecycle tools like MLflow, DVC, or SageMaker.
- Solid understanding of containerization and orchestration (Docker, Kubernetes).
- Experience with cloud services (AWS/GCP/Azure) for model deployment and storage.

Preferred Qualifications:
- Experience with real-time video analytics or image-based inference systems.
- Knowledge of MLOps best practices (model registries, lineage, versioning).
- Familiarity with edge AI deployment and acceleration toolkits (e.g., TensorRT, DeepStream).
- Exposure to CI/CD pipelines and modern DevOps tooling (Jenkins, GitLab CI, ArgoCD).
- Contributions to open-source ML/CV tooling or experience with labeling workflows (CVAT, Label Studio).

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
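The drift-monitoring responsibility above is often implemented with a statistic such as the Population Stability Index (PSI) over binned model outputs or input features. A minimal, stdlib-only sketch, assuming the bin proportions are already computed upstream (the 0.2 alarm threshold is a common convention, not a universal rule):

```python
import math


def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions.

    Inputs are bin proportions that each sum to 1; a small epsilon
    guards against empty bins. PSI > 0.2 is a common drift alarm.
    """
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )


baseline = [0.25, 0.25, 0.25, 0.25]  # confidence-score bins at training time
today = [0.10, 0.15, 0.30, 0.45]     # same bins observed in production

score = psi(baseline, today)
print(f"PSI = {score:.3f}, drift = {score > 0.2}")  # → PSI ≈ 0.315, drift = True
```

In a monitoring stack this number would be exported as a custom metric (e.g., to Prometheus) and alerted on from a Grafana dashboard.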

Posted 2 weeks ago

Apply

3.0 - 7.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Design, develop, and maintain CI/CD pipelines for ML models.
- Collaborate with data scientists to build scalable model deployment architectures.
- Automate data and model validation, retraining, and monitoring pipelines.
- Ensure reliability, security, and scalability of ML infrastructure.
- Monitor model performance and data drift in production environments.
- Integrate ML models with APIs, web services, and production systems.
- Implement best practices for version control and reproducibility of ML workflows.
- Manage containerized ML applications using Docker and Kubernetes.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in MLOps, DevOps, or ML engineering roles.
- Proficiency in Python, Bash, and scripting languages.
- Hands-on experience with ML platforms (e.g., MLflow, Kubeflow, SageMaker).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Experience with CI/CD tools (e.g., Jenkins, GitHub Actions).
- Strong understanding of the ML model lifecycle and deployment challenges.
- Excellent problem-solving and communication skills.

Preferred Skills:
- Experience with infrastructure-as-code tools (e.g., Terraform).
- Knowledge of data engineering and ETL pipelines.
- Exposure to monitoring tools like Prometheus and Grafana.
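The "automate model validation" responsibility above often takes the shape of a CI gate: the pipeline step fails if the candidate model's metrics fall below agreed thresholds, blocking deployment. A minimal sketch; the metric names and threshold values below are illustrative, not a standard.

```python
import sys


def validate(metrics: dict, thresholds: dict) -> list:
    """Return a list of failed checks; an empty list means the model may ship."""
    return [
        f"{name}: {metrics.get(name, float('-inf'))} < {minimum}"
        for name, minimum in thresholds.items()
        if metrics.get(name, float("-inf")) < minimum
    ]


# In a real pipeline these would be read from the training job's artifacts.
metrics = {"accuracy": 0.91, "auc": 0.88}
thresholds = {"accuracy": 0.90, "auc": 0.85}

failures = validate(metrics, thresholds)
if failures:
    print("BLOCKED:", "; ".join(failures))
    sys.exit(1)  # a nonzero exit code fails the CI job
print("ok to deploy")
```

Wired into Jenkins or GitHub Actions, the nonzero exit code is what stops the deploy stage from running.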

Posted 2 weeks ago

Apply

5.0 - 9.0 years

18 - 22 Lacs

Bengaluru

Work from Office

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, along with deep expertise in BFSI, healthcare, and life sciences, to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital.

Key Responsibilities:
- Design, develop, and maintain CI/CD pipelines for ML models and data workflows.
- Collaborate with data science teams to productionize models using tools like MLflow, Kubeflow, or SageMaker.
- Automate training, validation, testing, and deployment of machine learning models.
- Monitor model performance, drift, and retraining needs.
- Ensure version control of datasets, code, and model artifacts.
- Implement model governance, audit trails, and reproducibility.
- Optimize model serving infrastructure (REST APIs, batch/streaming inference).
- Integrate ML solutions with cloud services (AWS, Azure, GCP).
- Ensure security, compliance, and reliability of ML systems.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 4+ years of experience in MLOps, DevOps, or ML engineering roles.
- Strong experience with ML pipeline tools (MLflow, Kubeflow, TFX, SageMaker Pipelines).
- Proficiency in containerization and orchestration tools (Docker, Kubernetes, Airflow).
- Strong Python coding skills and familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch).
- Experience with cloud platforms (AWS, Azure, GCP) and their ML services.
- Knowledge of CI/CD tools (GitLab CI/CD, Jenkins, GitHub Actions).
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Sentry).
- Understanding of data versioning (DVC, LakeFS) and feature stores (Feast, Tecton).
- Strong grasp of model testing, validation, and monitoring in production environments.

Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read our Job Applicant Privacy Policy at apexon.com.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning and upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
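The dataset and model-artifact version-control requirement above is what tools like DVC implement through content addressing: the version id is a hash of the bytes, so identical data always maps to the same id and any change produces a new one. A minimal sketch of that idea (the 12-character truncation is an illustrative choice, not DVC's exact format):

```python
import hashlib


def content_version(data: bytes, algo: str = "sha256") -> str:
    """Content-addressed version id: identical bytes always hash to the same id."""
    return hashlib.new(algo, data).hexdigest()[:12]


v1 = content_version(b"id,label\n1,cat\n2,dog\n")
v2 = content_version(b"id,label\n1,cat\n2,dog\n3,bird\n")
print(v1 != v2)  # any change to the data yields a new version id → True
```

Committing these ids (rather than the data itself) to git is what makes a training run reproducible: the code revision pins exactly which bytes of data and model it used.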

Posted 2 weeks ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Pune

Work from Office

Position: Architect, AI & ML
Experience: 9+ years
Job Location: Pune

We are seeking a dynamic and experienced leader to drive the growth of our AI practice with a focus on Generative AI, advanced NLP solutions, and Large Language Models (LLMs). This role is ideal for a seasoned professional who combines technical expertise with exceptional leadership skills, customer-facing experience, and a vision for scaling teams and capabilities.

Skills & Qualifications:
- Technical expertise: extensive experience with NLP techniques and multi-class/multi-label text classification. Hands-on experience fine-tuning private LLMs (e.g., LLaMA, Gemma). Proficiency in PyTorch, Hugging Face, LangChain, Haystack, and related frameworks. Strong knowledge of model orchestration tools like MLflow or Kubeflow. Familiarity with RAG techniques and vector stores (e.g., Pinecone, ChromaDB).
- Strategic mindset: visionary thinking with the ability to translate emerging AI trends into actionable business strategies.
- Leadership & communication: proven ability to lead and inspire teams, with a track record of scaling AI practices. Exceptional communication and presentation skills, capable of engaging diverse stakeholders confidently.

Key Responsibilities:
- Leadership & growth: lead, mentor, and grow a high-performing AI team, fostering innovation and collaboration. Develop and execute strategies to expand the practice and deliver measurable business value to clients.
- Customer engagement: serve as a confident and articulate interface with clients, ensuring clear communication of AI strategies, solutions, and outcomes. Build trusted partnerships with clients, understanding their needs and aligning solutions to their goals.
- AI solution development: design and implement state-of-the-art NLP solutions, focusing on multi-class and multi-label text classification. Fine-tune and deploy private LLMs (e.g., LLaMA, Gemma) for tailored business applications. Develop Retrieval-Augmented Generation (RAG) pipelines leveraging vector databases like Pinecone or ChromaDB for high-performance solutions.
- Operational excellence: oversee end-to-end model lifecycle management, including training, deployment, and monitoring. Integrate explainability into AI models, ensuring transparency and trust in decision-making. Collaborate with external LLM providers (e.g., OpenAI, Anthropic) to enhance and integrate AI capabilities.

Good to have:
- Experience with multi-modal AI models (e.g., image-text, text-audio models).
- Expertise in Databricks for scalable model development and deployment.
- Knowledge of explainable-AI tools (e.g., Captum) for interpretable models.
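The multi-class versus multi-label distinction called out above comes down to the decision rule applied to per-label scores: multi-class picks exactly one label (argmax over softmax outputs), while multi-label keeps every label whose independent (typically sigmoid) score clears a threshold. A framework-free sketch of just that decision step, assuming the scores come from any upstream model:

```python
def multilabel_decode(scores: dict, threshold: float = 0.5) -> list:
    """Multi-label: keep every label whose score clears the threshold."""
    return sorted(label for label, s in scores.items() if s >= threshold)


def multiclass_decode(scores: dict) -> str:
    """Multi-class: exactly one label, the argmax."""
    return max(scores, key=scores.get)


scores = {"billing": 0.91, "refund": 0.62, "spam": 0.08}
print(multilabel_decode(scores))   # → ['billing', 'refund']
print(multiclass_decode(scores))   # → billing
```

The threshold is a tunable trade-off: lowering it raises recall at the cost of precision, and per-label thresholds are common in practice.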

Posted 2 weeks ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office

We seek a Senior Full Stack AI Engineer passionate about rapidly developing and demonstrating Agentic AI and workflow solutions. This role focuses on quickly translating innovative ideas into tangible Proofs of Concept (POCs), leveraging full-stack capabilities to build attractive user interfaces, integrate sophisticated AI agents, and deploy working solutions for impactful demonstrations.

Key Responsibilities:
- Rapid Agentic AI POC development: lead agile development of end-to-end Agentic AI and complex workflow POCs from concept to demonstration, ensuring rapid iteration and delivery. Design and implement multi-turn conversational Agents, Tools, and Chains.
- Full-stack application development: develop attractive and intuitive front-end user interfaces (e.g., React, Next.js) and robust back-end APIs (e.g., FastAPI, Flask). Implement comprehensive testing strategies across the full stack.
- AI orchestration & integration: use frameworks like LangGraph (or equivalents such as CrewAI, AutoGen) to orchestrate complex AI agent systems and multi-step workflows. Implement Agent-to-Agent (A2A) communication and adhere to the Model Context Protocol (MCP) for inter-model interactions. Apply advanced prompt engineering techniques.
- Deployment & demonstration: containerize applications (Docker) and deploy on Kubernetes for scalable POCs. Establish and use CI/CD pipelines for continuous integration and delivery. Prepare and deliver compelling technical demonstrations.

Experience: 6+ years in ML and data engineering, with 2+ years on LLM/GenAI projects and a proven rapid-prototyping track record.

Technical Skills:
- Programming: Python, JavaScript/TypeScript (React, Next.js).
- AI/ML fundamentals: LLMs, prompt engineering, Agentic AI architectures.
- Agentic AI & LLM frameworks: LangGraph, CrewAI, AutoGen, A2A, MCP, Chains, Tools, Agents.
- Front-end: React, Next.js. Back-end: FastAPI, Flask.
- Databases: vector databases (Pinecone, Weaviate), SQL/NoSQL.
- DevOps: Docker, Kubernetes (K8s), CI/CD, testing. Cloud platforms: AWS/Azure/GCP.

Good to have:
- Experience with cloud-native AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
- Familiarity with UI/UX design principles for creating engaging interfaces.
- Experience with asynchronous programming patterns in Python.
- Knowledge of caching mechanisms (e.g., Redis) for optimizing application performance.
- Understanding of general machine learning concepts and common algorithms beyond LLMs.
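The Agents/Tools/Chains orchestration described above rests on a simple dispatch loop: decide which tool a query needs, call it, and return the result. A deliberately tiny, framework-free sketch of that loop; the keyword-based `decide` function is a stand-in for the LLM's tool-choice step that LangGraph- or CrewAI-style frameworks would perform.

```python
# Registry of callable tools; the calculator's restricted eval is demo-only.
TOOLS = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),
    "echo": lambda q: q,
}


def decide(query: str) -> tuple:
    # A real agent would let the LLM choose the tool and its arguments;
    # here a crude keyword check fakes that decision so the loop is runnable.
    if any(c in query for c in "+-*/"):
        return "calculator", query
    return "echo", query


def run_agent(query: str) -> str:
    tool, arg = decide(query)
    return TOOLS[tool](arg)


print(run_agent("2 + 3 * 4"))  # → 14
```

Multi-turn agents wrap this dispatch in a loop that feeds tool results back into the model until it decides to answer; MCP standardizes how such tools are described and invoked across systems.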

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The Spec Analytics Intmd Analyst role is a developing professional position in which you will independently handle most problems and have the freedom to solve complex issues. You will utilize your in-depth specialty knowledge, along with industry standards, to integrate with the team and achieve sub-function/job family objectives. The role involves applying analytical thinking and data analysis tools to make judgments and recommendations based on factual information, with attention to detail. You will deal with variable issues that could have broader business impacts and will directly influence the core activities of the business through close contact. Strong communication and diplomacy skills are crucial for exchanging potentially complex information effectively.

Responsibilities:
- Working with large and complex data sets to evaluate, recommend, and support business strategies.
- Identifying and compiling data sets using tools like SQL and Access to predict, improve, and measure business outcomes.
- Documenting data requirements, data collection, processing, cleaning, and exploratory data analysis.
- Specializing in the marketing, risk, digital, and AML fields.
- Assessing risk in business decisions, with consideration for the firm's reputation and compliance with laws and regulations.

Skills and Experience: As a successful candidate, you should ideally have:
- 5+ years of experience in data science, machine learning, or related fields.
- Advanced process management skills; detail-oriented and organized.
- Curiosity and willingness to learn new skill sets, particularly in artificial intelligence.
- Strong programming skills in Python and proficiency in data science libraries such as scikit-learn, TensorFlow, PyTorch, and Transformers.
- Experience with statistical modeling techniques, data visualization tools, and GenAI solutions.
- Familiarity with agentic AI frameworks and MLOps.

Education: Bachelor's/University degree or equivalent experience.

This job description provides an overview of the work performed; additional duties may be assigned as needed.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics is on leveraging data to drive insights and make informed business decisions. Utilizing advanced analytics techniques to help clients optimize their operations and achieve strategic goals is key. In data analysis at PwC, the emphasis is on utilizing advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. Skills in data manipulation, visualization, and statistical modeling play a crucial role in supporting clients in solving complex business problems.

Candidates with 4+ years of hands-on experience are sought for the position of Senior Associate in supply chain analytics. Successful candidates should possess proven expertise in supply chain analytics across domains such as demand forecasting, inventory optimization, logistics, segmentation, and network design. Hands-on experience with optimization methods such as linear programming, mixed-integer programming, and scheduling optimization is required, along with proficiency in forecasting and machine learning techniques and a strong command of statistical modeling, testing, and inference. Familiarity with GCP tools like BigQuery, Vertex AI, Dataflow, and Looker is also necessary.

Required skills include building data pipelines and models for forecasting, optimization, and scenario planning; strong SQL and Python programming skills; experience deploying models in a GCP environment; and knowledge of orchestration tools like Cloud Composer (Airflow). Nice-to-have skills include familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools, as well as strong communication and stakeholder engagement skills at the executive level.

The Senior Associate assists analytics projects within the supply chain domain, driving the design, development, and delivery of data science solutions. They are expected to interact with and advise consultants and clients as subject matter experts, conduct analysis using advanced analytics tools, and implement quality control measures to ensure deliverable integrity. Validating analysis outcomes, making presentations, and contributing to knowledge- and firm-building activities are also part of the role. The ideal candidate should hold a degree in BE / B.Tech / MCA / M.Sc / M.E / M.Tech / a Master's degree / MBA from a reputed institute.
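As one concrete example of the inventory-optimization work mentioned above, the classic Economic Order Quantity (EOQ) model picks the order size that minimizes the sum of ordering and holding costs. A short sketch; the demand and cost figures below are illustrative, not client data.

```python
import math


def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic Order Quantity: the order size minimizing ordering + holding cost.

    EOQ = sqrt(2 * D * S / H), where D is annual demand (units),
    S is the fixed cost per order, and H is holding cost per unit per year.
    """
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)


# Illustrative numbers: 12,000 units/yr, $50 per order, $2/unit/yr to hold.
q = eoq(12_000, 50, 2)
print(round(q))  # → 775
```

Richer formulations (capacity limits, multiple SKUs, lead times) are where the linear and mixed-integer programming experience named above comes in.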

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining a company that specializes in Identity and Access Management (IAM) and Customer Identity and Access Management (CIAM), offering advanced solutions to enhance security for your workforce, customers, and partners. The company also provides cutting-edge security solutions for various popular CMS and project management platforms such as Atlassian, WordPress, Joomla, Drupal, Shopify, BigCommerce, and Magento. The solutions offered are precise, effective, and focused on improving overall security measures. As an AI/ML Engineer with our team, you will play a crucial role in developing innovative AI-powered products and solutions. The ideal candidate for this position should possess a minimum of 10 years of hands-on experience in creating and implementing advanced AI and ML models and related software systems. This is a full-time employee position based in Baner, Pune. Your responsibilities will include developing machine learning and deep learning models and algorithms to address complex business challenges, enhance processes, and improve product functionality. You will work on deploying personalized large language models (LLMs), developing systems for document parsing, named entity recognition (NER), retrieval-augmented generation (RAG), and chatbots, as well as building robust data and ML pipelines for production scalability and performance. Additionally, you will optimize machine learning models for better performance, scalability, and accuracy using techniques like hyperparameter tuning and model optimization. It is crucial to write high-quality, production-ready code using frameworks such as PyTorch or TensorFlow and stay updated on the latest advancements in AI/ML technologies and tools. To qualify for this role, you should hold a Bachelor's or Master's Degree in Computer Science, Data Science, Computational Linguistics, Natural Processing (NLP), or related fields. 
You must have extensive experience in developing and deploying machine learning models and algorithms, with proficiency in AI/ML frameworks such as TensorFlow, PyTorch, and scikit-learn. Strong programming skills in Python, Java, or C++ are necessary, along with familiarity with web frameworks like FastAPI, Flask, Django, and agentic AI frameworks such as LangChain, LangGraph, AutoGen, or Crew AI. Knowledge of Data Science and MLOps, including ML/DL, Generative AI, containerization (Docker), and orchestration (Kubernetes) for deployment, is essential. Experience with cloud platforms like AWS, Azure, and GCP, as well as AI/ML deployment tools, is highly beneficial. In this role, you will have the opportunity to work with a team of talented individuals in a stable, collaborative, and supportive work environment. You will be constantly exposed to new technologies and have the chance to expand your skills and knowledge in the field of AI/ML. Your communication and collaboration skills will be put to the test as you collaborate with stakeholders to understand business requirements, define project objectives, and deliver AI/ML solutions that meet customer needs and drive business value.
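The retrieval step of the RAG systems described above can be sketched in miniature: given embedding vectors, retrieval is just a nearest-neighbour search by similarity. The documents and hand-made toy vectors below are invented for illustration; a real pipeline would produce embeddings with an embedding model and store them in a vector database.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank documents by similarity to the query embedding.
    scored = [(cosine(query_vec, v), doc_id) for doc_id, v in doc_vecs.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

docs = {
    "reset-password": [0.9, 0.1, 0.0],
    "billing-faq":    [0.1, 0.8, 0.2],
    "sso-setup":      [0.7, 0.2, 0.3],
}
print(retrieve([1.0, 0.0, 0.1], docs))  # the two most query-like docs first
```

The retrieved documents would then be packed into the LLM prompt as context, which is where document parsing and chunking quality pays off.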

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

ahmedabad, gujarat

On-site

The ideal candidate for this position in Ahmedabad should be a graduate with at least 3 years of experience. At Bytes Technolab, we strive to create a cutting-edge workplace infrastructure that empowers our employees and clients. Our focus on utilizing the latest technologies enables our development team to deliver high-quality software solutions for a variety of businesses. You will be responsible for leveraging your 3+ years of experience in Machine Learning and Artificial Intelligence to contribute to our projects. Proficiency in Python programming and relevant libraries such as NumPy, Pandas, and scikit-learn is essential. Hands-on experience with frameworks like PyTorch, TensorFlow, Keras, Facenet, and OpenCV will be key to your role. Your role will involve working with GPU acceleration for deep learning model development using CUDA and cuDNN. A strong understanding of neural networks, computer vision, and other AI technologies will be crucial. Experience with Large Language Models (LLMs) like GPT, BERT, and LLaMA, and familiarity with frameworks such as LangChain, AutoGPT, and BabyAGI are preferred. You should be able to translate business requirements into ML/AI solutions and deploy models on cloud platforms like AWS SageMaker, Azure ML, and Google AI Platform. Proficiency in ETL pipelines, data preprocessing, and feature engineering is required, along with experience in MLOps tools like MLflow, Kubeflow, or TensorFlow Extended (TFX). Expertise in optimizing ML/AI models for performance and scalability across different hardware architectures is necessary. Knowledge of Natural Language Processing (NLP), Reinforcement Learning, and data versioning tools like DVC or Delta Lake is a plus. Skills in containerization tools like Docker and orchestration tools like Kubernetes will be beneficial for scalable deployments. You should have experience in model evaluation, A/B testing, and establishing continuous training pipelines.
Experience working in Agile/Scrum environments with cross-functional teams is important, as is an understanding of ethical AI principles, model fairness, and bias mitigation techniques. Familiarity with CI/CD pipelines for machine learning workflows and the ability to communicate complex concepts to technical and non-technical stakeholders will be valuable.
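As a rough illustration of the A/B testing duty mentioned above, a two-proportion z-test is one common way to judge whether a new model variant's conversion (or accuracy) rate differs significantly from the incumbent's. The counts below are invented, and this is a sketch of one standard test, not a prescription.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    # z-statistic for comparing two observed rates under a pooled
    # null hypothesis that both variants share the same true rate.
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B (new model) converts 560/4000 vs. variant A's 500/4000.
z = two_proportion_z(500, 4000, 560, 4000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

In a continuous-training setup, a check like this typically gates promotion of the challenger model to production.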

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

The Vivriti Group is a leading player in the mid-market lending sector, providing tailored debt solutions to mid-sized enterprises. The group comprises two primary businesses: Vivriti Capital Limited: This is a systemically important Non-Banking Financial Company (NBFC ND-SI) that is regulated by the Reserve Bank of India (RBI). Vivriti Capital has successfully disbursed over USD 3 billion to more than 300 enterprise borrowers and boasts a CRISIL rating of A+. Vivriti Asset Management: This segment operates as a fixed-income fund manager, overseeing multiple Alternative Investment Funds (AIFs). With commitments exceeding USD 550 million from over 900 institutional and private contributors, Vivriti AMC has invested upwards of USD 600 million across 90 entities. In the role of Manager - AI Product, based in Chennai or Bangalore, you will be responsible for the complete lifecycle management of AI/ML products geared towards wholesale lending. This includes overseeing ideation, strategy formulation, development, launch, and ongoing iteration. Your key collaborators will include data scientists, engineers, designers, credit, risk personnel, and other business stakeholders to ensure that our AI initiatives align with user requirements and business objectives.
Your primary responsibilities will include defining and executing the AI product roadmap in partnership with leadership and cross-functional teams, converting business challenges into precise AI/ML product requirements, collaborating with data experts and engineers to establish success metrics and model evaluation frameworks, prioritizing product enhancements based on user feedback, data insights, and business impact, ensuring model performance, explainability, fairness, and compliance with ethical guidelines, driving experimentation and iteration for continuous enhancement, acting as a bridge between engineering, data science, design, and stakeholders to secure timely and high-quality delivery, and keeping abreast of emerging AI trends, technologies, and industry best practices. You will also be tasked with championing a user-centric design approach in collaboration with UX/UI designers to craft intuitive, efficient, and engaging user experiences for complex financial workflows. Additionally, you will define key performance indicators (KPIs) for product success and provide regular performance updates to stakeholders. As for requirements, the ideal candidate should possess at least 3 years of product management experience, particularly with AI/ML-based products, a fundamental grasp of machine learning concepts, model lifecycle, and data infrastructure, a track record of translating intricate technical requirements into scalable product solutions, familiarity with MLOps, model evaluation metrics, and deployment pipelines, exceptional communication and stakeholder management abilities, and experience with agile product development and related tools such as Jira, Confluence, Figma, and GitHub.
Desired but not mandatory qualifications include hands-on experience with data analysis tools such as SQL, Python, and Jupyter; exposure to LLMs, generative AI, or AI applications in NLP, computer vision, or recommendation systems; a background in computer science, engineering, data science, or related fields; and prior exposure to startup or high-growth environments.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 43,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details: Function/Department: Advanced Analytics Location: Bangalore, India Employment Type: Full-time Role Overview – Full stack Data Scientist We are seeking a full stack data scientist in the Advanced Analytics team, who will be at the forefront of developing innovative, data-driven solutions with bleeding-edge machine learning and AI, end to end.
An AI/ML Data Scientist is a technical role that uses AI and machine learning techniques to automate underwriting processes, improve claims outcomes and/or risk solutions. This person will develop data science solutions that require data engineering, AI/ML algorithms, and Ops engineering skills to build and deploy them for the business. The ideal candidate for this role is someone with a strong education in computer science, data science, statistics, applied math or a related field, and who is eager to tackle problems with innovative thinking without compromising detailed business insights. You are adept at solving diverse problems by utilizing a variety of different tools, strategies, machine learning techniques, algorithms and programming languages. Major Responsibilities: Work with business partners globally, determine analyses to be performed, manage deliverables against timelines, present results and implement the model. Use a broad spectrum of machine learning, text and image AI models to extract impactful features from structured/unstructured data. Develop and implement models that help with automation, insight generation, and smart decision-making; ensure that the model is able to meet the desired KPIs post-production. Develop and deploy scalable and efficient machine learning models. Package and publish code and solutions in reusable Python package formats (PyPI, scikit-learn pipelines, etc.). Keep the code ready for seamless building of CI/CD pipelines and workflows for machine learning applications. Ensure high-quality code that meets business objectives, quality standards and secure web development guidelines. Build reusable tools to streamline the modeling pipeline and share knowledge. Build real-time monitoring and alerting systems for machine learning systems. Develop and maintain automated testing and validation infrastructure. Troubleshoot pipelines across multiple touchpoints like the CI server, artifact storage and deployment cluster.
Implement best practices for versioning, monitoring and reusability. Skills and Qualifications: Sound understanding of ML concepts, supervised/unsupervised learning, ensemble techniques, and hyperparameter tuning. Good knowledge of Random Forest, XGBoost, SVM, clustering, building data pipelines in Azure/Databricks, deep learning models, OpenCV, BERT and newer transformer models for NLU, and LLM applications in ML. Strong experience with Azure cloud computing and containerization technologies (like Docker, Kubernetes). 4-6 years of experience in delivering end-to-end data science models. Experience with Python/OOPs programming languages and data science frameworks (Pandas, NumPy, TensorFlow, Keras, PyTorch, sklearn). Knowledge of DevOps tools such as Git, Jenkins, Sonar, Nexus is a must. Building Python wheels and debugging the build process. Data pipeline building and debugging (by creating and following log traces). Basic knowledge of DevOps practices. Concepts of unit testing and test-driven development. SDE skills like OOP and functional programming are an added advantage. Experience with Databricks and its ecosystem is an added advantage. An educational background in analytics, statistics, mathematics or a related domain.
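The unit-testing and test-driven development concepts listed above can be illustrated with Python's built-in unittest: the tests pin down the behaviour before the (one-line) implementation exists. The clip_probability helper is a hypothetical example, not part of any named codebase.

```python
import unittest

def clip_probability(p):
    # Clamp a raw model score into the valid probability range [0, 1].
    return min(max(p, 0.0), 1.0)

class TestClipProbability(unittest.TestCase):
    # Written first, TDD-style, as an executable specification.
    def test_in_range_value_is_untouched(self):
        self.assertEqual(clip_probability(0.42), 0.42)

    def test_out_of_range_values_are_clamped(self):
        self.assertEqual(clip_probability(1.7), 1.0)
        self.assertEqual(clip_probability(-0.3), 0.0)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestClipProbability))
print(result.wasSuccessful())
```

In a CI pipeline (Jenkins, GitHub Actions, etc.) the same tests would run on every commit, gating the build of the Python wheel.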

Posted 2 weeks ago

Apply

15.0 - 19.0 years

0 Lacs

karnataka

On-site

As a Data Science Associate Director at Accenture Strategy & Consulting, Global Network Data & AI practice in the Resources team, you will be part of a dynamic group that helps clients grow their businesses through analytics and insights. Your role will involve working closely with clients and stakeholders to drive business growth, identify new opportunities, and develop advanced analytics models for various client problems. Your responsibilities will include solution architecture, design, deployment, and monitoring of analytics models, as well as collaborating with internal teams to drive sales and innovation. You will be expected to lead a team of data analysts, work on large-scale datasets, and provide thought leadership in key capability areas such as tools & technology and asset development. Qualifications and Experience: - Bachelor's or Master's degree in Mathematics, Statistics, Computer Science, Computer Engineering, or related field - 15+ years of experience as a Data Science professional focusing on cloud services - Strong knowledge of Statistical Modeling, Machine Learning algorithms, and Experimental design - Expertise in experimental test design and the ability to derive business strategies from statistical findings - Experience in Utilities, Energy, Chemical, and Natural Resources industries preferred - Proficiency in programming languages like Python, PySpark, SQL, or Scala - Implementation of MLOps practices for streamlining the machine learning lifecycle - Understanding of data integration, data modeling, and data warehousing concepts - Excellent analytical, problem-solving, communication, and collaboration skills - Relevant certifications in Azure Data Services or cloud data engineering are highly desirable If you are a strategic thinker with excellent communication skills, a passion for innovation, and a drive to make a difference in the world of data science, we invite you to join our team at Accenture Strategy & Consulting.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

indore, madhya pradesh

On-site

You will be working as a Lead Backend Engineer at Team Geek Solutions, a company based in Noida/Indore, with a mission to solve real-world problems using scalable backend systems and next-gen AI technologies. As part of our collaborative and forward-thinking culture, you will play a crucial role in building impactful products driven by cutting-edge technology. Your primary responsibility will be to lead a team of backend and AI engineers, guiding them in developing robust backend solutions using Java and Python. You will leverage your expertise in GenAI and Large Language Models (LLMs) to architect scalable backend architectures and AI-driven solutions, pushing the boundaries of AI capabilities. Key Responsibilities: - Lead and mentor a team of backend and AI engineers to deliver innovative solutions. - Architect and develop robust backend solutions using Java and Python. - Utilize state-of-the-art LLMs like OpenAI and HuggingFace models to build solutions using LangChain, Transformer models, and PyTorch. - Implement advanced techniques such as Retrieval-Augmented Generation (RAG) to enhance LLM-based systems. - Drive the development of end-to-end ML pipelines, including training, fine-tuning, evaluation, deployment, and monitoring (MLOps). - Collaborate with business and product teams to identify use cases and deliver AI-powered solutions. - Stay abreast of the latest advancements in AI and integrate best practices into development workflows. Required Skills: - Proficiency in Java and Python with hands-on experience in both languages. - Strong understanding of Data Structures & Algorithms. - Deep expertise in Transformer-based models, LLM fine-tuning, and deployment. - Hands-on experience with PyTorch, LangChain, and Python web frameworks (e.g., Flask, FastAPI). - Solid database skills, particularly with SQL. - Experience in deploying and managing ML models in production environments. - Leadership experience in managing small to mid-sized engineering teams. 
Preferred / Good-to-Have Skills: - Familiarity with LLMOps tools and techniques. - Exposure to cloud platforms like AWS/GCP/Azure for model deployment. - Excellent written and verbal communication skills suitable for technical and non-technical audiences. - A strong passion for innovation and building AI-first products. If you are a tech enthusiast with a knack for problem-solving and a drive to innovate, we welcome you to join our team at Team Geek Solutions and contribute to shaping the future of AI-driven solutions.
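One small piece of the Retrieval-Augmented Generation work described above is assembling retrieved chunks into the prompt under a context budget. A minimal sketch, with invented chunk text and an arbitrary character budget standing in for a token budget:

```python
def build_rag_prompt(question, chunks, max_chars=1000):
    # Pack retrieved chunks into the context section, stopping
    # before the context budget is exceeded.
    context, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > max_chars:
            break
        context.append(chunk)
        used += len(chunk)
    return (
        "Answer using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(context) +
        f"\n\nQuestion: {question}\nAnswer:"
    )

chunks = ["Refunds are processed within 5 business days.",
          "Contact support via the in-app chat."]
print(build_rag_prompt("How long do refunds take?", chunks))
```

A production system would count tokens rather than characters and rank chunks by retrieval score before packing, but the shape of the step is the same.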

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

At PwC, our data and analytics team focuses on utilizing data to drive insights and support informed business decisions. We leverage advanced analytics techniques to assist clients in optimizing their operations and achieving strategic goals. As a data analysis professional at PwC, your role will involve utilizing advanced analytical methods to extract insights from large datasets, enabling data-driven decision-making. Your expertise in data manipulation, visualization, and statistical modeling will be pivotal in helping clients solve complex business challenges. PwC US - Acceleration Center is currently seeking a highly skilled MLOps/LLMOps Engineer to play a critical role in deploying, scaling, and maintaining Generative AI models. This position requires close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure the seamless integration and operation of GenAI models within production environments at PwC and for our clients. The ideal candidate will possess a strong background in MLOps practices and a keen interest in Generative AI technologies. With a preference for candidates with 4+ years of hands-on experience, core qualifications for this role include: - 3+ years of experience developing and deploying AI models in production environments, alongside 1 year of working on proofs of concept and prototypes. - Proficiency in software development, including building and maintaining scalable, distributed systems. - Strong programming skills in languages such as Python and familiarity with ML frameworks like TensorFlow and PyTorch. - Knowledge of containerization and orchestration tools like Docker and Kubernetes. - Understanding of cloud platforms such as AWS, GCP, and Azure, including their ML/AI service offerings. - Experience with continuous integration and delivery tools like Jenkins, GitLab CI/CD, or CircleCI. - Familiarity with infrastructure as code tools like Terraform or CloudFormation. 
Key Responsibilities: - Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability. - Design and manage CI/CD pipelines specialized for ML workflows, including deploying generative models like GANs, VAEs, and Transformers. - Monitor and optimize AI model performance in production, utilizing tools for continuous validation, retraining, and A/B testing. - Collaborate with data scientists and ML researchers to translate model requirements into scalable operational frameworks. - Implement best practices for version control, containerization, and orchestration using industry-standard tools. - Ensure compliance with data privacy regulations and company policies during model deployment. - Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance. - Stay updated with the latest MLOps and Generative AI developments to enhance AI capabilities. Project Delivery: - Design and implement scalable deployment pipelines for ML/GenAI models to transition them from development to production environments. - Oversee the setup of cloud infrastructure and automated data ingestion pipelines to meet GenAI workload requirements. - Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures. Client Engagement: - Collaborate with clients to understand their business needs and design ML/LLMOps solutions. - Present technical approaches and results to technical and non-technical stakeholders. - Conduct training sessions and workshops for client teams. - Create comprehensive documentation and user guides for clients. Innovation And Knowledge Sharing: - Stay updated with the latest trends in MLOps/LLMOps and Generative AI. - Develop internal tools and frameworks to accelerate model development and deployment. - Mentor junior team members and contribute to technical publications. 
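Drift monitoring, one of the production duties above, is often approximated with the Population Stability Index (PSI) between the training-time score distribution and the live one. A stdlib-only sketch with made-up bin fractions; the 0.25 "major drift" cutoff is a common rule of thumb, not a universal standard:

```python
import math

def psi(expected, actual, eps=1e-6):
    # Population Stability Index between two binned distributions
    # (fractions summing to 1). Larger values mean more drift.
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

train_bins = [0.25, 0.50, 0.25]   # score distribution at training time
live_bins  = [0.10, 0.45, 0.45]   # distribution observed in production
print(round(psi(train_bins, live_bins), 3))
```

A monitoring job would compute this per feature and per model score on a schedule, raising an alert (and possibly triggering retraining) when the index crosses the chosen threshold.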
Professional And Educational Background: - Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

kochi, kerala

On-site

As a highly skilled Senior Machine Learning Engineer, you will leverage your expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. Your responsibilities will include developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments. You will be responsible for designing, training, fine-tuning, and optimizing deep learning models (CNNs, RNNs, Transformers) for various applications such as NLP, computer vision, or multimodal tasks. Additionally, you will fine-tune and adapt LLMs for domain-specific tasks like text generation, summarization, and semantic similarity. Experimenting with RLHF (Reinforcement Learning from Human Feedback) and alignment techniques will also be part of your role. In the realm of Deployment & Scalability (MLOps/LLMOps), you will build and maintain end-to-end ML pipelines for training, evaluation, and deployment. Deploying LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT is crucial. You will optimize models for low-latency, high-throughput inference and implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow. Monitoring & Optimization will involve setting up logging, monitoring, and alerting for model performance metrics such as drift, latency, and accuracy. Collaborating with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models will also be part of your responsibilities. The ideal candidate will possess 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs. Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks is essential. 
Experience with model deployment tools like Docker, Kubernetes, and FastAPI, along with knowledge of MLOps/LLMOps best practices and familiarity with cloud platforms (AWS, GCP, Azure), are required qualifications. Preferred qualifications include contributions to open-source LLM projects, showcasing your commitment to advancing the field of machine learning.
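Low-latency, high-throughput inference, as called out above, is usually tracked through percentile latencies (p50/p95/p99) rather than averages, since a single slow request can hide in a mean. A small sketch using Python's statistics module, with invented sample latencies:

```python
import statistics

def latency_summary(samples_ms):
    # p50/p95/p99 from raw request latencies, the usual SLO gauges
    # for a model-serving endpoint.
    qs = statistics.quantiles(samples_ms, n=100)  # 99 cut points
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

samples = [12, 14, 15, 15, 16, 18, 20, 22, 25, 140]  # one slow outlier
print(latency_summary(samples))
```

Note how the outlier barely moves the p50 but dominates the tail percentiles; that is why serving dashboards alert on p95/p99 rather than the mean.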

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 0 Lacs

Kochi, Hyderabad, Thiruvananthapuram

Work from Office

We Are Hiring: MLOps Engineer. Location: Hyderabad / Trivandrum / Kochi. Skills: MLOps, Python, AWS, SageMaker, CI/CD Pipeline. Experience: 4+ Years. Notice Period: 0-15 Days Only. Note: All the above skills must be clearly mentioned in your CV (mandatory requirement). Share your CV: shivani.awadhiya@alikethoughts.com. Immediate joiners only; send your CV by mail or apply here. Candidates willing to relocate may also apply.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office

Job Summary : We are seeking a skilled Data Scientist with expertise in AI orchestration and embedded systems to support a sprint-based Agile implementation focused on integrating generative AI capabilities into enterprise platforms such as Slack, Looker, and Confluence. The ideal candidate will have hands-on experience with Gemini and a strong understanding of prompt engineering, vector databases, and orchestration infrastructure. Key Responsibilities : - Develop and deploy Slack-based AI assistants leveraging Gemini models. - Design and implement prompt templates tailored to enterprise data use cases (Looker and Confluence). - Establish and manage an embedding pipeline for Confluence documentation. - Build and maintain orchestration logic for prompt execution and data retrieval. - Set up API authentication and role-based access controls for integrated systems. - Connect and validate vector store operations (e.g., Pinecone, Weaviate, or Snowflake vector extension). - Contribute to documentation, internal walkthroughs, and user acceptance testing planning. - Participate in Agile ceremonies including daily standups and sprint demos. Required Qualifications : - Proven experience with Gemini and large language model deployment in production environments. - Proficiency in Python, orchestration tools, and prompt engineering techniques. - Familiarity with vector database technologies and embedding workflows. - Experience integrating APIs for data platforms such as Looker and Confluence. - Strong understanding of access control frameworks and enterprise-grade authentication. - Demonstrated success in Agile, sprint-based project environments. Preferred Qualifications : - Experience working with Slack app development and deployment. - Background in MLOps, LLMOps, or AI system orchestration at scale. - Excellent communication skills and ability to work in cross-functional teams.
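An embedding pipeline for documentation, as described above, typically starts by chunking pages before embedding them. A minimal character-window chunker with overlap; the sizes and the stand-in document below are arbitrary choices for illustration, and real pipelines often split on sentence or heading boundaries instead:

```python
def chunk_text(text, size=200, overlap=40):
    # Fixed-size character windows with overlap, so content that
    # straddles a boundary still appears intact in at least one chunk.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "word " * 200  # stand-in for an exported Confluence page
pieces = chunk_text(doc.strip(), size=120, overlap=20)
print(len(pieces), len(pieces[0]))
```

Each chunk would then be embedded and written to the vector store (Pinecone, Weaviate, or a Snowflake vector extension), keyed back to its source page for retrieval.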

Posted 2 weeks ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office

We seek a Senior Gen AI Engineer with strong ML fundamentals and data engineering expertise to lead scalable AI/LLM solutions. This role focuses on integrating AI models into production, optimizing machine learning workflows, and creating scalable AI-driven systems. You will design, fine-tune, and deploy models (e.g., LLMs, RAG architectures) while ensuring robust data pipelines and MLOps practices. Key Responsibilities Agentic AI & Workflow Design: Lead design and implementation of Agentic AI systems and multi-step AI workflows. Build AI orchestration systems using frameworks like LangGraph. Utilize Agents, Tools, and Chains for complex task automation. Implement Agent-to-Agent (A2A) communication and Model Connect Protocol (MCP) for inter-model interactions. Production MLOps & Deployment: Develop, train, and deploy ML models optimized for production. Implement CI/CD pipelines (GitHub), automated testing, and robust observability (monitoring, logging, tracing) for Gen AI solutions. Containerize models (Docker) and deploy on cloud (AWS/Azure/GCP) using Kubernetes. Implement robust AI/LLM security measures and adhere to Responsible AI principles. AI Model Integration: Integrate LLMs and models from HuggingFace. Apply deep learning concepts with PyTorch or TensorFlow. Data & Prompt Engineering: Build scalable data pipelines for unstructured/text data. Design and implement embedding/chunking strategies for scalable data processing. Optimize storage/retrieval for embeddings (e.g., Pinecone, Weaviate). Utilize Prompt Engineering techniques to fine-tune AI model performance. Solution Development: Develop GenAI-driven Text-to-SQL solutions. Programming: Python. Foundation Model APIs: AzureOpenAI, OpenAI, Gemini, Anthropic, or AWS Bedrock. Agentic AI & LLM Frameworks: LangChain, LangGraph, A2A, MCP, Chains, Tools, Agents. Ability to design multi-agent systems, autonomous reasoning pipelines, and tool-calling capabilities for AI agents.
MLOps/LLMOps: Docker, Kubernetes (K8s), CI/CD, Automated Testing, Monitoring, Observability, Model Registries, Data Versioning. Cloud Platforms: AWS/Azure/GCP. Vector Databases: Pinecone, Weaviate, or similar leading platforms. Prompt Engineering. Security & Ethics: AI/LLM solution security, Responsible AI principles. Version Control: GitHub. Databases: SQL/NoSQL.
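The agent/tool pattern referenced above (Agents, Tools, Chains) reduces, at its core, to a loop: a model decides which tool to call, the tool runs, and the observation flows back. The sketch below replaces the LLM with a trivial rule-based stub; frameworks like LangGraph add state, retries, and multi-step planning on top of this idea. The tools and the eval-based calculator are toys for illustration only, never a pattern for real user input.

```python
def decide(question):
    # Stand-in for an LLM's tool-selection step.
    if any(ch.isdigit() for ch in question):
        return ("calculator", question)
    return ("search", question)

TOOLS = {
    # eval over a digits/operators-only string: acceptable in a toy,
    # unsafe with untrusted input.
    "calculator": lambda q: str(eval("".join(c for c in q if c in "0123456789+-*/ "))),
    "search": lambda q: f"[top result for: {q}]",
}

def run_agent(question):
    tool, arg = decide(question)        # the "model" chooses a tool
    observation = TOOLS[tool](arg)      # the tool is executed
    return f"{tool} -> {observation}"   # observation becomes the answer

print(run_agent("what is 6 * 7"))
```

Agent-to-agent setups chain several such loops, with one agent's answer becoming another's question; the orchestration framework's job is mainly routing, state, and failure handling around this core.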

Posted 2 weeks ago

Apply

7.0 - 12.0 years

22 - 30 Lacs

Bengaluru

Work from Office

Job Type: C2H (6 Months) / FTE. ML engineering project experience: 5+ years. Python programming language: 5+ years. LangChain framework: 1+ year. MLOps experience (on at least 3 projects): 3+ years. AI coding experience: 1+ year. CALL: 9916086641

Posted 2 weeks ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Bengaluru

Remote

Staff/Sr. Staff Engineer. Experience: 6 - 15 Years. Salary: Competitive. Preferred Notice Period: Within 60 Days. Shift: 10:00 AM to 6:00 PM IST. Opportunity Type: Remote. Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must-have skills required: Airflow OR LLMs OR MLOps OR Generative AI, and Python. Netskope (One of Uplers' Clients) is looking for a Staff/Sr. Staff Engineer who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview / Job Summary: Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's in it for you: You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What you will be doing: Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required skills and experience: AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
- Experience with cloud-native data tools such as AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
- BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client:
Netskope, a global SASE leader, helps organizations apply zero trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity.
Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organizational and network changes, and new regulatory requirements.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: there are many more opportunities on the portal besides this one.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply