
2 Agentic Architectures Jobs

Set Up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Founding Principal Engineer on the new Applied AI team within Autodesk's Data and Process Management (DPM) group, you will play a crucial role in designing, building, and scaling AI-powered experiences that deliver essential Product Lifecycle Management (PLM) and Product Data Management (PDM) capabilities to customers. Your work will involve creating production-grade AI applications that are scalable, resilient, and secure, while also shaping the AI strategy for DPM by identifying opportunities, evaluating emerging technologies, and guiding long-term direction. You will be responsible for fine-tuning, evaluating, and deploying large language models (LLMs) in production environments, balancing performance, cost, and user experience against real-world data and constraints. Additionally, you will collaborate with other engineering teams to define best practices for AI experimentation, evaluation, and optimization, and design frameworks and tools that enable other teams to build AI-powered experiences.

To be successful in this role, you must hold a Master's degree in Computer Science, AI, Machine Learning, Data Science, or a related field, and have at least 10 years of experience building scalable cloud-native applications, including at least 3 years focused on production AI/ML systems. A deep understanding of LLMs, VLMs, foundation models, and related technologies, along with experience with AWS cloud services and SageMaker Studio, is essential. Proficiency in Python or TypeScript, a passion for tackling complex challenges, and the ability to communicate technical concepts clearly to both technical and non-technical audiences are also required.

Preferred qualifications include experience in the CAD or manufacturing domain, building AI applications, designing evaluation pipelines for LLM-based systems, and familiarity with tools and frameworks for LLM fine-tuning and orchestration. A passion for mentoring engineering talent, experience with emerging agentic AI solutions, and contributions to open-source AI projects or publications in the field are considered a plus. Join Autodesk's innovative team to shape the future of AI applications and help build a better world through technology.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

The ideal candidate for this position in Ahmedabad should be a graduate with at least 3 years of experience. At Bytes Technolab, we strive to create a cutting-edge workplace infrastructure that empowers our employees and clients. Our focus on the latest technologies enables our development team to deliver high-quality software solutions for a variety of businesses.

You will be responsible for leveraging your 3+ years of experience in Machine Learning and Artificial Intelligence to contribute to our projects. Proficiency in Python and relevant libraries such as NumPy, Pandas, and scikit-learn is essential. Hands-on experience with frameworks such as PyTorch, TensorFlow, Keras, FaceNet, and OpenCV will be key to your role, as will working with GPU acceleration (CUDA, cuDNN) for deep learning model development. A strong understanding of neural networks, computer vision, and other AI technologies is crucial. Experience with Large Language Models (LLMs) such as GPT, BERT, and LLaMA, and familiarity with frameworks such as LangChain, AutoGPT, and BabyAGI, are preferred.

You should be able to translate business requirements into ML/AI solutions and deploy models on cloud platforms such as AWS SageMaker, Azure ML, and Google AI Platform. Proficiency in ETL pipelines, data preprocessing, and feature engineering is required, along with experience in MLOps tools such as MLflow, Kubeflow, or TensorFlow Extended (TFX). Expertise in optimizing ML/AI models for performance and scalability across different hardware architectures is necessary. Knowledge of Natural Language Processing (NLP), Reinforcement Learning, and data versioning tools such as DVC or Delta Lake is a plus. Skills in containerization tools like Docker and orchestration tools like Kubernetes will be beneficial for scalable deployments.

You should have experience in model evaluation, A/B testing, and establishing continuous training pipelines. Experience working in Agile/Scrum environments with cross-functional teams and an understanding of ethical AI principles, model fairness, and bias mitigation techniques are important. Familiarity with CI/CD pipelines for machine learning workflows and the ability to communicate complex concepts to technical and non-technical stakeholders will be valuable.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies