Bangalore City, Bengaluru, Karnataka
INR 3.5 - 6.0 Lacs P.A.
On-site
Full Time
Job Posting: AI Project Engineer

Are you passionate about building and deploying cutting-edge AI solutions? Do you thrive in a collaborative environment and enjoy tackling complex technical challenges? If so, we encourage you to apply for the AI Project Engineer position at Lizmotors.

Role Overview:
We are seeking a highly motivated and skilled AI Project Engineer to join our innovative team. The ideal candidate will have 1-3 years of professional experience in AI development, with a strong emphasis on practical application and an eagerness to contribute to cutting-edge projects. This role involves the full lifecycle of developing, deploying, and maintaining AI models and systems, collaborating closely with cross-functional teams to deliver impactful AI-driven solutions. You will play a key role in translating business needs into robust and scalable AI solutions.

Key Responsibilities:
Model Development and Deployment: Design, develop, and deploy AI models, ensuring scalability, performance, and reliability.
Workflow Automation: Implement and optimize agentic workflows for various AI applications, streamlining processes and enabling autonomous operations.
System Integration: Integrate AI components with existing systems, ensuring seamless data flow and functionality.
Performance Optimization: Monitor, fine-tune, and optimize AI models for efficiency, accuracy, and cost-effectiveness.
Troubleshooting and Debugging: Identify and resolve issues in AI systems, ensuring robust and reliable operations.
Code Management: Use Git and GitHub for collaborative development and efficient code management.

Required Technical Skills:
The successful candidate will possess a strong foundation in the following core technical areas:
Programming and Frameworks: Expert-level proficiency in Python is mandatory.
LangChain & LangGraph: In-depth understanding and practical experience with LangChain and LangGraph for building complex language model applications.
AI Model Management & Infrastructure: Hands-on experience fine-tuning pre-trained AI models for specific tasks and datasets. Familiarity with, or experience in, managing and deploying models.
Workflow Automation & Orchestration: Proven experience with MCP and n8n for automating workflows and integrating various services. Strong understanding and practical implementation of agentic workflows for autonomous AI operations.
Data Management & Tools: Proficiency in data preprocessing techniques. Strong proficiency with Git and GitHub for version control.

Desirable Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience with cloud platforms such as AWS, Azure, or GCP for deploying AI solutions.
Familiarity with containerization technologies like Docker and Kubernetes.
Contributions to open-source AI projects or a strong portfolio of personal AI projects.
Strong problem-solving skills and a proactive approach to learning new technologies.

Job Types: Full-time, Internship, Contractual / Temporary
Pay: ₹350,000.00 - ₹600,000.00 per year
Schedule: Monday to Friday
India
INR 9.0 - 16.0 Lacs P.A.
On-site
Full Time
Gen-AI Tech Lead - Enterprise AI Applications

About Us
We're a cutting-edge technology company building enterprise-grade AI solutions that transform how businesses operate. Our platform leverages the latest in Generative AI to create intelligent applications for document processing, automated decision-making, and knowledge management across industries.

Role Overview
We're seeking an exceptional Gen-AI Tech Lead to architect, build, and scale our next-generation AI-powered enterprise applications. You'll lead the technical strategy for implementing Large Language Models, fine-tuning custom models, and deploying production-ready AI systems that serve millions of users.

Key Responsibilities

AI/ML Leadership (90% Hands-on)
Design and implement enterprise-scale Generative AI applications using custom or off-the-shelf LLMs (GPT, Claude, Llama, Gemini)
Lead fine-tuning initiatives for domain-specific models and custom use cases
Build and optimize model training pipelines for large-scale data processing
Develop RAG (Retrieval-Augmented Generation) systems with vector databases and semantic search
Implement prompt engineering strategies and automated prompt optimization
Create AI evaluation frameworks and model performance monitoring systems

Enterprise Application Development
Build scalable Python applications integrating multiple AI models and APIs
Develop microservices architecture for AI model serving and orchestration
Implement real-time AI inference systems with sub-second response times
Design fault-tolerant systems with fallback mechanisms and error handling
Create APIs and SDKs for enterprise AI integration
Build AI model version control and A/B testing frameworks

MLOps & Infrastructure
Containerize AI applications using Docker and orchestrate with Kubernetes
Design and implement CI/CD pipelines for ML model deployment
Set up model monitoring, drift detection, and automated retraining systems
Optimize inference performance and cost efficiency in cloud environments
Implement security and compliance measures for enterprise AI applications

Technical Leadership
Lead a team of 3-5 AI engineers and data scientists
Establish best practices for AI development, testing, and deployment
Mentor team members on cutting-edge AI technologies and techniques
Collaborate with product and business teams to translate requirements into AI solutions
Drive technical decision-making for AI architecture and technology stack

Required Skills & Experience

Core AI/ML Expertise
Python: 5+ years of production Python development with AI/ML libraries
LLMs: Hands-on experience with GPT-4, Claude, Llama 2/3, Gemini, or similar models
Fine-tuning: Proven experience fine-tuning models using LoRA, QLoRA, or full parameter tuning
Model Training: Experience training models from scratch or continued pre-training
Frameworks: Expert-level knowledge of PyTorch, TensorFlow, Hugging Face Transformers
Vector Databases: Experience with Pinecone, Weaviate, ChromaDB, or Qdrant

Technical Stack

AI/ML Stack
Models: OpenAI GPT, Anthropic Claude, Meta Llama, Google Gemini
Frameworks: PyTorch, Hugging Face Transformers, LangChain, LlamaIndex
Training: Distributed training with DeepSpeed, Accelerate, or Fairscale
Serving: vLLM, TensorRT-LLM, or Triton Inference Server
Vector Search: Pinecone, Weaviate, FAISS, Elasticsearch

Infrastructure & DevOps
Containerization: Docker, Kubernetes, Helm charts
Cloud: AWS (ECS, EKS, Lambda, SageMaker), GCP Vertex AI
Databases: PostgreSQL, MongoDB, Redis, Neo4j
Monitoring: Prometheus, Grafana, DataDog, MLflow
CI/CD: GitHub Actions, Jenkins, ArgoCD

Professional Growth
Work directly with founders and C-level executives
Opportunity to publish research and speak at AI conferences
Access to latest AI models and cutting-edge research
Mentorship from industry experts and AI researchers
Budget for attending top AI conferences (NeurIPS, ICML, ICLR)

Ideal Candidate Profile
Passionate about pushing the boundaries of AI technology
Strong engineering mindset with a focus on production systems
Experience shipping AI products used by thousands of users
Stays current with the latest AI research and implements cutting-edge techniques
Excellent problem-solving skills and ability to work under ambiguity
Leadership experience in fast-paced, high-growth environments

Apply now and help us democratize AI for enterprise customers worldwide.

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,600,000.00 per year
Schedule: Monday to Friday
Supplemental Pay: Performance bonus