AI Engineer – Generative AI & LLMs

Experience: 3 years



Work Mode: On-site

Job Type: Full Time

Job Description

AI Engineer – Generative AI & LLMs

Onsite Role - Hyderabad


Accelyst is an innovative AI Consultancy that leverages a unique catalog of industry-specific Agents and leading-edge AI platforms to deliver tangible, integrated, secure and ROI-optimized solutions. We combine deep industry and technical expertise to enable rapid deployment of innovative AI-driven capabilities to augment and automate client workflows for employees, customers, prospects, and investors.


Why Accelyst?

Join Accelyst to be part of a dynamic team that leverages AI-driven technology to make a positive impact. Our leadership, with Big Four Consulting experience, fosters a nimble, client-focused environment, minimizing bureaucracy to enhance delivery and professional growth. You'll work on complex AI projects that challenge and inspire, meeting high client expectations. Additionally, benefit from our profit-sharing model, reflecting our commitment to respect and integrity for all employees.


Job Summary:

We are looking for a skilled AI Engineer with proven experience developing and deploying large language models (LLMs) and generative AI systems. In this role, you will be responsible for designing, fine-tuning, and operationalizing models from leading providers such as OpenAI, Llama, Gemini, and Claude, along with leveraging open-source models from platforms like Hugging Face. You’ll also build robust multi-step workflows and intelligent agents using frameworks such as Microsoft AutoGen, LangGraph, and CrewAI. This position requires strong technical expertise in generative AI, advanced software engineering abilities, fluency in Python (including FastAPI), and a solid understanding of MLOps/LLMOps principles.


Responsibilities:


· LLM Solution Design & Implementation:

Architect, develop, and implement LLM-powered and generative AI solutions utilizing both proprietary and open-source technologies (e.g., GPT-4, Llama 3, Gemini, Claude). Customize and fine-tune models for tasks such as chatbots, summarization, and content classification, evaluating the suitability of LLMs for various business needs.

· Prompt Engineering & Model Tuning:

Craft, refine, and test model prompts to achieve targeted outputs. Fine-tune pre-trained LLMs using customized data and apply advanced techniques such as instruction tuning or reinforcement learning from human feedback (RLHF) as required.
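For orientation, a minimal prompt-iteration sketch in Python is shown below; it assumes the OpenAI Python SDK, and the model name, prompts, and summarize helper are illustrative placeholders rather than a prescribed implementation.

# Prompt-engineering sketch (assumes the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment; model name and prompt wording are placeholders).
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a concise assistant. Summarize the user's text in at most "
    "three bullet points and preserve any figures exactly."
)

def summarize(text: str, model: str = "gpt-4o-mini") -> str:
    """Send one candidate prompt variant and return the model's summary."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.2,  # low temperature keeps prompt comparisons repeatable
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Summarize:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Quarterly revenue rose 12% to $4.1M on new AI consulting engagements."))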

· Agentic Frameworks & Workflow Automation:

Build and maintain stateful, multi-agent workflows and autonomous AI agents using frameworks like Microsoft AutoGen, LangGraph, LangChain, LlamaIndex, and CrewAI. Develop custom tools that enable seamless API integration and task orchestration.
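The following is a minimal sketch of a stateful two-step workflow, assuming the LangGraph StateGraph API; the node names, state fields, and stubbed classification logic are illustrative only.

# Two-step agentic workflow sketch (assumes the langgraph package; node names,
# state fields, and the stubbed classification logic are illustrative).
from typing import TypedDict

from langgraph.graph import END, StateGraph

class TicketState(TypedDict):
    ticket: str
    category: str
    reply: str

def classify(state: TicketState) -> dict:
    # A real node would call an LLM; a keyword stub keeps the sketch runnable.
    category = "billing" if "invoice" in state["ticket"].lower() else "general"
    return {"category": category}

def draft_reply(state: TicketState) -> dict:
    return {"reply": f"[{state['category']}] Thanks for reaching out, we are on it."}

graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("draft_reply", draft_reply)
graph.set_entry_point("classify")
graph.add_edge("classify", "draft_reply")
graph.add_edge("draft_reply", END)

app = graph.compile()
print(app.invoke({"ticket": "My invoice total looks wrong.", "category": "", "reply": ""}))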

· Retrieval-Augmented Generation (RAG):

Design and deploy RAG pipelines by integrating vector databases (such as Pinecone, Faiss, or Weaviate) for efficient knowledge retrieval. Use evaluation frameworks like RAGAS to measure and maintain high-quality, traceable response generation.
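One possible shape for such a pipeline is sketched below using FAISS for in-memory vector search and sentence-transformers for embeddings; the corpus, embedding model, and prompt are placeholders, and a managed store such as Pinecone or Weaviate would typically replace the local index in production.

# RAG retrieval sketch (assumes the faiss-cpu, sentence-transformers, and numpy
# packages; the corpus, embedding model, and prompt are illustrative).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "RAG pipelines ground LLM answers in retrieved documents.",
    "Vector databases store embeddings for similarity search.",
    "FastAPI is commonly used to serve LLM-backed endpoints.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(corpus, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product equals cosine after normalization
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k most similar passages to ground the model's answer."""
    query = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(query, dtype="float32"), k)
    return [corpus[i] for i in ids[0]]

question = "How do RAG pipelines reduce hallucinations?"
context = "\n".join(retrieve(question))
print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")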

· LLM API Integration & Deployment:

Serve LLMs via FastAPI-based endpoints and manage their deployment using Docker containers and orchestration tools like Kubernetes and cloud functions. Implement robust CI/CD pipelines and focus on scalable, reliable, and cost-efficient production environments.
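A minimal serving sketch is shown below, assuming FastAPI, Pydantic, and the OpenAI SDK; the route, request schema, and model name are placeholders, and the service would normally be containerized with Docker and deployed behind CI/CD.

# LLM-serving sketch (assumes fastapi, uvicorn, pydantic, and the OpenAI SDK;
# the endpoint path, schema, and model name are placeholders).
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI(title="LLM gateway (sketch)")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class ChatRequest(BaseModel):
    message: str

class ChatResponse(BaseModel):
    reply: str

@app.post("/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    """Forward a single user message to the model and return the completion."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": req.message}],
    )
    return ChatResponse(reply=completion.choices[0].message.content)

# Local run: uvicorn main:app --reload
# A Dockerfile and Kubernetes manifest would wrap this for production.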

· Data Engineering & Evaluation:

Construct data pipelines for ingestion, preprocessing, and controlled versioning of training datasets. Set up automated evaluation systems, including A/B tests and human-in-the-loop feedback, to drive rapid iteration and improvement.
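One lightweight way to structure such an evaluation loop is sketched below: two hypothetical prompt variants are run over a small eval set and the outputs are written to a CSV for automated scoring or human review; the generate() function is a stub standing in for a real LLM call.

# A/B evaluation harness sketch (standard library only; generate() is a stub
# standing in for a real LLM call, and the eval set is illustrative).
import csv
import hashlib

EVAL_SET = [  # in practice, a versioned dataset produced by the data pipeline
    {"id": "q1", "input": "Summarize: revenue grew 12% year over year."},
    {"id": "q2", "input": "Summarize: churn fell after the onboarding revamp."},
]

PROMPTS = {
    "A": "Summarize in one sentence: {input}",
    "B": "Summarize in at most ten words: {input}",
}

def generate(prompt: str) -> str:
    """Stub for an LLM call; returns a deterministic placeholder output."""
    return f"[output for prompt hash {hashlib.sha1(prompt.encode()).hexdigest()[:8]}]"

with open("ab_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["example_id", "variant", "prompt", "output"])
    writer.writeheader()
    for example in EVAL_SET:
        for variant, template in PROMPTS.items():
            prompt = template.format(input=example["input"])
            writer.writerow({
                "example_id": example["id"],
                "variant": variant,
                "prompt": prompt,
                "output": generate(prompt),  # later auto-scored or human-reviewed
            })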

· Team Collaboration:

Partner with data scientists, software engineers, and product teams to scope and integrate generative AI initiatives. Communicate complex ideas effectively to both technical and non-technical stakeholders.

· Monitoring, LLMOps, & Ethics:

Deploy rigorous monitoring and observability tools to track LLM usage, performance, cost, and hallucination rates. Enforce LLMOps best practices in model management, reproducibility, explainability, and compliance with privacy and security standards.
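As an illustration of the kind of instrumentation involved, the sketch below wraps an LLM call to record latency and token usage as structured log lines; the OpenAI client and model name are assumptions, and production metrics would normally flow to a dedicated observability stack.

# LLM observability sketch (assumes the OpenAI SDK; in production these metrics
# would be shipped to an observability stack rather than plain logs).
import json
import logging
import time

from openai import OpenAI

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm.metrics")
client = OpenAI()

def tracked_completion(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Call the model and emit a structured log line with latency and token counts."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency_ms = (time.perf_counter() - start) * 1000
    usage = response.usage
    logger.info(json.dumps({
        "model": model,
        "latency_ms": round(latency_ms, 1),
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
    }))
    return response.choices[0].message.content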

· Continuous Learning & Thought Leadership:

Stay abreast of the latest developments in AI/LLMs and open-source innovations. Contribute to internal knowledge sharing, champion new approaches, and represent the organization at industry or academic events.


Qualifications:


· Experience: At least 3 years in machine learning engineering, with 1–2 years focused on building and deploying generative AI or LLM-based applications.


· Technical Skills:


Proficiency in Python and FastAPI, and experience developing RESTful APIs and microservices. Hands-on familiarity with LLM providers (OpenAI, Anthropic, Google, Meta) and with frameworks such as LangChain, LangGraph, LlamaIndex, CrewAI, AutoGen, or Transformers.

· Model Customization & Prompt Design:

Proven ability to fine-tune language models and craft effective prompts tailored to specific applications.

· Data & Retrieval:

Experience creating RAG pipelines with vector databases (e.g., Pinecone, Faiss, Weaviate) and evaluation frameworks like RAGAS.

· Deployment & Cloud:

Practical knowledge of containerization (Docker), orchestration (Kubernetes), and cloud deployments (AWS, Azure, GCP). Solid grasp of CI/CD pipelines and LLMOps practices.

· Communication & Collaboration:

Excellent teamwork and communication skills, able to bridge technical and business perspectives effectively.

· Education: Bachelor’s degree in Computer Science, Data Science, or a related field is required; a Master’s degree is preferred.


Preferred Qualifications:


· Open-Source & Community:

Participation in open-source AI/ML projects, or a strong GitHub profile showcasing relevant contributions or publications.

· Multi-Agent Systems:

Hands-on experience with advanced agentic frameworks or autonomous agent system design.

· Data Governance & Compliance:

Knowledge of data governance, security protocols, and compliance standards.

· Search & Databases:

Deep expertise in vector similarity search and indexing, plus familiarity with document stores (such as MongoDB, PostgreSQL) and graph databases.

· Cloud-Native AI Services:

Experience with cloud-native AI services like Azure ML, Cognitive Search, or equivalent platforms for scalable generative AI deployment.
