Posted: 2 days ago | Hybrid | Full Time
Will champion vibe coding, the emerging practice of using LLMs and coding agents (e.g., GitHub Copilot, Cursor, Claude Code) to generate working code from natural-language instructions, iterating rapidly while enforcing quality and compliance. Your leadership will modernize engineering workflows and scale AI-first development practices across diverse BFSI portfolios.
Will architect and deliver enterprise-grade AI applications leveraging Generative AI (GenAI), Agentic AI, LLMs, RAG, and Agentic RAG, with a strong focus on security, governance, observability, and cost efficiency.
This role operationalizes AI-first delivery, increases developer productivity, strengthens proposal win rates through compelling AI solutioning, and ensures secure, compliant implementations aligned with BFSI standards.
1. AI-Assisted Development Leadership
a. Drive organization-wide adoption of coding agents and vibe coding practices; define guardrails, standards, and governance for BFSI environments.
b. Build playbooks for prompt engineering, code generation, refactoring, test generation, documentation, and secure patterns using Copilot/Cursor/Claude Code, etc.
c. Deliver enablement programs: workshops, hands-on labs, brown-bags; establish usage analytics and productivity KPIs.
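For illustration, a minimal sketch of the usage-analytics idea in item (c), assuming a hypothetical event schema exported from the coding-agent tooling; it aggregates suggestion acceptance rate and active users per tool. Field names and sample values are illustrative only.

```python
# Minimal sketch (hypothetical log schema): aggregate coding-agent usage events
# into simple adoption/productivity KPIs such as suggestion acceptance rate.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class UsageEvent:
    user: str
    tool: str          # e.g. "copilot", "cursor", "claude-code"
    suggested: int     # suggestions shown during the session
    accepted: int      # suggestions accepted by the developer

def adoption_kpis(events: list[UsageEvent]) -> dict:
    by_tool = defaultdict(lambda: {"suggested": 0, "accepted": 0, "users": set()})
    for e in events:
        agg = by_tool[e.tool]
        agg["suggested"] += e.suggested
        agg["accepted"] += e.accepted
        agg["users"].add(e.user)
    return {
        tool: {
            "active_users": len(agg["users"]),
            "acceptance_rate": agg["accepted"] / agg["suggested"] if agg["suggested"] else 0.0,
        }
        for tool, agg in by_tool.items()
    }

if __name__ == "__main__":
    sample = [
        UsageEvent("dev1", "copilot", suggested=120, accepted=54),
        UsageEvent("dev2", "copilot", suggested=80, accepted=40),
        UsageEvent("dev1", "cursor", suggested=60, accepted=33),
    ]
    print(adoption_kpis(sample))
```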
2. Solutioning, Pre-Sales & Proposal Support
a. Partner with sales, pre-sales, service lines, and delivery teams on AI solutioning and proposal support.
3. Architecture & Delivery (LLMs, RAG, Agents)
a. Architect and deliver agentic systems: tool orchestration, planning/critique loops, memory, and multi-agent collaboration for complex BFSI workflows.
b. Own end-to-end solutioning: data acquisition/transform; embeddings/retrieval; prompt pipelines; function calling/tool schemas; APIs/SDKs; UI integration.
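By way of illustration, a minimal sketch of the function calling/tool schema pattern in item (b): a JSON-schema-style tool definition (the OpenAI-style format is used purely as an example) plus a local dispatcher that maps a model's tool call onto application code. The account-balance tool and its fields are hypothetical.

```python
# Minimal sketch of a function-calling setup: a tool schema the model can see,
# and a dispatcher that executes the tool call the model emits.
import json

def get_account_balance(account_id: str) -> dict:
    # Hypothetical stand-in for a core-banking API call.
    return {"account_id": account_id, "balance": 10_250.75, "currency": "INR"}

TOOLS = {"get_account_balance": get_account_balance}

TOOL_SCHEMAS = [
    {
        "type": "function",
        "function": {
            "name": "get_account_balance",
            "description": "Fetch the current balance for a customer account.",
            "parameters": {
                "type": "object",
                "properties": {"account_id": {"type": "string"}},
                "required": ["account_id"],
            },
        },
    }
]

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Execute the tool the model asked for and return a JSON string result."""
    result = TOOLS[name](**json.loads(arguments_json))
    return json.dumps(result)

if __name__ == "__main__":
    # Simulate a model emitting a tool call; in production this comes from the LLM.
    print(dispatch_tool_call("get_account_balance", '{"account_id": "ACC-001"}'))
```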
4. RAG & Agentic RAG Best Practices
a. Design advanced RAG pipelines: chunking, hybrid retrieval (vector + keyword), rerankers, query rewriting, context compression, caching, grounding, and citations (a minimal retrieval sketch follows this list).
b. Build Agentic RAG flows combining retrieval + tool use + planning loops to maximize accuracy, policy adherence, and cost performance.
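A minimal sketch of the hybrid retrieval idea from item (a), assuming toy in-memory documents and a bag-of-words stand-in for embeddings; keyword and vector rankings are fused with reciprocal rank fusion (RRF). A production pipeline would use a real embedding model, a vector store, and a cross-encoder reranker.

```python
# Minimal sketch of hybrid retrieval for a RAG pipeline: keyword and (toy)
# vector rankings fused with reciprocal rank fusion (RRF).
import math
from collections import Counter

DOCS = {
    "doc1": "KYC refresh is required every two years for high-risk customers.",
    "doc2": "Credit card disputes must be resolved within 30 days of filing.",
    "doc3": "High-risk customers require enhanced due diligence during KYC.",
}

def keyword_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), Counter(text.lower().split())
    return sum(t[w] for w in q)

def toy_vector(text: str) -> Counter:
    return Counter(text.lower().split())  # bag-of-words stand-in for embeddings

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query: str, k: int = 2, rrf_k: int = 60) -> list[str]:
    kw_rank = sorted(DOCS, key=lambda d: keyword_score(query, DOCS[d]), reverse=True)
    vec_rank = sorted(DOCS, key=lambda d: cosine(toy_vector(query), toy_vector(DOCS[d])), reverse=True)
    fused = {
        d: 1 / (rrf_k + kw_rank.index(d) + 1) + 1 / (rrf_k + vec_rank.index(d) + 1)
        for d in DOCS
    }
    return sorted(fused, key=fused.get, reverse=True)[:k]

if __name__ == "__main__":
    print(hybrid_retrieve("KYC requirements for high-risk customers"))
```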
5. Quality, Evals & Observability
a. Define LLM/agent evaluation metrics: groundedness, factuality, precision/recall, hallucination rate, agent success rate, latency, and cost per query (illustrated in the sketch after this list).
b. Implement observability: tracing, token/cost accounting, prompt/version lineage, user feedback loops, and red-team logs.
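A minimal sketch of how the evaluation and cost metrics above might be aggregated, assuming a hypothetical per-query record with groundedness/hallucination labels, latency, and cost fields; field names and sample values are illustrative only.

```python
# Minimal sketch of an LLM evaluation summary: roll up per-query eval records
# into groundedness rate, hallucination rate, average latency, and cost/query.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    grounded: bool        # answer fully supported by retrieved context
    hallucinated: bool    # answer contains unsupported claims
    latency_ms: float
    cost_usd: float

def summarize(records: list[EvalRecord]) -> dict:
    n = len(records)
    if n == 0:
        return {}
    return {
        "groundedness_rate": sum(r.grounded for r in records) / n,
        "hallucination_rate": sum(r.hallucinated for r in records) / n,
        "avg_latency_ms": sum(r.latency_ms for r in records) / n,
        "avg_cost_per_query_usd": sum(r.cost_usd for r in records) / n,
    }

if __name__ == "__main__":
    sample = [
        EvalRecord(grounded=True, hallucinated=False, latency_ms=820, cost_usd=0.004),
        EvalRecord(grounded=False, hallucinated=True, latency_ms=1150, cost_usd=0.006),
        EvalRecord(grounded=True, hallucinated=False, latency_ms=760, cost_usd=0.003),
    ]
    print(summarize(sample))
```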
6. Collaboration & Leadership
a. Mentor engineers; lead design reviews and AI SDLC standards; influence architecture councils.
b. Drive build-vs-buy decisions, vendor evaluations, and cost/latency optimization strategies.
Zensar
Location: Hyderabad, Pune, Bengaluru
Salary: 35.0 - 75.0 Lacs P.A.