Bankai Informatics (hiring for its MNC client, Full Time)

2 job openings at Bankai Informatics (hiring for its MNC client, Full Time)
AI/ML Lead or Architect | Hyderabad, Chennai, Bengaluru | 7-12 years | INR 25.0-40.0 Lacs P.A. | Hybrid | Full Time

We are seeking an exceptional AI/ML Lead Architect with strong software engineering fundamentals, deep Python expertise, and hands-on experience building GenAI, LLM, and Agentic AI systems. This role requires an architect who can design, code, and deploy scalable AI systems from scratch, not just prototype models. You will drive the end-to-end architecture of production-grade AI/ML and GenAI solutions, lead multi-agent workflows, and mentor engineering teams.

Key Responsibilities

1. Architecture & System Design
- Architect end-to-end AI/ML, LLM, and Agentic AI systems, including data, model, API, and deployment layers.
- Design scalable microservices-based AI architectures using Python, FastAPI, Docker, and Kubernetes.
- Build multi-agent AI systems using LangChain, CrewAI, AutoGen, or LangGraph.
- Create architectural diagrams, solution blueprints, and design documents.

2. AI/ML Development
- Build and deploy ML/DL models using PyTorch, TensorFlow, and Hugging Face Transformers.
- Implement Generative AI and LLM-based applications (GPT, LLaMA, Mistral, Claude).
- Research, fine-tune, and optimize LLMs and embeddings for real-world applications.
- Implement RAG pipelines with vector databases (FAISS, Pinecone, Chroma).

3. Agentic AI Development
- Lead the design and development of Agentic AI frameworks for autonomous workflows.
- Implement multi-agent systems with memory, planning, tools, and coordination.
- Integrate AI agents with external APIs, knowledge bases, and enterprise systems.

4. MLOps & Deployment
- Deploy ML/LLM models using Docker, Kubernetes, CI/CD, TorchServe, and MLflow.
- Optimize inference cost, latency, and throughput at scale.
- Build monitoring dashboards for AI/ML reliability and performance.

5. Leadership & Strategy
- Provide technical leadership to AI/ML engineers and data scientists.
- Collaborate with product, cloud, and data engineering teams to align architecture with business goals.
- Conduct code reviews, enforce best practices, and maintain high engineering standards.
- Drive the AI innovation roadmap and adoption of emerging technologies (Agentic AI, LLMOps, etc.).

Required Qualifications

Technical Must-Haves
- 10+ years in AI/ML engineering and solution architecture.
- Strong Python programming skills with production-level coding experience.
- Hands-on experience with GenAI, LLMs, and Agentic AI frameworks (LangChain, CrewAI, AutoGen, LangGraph).
- Deep knowledge of PyTorch, TensorFlow, and Hugging Face Transformers.
- Strong experience with RAG pipelines, embeddings, and vector databases.
- Proven experience designing scalable AI microservices using FastAPI/Flask.
- Strong understanding of MLOps: Docker, Kubernetes, GitHub Actions, MLflow, SageMaker, Azure ML, GCP Vertex AI.
- Experience architecting distributed systems, caching, and messaging (Kafka/Redis).
- Excellent understanding of LLM evaluation, safety, guardrails, and observability.

Soft Skills
- Strong architectural thinking and a problem-solving mindset.
- Ability to mentor engineering teams and conduct technical reviews.
- Excellent communication and stakeholder management.
- Proven ability to translate business needs into AI/ML solutions.

Preferred Qualifications
- Experience building enterprise GenAI platforms or AI agent ecosystems.
- Experience with LLM fine-tuning, quantization, and optimization.
- Experience with RLHF, LoRA, or custom training pipelines.
- Knowledge of Responsible AI, governance, safety, and auditability.
- Prior experience in healthcare, BFSI, retail, or enterprise AI.

Senior Data Engineer | Pune, Bengaluru, Delhi / NCR | 6-11 years | INR 18.0-30.0 Lacs P.A. | Hybrid | Full Time

We are looking for an experienced Senior Data Engineer who can design, build, and optimize modern data pipelines and data platforms. The ideal candidate has strong experience in cloud (GCP/AWS/Azure), ETL/ELT, distributed systems, and hands-on coding in Python/Scala/SQL.

Key Responsibilities

Data Engineering & Pipelines
- Design, develop, and maintain scalable data pipelines using cloud-native services (GCP/AWS/Azure).
- Build ETL/ELT workflows for structured and unstructured data.
- Implement batch and real-time data ingestion using tools such as Kafka / Pub/Sub / Kinesis.
- Automate data workflows using Airflow / Composer / Azure Data Factory.

Data Modelling & Warehousing
- Design data models (dimensional/normalized) for analytics and reporting.
- Manage and optimize data warehouses such as BigQuery, Snowflake, Redshift, and Azure Synapse.
- Implement best practices for partitioning, clustering, and performance tuning.

Coding & Framework Development
- Write clean, efficient code in Python / Scala / Java.
- Build reusable data engineering frameworks and libraries.
- Develop unit tests and CI/CD pipelines, and ensure code quality.

Cloud & Infrastructure
- Hands-on experience with cloud storage, compute, and containers (Docker/Kubernetes).
- Monitor systems using logging, alerting, and observability tools.
- Ensure security, cost optimization, and reliability of data infrastructure.

Collaboration & Leadership
- Work closely with data scientists, analysts, and product teams.
- Mentor junior engineers and guide best engineering practices.
- Participate in architecture discussions and contribute to technical roadmaps.

Required Skills & Experience

Must Have
- 6+ years of experience in Data Engineering.
- Strong hands-on skills in Python (or Scala/Java).
- Expertise in SQL and query performance tuning.
- Experience with cloud platforms (GCP / AWS / Azure).
- ETL/ELT pipeline development with Airflow / Composer / ADF / Step Functions.
- Experience with Big Data tools (Spark / Dataproc / EMR / Databricks).
- Stream processing experience using Kafka / Pub/Sub / Kinesis.
- Strong knowledge of data modelling (star schema, SCD, fact/dimension tables).

Good to Have
- Cloud certifications (GCP Professional Data Engineer / AWS Data Analytics).
- Experience with Docker/Kubernetes.
- Familiarity with BI tools (Looker, Tableau, Power BI).
- Experience with ML pipelines (Kubeflow, Vertex AI, SageMaker).