0 years

0 Lacs

Noida, Uttar Pradesh, India

Posted: 1 day ago | Platform: LinkedIn

Skills Required

Data engineering, AI/ML, indexing and retrieval, distributed data processing, full-stack development, system design, ETL, API development, Docker, Kubernetes, AWS, Azure, GCP, infrastructure as code (Terraform)

Work Mode

On-site

Job Type

Full Time

Job Description

Responsibilities:

1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing.
2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (e.g., Retrieval-Augmented Generation (RAG)).
3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS).
4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows.
5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points.
6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure as code (e.g., Terraform, Pulumi).
7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities.
8. Build high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets.
9. Stay current with advancements in LLMs, indexing techniques, and cloud technologies, and integrate them into cutting-edge applications.
10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure.
