11 Pgvector Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

Data Science is all about breaking new ground to enable businesses to answer their most urgent questions. Pioneering massively parallel, data-intensive analytic processing, the mission is to develop a whole new approach to generating meaning and value from petabyte-scale data sets and to shape brand-new methodologies, tools, statistical methods, and models. In collaboration with leading academics, industry experts, and highly skilled engineers, the goal is to equip customers to generate sophisticated new insights from the biggest of big data. Join the team to do the best work of your career and make a profound social impact as an Advisor on the Data Science Team in Bangalore.

As a Data Science Advisor, you will contribute to the business strategy and influence decision-making based on insights gained from deep-dive analysis. You will produce actionable and compelling recommendations by interpreting insights from complex data sets, design processes to consolidate and examine unstructured data, and partner with business leaders, engineers, and industry experts to construct predictive models, algorithms, and probability engines.

You will:
- Partner with internal and external teams to understand customer requirements and develop proposals.
- Interact with external customers to gather project requirements, provide status updates, and share analytical insights.
- Implement preliminary data exploration and data preparation steps for model development and validation.
- Apply a broad range of techniques and theories from statistics, machine learning, and business intelligence to deliver actionable business insights.
- Solution, build, deploy, and set up monitoring for models.

Qualifications:
- 6+ years of related experience with proficiency in NLP, machine learning, computer vision, and GenAI.
- Working experience in data visualization (e.g., Power BI, matplotlib, plotly).
- Hands-on experience with CNN, LSTM, and YOLO, plus database skills including SQL, PostgreSQL, pgvector, and ChromaDB.
- Proven experience in MLOps and LLMOps, with a strong understanding of ML lifecycle management.
- Expertise with large language models (LLMs), prompt engineering, fine-tuning, and integrating LLMs into applications for natural language processing (NLP) tasks.

Desirable Skills:
- Strong product/technology/industry knowledge and familiarity with streaming/messaging frameworks (e.g., Kafka, RabbitMQ, ZeroMQ).
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Experience with web technologies and frameworks (e.g., HTTP/REST/GraphQL, Flask, Django).
- Skilled in programming languages like Java or JavaScript.

Dell Technologies is committed to providing equal employment opportunities for all employees and creating a work environment free of discrimination and harassment. If you are looking for an opportunity to grow your career with advanced technology and some of the best minds in the industry, this role might be the perfect fit for you. Join Dell Technologies to build a future that works for everyone, because Progress Takes All of Us.
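
Outside the posting itself: since the qualifications above pair SQL/PostgreSQL/pgvector skills with ChromaDB, here is a minimal, purely illustrative sketch of a ChromaDB similarity lookup in Python. The collection name, documents, and query are invented for the example.

```python
# Minimal, illustrative ChromaDB sketch (not from the posting).
# Assumes `pip install chromadb`; all names and documents are made up.
import chromadb

client = chromadb.Client()  # in-memory client, fine for experimentation
collection = client.create_collection(name="support_docs")

# Chroma embeds these documents with its default embedding function.
collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Reset a password from the account settings page.",
        "Invoices are emailed on the first of each month.",
    ],
)

# Return the closest document to a free-text query.
results = collection.query(query_texts=["how do I change my password"], n_results=1)
print(results["documents"])
```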

Posted 2 days ago

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role

We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments, both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.

Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications

Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open source tools.

DevOps & Automation
- Hands-on experience with Terraform / CloudFormation.
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.

Containerization & Orchestration
- Expertise with Docker and Kubernetes (EKS or self-hosted).
- Familiarity with Helm charts and service mesh (Istio/Linkerd).

Monitoring / Observability Tools
- Experience with Prometheus, Grafana, ELK/EFK stack, and CloudWatch.
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.

Security & Compliance
- Understanding of cloud security best practices.
- Familiarity with tools like Vault and AWS Secrets Manager.

Model DevOps / MLOps Tools (Preferred)
- Experience with MLflow, Kubeflow, BentoML, and Weights & Biases (W&B).
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- Opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open source on-prem).
- Competitive salary, benefits, and growth-oriented culture.

Posted 3 days ago

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, blending elements of software engineering, DevOps, and data analytics. Your primary responsibility will be developing and maintaining secure, scalable, and high-quality data pipelines and infrastructure to support our clients' advanced analytics, machine learning, and real-time decision-making needs.

Key responsibilities:
- Design, develop, and manage robust ETL/ELT pipelines using Python and modern DataOps methodologies.
- Implement data quality checks, pipeline monitoring, and error handling mechanisms.
- Build data solutions on AWS cloud services such as S3, ECS, Lambda, and CloudWatch.
- Containerize applications with Docker and orchestrate them using Kubernetes for scalable deployments.
- Work with infrastructure-as-code tools and CI/CD pipelines to automate deployments.
- Design and optimize data models using PostgreSQL, Redis, and pgvector for high-performance storage and retrieval.
- Support feature stores and vector-based storage for AI/ML applications.
- Drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery.
- Review pull requests (PRs), conduct code reviews, and enforce security and performance standards.
- Collaborate closely with product owners, analysts, and architects to refine user stories and technical requirements.

Required skills and qualifications:
- At least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles focusing on data products.
- Proficiency in Python, Docker, Kubernetes, and AWS (especially S3 and ECS).
- Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis; experience with pgvector is a strong advantage.
- Deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices.
- Experience working in Agile/Scrum environments, with excellent collaboration and communication skills.
- A passion for developing clean, well-documented, and scalable code in a collaborative environment.
- Familiarity with DataOps principles, encompassing automation, testing, monitoring, and deployment of data pipelines.
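
Purely as an illustration of the PostgreSQL/Redis/pgvector combination this posting describes (not text from the ad), a retrieval path might cache pgvector similarity results in Redis in front of Postgres. Every table name, cache key, and connection string below is an invented example, and the target database is assumed to already have the pgvector extension installed.

```python
# Illustrative sketch only: cache pgvector similarity results in Redis.
# Assumes `pip install psycopg redis pgvector numpy`; schema and DSNs are invented.
import json
import numpy as np
import psycopg
import redis
from pgvector.psycopg import register_vector

r = redis.Redis(host="localhost", port=6379)
conn = psycopg.connect("dbname=features user=app")
register_vector(conn)  # lets psycopg send/receive pgvector values

def nearest_items(query_vec: np.ndarray, k: int = 5) -> list[int]:
    key = "knn:" + query_vec.tobytes().hex()[:32]  # crude cache key
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <=> %s LIMIT %s",
        (query_vec, k),
    ).fetchall()
    ids = [row[0] for row in rows]
    r.setex(key, 300, json.dumps(ids))  # cache for 5 minutes
    return ids
```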

Posted 6 days ago

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, operating in the realm that merges software engineering, DevOps, and data analytics. Your primary responsibility will involve creating and managing secure, scalable, and production-ready data pipelines and infrastructure that are vital in supporting advanced analytics, machine learning, and real-time decision-making capabilities for our clientele.

Your key duties will encompass designing, developing, and overseeing the implementation of robust, scalable, and efficient ETL/ELT pipelines leveraging Python and contemporary DataOps methodologies. You will also be tasked with incorporating data quality checks, pipeline monitoring, and error handling mechanisms, as well as constructing data solutions utilizing cloud-native services on AWS such as S3, ECS, Lambda, and CloudWatch.

Furthermore, your role will entail containerizing applications using Docker and orchestrating them via Kubernetes to facilitate scalable deployments. You will collaborate with infrastructure-as-code tools and CI/CD pipelines to automate deployments effectively. Additionally, you will be involved in designing and optimizing data models using PostgreSQL, Redis, and pgvector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications.

In addition to your technical responsibilities, you will be actively engaged in driving Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will also be responsible for reviewing pull requests (PRs), conducting code reviews, and upholding security and performance standards. Your collaboration with product owners, analysts, and architects will be essential in refining user stories and technical requirements.

To excel in this role, you are required to have at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products. Proficiency in Python, Docker, Kubernetes, and AWS (specifically S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases like PostgreSQL and Redis is expected, and experience with pgvector will be advantageous. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is crucial, as is experience working in Agile/Scrum environments with excellent collaboration and communication skills. Moreover, a passion for developing clean, well-documented, and scalable code in a collaborative setting, along with familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines, will be beneficial.

Posted 1 week ago

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

NTT DATA is looking for a talented and passionate individual to join our team in Chennai, Tamil Nadu, India as a Python Full Stack + Java Microservices Developer. As part of our team, you will be responsible for the development, testing, and maintenance of software applications and systems. You will lead the planning and design of product and technical initiatives, as well as mentor developers and team members.

Your role will involve driving improvements in engineering techniques, standards, practices, and processes across the department, fostering a culture of knowledge sharing and collaboration. You will collaborate with team members to ensure high-quality deliverables that are optimized and adhere to performance standards. Additionally, you will engage with key internal stakeholders to understand user requirements and prepare low-level design documents for the development team.

Participating in Agile planning and estimation activities will be a crucial part of your responsibilities, where you will break down large tasks into smaller ones. You will also resolve team queries and escalate issues to the team lead if clarification is required from the customer. Providing technical guidance to the team, leading and resolving issues, and implementing reusable frameworks will be key aspects of your role, as will mentoring junior team members and supporting interviews and evaluations.

If you are looking to be part of an inclusive, adaptable, and forward-thinking organization, NTT DATA is the place for you. Join us in our mission to help clients innovate, optimize, and transform for long-term success, and be part of our diverse team of experts in over 50 countries. Visit us at nttdata.com to learn more about our services and commitment to moving confidently into the digital future.

Posted 2 weeks ago

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a skilled, motivated, and quick-learning Full Stack Developer to join our team dedicated to cutting-edge GenAI development work. As a Full Stack Developer, you will be responsible for creating innovative applications and solutions that span both frontend and backend technologies. While our solutions often involve Retrieval Augmented Generation (RAG) and agentic frameworks, your role will extend beyond these technologies to a variety of AI tools and techniques.

Your responsibilities will include developing and maintaining web applications using Angular, NDBX frameworks, and other modern technologies. You will design and implement databases in Postgres DB, employ ingestion and retrieval pipelines using pgvector and neo4j, and ensure efficient and secure data practices. Additionally, you will work with generative AI models and frameworks such as LangChain, Haystack, and LlamaIndex for tasks like chunking, embeddings, chat completions, and integration with different data sources.

You will collaborate with team members to integrate GenAI capabilities into applications, write clean and efficient code adhering to company standards, conduct testing to identify and fix bugs, and use collaboration tools like GitHub for effective teamwork and code management. Staying current with emerging technologies and applying them in your work will be essential, along with a strong desire for continuous learning.

Qualifications and Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field, with at least 6 years of working experience.
- Proven experience as a Full Stack Developer with a focus on designing, developing, and deploying end-to-end applications.
- Knowledge of front-end languages and libraries such as HTML/CSS, JavaScript, XML, and jQuery.
- Experience with Angular and NDBX frameworks, as well as database technologies like Postgres DB and vector databases.
- Proficiency in developing APIs following OpenAPI standards.
- Familiarity with generative AI models on cloud platforms like Azure and AWS, including techniques such as Retrieval Augmented Generation, prompt engineering, agentic RAG, and model context protocols.
- Experience with collaboration tools like GitHub and Docker images for packaging applications.

At Allianz, we believe in fostering a diverse and inclusive workforce. We are proud to be an equal opportunity employer that values bringing your authentic self to work, regardless of background, appearance, preferences, or beliefs. Together, we can create an environment where everyone feels empowered to explore, grow, and contribute to a better future for our customers and the global community. Join us at Allianz and let's work together to care for tomorrow.
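
As a purely illustrative aside (not part of the Allianz posting), the chunking-and-embedding step of an ingestion pipeline like the one described above might look roughly like the sketch below, using LangChain's text splitter. The `embed` function and the document text are invented placeholders; a real pipeline would call an actual embedding model and write the results to pgvector or neo4j.

```python
# Illustrative ingestion sketch: chunk a document, then embed each chunk.
# Assumes `pip install langchain-text-splitters`; embed() is a stand-in
# for whatever embedding model the real pipeline would call.
from langchain_text_splitters import RecursiveCharacterTextSplitter

def embed(chunk: str) -> list[float]:
    # Placeholder: a real pipeline would call an embedding model here.
    return [float(len(chunk))]  # dummy 1-dimensional "embedding"

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # target characters per chunk
    chunk_overlap=50,  # overlap preserves context across chunk boundaries
)

document = "This is a stand-in paragraph for a long policy document. " * 50
chunks = splitter.split_text(document)

# Each (chunk, vector) pair would then be written to the vector store.
records = [(chunk, embed(chunk)) for chunk in chunks]
print(len(records), "chunks embedded")
```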

Posted 2 weeks ago

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are seeking a hands-on backend expert to elevate your FastAPI-based platform to the next level by developing production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. In this role, you will be responsible for several key areas:

1. Core Backend Enhancements:
- Building APIs.
- Strengthening security with OAuth2/JWT, rate limiting, and SecretManager, and enhancing observability through structured logging and tracing.
- Adding CI/CD, test automation, health checks, and SLO dashboards.

2. Awesome UI Interfaces:
- Developing UI interfaces using React.js/Next.js, Redux/Context, and CSS frameworks such as Tailwind, MUI, custom CSS, and shadcn.

3. LLM & Agentic Services:
- Designing micro/mini-services to host and route to platforms such as OpenAI, Anthropic, and local HF models, plus embeddings and RAG pipelines.
- Implementing autonomous/recursive agents that orchestrate multi-step chains with tools, memory, and planning.

4. Model-Inference Infrastructure:
- Setting up GPU/CPU inference servers behind an API gateway.
- Optimizing throughput with techniques like batching, streaming, quantization, and caching using tools like Redis and pgvector.

5. NLP & Data Services:
- Managing the NLP stack with Transformers for classification, extraction, and embedding generation.
- Building data pipelines that combine aggregated business metrics with model telemetry for analytics.

You will work with a tech stack that includes Python, FastAPI, Starlette, Pydantic, async SQLAlchemy, Postgres, Docker, Kubernetes, AWS/GCP, Redis, RabbitMQ, Celery, Prometheus, Grafana, OpenTelemetry, and more. Experience in building production Python REST APIs, SQL schema design in Postgres, async patterns and concurrency, UI application development, RAG and LLM/embedding workflows, cloud container orchestration, and CI/CD pipelines is essential for this role. Experience with streaming protocols, NGINX Ingress, SaaS security hardening, data privacy, event-sourced data models, and related technologies would be advantageous.

This role offers the opportunity to work on evolving products, tackle real challenges, and lead the scaling of AI services while working closely with the founder to shape the future of the platform. If you are looking for meaningful ownership and the chance to solve forward-looking problems, this role could be the right fit for you.
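
To illustrate, outside the posting itself, the kind of model-inference service described here, a minimal FastAPI endpoint might look like the sketch below. The route, request/response models, and `predict` function are all invented for the example; a real service would load and run an actual model.

```python
# Minimal, illustrative FastAPI inference endpoint (not from the posting).
# Assumes `pip install fastapi uvicorn`; predict() is a placeholder model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InferenceRequest(BaseModel):
    text: str

class InferenceResponse(BaseModel):
    label: str
    score: float

def predict(text: str) -> tuple[str, float]:
    # Placeholder: a real service would run a loaded model here.
    return ("positive", 0.99)

@app.post("/v1/classify", response_model=InferenceResponse)
async def classify(req: InferenceRequest) -> InferenceResponse:
    label, score = predict(req.text)
    return InferenceResponse(label=label, score=score)

# Run (if this file is app.py) with: uvicorn app:app --host 0.0.0.0 --port 8000
```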

Posted 2 weeks ago

3.0 - 5.0 years

8 - 12 Lacs

Noida

Remote

We are looking for a skilled Python Developer with a strong background in web application development using frameworks like Django, Flask, and FastAPI. The ideal candidate should also have hands-on experience building Generative AI (GenAI) applications leveraging modern AI/ML libraries and frameworks such as LangGraph, LangChain, and LLMs, and integrating them with vector databases like Pinecone or PGVector.

Key Responsibilities

Web Development:
- Design, develop, and maintain scalable RESTful APIs using Django, Flask, or FastAPI.
- Build and deploy modern web applications that are performant, modular, and secure.
- Implement API authentication and authorization using OAuth2, JWT, or session-based approaches.
- Work with frontend teams to integrate APIs and ensure smooth end-to-end flows.
- Follow CI/CD and Git-based deployment workflows using tools like GitHub/GitLab, Jenkins, or Docker.

GenAI Application Development:
- Build intelligent applications using LangGraph and LangChain for orchestrating LLM workflows.
- Integrate OpenAI, Anthropic, HuggingFace, or custom LLMs into production workflows.
- Design and optimize RAG (Retrieval Augmented Generation) pipelines using vector databases such as Pinecone and PGVector.

Database and Backend Integration:
- Work with relational databases like PostgreSQL and MySQL.
- Write efficient and scalable queries for large-scale datasets.
- Experience with AWS/GCP/Azure is a plus.

Required Skills
- Minimum of 3 years of experience in application development using Python/Django.
- Proficiency in developing and consuming RESTful APIs.
- Experience with LangGraph, LangChain, and GenAI tools/workflows.
- Experience with LLMs (OpenAI GPT-4, Claude, Llama, etc.).
- Good understanding of software design principles, code modularity, and version control (Git).
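
Since the posting calls out JWT-based API authentication, here, purely as an illustrative sketch, is what issuing and verifying a token with PyJWT might look like. The secret key and claims are invented for the example; a real service would load the key from a secrets manager.

```python
# Illustrative JWT sketch (not from the posting). Assumes `pip install pyjwt`.
# The secret key and claims below are invented for the example.
import datetime
import jwt

SECRET = "change-me"  # real services load this from a secrets manager

def issue_token(user_id: str) -> str:
    claims = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=1),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure;
    # decode() checks the "exp" claim automatically.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]

token = issue_token("user-42")
print(verify_token(token))  # -> user-42
```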

Posted 1 month ago

6.0 - 9.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid

Primary skills: Postgres, pgvector, vectorized databases, SQL
Experience: 6 - 9 years
Location: Pune / Mumbai / Chennai / Bangalore
Working experience with databases: Postgres, Hive, MS SQL
Experience with scripting: Python, UNIX shell, Spark
Experience working on AI projects is an added advantage.
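
For orientation only (not part of the listing), the core pgvector workflow this posting names boils down to a vector column plus an approximate-nearest-neighbor index. A minimal sketch via psycopg follows; the table name, dimension, and connection string are invented, and the Postgres server is assumed to have the pgvector extension available.

```python
# Illustrative pgvector setup sketch (not from the listing).
# Assumes `pip install psycopg pgvector numpy` and a Postgres server with
# the pgvector extension installable; table and dimension are invented.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

conn = psycopg.connect("dbname=demo user=app", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)

conn.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(384)
    )
""")

# HNSW index for fast approximate cosine-distance search.
conn.execute(
    "CREATE INDEX IF NOT EXISTS docs_embedding_idx "
    "ON docs USING hnsw (embedding vector_cosine_ops)"
)

conn.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s)",
    ("hello world", np.random.rand(384).astype(np.float32)),
)

# Nearest neighbor by cosine distance (the <=> operator).
rows = conn.execute(
    "SELECT id, body FROM docs ORDER BY embedding <=> %s LIMIT 1",
    (np.random.rand(384).astype(np.float32),),
).fetchall()
print(rows)
```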

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Your future duties and responsibilities:

Skills: pgvector, Vertex AI, FastAPI, Flask, Kubernetes

- Develops and optimizes AI applications for production, ensuring seamless integration with enterprise systems and front-end applications.
- Builds scalable API layers and microservices using FastAPI, Flask, Docker, and Kubernetes to serve AI models in real-world environments.
- Implements and maintains AI pipelines with MLOps best practices, leveraging tools like Azure ML, Databricks, AWS SageMaker, and Vertex AI.
- Ensures high availability, reliability, and performance of AI systems through rigorous testing, monitoring, and optimization.
- Works with agentic frameworks such as LangChain, LangGraph, and AutoGen to build adaptive AI agents and workflows.
- Experience with GCP, AWS, or Azure, utilizing services such as Vertex AI, Bedrock, or Azure OpenAI model endpoints.
- Hands-on experience with vector databases such as pgvector, Milvus, Azure Search, and AWS OpenSearch, and embedding models such as Ada and Titan.
- Collaborates with architects and scientists to transition AI models from research to fully functional, high-performance production systems.

Skills: Azure Search, Flask, Kubernetes

Posted 2 months ago

2.0 - 5.0 years

3 - 7 Lacs

Faridabad

Work from Office

We are hiring an AI & Data Retrieval Engineer with expertise in natural language querying (NLQ), Text-to-SQL, LLMs, LangChain, pgvector, PostgreSQL, vector search, Python, AI libraries, agentic AI, and API integration. Experience with NLP, RAG, BI tools, live projects, and LLM fine-tuning is preferred.
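
As a rough illustration of the Text-to-SQL pattern this role centers on (not part of the ad), the usual flow is: describe the schema in a prompt, ask an LLM for a query, then validate and run it read-only. The `complete` function below is a stand-in for any LLM client, and the schema and question are invented.

```python
# Illustrative Text-to-SQL sketch (not from the ad). complete() stands in
# for a real LLM call; the schema and question are invented examples.
SCHEMA = """
CREATE TABLE orders (id int, customer text, total numeric, placed_on date);
"""

PROMPT = """You translate questions into a single PostgreSQL SELECT statement.
Schema:
{schema}
Question: {question}
SQL:"""

def complete(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "SELECT customer, SUM(total) FROM orders GROUP BY customer;"

def text_to_sql(question: str) -> str:
    sql = complete(PROMPT.format(schema=SCHEMA, question=question)).strip()
    # Guardrail: only allow read-only queries before executing anything.
    if not sql.lower().startswith("select"):
        raise ValueError("refusing non-SELECT statement: " + sql)
    return sql

print(text_to_sql("Total spend per customer?"))
```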

Posted 2 months ago
