Jobs
Interviews

24 Semantic Search Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

About Us: Suhora is a cutting-edge technology firm that leverages satellite imagery, big data, and AI to solve problems surrounding Earth. We specialize in offering integrative all-weather, day-and-night solutions by combining Synthetic Aperture Radar (SAR), Optical, and Thermal data. Our mission is to use technology and our expertise in geospatial intelligence to make the world a better and more sustainable place. At Suhora, we are committed to delivering innovative solutions that help our clients make data-driven decisions, whether it's for environmental monitoring, agriculture, disaster response, or infrastructure development. We believe our expertise can make a tangible difference in addressing global challenges, and we're looking for individuals who are passionate about technology and sustainability to join our team. For more information, visit our website: www.suhora.com

Job Summary: We are seeking a Machine Learning Engineer with a focus on Large Language Models (LLMs), Natural Language Processing (NLP), and advanced techniques such as LLAMA and Retrieval-Augmented Generation (RAG). The ideal candidate will have hands-on experience with cutting-edge LLMs, including Meta's LLAMA models, and with applying RAG to build powerful AI systems. You will work on projects that combine NLP techniques with geospatial data, building systems that can process, analyze, and generate insights for geospatial intelligence applications.

Responsibilities:
- Develop LLMs & NLP models: design, develop, and fine-tune LLAMA models, RAG architectures, and other LLM techniques (e.g., GPT, BERT) to process and generate text-based insights from geospatial data, reports, and metadata; build and integrate NLP models capable of information retrieval, extraction, and classification from geospatial data reports and documents.
- Implement Retrieval-Augmented Generation (RAG): design and implement RAG systems that improve LLM output by integrating external knowledge sources for context-aware, accurate results; fine-tune LLAMA or other LLMs with RAG architectures to ground responses in information retrieved from large, unstructured datasets.
- Text analytics & information extraction: apply advanced NLP techniques to extract key insights from unstructured geospatial text, such as location-based data, satellite data descriptions, and environmental reports; integrate LLMs with structured geospatial data, such as GeoTIFFs, shapefiles, or GIS data formats, to provide actionable insights.
- Model training and optimization: train large-scale LLAMA models and RAG systems on diverse text datasets and optimize them for real-time applications; ensure that models are efficient, scalable, and capable of processing massive volumes of geospatial text data.
- Cross-functional collaboration: work closely with data scientists, software engineers, and domain experts to integrate NLP and LLM models into production pipelines; contribute to ongoing R&D, exploring new advancements in LLAMA, RAG, and NLP technologies.

Required Skills & Qualifications:
- Experience: 3-5 years in machine learning, NLP, and LLMs, with hands-on experience with LLAMA models or similar LLM technologies (e.g., GPT, BERT); experience implementing Retrieval-Augmented Generation (RAG) to improve model performance.
- Technical expertise: strong proficiency in Python and experience with NLP libraries and frameworks such as Hugging Face Transformers, spaCy, PyTorch, or TensorFlow; hands-on experience with LLAMA (Meta AI) or similar transformer-based models; proficiency in text-processing techniques, including tokenization, named entity recognition (NER), sentiment analysis, and semantic search.
- Problem solving & collaboration: ability to analyze complex NLP tasks and translate them into scalable, efficient model solutions; strong collaboration skills for delivering end-to-end NLP solutions with engineers and domain experts.

Preferred Skills:
- Experience with geospatial data or knowledge of geospatial intelligence.
- Familiarity with cloud platforms for model deployment (AWS, Google Cloud, Azure).
- Knowledge of data augmentation, model fine-tuning, and reinforcement learning techniques.
- Experience with Docker and Kubernetes for deploying machine learning models.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
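For readers unfamiliar with the Retrieval-Augmented Generation (RAG) pattern this posting centers on, here is a minimal, self-contained Python sketch. It uses a toy bag-of-words "embedding" with cosine similarity for the retrieval step and stubs out the generation step; a real system would use dense embeddings and an LLM such as LLAMA, and nothing here reflects Suhora's actual implementation.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Retrieval step: rank documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query, documents):
    """Generation step, stubbed: a real system would pass the
    retrieved context to an LLM (e.g. a LLAMA model) here."""
    context = " ".join(retrieve(query, documents))
    return f"Context: {context} | Question: {query}"

docs = [
    "Flood extent mapped from SAR imagery over the river basin.",
    "Quarterly financial results for the satellite division.",
]
print(answer("Which document describes flood mapping?", docs))
```

The retrieval step is what RAG adds over a bare LLM: the model answers from fetched context rather than from its weights alone.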

Posted 16 hours ago

Apply

0.0 - 2.0 years

1 - 2 Lacs

Pune

Work from Office

About the Role: We are developing a next-gen Applicant Tracking System (ATS) with an intelligent resume parser powered by AI/ML. We seek motivated interns passionate about NLP, deep learning, and predictive modeling to join our core development team.

Internship: AI/ML (ATS & Resume Parser Development)
Location: Pune (Onsite/Hybrid)
Stipend: 15,000/month
Experience: 0.6 to 2 years
Duration: 3-6 months
Opportunity: High-performing interns will be absorbed into the organisation.

Responsibilities:
- Design and implement ML/DL models for resume parsing and candidate-job matching.
- Build pipelines for extracting, cleaning, and structuring candidate information.
- Experiment with NLP libraries and models (spaCy, NER, BERT/LLMs).
- Assist in integrating AI models into the ATS platform.

Requirements:
- Pursuing or completed a bachelor's/master's in CS/AI/ML/Data Science.
- Proficiency in Python and ML/DL frameworks (TensorFlow/PyTorch).
- Familiarity with NLP, text mining, and semantic search.
- Exposure to APIs, databases, and Flask/FastAPI preferred.
- Must showcase at least one project successfully delivered, academic or otherwise.

What You'll Gain:
- Hands-on AI/ML experience and knowledge of HR tech.
- End-to-end exposure, from research to deployment.
- PPO opportunity for strong performers.
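The resume-parsing task described above can be illustrated with a deliberately simple sketch: regex and keyword extraction of structured fields from raw resume text. The skill vocabulary and sample resume are hypothetical, and a production parser would use an NER model (spaCy, BERT) rather than keyword matching.

```python
import re

# Hypothetical skill vocabulary for illustration only; a real ATS
# would use a trained NER model instead of keyword matching.
SKILLS = {"python", "tensorflow", "pytorch", "flask", "fastapi", "sql"}

def parse_resume(text):
    """Extract a structured candidate record from raw resume text."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    found = {s for s in SKILLS if re.search(rf"\b{s}\b", text, re.I)}
    return {"email": email.group(0) if email else None,
            "skills": sorted(found)}

resume = "Asha Rao | asha.rao@example.com | Built Flask APIs in Python."
print(parse_resume(resume))
# → {'email': 'asha.rao@example.com', 'skills': ['flask', 'python']}
```

The output of a parser like this feeds the candidate-job matching stage, where extracted skills are compared against the job's requirements.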

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As an AI/ML Developer at our organization, you will be responsible for developing and implementing machine learning models and algorithms. Your role will involve working closely with project stakeholders to understand requirements and translate them into deliverables. Using statistical and machine learning techniques, you will analyze and interpret complex data sets. Staying current with the latest advancements in AI/ML technologies and methodologies is crucial, as is collaborating with cross-functional teams to support and drive these initiatives forward.

It would be advantageous to have experience building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios.

Key technical skills:
- Solid experience in time series analysis, anomaly detection, and traditional machine learning techniques (regression, classification, predictive modeling, clustering) plus the deep learning stack, using Python.
- Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
- Proficiency in Python, TypeScript, NodeJS, ReactJS (or equivalent) and libraries such as pandas, NumPy, scikit-learn, OpenCV, and SciPy, along with AWS Glue crawlers and ETL.
- Experience with data visualization tools such as Matplotlib, Seaborn, and QuickSight, and with deep learning frameworks such as TensorFlow, Keras, and PyTorch.
- Familiarity with version control systems such as Git and CodeCommit.
- Strong knowledge and experience in Generative AI/LLM-based development: key LLM model APIs and frameworks, text chunking techniques, text embeddings, RAG concepts, and training and fine-tuning foundation models.

If you have 5-8 years of experience in the field and can join within 0-15 days, we encourage you to apply for this full-time position located in Ahmedabad, Indore, or Pune.
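Among the Generative AI skills listed above is text chunking for RAG. A common baseline is a sliding word window with overlap, sketched below; the window sizes are illustrative defaults, not any particular product's settings.

```python
def chunk_text(text, max_words=50, overlap=10):
    """Split text into overlapping word-window chunks.

    Overlap keeps sentences that straddle a chunk boundary visible
    in two chunks, which helps retrieval recall in RAG pipelines.
    """
    if overlap >= max_words:
        raise ValueError("overlap must be smaller than max_words")
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# 120 synthetic words -> 3 chunks: words 0-49, 40-89, 80-119.
doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_text(doc, max_words=50, overlap=10)
print(len(chunks))  # → 3
```

Each chunk would then be embedded and stored in a vector index so that retrieval returns passages small enough to fit an LLM's context window.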

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

About DevRev: DevRev is a rapidly growing SaaS innovator building AgentOS, a unified platform that streamlines collaboration for support, product, and growth teams. DevRev's mission is to help SaaS companies accelerate product velocity and reduce customer churn through AI-driven analytics and seamless integrations. Headquartered in Palo Alto and backed by leading investors, we are expanding our global team in Chennai.

About the Team: The Applied AI Engineering team at DevRev serves as the bridge between DevRev's core product and the unique needs of our customers. We work closely with clients to design, develop, and implement integrations, automations, and AI-powered enhancements that unlock real value from the DevRev platform.

Role Overview: As an Applied AI Engineer at DevRev, you will play a pivotal role in delivering custom solutions for our customers. Leveraging your expertise in integrations, cloud/serverless technologies, and AI/ML, you will connect DevRev with diverse SaaS and enterprise systems. Collaboration with cross-functional teams and direct engagement with customers are key aspects of this role, ensuring successful adoption and maximum impact.

Key Responsibilities:
- Design, develop, and maintain integrations and automations between DevRev and third-party SaaS/non-SaaS systems using APIs, webhooks, and real-time communication architectures.
- Apply AI/ML, including generative AI and large language models (LLMs), to address customer challenges and optimize workflows.
- Construct and optimize knowledge graphs and semantic search engines to enhance data discovery.
- Analyze data, craft SQL queries, and build dashboards to deliver actionable insights to customers and internal teams.
- Collaborate with customers and internal stakeholders (Product, Engineering, Customer Success) to elicit requirements and deliver tailored solutions.
- Document solutions to ensure maintainability, scalability, and resilience in all integrations.
- Deploy and manage solutions on serverless/cloud platforms (AWS Lambda, Google Cloud Functions, Azure Functions, etc.).
- Secure API integrations, including OAuth, API key/token management, and secrets handling.

Required Skills & Qualifications:
- 6+ years of experience in integration software development, focusing on APIs, webhooks, and automation for customer-facing solutions.
- Proficiency in TypeScript, JavaScript, Python, or Go.
- Hands-on experience with serverless/cloud platforms (AWS Lambda, Google Cloud Functions, Azure Functions, etc.).
- Familiarity with OpenAPI specifications, SDK integrations, and secure API practices (OAuth, API keys, secrets management).
- Solid understanding of event-driven and pub/sub architectures.
- Expertise in data mapping, transformation, and working with graph data structures.
- Background in AI/ML, especially LLMs, prompt engineering, or semantic search, is a strong advantage.
- Excellent communication and documentation skills.
- Bachelor's degree in Computer Science, Engineering, or a related field (advanced certifications in AI/architecture are a plus).
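One of the secure-API practices this posting lists is webhook secret handling. A standard approach, shown here as a generic sketch using only Python's standard library (the secret and payload values are placeholders), is HMAC-SHA256 payload signing on the producer side with constant-time verification on the consumer side.

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    """Producer side: HMAC-SHA256 signature sent in a webhook header."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    """Consumer side: constant-time comparison defeats timing attacks."""
    expected = sign(secret, payload)
    return hmac.compare_digest(expected, signature)

secret = b"whsec_demo"  # shared secret; a placeholder value
body = b'{"event":"ticket.updated","id":42}'
sig = sign(secret, body)
assert verify(secret, body, sig)            # untampered payload passes
assert not verify(secret, b'{"x":1}', sig)  # tampered payload fails
```

`hmac.compare_digest` matters here: a naive `==` comparison can leak how many leading bytes matched through response timing.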

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

Key responsibilities:
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Use statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay current with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Qualifications: a Bachelor's degree in Computer Science, Data Science, or a related field, and a strong understanding of machine learning, deep learning, and Generative AI concepts.

Preferred skills:
- Experience in machine learning techniques (regression, classification, predictive modeling, clustering) and the deep learning stack, using Python.
- Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
- Expertise in building secure data ingestion pipelines for unstructured data (ETL/ELT).
- Proficiency in Python, TypeScript, NodeJS, ReactJS, and related frameworks.
- Experience with data visualization tools and deep learning frameworks.
- Experience with version control systems.
- Strong knowledge and experience in Generative AI/LLM-based development.

Good to have: experience building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Pentair is an Equal Opportunity Employer, and we believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.

Posted 4 days ago

Apply

6.0 - 8.0 years

10 - 15 Lacs

pune

Hybrid

Position: Gen AI Engineer (React + Node)
Experience: 6-8 years
Location: Pune (Hybrid)

Responsibilities:
- Design and deliver AI-first features (contract summarization, semantic search, guided workflows).
- Integrate LLMs (OpenAI, Anthropic, Mistral) using LangChain, LlamaIndex, or custom orchestration.
- Build RAG pipelines and embedding-based search with vector DBs (Pinecone, Weaviate, pgvector).
- Rapidly prototype and optimize prompts, models, and workflows.
- Work across the stack: React (UI), Node.js and Python (AI workflows).

Skills required:
- Strong in React, Node.js, and Python.
- Hands-on with LLMs, LangChain/LlamaIndex, and vector DBs.
- Experience with RAG, semantic search, CI/CD, and Agile/DevOps.
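The embedding-based search with vector DBs mentioned above reduces, at its core, to nearest-neighbour lookup by cosine similarity. This toy index imitates the upsert/query interface of services like Pinecone or pgvector with a brute-force scan; the document IDs and vectors are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

class ToyVectorIndex:
    """Brute-force stand-in for a vector DB (Pinecone/pgvector style)."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def upsert(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def query(self, vector, top_k=1):
        scored = sorted(self.items,
                        key=lambda it: cosine(it[1], vector),
                        reverse=True)
        return [doc_id for doc_id, _ in scored[:top_k]]

index = ToyVectorIndex()
index.upsert("contract-summary", [0.9, 0.1, 0.0])
index.upsert("invoice", [0.0, 0.2, 0.9])
print(index.query([1.0, 0.0, 0.1]))  # → ['contract-summary']
```

Real vector databases replace the brute-force scan with approximate nearest-neighbour structures (HNSW, IVF) so queries stay fast at millions of vectors.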

Posted 5 days ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a highly skilled Data Scientist with expertise in AI agents, generative AI, and knowledge engineering, tasked with enhancing AI-driven cloud governance solutions. Your main focus will be on advancing multi-agent systems, leveraging LLMs, and integrating knowledge graphs (OWL ontologies) in a Python environment. Working at the intersection of machine learning, AI-driven automation, and cloud governance, you will design intelligent agents that adapt dynamically to cloud ecosystems. Your contributions will support FinOps, SecOps, CloudOps, and DevOps with scalable, AI-enhanced decision-making, workflows, and monitoring.

Key responsibilities:
- Design, develop, and optimize LLM-based multi-agent systems for cloud governance.
- Implement agent collaboration using frameworks such as LangChain, AutoGen, or open-source MAS architectures.
- Develop adaptive AI workflows to improve governance, compliance, and cost optimization.
- Apply generative AI techniques (GPT-4, Google Gemini, fine-tuned BERT models) to knowledge representation and reasoning.
- Design and manage knowledge graphs, OWL ontologies, and SPARQL queries for intelligent decision-making.
- Enhance AI agent knowledge retrieval using symbolic reasoning and semantic search.
- Develop embedding-based search models for retrieving and classifying cloud governance documents.
- Fine-tune BERT, OpenAI embeddings, or custom transformer models for document classification and recommendation.
- Integrate discrete event simulation (DES) or digital twins for adaptive cloud governance modeling.
- Work with multi-cloud environments (AWS, Azure, GCP, OCI) to extract, analyze, and manage structured and unstructured cloud data.
- Implement AI-driven policy recommendations for FinOps, SecOps, and DevOps workflows.
- Collaborate with CloudOps engineers and domain experts to enhance AI-driven automation and monitoring.

Required qualifications:
- 4+ years of experience in data science, AI, or knowledge engineering.
- Strong proficiency in Python and relevant ML/AI libraries (PyTorch, TensorFlow, scikit-learn).
- Hands-on experience with knowledge graphs, OWL ontologies, RDF, and SPARQL.
- Expertise in LLMs, NLP, and embedding-based retrieval.
- Familiarity with multi-agent systems, LangChain, and AutoGen.
- Experience with cloud platforms (AWS, Azure, GCP) and AI-driven cloud governance.

Preferred qualifications:
- Experience with knowledge-driven AI applications in cloud governance, FinOps, or SecOps.
- Understanding of semantic search, symbolic AI, or rule-based reasoning.
- Familiarity with event-driven architectures, digital twins, or discrete event simulation (DES).
- Background in MLOps, AI pipelines, and cloud-native ML deployments.

What we offer: the opportunity to work on cutting-edge AI agent ecosystems for cloud governance in a collaborative environment that brings together AI, knowledge engineering, and cloud automation; competitive compensation and benefits; and flexible work arrangements (remote/hybrid). If you thrive in a fast-paced environment, are intellectually curious, and are passionate about applying advanced AI techniques to real-world cybersecurity challenges, this role is for you.
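The knowledge-graph querying this posting describes (RDF triples, SPARQL) can be illustrated with a tiny in-memory triple store and pattern matching; a real system would use rdflib with OWL ontologies and actual SPARQL. The triples below are invented examples of cloud-governance facts.

```python
# Tiny in-memory triple store with SPARQL-style pattern matching.
# All subjects, predicates, and objects are hypothetical examples.
TRIPLES = [
    ("vm-42", "rdf:type", "ComputeInstance"),
    ("vm-42", "hostedOn", "aws"),
    ("bucket-7", "rdf:type", "StorageBucket"),
    ("bucket-7", "hostedOn", "gcp"),
]

def match(pattern, triples=TRIPLES):
    """Match an (s, p, o) pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Analogue of: SELECT ?s WHERE { ?s :hostedOn "aws" }
hits = match((None, "hostedOn", "aws"))
print([s for s, _, _ in hits])  # → ['vm-42']
```

Pattern matching over triples is the querying primitive; SPARQL adds joins across patterns, filters, and inference over ontology axioms on top of it.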

Posted 5 days ago

Apply

3.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

What you'll do...

Job Description: We build and scale the core Search Platform that powers millions of customer queries daily across multiple geographies. Our systems handle catalog indexing, query understanding, ranking, personalization, and retrieval at global scale with sub-second latencies. The team's mission is to deliver highly relevant, fast, and personalized search results that directly influence customer satisfaction and business growth. As a Software Engineer (Search Platform), you will design and build distributed systems that drive search relevance, scalability, and reliability. You will collaborate closely with engineers, data scientists, and product managers to deliver world-class search experiences.

About the Team: The Global Search Engineering Team is responsible for the core search services that power product discovery across multiple markets and platforms (desktop, mobile, and voice). We build highly scalable systems that support:
- Catalog indexing across millions of SKUs globally.
- Query understanding and expansion across multiple languages.
- Ranking and personalization using ML models and customer behavioral signals.
- Experimentation platforms for continuous improvement of search quality.
The team's impact is directly tied to customer experience, engagement, and revenue.

What You'll Do:
- Design, develop, and maintain large-scale search and retrieval systems that serve millions of queries per second.
- Implement low-latency APIs (REST/GraphQL) for query execution, autocomplete, and personalized ranking.
- Work on distributed search infrastructure using technologies such as Elasticsearch, Solr, OpenSearch, or Vespa.
- Build and optimize indexing pipelines for catalog ingestion, query expansion, and relevance signals using streaming platforms like Kafka/Kinesis.
- Apply information retrieval techniques (BM25, semantic search, learning-to-rank, embeddings) to improve search result relevance.
- Collaborate with ML engineers to integrate vector embeddings, personalization, and reranking models into search services.
- Ensure system resilience with fault tolerance, fallback strategies, and graceful degradation mechanisms.
- Optimize for sub-200ms global latency with caching (Redis, CDN), sharding, and replication strategies.
- Work with observability platforms (Prometheus, Grafana, OpenTelemetry) to monitor query performance, detect anomalies, and drive improvements.
- Partner with cross-functional teams (recommendations, personalization, catalog ingestion) to ensure an end-to-end customer experience.
- Participate in code reviews, mentorship, and design discussions, driving best practices across the team.
- Contribute to A/B testing frameworks and analyze metrics like NDCG, CTR, and recall/precision to validate relevance improvements.

What You Will Bring:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-6 years of experience in software engineering, with at least 2 years in search, recommendation, or distributed systems.
- Proven experience in large-scale system design and search infrastructure.
- Strong problem-solving, debugging, and analytical skills.
- A collaborative mindset, with the ability to work effectively with globally distributed teams.
- Curiosity to explore new technologies in search, ML, and distributed systems.

Technical Skills:
- Core software engineering foundations: strong proficiency in Java, Scala, C++, Go, or Python; deep understanding of data structures and algorithms, particularly search-related; strong grounding in object-oriented and functional design; experience building low-latency REST/GraphQL APIs.
- Search & information retrieval (IR) domain: knowledge of indexing, ranking, and query parsing; experience with ranking algorithms (BM25, semantic search, LTR); familiarity with query understanding techniques (spell correction, synonyms, tokenization); exposure to search metrics (CTR, NDCG, recall, precision).
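The BM25 ranking function named in the IR skills above can be written out in a few lines. This is a plain Okapi BM25 sketch over whitespace-tokenized documents with the customary k1 and b defaults; engines such as Elasticsearch use BM25 as their default similarity, computed over inverted indexes rather than this brute-force loop.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]  # term frequency in this document
            # Length normalization: longer-than-average docs are penalized.
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores

docs = ["red running shoes", "wireless running headphones", "red wine glass"]
scores = bm25_scores("red shoes", docs)
print(max(range(len(docs)), key=scores.__getitem__))  # → 0
```

The idf term rewards rare query words ("shoes" outweighs the commoner "red"), which is why the document matching both terms ranks first.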
About Walmart Global Tech: Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.

Flexible, hybrid work: We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits: Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging: We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins.
Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer: Walmart, Inc. is an Equal Opportunity Employer, by choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions while being inclusive of all people.

Minimum Qualifications: Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or a related area, and 2 years' experience in software engineering or a related area at a technology, retail, or data-driven company. Option 2: 4 years' experience in software engineering or a related area at a technology, retail, or data-driven company.

Preferred Qualifications: Certification in Security+, Network+, GISF, GSEC, CISSP, or CCSP; Master's degree in Computer Science, Information Technology, Engineering, Information Systems, Cybersecurity, or a related area.

Primary Location: Block 1, Prestige Tech Pacific Park, Sy No. 38/1, Outer Ring Road, Kadubeesanahalli, India (R-2260137)

Posted 5 days ago

Apply

9.0 - 11.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are conceived and manufactured using our Product Lifecycle Management (PLM) software.

We are seeking AI Backend Engineers to play a pivotal role in building our Agentic Workflow Service and Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in both backend development and machine learning to create robust, scalable AI-powered systems using AWS Kubernetes, Amazon Bedrock models, the AWS Strands framework, and LangChain/LangGraph.

Key Responsibilities:
- Design and implement core backend services and APIs for the agentic framework and RAG systems.
- Build LLM-based applications using Amazon Bedrock models.
- Build RAG systems with advanced retrieval mechanisms and vector database integration.
- Implement agentic workflows using technologies such as the AWS Strands framework and LangChain/LangGraph.
- Design and develop microservices that efficiently integrate AI capabilities.
- Create scalable data processing pipelines for training data and document ingestion.
- Optimize model performance, inference latency, and overall system efficiency.
- Implement evaluation metrics and monitoring for AI components.
- Write clean, maintainable, and well-tested code with comprehensive documentation.
- Collaborate with cross-functional team members, including DevOps, product, and frontend engineers.
- Stay current with the latest advancements in LLMs and AI agent architectures.

Qualifications:
- 9+ years of total software engineering experience.
- Backend development experience with strong Python programming skills.
- Experience in ML/AI engineering, particularly with LLMs and generative AI applications, plus microservices architecture, API design, and asynchronous programming.
- Demonstrated experience building RAG systems and working with vector databases.
- Experience with LangChain/LangGraph or similar LLM orchestration frameworks.
- Strong knowledge of AWS services, particularly Bedrock, Lambda, and container services.
- Experience with containerization technologies and Kubernetes.
- Understanding of ML model deployment, serving, and monitoring in production environments.
- Knowledge of prompt engineering and LLM fine-tuning techniques.
- Excellent problem-solving abilities and system design skills.
- Strong communication skills and the ability to explain complex technical concepts.
- Experience with Kubernetes, AWS serverless, and working with databases (SQL, NoSQL) and data structures.
- Ability to learn new technologies quickly.
- AWS certification required: Associate Architect, Developer, Data Engineer, or AI track.
- Familiarity with streaming architectures and real-time data processing.
- Experience with ML experiment tracking and model versioning, and an understanding of ML/AI ethics and responsible AI development.
- Experience with the AWS Strands framework.
- Knowledge of semantic search and embedding models.
- Contributions to open-source ML/AI projects.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

We are Siemens: a collection of over 377,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package, which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare.

Siemens Software. Transform the everyday. #SWSaaS

Posted 2 weeks ago

Apply

9.0 - 11.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are conceived and manufactured using our Product Lifecycle Management (PLM) software.

We are seeking Backend Engineers to play a pivotal role in building our Data & AI services: the Agentic Workflow Service and the Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in backend development together with your AI knowledge and skills to build robust, scalable Data & AI services using AWS Kubernetes and Amazon Bedrock models.

Key Requirements:
- Backend development experience with strong Java programming skills, along with basic Python programming knowledge.
- Design and develop microservices with Java Spring Boot that efficiently integrate AI capabilities.
- Experience with microservices architecture, API design, and asynchronous programming.
- Experience working with databases (SQL, NoSQL) and data structures.
- Solid understanding of AWS services, particularly Bedrock, Lambda, and container services.
- Experience with containerization technologies, Kubernetes, and AWS serverless.
- Understanding of RAG systems with advanced retrieval mechanisms and vector database integration.
- Understanding of agentic workflows using technologies such as the AWS Strands framework and LangChain/LangGraph.
- Create scalable data processing pipelines for training data and document ingestion.
- Write clean, maintainable, and well-tested code with comprehensive documentation.
- Collaborate with cross-functional team members, including DevOps, product, and frontend engineers.
- Stay current with the latest advancements in data, LLMs, and AI agent architectures.
- 9+ years of total software engineering experience.
- Understanding of building RAG systems and working with vector databases.
- ML/AI engineering experience, particularly with LLMs and generative AI applications.
- Awareness of LangChain/LangGraph or similar LLM orchestration frameworks.
- Understanding of ML model deployment, serving, and monitoring in production environments.
- Knowledge of prompt engineering.
- Excellent problem-solving abilities and system design skills.
- Strong communication skills and the ability to explain complex technical concepts.
- Ability to learn new technologies quickly.

Qualifications:
- Must have AWS certification: Associate Architect, Developer, Data Engineer, or AI track.
- Must have familiarity with streaming architectures and real-time data processing.
- Must have developed, delivered, and operated microservices on AWS.
- Understanding of ML/AI ethics and responsible AI development.
- Knowledge of semantic search and embedding models.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

We are Siemens: a collection of over 377,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package, which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare.

Siemens Software. Transform the everyday. #SWSaaS

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As an AWS AI Engineer at HCL, you will be responsible for designing and implementing AI-driven solutions using AWS services. Your role will involve building scalable AI components, integrating data sources, and supporting agentic workflows on AWS infrastructure. Key Responsibilities: - Develop, test, and maintain AI plugins and connectors for AWS-based AI platforms. - Design orchestration layers using AWS Lambda, Step Functions, and other services. - Ensure high-quality prompt engineering and evaluation. - Integrate structured and unstructured data using AWS-native tools. - Support use case development across no-code, low-code, and pro-code environments. - Perform data mining, cleansing, and preparation for AI agents. - Build reusable components and services with a product mindset. - Transition solutions to support teams and assist in production issue resolution. Required Skills: - 5+ years of experience in Data Engineering, Semantic Search, and open-source development. - Strong experience with AWS services such as Lambda, Step Functions, Bedrock, DynamoDB, S3, SageMaker, etc. - Proficiency in Python, TypeScript, React, FastAPI, and LangChain. - Deep understanding of SQL, prompt engineering, and AI agent design. - Familiarity with IAM, KMS, Secrets Manager, and OpenSearch Serverless. - Experience with Git-based source control and Terraform for IaC. If you are passionate about innovation and have the required skills and experience, we are looking for early joiners for our team in Chennai or anywhere in India. To apply, please share the following details: - Total years of experience - Experience in AWS - Experience in Python/PySpark - Experience in AI/Gen AI - Current location - Preferred location For further information or to apply, please contact us at srikanth.domala@hcltech.com.
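The orchestration layer this role describes (Lambda functions coordinated by Step Functions) often starts as a plain Python handler. The sketch below is a hedged illustration, not HCL's actual design: the event shape and the `route_intent` helper are invented for the example, and a real deployment would hand the routed intent to the next state in a Step Functions state machine (e.g. a Bedrock invocation).

```python
import json

# Hypothetical intent router for an AI plugin; the intents and the
# keyword check are illustrative assumptions, not a real design.
def route_intent(query: str) -> str:
    if "order" in query.lower():
        return "order_lookup"
    return "general_qa"

def lambda_handler(event, context=None):
    """Entry point a Step Functions state would invoke for each request."""
    query = event.get("query", "")
    intent = route_intent(query)
    # Step Functions would branch on this output to pick the next state,
    # e.g. a Bedrock model call or a data-lookup Lambda.
    return {
        "statusCode": 200,
        "body": json.dumps({"intent": intent, "query": query}),
    }
```

Locally you can exercise the handler with `lambda_handler({"query": "Where is my order?"})` before wiring it into a state machine.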

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You will be a key member of our team as a Software Engineer specializing in AI. Your role will involve working at the cutting-edge intersection of advanced AI technologies and robust backend systems. We are seeking an individual with a strong background in Python development and a keen interest in creating scalable GenAI applications. Your primary responsibilities will revolve around designing, developing, and deploying backend services that support AI-driven systems and user experiences with a focus on LLM-based technologies. Your main tasks will include designing and implementing GenAI pipelines such as LLM orchestration, RAG, and multi-agent workflows using frameworks like LangChain, Agno, LlamaIndex, ReAct, AutoGen, or CrewAI. You will also be responsible for building and maintaining Python backend services utilizing FastAPI or Flask, creating APIs and microservices to expose LLM capabilities, integrating external data sources into AI workflows, and managing model selection, fine-tuning, and optimization for production use. Additionally, you will collaborate closely with product and design teams to translate user requirements into backend AI features. It would be advantageous if you have experience or interest in frontend technologies like React, Next.js, or Vue, as well as deploying applications in cloud environments using tools such as Docker, Kubernetes, and CI/CD pipelines. Familiarity with authentication systems, API gateways, and basic DevOps workflows is also a plus. Exposure to Python libraries for ML/AI such as Transformers, scikit-learn, PyTorch, and TensorFlow would be beneficial. To excel in this role, you should possess at least 4 years of professional experience in Python backend development and machine learning. Experience in building and deploying GenAI applications using LLMs and related orchestration frameworks is essential. 
A strong understanding of RESTful API design, backend architecture, vector databases, and FastAPI or Flask for production-grade services is required. Additionally, you should have a solid grasp of software engineering principles, testing methodologies, version control practices, and deployment procedures. A proactive mindset and the ability to thrive in dynamic environments are highly valued traits. We value individuals who exhibit ownership, pragmatism, curiosity, and collaboration. Ownership entails taking full responsibility for your work and outcomes, while pragmatism involves balancing innovation with reliability and scalability. Curiosity drives continuous learning and experimentation with new AI tools and trends, while collaboration fosters effective cross-disciplinary teamwork and clear communication. This position is offered as Contract-to-Hire, with the potential for full-time conversion based on your performance and fit within the team.
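The vector-database retrieval this posting asks for reduces, at its core, to nearest-neighbour search over embeddings. Here is a minimal sketch with toy 3-dimensional vectors; it is an assumption-laden illustration, since a production system would obtain embeddings from a model (e.g. SentenceTransformers) and delegate the search to a store such as FAISS or Pinecone:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document embeddings; in production these come from an embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]
```

A query vector near the "refund policy" embedding, such as `[0.8, 0.2, 0.1]`, ranks that document first; dedicated vector databases do the same ranking with approximate-nearest-neighbour indexes instead of a full scan.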

Posted 3 weeks ago

Apply

5.0 - 12.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

What You Will Do Act as the Voice of the Customer by analyzing user behavior, needs, and journeys across Enterprise Data and AI platforms. Collaborate with business stakeholders to understand requirements and identify opportunities to enable data use and product success. Drive Go-to-Market (GTM) strategy by aligning product messaging and positioning with customer needs, supporting launches, and increasing adoption. Develop customer personas and journey maps to optimize the end-to-end user experience across global data and AI products. Build and nurture user communities, championing product education and peer knowledge-sharing. Offer last-mile support for data platforms and assist in customer onboarding and training activities. Engage with customers and stakeholders to evangelize product capabilities and identify internal brand ambassadors. Monitor product performance and usage metrics, delivering insights to product and engineering teams for data-driven decision-making. Collaborate with cross-functional teams including engineering, product, and GTM groups to ensure effective product delivery and enhancements. Use data and analytics to recommend optimizations for product features and user experience. What We Expect of You Educational Qualifications: Master's degree with 7-10 years of experience in Information Systems OR Bachelor's degree with 8-10 years of experience OR Diploma with 10-12 years of experience Experience Requirements: 5-7 years of experience in a Product Analyst role, especially working on data and AI products. Hands-on experience with data platforms and working closely with data engineering teams. Demonstrated success in developing and executing GTM strategies. Track record of supporting product launches and ensuring effective product-market fit. Strong experience with Agile methodologies (Scrum/SAFe) and advanced Excel skills. Preferred Qualifications Experience working on search, especially semantic search technologies.
Familiarity with big data technologies, AI platforms, and cloud-based data ecosystems. Strong working knowledge of SQL. Experience operating in matrixed organizations, driving collaboration across teams. Background in data and AI consulting or services. Exposure to biotech/pharma domains is an added advantage. Soft Skills Excellent analytical and problem-solving abilities Strong communication and presentation skills High initiative, self-motivation, and adaptability Ability to manage multiple priorities Team-oriented with a focus on collaboration and shared success Comfortable working with global, virtual teams

Posted 4 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

We're Hiring: Artificial Intelligence Consultant! We're seeking a highly motivated and technically adept Artificial Intelligence Consultant to join our growing Artificial Intelligence and Business Transformation practice. This role is ideal for a strategic thinker with a strong blend of leadership, business consulting acumen, and technical expertise in Python, LLMs, Retrieval-Augmented Generation (RAG), and agentic systems. Experience Required: Minimum 6+ Years Location: Remote/Work From Home Job Type: Contract to hire (1 Year/Renewable Contract) Notice Period: Immediate to 15 Days Max Mode of Interview: Virtual Roles And Responsibilities AI Engagements: Independently manage end-to-end delivery of AI-led transformation projects across industries, ensuring value realization and high client satisfaction. Strategic Consulting & Roadmapping: Identify key enterprise challenges and translate them into AI solution opportunities, crafting transformation roadmaps that leverage RAG, LLMs, and intelligent agent frameworks. LLM/RAG Solution Design & Implementation: Architect and deliver cutting-edge AI systems using Python, LangChain, LlamaIndex, OpenAI function calling, semantic search, and vector store integrations (FAISS, Qdrant, Pinecone, ChromaDB). Agentic Systems: Design and deploy multi-step agent workflows using frameworks like CrewAI, LangGraph, AutoGen, or ReAct, optimizing tool-augmented reasoning pipelines. Client Engagement & Advisory: Build lasting client relationships as a trusted AI advisor, delivering technical insight and strategic direction on generative AI initiatives. Hands-on Prototyping: Rapidly prototype PoCs using Python and modern ML/LLM stacks to demonstrate feasibility and business impact. Thought Leadership: Conduct market research, stay updated with the latest in GenAI and RAG/Agentic systems, and contribute to whitepapers, blogs, and new offerings.
Essential Skills Education: Bachelor's or Master's in Computer Science, AI, Engineering, or a related field. Experience: Minimum 6 years of experience in consulting or technology roles, with at least 3 years focused on AI & ML solutions. Leadership Quality: Proven track record in leading cross-functional teams and delivering enterprise-grade AI projects with tangible business impact. Business Consulting Mindset: Strong problem-solving, stakeholder communication, and business analysis skills to bridge technical and business domains. Python & AI Proficiency: Advanced proficiency in Python and popular AI/ML libraries (e.g., scikit-learn, PyTorch, TensorFlow, spaCy, NLTK). Solid understanding of NLP, embeddings, semantic search, and transformer models. LLM Ecosystem Fluency: Experience with OpenAI, Cohere, and Hugging Face models; prompt engineering; tool/function calling; and structured task orchestration. Independent Contributor: Ability to own initiatives end-to-end, take decisions independently, and operate in fast-paced environments. Preferred Skills Cloud Platform Expertise: Strong familiarity with Microsoft Azure (preferred), AWS, or GCP, including compute instances, storage, managed services, and serverless/cloud-native deployment models. Programming Paradigms: Hands-on experience with both functional and object-oriented programming in AI system design. Hugging Face Ecosystem: Proficiency in using Hugging Face Transformers, Datasets, and Model Hub. Vector Store Experience: Hands-on experience with FAISS, Qdrant, Pinecone, ChromaDB. LangChain Expertise: Strong proficiency in LangChain for agentic task orchestration and RAG pipelines. MLOps & Deployment: CI/CD for ML pipelines, MLOps tools (MLflow, Azure ML), containerization (Docker/Kubernetes). Cloud & Service Architecture: Knowledge of microservices, scaling strategies, inter-service communication. Programming Languages: Proficiency in Python and C# for enterprise-grade AI solution development.
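The RAG solutions this consultant role designs follow one common shape: retrieve the most relevant chunks, then ground the model's prompt in them. The framework-free sketch below illustrates just that shape; the keyword-overlap scoring and the prompt wording are deliberate simplifications of what LangChain or LlamaIndex would do with a real vector store:

```python
def score(query: str, chunk: str) -> int:
    """Naive relevance score: count of shared lowercase words.
    A real pipeline would compare embedding vectors instead."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_rag_prompt(query: str, chunks: list[str], k: int = 2) -> str:
    """Rank chunks against the query and assemble a grounded prompt."""
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    context = "\n".join(f"- {c}" for c in top)
    # The instruction wording is illustrative; production prompts are tuned.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Invented knowledge-base chunks for the example.
chunks = [
    "Invoices are emailed within 24 hours of purchase.",
    "The warehouse ships orders every weekday.",
    "Support is available via chat and email.",
]
```

Calling `build_rag_prompt("When are invoices emailed?", chunks, k=1)` yields a prompt grounded only in the invoice chunk; swapping `score` for an embedding similarity turns this into the vector-store RAG pipeline the posting describes.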

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Senior Python Engineer at our company, you will leverage your deep expertise in data engineering and API development to drive technical excellence. Your primary responsibility will be leading the development of scalable backend systems and data infrastructure that power AI-driven applications across our platform. You will design, develop, and maintain high-performance APIs and microservices using Python frameworks such as FastAPI and Flask. Additionally, you will build and optimize scalable data pipelines, ETL/ELT processes, and orchestration frameworks, using AI development tools like GitHub Copilot, Cursor, or CodeWhisperer to enhance engineering velocity and code quality. In this role, you will architect resilient and modular backend systems integrated with databases like PostgreSQL, MongoDB, and Elasticsearch. Managing workflows and event-driven architectures using tools such as Airflow, Dagster, or Temporal.io will be essential, as you collaborate with cross-functional teams to deliver production-grade systems in cloud environments (AWS/GCP/Azure) with high test coverage, observability, and reliability. To be successful in this position, you must have at least 5 years of hands-on experience in Python backend/API development, a strong background in data engineering, and proficiency in AI-enhanced development environments like Copilot, Cursor, or equivalent tools. Solid experience with Elasticsearch, PostgreSQL, and scalable data solutions, along with familiarity with Docker, CI/CD, and cloud-native deployment practices, is crucial. You should also demonstrate the ability to take ownership of features from idea to production.
Nice-to-have qualifications include experience with distributed workflow engines like Temporal.io, a background in AI/ML systems (PyTorch or TensorFlow), familiarity with LangChain, LLMs, and vector search tools (e.g., FAISS, Pinecone), and exposure to weak supervision, semantic search, or agentic AI workflows. Join us to build infrastructure for cutting-edge AI products and work in a collaborative, high-caliber engineering environment.

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

delhi

On-site

We are seeking a talented Systems Architect (AVP level) with specialized knowledge in designing and expanding Generative AI solutions for production environments. In this pivotal position, you will collaborate across various teams including data scientists, ML engineers, and product leaders to mold enterprise-level GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI, architecting data pipelines, training/inference workflows, and integrating MLOps. You will be tasked with ensuring that systems are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should possess 12-15 years of experience in cloud-native and distributed systems, with 2-3 years focusing on GenAI/LLMs utilizing tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is essential, and experience with RAG, semantic search, agent orchestration, and MLOps is highly valued. Strong architectural acumen and effective stakeholder communication skills are essential; cloud certifications, AI open-source contributions, and knowledge of security and governance are preferred.

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

delhi

On-site

We are looking for a Systems Architect (AVP level) with extensive experience in designing and scaling Generative AI solutions for production. As a Systems Architect, you will play a crucial role in collaborating with data scientists, ML engineers, and product leaders to shape enterprise-grade GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI. You will also be responsible for architecting data pipelines, training/inference workflows, and MLOps integration. It is essential to ensure that the systems you design are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should have 12-15 years of experience in cloud-native and distributed systems, with at least 2-3 years of experience in GenAI/LLMs using tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is required. Experience with technologies like RAG, semantic search, agent orchestration, and MLOps will be beneficial for this role. Strong architectural thinking and effective communication with stakeholders are essential skills. Preferred qualifications include cloud certifications, AI open-source contributions, and knowledge of security and governance principles. If you are passionate about designing cutting-edge Generative AI solutions and possess the necessary skills and experience, we encourage you to apply for this leadership role.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

DecisionX is pioneering a new category with the world's first Decision AI, an AI Super-Agent that assists high-growth teams in making smarter, faster decisions by transforming fragmented data into clear next steps. Whether it involves strategic decisions in the boardroom or operational decisions across various departments like Sales, Marketing, Product, and Engineering, down to the minutiae that drive daily operations, Decision AI serves as your invisible co-pilot, thinking alongside you, acting ahead of you, and evolving beyond you. We are seeking a dedicated and hands-on AI Engineer to join our founding team. In this role, you will collaborate closely with leading AI experts to develop the intelligence layer of our exclusive "Agentic Number System." Key Responsibilities - Building, fine-tuning, and deploying AI/ML models for tasks such as segmentation, scoring, recommendation, and orchestration. - Developing and optimizing agent workflows using LLMs (OpenAI, Claude, Mistral, etc.) for contextual reasoning and task execution. - Creating vector-based memory systems utilizing tools like FAISS, Chroma, or Weaviate. - Working with APIs and connectors to incorporate third-party data sources (e.g., Salesforce, HubSpot, GSuite, Snowflake). - Designing pipelines that transform structured and unstructured signals into actionable insights. - Collaborating with GTM and product teams to define practical AI agent use cases. - Staying informed about the latest developments in LLMs, retrieval-augmented generation (RAG), and agent orchestration frameworks (e.g., CrewAI, AutoGen, LangGraph). Must Have Skills - 5-8 years of experience in AI/ML engineering or applied data science. - Proficient programming skills in Python, with expertise in LangChain, Pandas, NumPy, and Scikit-learn. - Experience with LLMs (OpenAI, Anthropic, etc.), prompt engineering, and RAG pipelines. - Familiarity with vector stores, embeddings, and semantic search.
- Expertise in data wrangling, feature engineering, and model deployment. - Knowledge of MLOps tools such as MLflow, Weights & Biases, or equivalent. What you will get - Opportunity to shape the AI architecture of a high-ambition startup. - Close collaboration with a visionary founder and experienced product team. - Ownership, autonomy, and the thrill of building something from 0 to 1. - Early team equity and a fast growth trajectory.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

kolkata, west bengal

On-site

The role requires you to interpret data and analyze results using statistical techniques under supervision. You will be responsible for compiling and assisting in mining and acquiring data from primary and secondary sources, reorganizing the data in a format that can be easily read by either a machine or a person. Additionally, you will assist in identifying, analyzing, and interpreting trends or patterns in data sets, generating insights, and helping clients make better decisions. Conducting research on specific data sets to enable senior analysts in their work and managing master data including creation, updates, and deletions will also be part of your responsibilities. You will help develop reports and analysis that effectively communicate trends, patterns, and predictions using relevant data and provide support with technical writing and editing as required. Furthermore, you will develop analytics to identify trend lines across several data sources within the organization and assist senior analysts in examining and evaluating existing business processes and systems, offering suggestions for changes. Setting FAST goals will also be a key aspect of your role. Your performance will be measured based on schedule adherence on tasks, quality errors in data presentation and interpretation, the number of business process changes highlighted due to vital analysis, number of stakeholder appreciations/escalations, number of customer appreciations, and the number of mandatory trainings completed.
Your expected outputs will include acquiring data from various sources, reorganizing/filtering data by considering only relevant data and converting it into a consistent and analyzable format, analyzing data using statistical methods to generate useful results, creating data models that depict trends in the customer base and consumer population, creating reports depicting trends and behaviors from analyzed data, documenting your work as well as performing peer review of others' work, managing knowledge by consuming and contributing to project-related documents, reporting the status of assigned tasks, complying with project-related reporting standards/processes, creating efficient and reusable code following coding best practices, organizing and managing changes and revisions to code using a version control tool like Git or Bitbucket, providing quality assurance of imported data, working with quality assurance analysts if necessary, setting FAST goals, and seeking feedback from supervisors. The ideal candidate should possess analytical skills, communication skills, critical thinking abilities, attention to detail, quantitative skills, research skills, mathematical skills, and teamwork skills, and should proactively ask for and offer help. Proficiency in mathematics and calculations, spreadsheet tools such as Microsoft Excel or Google Sheets, knowledge of Tableau or PowerBI, SQL, Python, DBMS, operating systems and software platforms, knowledge about customer domains and subdomains, and code version control tools like Git and Bitbucket are also required. This job is for a replacement of Tarique Nomani and requires skills in Python, Machine Learning, and Semantic Search. UST is a global digital transformation solutions provider that partners with clients to embed innovation and agility into their organizations, touching billions of lives in the process.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

We are looking for a highly motivated Mid-Level AI Engineer to join our growing AI team. Your main responsibility will be to develop intelligent applications using Python, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) systems. Working closely with data scientists, backend engineers, and product teams, you will build and deploy AI-powered solutions that provide real-world value. Your key responsibilities will include designing, developing, and optimizing applications utilizing LLMs such as GPT, LLaMA, and Claude. You will also be tasked with implementing RAG pipelines to improve LLM performance using domain-specific knowledge bases and search tools. Developing and maintaining robust Python codebases for AI-driven solutions will be a crucial part of your role. Additionally, integrating vector databases like Pinecone, Weaviate, and FAISS, as well as embedding models for information retrieval, will be part of your daily tasks. You will work with APIs, frameworks like LangChain and Haystack, and various tools to create scalable AI workflows. Collaboration with product and design teams to define AI use cases and deliver impactful features will also be a significant aspect of your job. Conducting experiments to assess model performance, retrieval relevance, and system latency will be essential for continuous improvement. Staying up-to-date with the latest research and advancements in LLMs, RAG, and AI infrastructure is crucial for this role. To be successful in this position, you should have at least 3-5 years of experience in software engineering or AI/ML engineering, with a strong proficiency in Python. Experience working with LLMs such as OpenAI and Hugging Face Transformers is required, along with hands-on experience in RAG architecture and vector-based retrieval techniques. Familiarity with embedding models like SentenceTransformers and OpenAI embeddings is also necessary. 
Knowledge of API design, deployment, performance optimization, version control (e.g., Git), containerization (e.g., Docker), and cloud platforms (e.g., AWS, GCP, Azure) is expected. Preferred qualifications include experience with LangChain, Haystack, or similar LLM orchestration frameworks. An understanding of NLP evaluation metrics, prompt engineering best practices, knowledge graphs, semantic search, and document parsing pipelines will be beneficial. Experience deploying models in production, monitoring system performance, and contributing to open-source AI/ML projects are considered advantageous for this role.
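Among the responsibilities above is evaluating "retrieval relevance"; a standard way to quantify it is recall@k over a labelled query set. The sketch below computes it in plain Python; the retrieved rankings and gold labels are invented for the example:

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant documents that appear in the top-k results."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant) if relevant else 0.0

# Illustrative evaluation run: per-query retrieved rankings and gold labels.
runs = [
    (["d1", "d3", "d2"], {"d1", "d2"}),  # one of two relevant docs in top-2
    (["d4", "d5", "d6"], {"d6"}),        # the relevant doc missed the top-2
]
mean_recall = sum(recall_at_k(r, rel, k=2) for r, rel in runs) / len(runs)
```

Tracking a metric like this across retriever changes (different embedding models, chunk sizes, or k values) is what turns "conducting experiments" on a RAG system into comparable numbers.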

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate should have a strong background in AI/ML analytics and a passion for utilizing data to drive business insights and innovation. Your main responsibilities will include developing and implementing machine learning models and algorithms, collaborating with project stakeholders to understand requirements and deliverables, analyzing and interpreting complex data sets using statistical and machine learning techniques, staying updated with the latest advancements in AI/ML technologies, and supporting various AI/ML initiatives by working with cross-functional teams. To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, along with a strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred skills for this position include experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, and a Deep Learning stack using Python. Additionally, expertise in cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue), building secure data ingestion pipelines for unstructured data, proficiency in Python, TypeScript, NodeJS, ReactJS, data visualization tools, deep learning frameworks, version control systems, and Generative AI/LLM based development is desired. Good to have skills include knowledge and experience in building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer, valuing cross-cultural insight and competence for ongoing success, with a belief that a diverse workforce enhances perspectives and ideas for continuous improvement.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Mid-Level AI Engineer at our company, you will be an integral part of our AI team, focusing on the development of intelligent applications using Python, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) systems. Your collaboration with data scientists, backend engineers, and product teams will be pivotal in building and deploying AI-powered solutions that bring real-world value. Your responsibilities will include designing, developing, and optimizing applications by leveraging LLMs such as GPT, LLaMA, and Claude. You will also be responsible for implementing RAG pipelines to enhance LLM performance through domain-specific knowledge bases and search tools. Developing and maintaining robust Python codebases for AI-driven solutions, integrating vector databases like Pinecone, Weaviate, FAISS, and embedding models for information retrieval will be a key part of your role. Additionally, you will work with APIs, frameworks like LangChain, Haystack, and tools to build scalable AI workflows. Collaborating with product and design teams to define AI use cases and deliver impactful features, conducting experiments to evaluate model performance, retrieval relevance, and system latency, as well as staying updated on research and advancements in LLMs, RAG, and AI infrastructure are also important aspects of your responsibilities. To be successful in this role, you should have at least 3-5 years of experience in software engineering or AI/ML engineering, with a strong proficiency in Python. Experience working with LLMs from OpenAI, Hugging Face Transformers, etc., hands-on experience with RAG architecture, vector-based retrieval techniques, embedding models like SentenceTransformers, OpenAI embeddings, and vector databases such as Pinecone, FAISS is required. Knowledge of API design, deployment, performance optimization, version control using Git, containerization with Docker, and cloud platforms like AWS, GCP, Azure is essential. 
Preferred qualifications include experience with LangChain, Haystack, or similar LLM orchestration frameworks; an understanding of NLP evaluation metrics and prompt engineering best practices; exposure to knowledge graphs, semantic search, and document parsing pipelines; and experience deploying models in production and monitoring system performance. Contributions to open-source AI/ML projects are a plus.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. As a Senior Data Scientist, your key responsibilities will include developing and implementing machine learning models and algorithms. You will work closely with project stakeholders to understand requirements and translate them into deliverables, and utilize statistical and machine learning techniques to analyze and interpret complex data sets. It is essential to stay updated with the latest advancements in AI/ML technologies and methodologies and collaborate with cross-functional teams to support various AI/ML initiatives. To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field. A strong understanding of machine learning, deep learning, and Generative AI concepts is required. Preferred skills for this position include experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, and a Deep Learning stack using Python. Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue) is highly desirable. Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT) is a plus. Proficiency in Python, TypeScript, NodeJS, ReactJS, and frameworks like pandas, NumPy, scikit-learn, OpenCV, SciPy, and Glue crawlers/ETL, as well as experience with data visualization tools like Matplotlib, Seaborn, and QuickSight, is beneficial. Additionally, knowledge of deep learning frameworks such as TensorFlow, Keras, and PyTorch, experience with version control systems like Git and CodeCommit, and strong knowledge and experience in Generative AI/LLM based development are essential for this role.
Experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex), as well as proficiency in effective text chunking techniques and text embeddings, are also preferred skills. Good to have skills include knowledge and experience in building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer that values diversity and believes that a diverse workforce contributes different perspectives and creative ideas, enabling continuous improvement.
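The "effective text chunking techniques" this posting mentions usually mean splitting documents into overlapping windows before embedding, so that context is not lost at chunk boundaries. A minimal word-window sketch, with sizes chosen only for illustration (real pipelines tune them to the embedding model's context length):

```python
def chunk_text(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into word windows of `size`, each sharing `overlap`
    words with the previous window so boundary context survives."""
    words = text.split()
    step = max(1, size - overlap)
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + size]
        chunks.append(" ".join(window))
        if start + size >= len(words):
            break
    return chunks
```

For example, `chunk_text("a b c d e f g", size=4, overlap=2)` yields `["a b c d", "c d e f", "e f g"]`; each chunk would then be embedded and stored in the vector database for retrieval.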

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office

Design and develop backend services using FastAPI, integrate AI/ML features, manage PostgreSQL and vector DBs, implement Redis caching, deploy with Docker/Kubernetes, maintain CI/CD pipelines, and ensure high performance and scalability.

Posted 1 month ago

Apply