
4097 FastAPI Jobs - Page 24

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are made directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Title: Senior FastAPI Python Developer / Architect
Employment Type: Full-time
Location: Remote / On-site / Hybrid

Job Summary: We are seeking an experienced Senior FastAPI Python Developer / Architect to design, develop, and maintain high-performance web applications and APIs. The ideal candidate will have extensive expertise in Python, FastAPI, and microservices architecture, along with a deep understanding of cloud-based, scalable, and secure backend solutions. This role requires leadership in architectural decision-making, performance optimization, and technical mentoring.

Key Responsibilities:
- Architect, design, and develop scalable, high-performance FastAPI-based web applications and RESTful APIs.
- Lead the design and implementation of a microservices-based architecture for large-scale applications.
- Implement best practices in API design, security, and performance optimization.
- Develop and manage asynchronous APIs and integrate them with databases, caching systems, and third-party services.
- Ensure code quality, maintainability, and scalability by enforcing best practices.
- Provide technical leadership and mentorship to the development team.
- Collaborate with frontend developers, DevOps engineers, and other stakeholders for seamless integration.
- Implement authentication, authorization, and security best practices using OAuth2, JWT, and API-key-based mechanisms (a minimal sketch follows this listing).
- Optimize database schemas and queries for SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
- Conduct code reviews, unit testing, and integration testing to maintain high reliability and performance.
- Stay up to date with FastAPI, Python, and modern backend development trends.

Required Skills and Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development with Python, including 3+ years of experience specifically with FastAPI.
- Strong expertise in asynchronous programming, microservices architecture, and API development.
- Experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
- Deep understanding of authentication mechanisms such as OAuth2 and JWT, and of API security best practices.
- Expertise in Docker, Kubernetes, and CI/CD pipelines.
- Strong background in cloud platforms (AWS, Azure, or Google Cloud).
- Experience with scalability, performance optimization, and caching techniques.
- Hands-on experience with unit testing frameworks (pytest, unittest) and CI/CD pipelines.
- Strong problem-solving skills and the ability to handle complex challenges.
- Ability to lead, mentor, and provide technical guidance to a team.

Preferred Skills:
- Experience with Celery or other task queues for background processing.
- Knowledge of WebSockets and real-time applications.
- Experience with event-driven architectures and message queues (Kafka, RabbitMQ, or similar).
- Experience with GraphQL (optional but a plus).
- Previous experience working in an Agile/Scrum development environment.
- Contributions to open-source projects or personal FastAPI projects are highly regarded.
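The listing above centers on asynchronous FastAPI endpoints secured with OAuth2 and JWT. The sketch below is a minimal, hedged illustration of that pattern; the secret key, route, and token claims are invented for illustration and are not part of the posting.

```python
# Hypothetical sketch: async FastAPI endpoint protected by a JWT bearer token.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"  # assumption: symmetric HS256 signing key loaded from config in practice
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

app = FastAPI()

def current_user(token: str = Depends(oauth2_scheme)) -> str:
    """Decode the bearer token and return its subject claim, or reject the request."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return payload["sub"]
    except (jwt.PyJWTError, KeyError):
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")

@app.get("/orders/{order_id}")
async def read_order(order_id: int, user: str = Depends(current_user)) -> dict:
    # In a real service this would query a database asynchronously.
    return {"order_id": order_id, "owner": user}
```

A real deployment would also expose a token-issuing endpoint and run the app under an ASGI server such as uvicorn.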

Posted 1 week ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Immediate openings for a Senior Python Developer.
Skills: Python, FastAPI, ML/DL modules, CI/CD, AWS/Azure
Experience: 5-6 years
Location: Bangalore

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI/ML Engineer - MBM
Relevant Experience: 5+ years
Location: Bangalore/Pune
Employment Type: Full-time

Shape the Future with Generative AI
Are you passionate about harnessing cutting-edge Generative AI to create transformative real-world applications? Do you dream of building autonomous systems that learn, adapt, and make decisions independently? At Maersk, we're reimagining how businesses solve complex problems with the power of Generative AI and Autonomous Agents. We're looking for an AI/ML Engineer to join our dynamic team and play a pivotal role in building AI solutions that will define the future of intelligent systems. If you're excited about applying your expertise to projects that disrupt industries and drive measurable impact, we want to hear from you! This is an exciting opportunity for self-motivated engineers with technical expertise, creative problem-solving skills, and a talent for disruptive process transformation using AI/ML.

What you'll do:
- Lead with autonomy: Take ownership of AI/ML projects from ideation to deployment, pushing the boundaries of innovation.
- Design the future: Develop and fine-tune Generative AI models (LLMs, diffusion models, GANs, VAEs, etc.) to optimize SCP business processes and enhance productivity.
- Empower AI agents: Create agentic AI architectures for autonomous decision-making, task delegation, and multi-agent collaboration using agentic AI frameworks like AutoGPT.
- Innovate with LLMs: Build and optimize LLM applications, leveraging RAG, and build robust machine learning pipelines for NLP and multimodal AI tasks.
- Work with cutting-edge tools: Harness the power of vector databases (e.g., Pinecone, FAISS, ChromaDB) and LLM APIs (OpenAI, Anthropic, Hugging Face, Mistral, Llama); a toy retrieval sketch follows this listing.
- Collaborate for impact: Partner with cross-functional teams to integrate AI solutions into real-world applications (chatbots, copilots, automation tools, etc.).
- Stay ahead: Perform continuous research on state-of-the-art AI methodologies, exploring advancements in Generative AI, Autonomous Agents, and NLP to drive innovation.

Required Skills & Qualifications (Must have):
- Strong foundation in machine learning and deep learning, with expertise in neural networks, optimization techniques, and model evaluation.
- Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.).
- Proficiency in Python, LangChain, Hugging Face Transformers, and MLOps.
- Experience with reinforcement learning and multi-agent systems for decision-making in dynamic environments.
- Knowledge of multimodal AI (integrating text, image, and other data modalities into unified models).

Nice-to-have Skills:
- Experience with prompt engineering, fine-tuning, and RAG techniques.
- Familiarity with cloud platforms (AWS, GCP, Azure) and deployment tools like Docker, Kubernetes, FastAPI, or Flask.

Why Join Us:
- Innovate at scale: Work on groundbreaking Generative AI technologies that redefine what's possible.
- Transform industries: Your work will directly impact how global organizations drive productivity and solve complex supply chain challenges.
- Collaborate with the best: Join a team of forward-thinking engineers, researchers, and product visionaries who are shaping the future of AI.
- Accelerate your growth: Enjoy opportunities to learn, grow, and lead in a fast-paced, innovation-driven environment.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
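The Maersk listing above names FAISS among the vector stores used for retrieval. As a hedged illustration of the retrieval step behind a RAG pipeline, the toy sketch below indexes random vectors with FAISS; in practice the vectors would come from an embedding model and the returned ids would map back to document chunks.

```python
# Toy FAISS similarity search over random embeddings (illustration only).
import faiss
import numpy as np

dim = 384                                              # assumption: embedding dimensionality
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)  # stand-in for document embeddings
query = rng.random((1, dim), dtype=np.float32)           # stand-in for a query embedding

index = faiss.IndexFlatL2(dim)        # exact (brute-force) L2 index
index.add(doc_vectors)
distances, ids = index.search(query, 5)   # top-5 nearest documents
print(ids[0], distances[0])
```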

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Associate AI/ML Scientist – Global Data Analytics, Technology (Maersk)
This position will be based in India – Bangalore/Pune.

A.P. Moller – Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too.

The Brief
In this role as an AI/ML Scientist on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. You should be able to design, develop, and implement machine learning models, conduct deep data analysis, and support decision-making with data-driven insights. Responsibilities include building and validating predictive models, supporting experiment design, and integrating advanced techniques like transformers, GANs, and reinforcement learning into scalable production systems. The role requires solving complex problems using NLP, deep learning, optimization, and computer vision. You should be comfortable working independently, writing reliable code with automated tests, and contributing to debugging and refinement. You'll also document your methods and results clearly and collaborate with cross-functional teams to deliver high-impact AI/ML solutions that align with business objectives and user needs.

What I'll be doing – your accountabilities:
- Design, develop, and implement machine learning models, conduct in-depth data analysis, and support decision-making with data-driven insights.
- Develop predictive models and validate their effectiveness.
- Support the design of experiments to validate and compare multiple machine learning approaches.
- Research and implement cutting-edge techniques (e.g., transformers, GANs, reinforcement learning) and integrate models into production systems, ensuring scalability and reliability.
- Apply creative problem-solving techniques to design innovative models, develop algorithms, or optimize workflows for data-driven tasks.
- Independently apply data-driven solutions to ambiguous problems, leveraging tools like natural language processing, deep learning frameworks, machine learning, optimization methods, and computer vision frameworks.
- Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects.
- Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards (a minimal illustration follows this listing).
- Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration.
- Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational Skills
- Mastery of data analysis and data science concepts, demonstrated in complex scenarios.
- AI and machine learning, programming, and statistical analysis skills beyond the fundamentals, demonstrated in most situations without guidance.

Specialized Skills
Able to go beyond the fundamentals and demonstrate, in most situations without guidance: data validation and testing, model deployment, machine learning pipelines, deep learning, natural language processing (NLP), optimization and scientific computing, and decision modelling and risk analysis. Able to understand the fundamentals and demonstrate, in common scenarios with guidance: technical documentation.

Qualifications & Requirements
- Bachelor's degree (B.E./B.Tech), preferably in computer science.
- Experience with a collaborative development workflow: IDE (integrated development environment), version control (GitHub), and CI/CD (e.g., automated tests in GitHub Actions).
- Communicates effectively with technical and non-technical audiences, with experience in stakeholder management.
- Structured, highly analytical mindset and excellent problem-solving skills.
- Self-starter, highly motivated, willing to share knowledge and work as a team; an individual who respects the opinions of others yet can drive a decision through the team.

Preferred Experiences
- 5+ years of relevant experience in the field of data engineering.
- 3+ years of hands-on experience with Apache Spark, Python, and SQL.
- Experience working with large datasets and big data technologies to train and evaluate machine learning models.
- Experience with containerization: Kubernetes and Docker.
- Expertise in building cloud-native applications and data pipelines (Azure, Databricks, AWS, GCP).
- Experience with common dashboarding and API technologies (Power BI, Superset, Flask, FastAPI, etc.).

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
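The accountabilities above ask for automated tests written alongside models. A minimal sketch of that habit, using a scikit-learn pipeline and a pytest-style assertion; the dataset and accuracy threshold are illustrative only.

```python
# Minimal scikit-learn pipeline with an accompanying automated check.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def build_pipeline() -> Pipeline:
    return Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=200)),
    ])

def test_pipeline_beats_baseline():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = build_pipeline().fit(X_train, y_train)
    assert model.score(X_test, y_test) > 0.9   # guard against silent regressions

if __name__ == "__main__":
    test_pipeline_beats_baseline()
    print("pipeline check passed")
```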

Posted 1 week ago

Apply

3.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role Summary
1. Demonstrate solid proficiency in Python development, writing clean, maintainable code.
2. Collaborate in the design and implementation of AI-driven applications leveraging large language models (LLMs).
3. Develop and maintain Django-based RESTful APIs to support backend services.
4. Integrate with LLM provider APIs (e.g., GPT, Claude, Cohere) and agent frameworks (LangChain, AgentStudio).
5. Build and optimize data pipelines for model training and inference using Pandas, NumPy, and Scikit-learn.
6. Ensure robust unit and integration testing via pytest to maintain high code quality.
7. Participate in agile ceremonies, contributing to estimations, design discussions, and retrospectives.
8. Troubleshoot, debug, and optimize performance in multi-threaded and distributed environments.
9. Document code, APIs, and data workflows in accordance with software development best practices.
10. Continuously learn and apply new AI/ML tools, frameworks, and cloud services.

Key Responsibilities
1. Write, review, and optimize Python code for backend services and data science workflows.
2. Design and implement Django REST APIs, ensuring scalability and security.
3. Integrate LLMs into applications: handle prompt construction, API calls, and result processing.
4. Leverage agent frameworks (LangChain, AgentStudio) to orchestrate complex LLM workflows.
5. Develop and maintain pytest suites covering unit, integration, and end-to-end tests (a short example follows this listing).
6. Build ETL pipelines to preprocess data for model training and feature engineering.
7. Work with relational databases (PostgreSQL) and vector stores (FAISS, Weaviate, Milvus).
8. Containerize applications using Docker and deploy on Kubernetes or serverless platforms (AWS, GCP, Azure).
9. Monitor and troubleshoot application performance, logging, and error handling.
10. Collaborate with data scientists to deploy and serve ML models via FastAPI or vLLM.
11. Maintain CI/CD pipelines for automated testing and deployment.
12. Engage in technical learning sessions and share best practices across the team.

Desired Skills & Qualifications
- 1–3 years of hands-on experience in Python application development.
- Proven pytest expertise, with a focus on test-driven development.
- Practical knowledge of Django (or FastAPI) for building RESTful services.
- Experience with LLM APIs (OpenAI, Anthropic, Cohere) and prompt engineering.
- Familiarity with at least one agent framework (LangChain, AgentStudio).
- Working experience with data science libraries: NumPy, Pandas, Scikit-learn.
- Exposure to ML model serving tools (MLflow, FastAPI, vLLM).
- Experience with container orchestration (Docker, Kubernetes, Docker Swarm).
- Basic understanding of cloud platforms (AWS, Azure, or GCP).
- Knowledge of SQL and database design; familiarity with vector databases.
- Eagerness to learn emerging AI/ML technologies and frameworks.
- Excellent problem-solving, debugging, and communication skills.

Education & Attitude
- Bachelor's or Master's in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Growth-mindset learner: proactive in upskilling and sharing knowledge.
- Strong collaboration ethos and adaptability in a fast-paced AI environment.
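Responsibility 5 above calls for pytest suites around API endpoints. A small hedged example using FastAPI's TestClient (the posting names Django first, with FastAPI as an alternative; the endpoint and payload here are invented for illustration):

```python
# pytest-style test of a small API endpoint using FastAPI's TestClient.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/summarize")
def summarize(prompt: Prompt) -> dict:
    # Stand-in for an LLM call: echo a truncated "summary".
    return {"summary": prompt.text[:40]}

client = TestClient(app)

def test_summarize_returns_summary():
    response = client.post("/summarize", json={"text": "FastAPI makes testing easy."})
    assert response.status_code == 200
    assert "summary" in response.json()
```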

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Senior Software Engineer – Backend and Inferencing – Technology (Maersk)
This position will be based in India – Bangalore/Pune.

A.P. Moller – Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too.

The Brief
We are seeking a Senior Software Engineer with deep backend expertise to lead the development of scalable infrastructure for LLM inferencing, Model Context Protocol (MCP) integration, Agent-to-Agent (A2A) communication, prompt engineering, and robust API platforms. This role sits at the core of our AI systems stack, enabling structured, contextual, and intelligent communication between models, agents, and services. You'll design modular backend services that interface seamlessly with inferencing engines, orchestrate model contexts, and expose capabilities via APIs for downstream products and agents.

What I'll be doing – your accountabilities:
- Architect and implement backend services that support dynamic model context management via MCP for LLM-based systems.
- Build scalable and token-efficient inference pipelines with support for streaming, context merging, memory, and retrieval (a streaming sketch follows this listing).
- Enable Agent-to-Agent (A2A) messaging and task coordination through contextual protocols, message contracts, and execution chains.
- Design and maintain developer-friendly, secure, and versioned APIs for agents, tools, memory, context providers, and prompt libraries.
- Lead efforts in prompt engineering workflows, including templating, contextual overrides, and programmatic prompt generation.
- Collaborate across engineering, ML, and product teams to define and implement context-aware agent systems and inter-agent communication standards, enabling closed-loop enterprise AI services ready for consumption by the enterprise.
- Own end-to-end delivery of infrastructure, inferencing, backend, API, and communication management in a multi-agent system.
- Ensure models are modular, extensible, and easily integrated with external services/platforms (e.g., dashboards, analytics, AI agents).

Foundational / Must-Have Skills
- Bachelor's, Master's, or PhD in Computer Science, Engineering, or a related technical field.
- 8+ years of experience in backend systems design and development, ideally in AI/ML or data infrastructure domains.
- Strong proficiency in Python (FastAPI preferred); additional experience with Node.js, Go, or Rust is a plus.
- Experience with LLM inferencing pipelines, context windowing, and chaining prompts with memory/state persistence.
- Familiarity with, or active experience implementing, Model Context Protocol (MCP) or similar abstraction layers for context-driven model orchestration.
- Strong understanding of REST/GraphQL API design, OAuth2/JWT-based auth, and event-driven backend architectures.
- Practical knowledge of Redis, PostgreSQL, and one or more vector databases (e.g., Weaviate, Qdrant).
- Comfortable working with containerized applications, CI/CD pipelines, and cloud-native deployments (AWS/GCP/Azure).

Preferred to Have
- Experience building or contributing to agent frameworks (e.g., LangGraph, CrewAI, AutoGen, Agno, etc.).
- Background in multi-agent systems, dialogue orchestration, or synthetic workflows.
- Familiarity with OpenAI, Anthropic, Hugging Face, or open-weight model APIs and tool-calling protocols.
- Strong grasp of software security, observability (OpenTelemetry, Prometheus), and system performance optimization.
- Experience designing abstraction layers for LLM orchestration across different provider APIs (OpenAI, Claude, local inference).

What You Can Expect
- Opportunity to lead backend architecture for cutting-edge, LLM-native systems.
- A high-impact role in shaping the future of context-aware AI agent communication.
- Autonomy to drive backend standards, protocols, and platform capabilities across the org.
- A collaborative, remote-friendly culture with deep technical peers.

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
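The accountabilities above include inference pipelines with streaming support. Below is a minimal FastAPI sketch of token streaming; the token generator is a stand-in for a real inference engine, and the route name is invented.

```python
# Sketch of a token-streaming endpoint, the kind of pattern used to stream LLM output to clients.
import asyncio
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def fake_token_stream(prompt: str):
    for token in ["Thinking", " about", " ", prompt, "..."]:
        await asyncio.sleep(0.05)      # simulate per-token latency
        yield token

@app.get("/generate")
async def generate(prompt: str) -> StreamingResponse:
    return StreamingResponse(fake_token_stream(prompt), media_type="text/plain")
```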

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 20 Lacs

Bengaluru

Work from Office

We are looking for talented Python developers with 3-6 years of hands-on experience in Natural Language Processing (NLP) and Generative AI models to join our growing team working on innovative projects in enterprise-scale AI applications and solutions.

Responsibilities:
- Design, evaluation, and best-fit analysis of NLP/ML models.
- Design, fine-tuning, and deployment of open-source and API-based LLMs to solve real-world use cases.
- Engage with clients on the data preparation process, the subtleties of the fine-tuning procedure, and the challenges it presents.
- Lead efforts to address challenging data science and machine learning problems, spanning predictive modeling to natural language processing, with a significant impact on organizational success.
- Build data ingestion pipelines, handling various data formats such as JSON and YAML (a small sketch follows this listing).

Technical Qualifications:
- Strong in data collection, curation, and dataset preparation for ML models (classifiers/categorization).
- Demonstrable expertise in an area of data science or analytics (e.g., machine learning, deep learning, NLP, predictive modeling and forecasting, statistics).
- Strong experience in machine learning (unsupervised and supervised techniques); experience with techniques and algorithms such as k-NN, Naive Bayes, SVM, decision forests, logistic regression, MLPs, RNNs, attention, and generative models would be a plus.
- SQL expertise for data extraction, transformation, and analysis.
- Experience in building and maintaining scalable APIs using FastAPI or Flask.
- Proficiency in Python, with experience in libraries such as PyTorch or TensorFlow.
- Solid understanding of data structures, embedding techniques, and vector search systems.
- Experience with SQL/NoSQL databases, including PostgreSQL, MySQL, and MongoDB.
- Proficiency in graph databases such as Neo4j.

Generic Qualifications:
- Comfortable working with cross-functional teams (product managers, data scientists, and other developers) to deliver quality software solutions.
- Ensure software quality through code reviews, unit testing, and best practices for DevOps and CI/CD pipelines.
- Ability to manage time wisely across projects and competing priorities, meet agreed-upon deadlines, and be accountable for work.
- Able to write maintainable and functionally tested modules.
- Willingness to learn and build innovative solutions in newer technology stacks.

Others: The candidate should be willing to relocate or commute to North Bangalore (Yelahanka) for work.
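The responsibilities above mention ingestion pipelines that handle JSON and YAML. A small hedged helper that normalizes both formats into a list of records; PyYAML is assumed to be available, and the landing directory is illustrative.

```python
# Small ingestion helper that normalizes JSON and YAML files into one list of dicts.
import json
from pathlib import Path

import yaml  # PyYAML

def load_records(path: Path) -> list[dict]:
    """Load a .json or .yaml/.yml file into a list of record dictionaries."""
    text = path.read_text(encoding="utf-8")
    if path.suffix == ".json":
        data = json.loads(text)
    elif path.suffix in {".yaml", ".yml"}:
        data = yaml.safe_load(text)
    else:
        raise ValueError(f"Unsupported format: {path.suffix}")
    return data if isinstance(data, list) else [data]

if __name__ == "__main__":
    records = []
    for f in Path("incoming").glob("*"):   # hypothetical landing directory
        records.extend(load_records(f))
    print(f"ingested {len(records)} records")
```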

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description

The Role:
- Integrate with internal/external LLM APIs (e.g., OpenAI, Azure OpenAI), including prompt engineering and pre/post-processing as required.
- Build and maintain data analysis workflows using Pandas for data transformation and insight delivery (a short example follows this listing).
- Develop RESTful APIs using FastAPI or Flask for data and document management.
- Design and implement clean, efficient, and modular Python codebases for backend services, data pipelines, and document processing workflows.
- Support the team in onboarding new data sources, integrating with Azure services, and ensuring smooth cloud deployments.
- Collaborate with product, data science, and engineering teams to translate business requirements into technical solutions.
- Write unit tests and contribute to CI/CD pipelines for robust, production-ready code.
- Stay up to date with advances in Python, LLM, and cloud technologies.

Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, or a related quantitative discipline.

Experience:
- 6 to 9 years of hands-on experience in data engineering or backend development with Python.

Technical Competencies:
- Exposure to LLM integration (prompt design, API integration, handling text data).
- Strong experience in Python with a focus on data analysis (Pandas) and scripting.
- Hands-on experience in building REST APIs (FastAPI or Flask).
- Experience in developing data pipelines, data cleaning, and transformation.
- Working knowledge of Azure cloud services (Azure Functions, Blob Storage, App Service, etc.).
- (Nice to have) Experience integrating MongoDB with Python for data storage, modelling, or reporting.
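The role above highlights Pandas-based transformation workflows. A brief illustrative example that cleans a tiny, invented transactions frame and derives a per-category summary:

```python
# Illustrative Pandas workflow: clean a small frame and compute a per-category summary.
import pandas as pd

raw = pd.DataFrame({
    "category": ["books", "books", "toys", None],
    "amount": ["12.5", "7.0", "3.25", "9.9"],
    "ts": ["2024-01-01", "2024-01-02", "2024-01-02", "2024-01-03"],
})

clean = (
    raw.dropna(subset=["category"])                    # drop rows with unknown category
       .assign(amount=lambda d: pd.to_numeric(d["amount"]),
               ts=lambda d: pd.to_datetime(d["ts"]))
)

summary = clean.groupby("category")["amount"].agg(["sum", "mean"]).reset_index()
print(summary)
```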

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Sanctity AI is a Netherlands-based startup founded by an IIT alum, specializing in ethical, safe, and impactful artificial intelligence. Our agile team is deeply focused on critical areas like AI alignment, responsible LLM training, prompt orchestration, and advanced agent infrastructure. In a landscape where many talk ethics, we build and deploy solutions that genuinely embody ethical AI principles. Sanctity AI is positioned at the forefront of solving real-world alignment challenges, shaping the future of trustworthy artificial intelligence. We leverage proprietary algorithms, rigorous ethical frameworks, and cutting-edge research to deliver AI solutions with unparalleled transparency, robustness, and societal impact. Sanctity AI represents a rare opportunity in the rapidly evolving AI ecosystem, committed to sustainable innovation and genuine human-AI harmony.

The Role
As an AI/ML Intern reporting directly to the founder, you'll go beyond just coding. You'll own whole pipelines, from data wrangling to deploying cutting-edge ML models in production. You'll also get hands-on experience with large language models (LLMs), prompt engineering, semantic search, and retrieval-augmented generation. Whether it's spinning up APIs in FastAPI, containerizing solutions with Docker, or exploring vector and graph databases like Pinecone and Neo4j, you'll be right at the heart of our AI innovation.

What You'll Tackle
- Data to insights: Dive into heaps of raw data and turn it into actionable insights that shape real decisions.
- Model building and deployment: Use Scikit-learn, XGBoost, LightGBM, and advanced deep learning frameworks (TensorFlow, PyTorch, Keras) to develop state-of-the-art models, then push them to production, scaling on AWS, GCP, or other cloud platforms.
- LLM and prompt engineering: Fine-tune and optimize large language models. Experiment with prompt strategies and incorporate RAG (Retrieval-Augmented Generation) for more insightful outputs.
- Vector and graph databases: Implement solutions using Pinecone, Neo4j, or similar technologies for advanced search and data relationships.
- Microservices and big data: Leverage FastAPI (or similar frameworks) to build robust APIs. If you love large-scale data processing, dabble in Apache Spark, Hadoop, or Kafka to handle the heavy lifting.
- Iterative improvement: Observe model performance, gather metrics, and keep refining until the results shine.

Who You Are
- Python pro: You write clean, efficient Python code using libraries like Pandas, NumPy, and Scikit-learn.
- Passionate about AI/ML: You have a solid grasp of algorithms and can't wait to explore deep learning or advanced NLP.
- LLM enthusiast: You are familiar with training or fine-tuning large language models and love the challenge of prompt engineering.
- Cloud and containers savvy: You have at least experimented with AWS, GCP, or similar, and have some experience with Docker or other containerization tools.
- Data-driven and detail-oriented: You enjoy unearthing insights in noisy datasets and take pride in well-documented, maintainable code.
- Curious and ethical: You believe AI should be built responsibly and love learning about new ways to do it better.
- Languages: You can fluently communicate complex technical ideas in English. Fluency in Dutch, Spanish, or French is a plus.
- Math wizard: You have a strong grip on advanced mathematics and statistical modeling. This is a core requirement.

Why Join Us?
- Real-world impact: Your work will address real-world and industry challenges, problems that genuinely need AI solutions.
- Mentorship and growth: Team up daily with founders and seasoned AI pros, accelerating your learning and skill-building.
- Experimentation culture: We encourage big ideas and bold experimentation. Want to try a new approach? Do it.
- Leadership path: Show us your passion and skills, and you could move into a core founding team member role, shaping our future trajectory.

Interested? Send over your résumé, GitHub repos, or any project links that showcase your passion and talent. We can't wait to see how you think, build, and innovate. Let's team up to create AI that isn't just powerful, but also responsibly built for everyone.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Title: Sr. Consultant – Data Scientist

Position Summary
As a Data Scientist at FSL, you will leverage your expertise in machine learning, deep learning, computer vision, natural language processing, and Generative AI to develop innovative data-driven solutions and applications. You will play a key role in designing and deploying dynamic models and applications using modern web frameworks like Flask and FastAPI, ensuring efficient deployment and ongoing monitoring of these systems.

Key Responsibilities
- Model development and application: Design and implement advanced ML and DL models. Develop web applications for model deployment using Flask and FastAPI to enable real-time data processing and user interaction.
- Data analysis: Perform exploratory data analysis to understand underlying patterns, correlations, and trends. Develop comprehensive data processing pipelines to prepare large datasets for analysis and modeling.
- Generative AI: Employ Generative AI techniques to create new data points, enhance content generation, and innovate within the field of synthetic data production.
- Collaborative development: Work with cross-functional teams to integrate AI capabilities into products and systems. Ensure that all AI solutions are aligned with business goals and user needs.
- Research and innovation: Stay updated with the latest developments in AI, ML, DL, CV, and NLP. Explore new technologies and methodologies that can positively impact our products and services.
- Communication: Effectively communicate complex quantitative analysis in a clear, precise, and actionable manner to senior management and other departments.

Required Skills and Qualifications
- Education: BE, Master's, or PhD in Computer Science, Data Science, Statistics, or a related field.
- Experience: 5 to 7 years of relevant experience in a data science role with a strong focus on ML, DL, and statistical modeling.
- Technical skills: Strong coding skills in Python, including experience with Flask or FastAPI. Proficiency in ML/DL frameworks (e.g., PyTorch, TensorFlow), CV (e.g., OpenCV), and NLP libraries (e.g., NLTK, spaCy).
- Generative AI: Experience with generative models such as GANs, VAEs, or Transformers.
- Deployment skills: Experience with Docker, Kubernetes, and continuous integration/continuous deployment (CI/CD) pipelines.
- Strong analytical skills: Ability to translate complex data into actionable insights.
- Communication: Excellent written and verbal communication skills.
- Certifications: Certifications in Data Science, ML, or AI from recognized institutions are an added advantage.

⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Data, Analytics & Insights Technology (DAIT) provides customer, client, and operational data in support of Consumer, Business, Wealth, and Payments Technology, with responsibility for a number of key data technologies. These include 16 Authorized Data Sources (ADS), marketing and insights platforms, advanced analytics platforms, core client data, and more. DAIT drives these capabilities with the goal of maximizing data assets to serve bank operations, meet regulatory requirements, and personalize interactions with our customers across all channels. GBDART, a sub-function of DAIT, is the Bank's strategic initiative to modernize data architecture and enable cloud-based, connected data experiences for analytics and insights across commercial banking. It delivers real-time operational integration, a comprehensive data management and regulatory framework, and technology solutions supporting key programs such as 14Q, CECL/CCAR/IFRS9, Flood, Climate Risk, and more. GBDART provides vision and oversight for data digitization, quality excellence using AI/ML/NLP, and process simplification, ensuring a single version of truth for enterprise risk and controls through Authorized Data Sources across all major lines of business.

Job Description
This role provides leadership, technical direction, and oversight to a team delivering technology solutions. Key responsibilities include overseeing the design, implementation, and maintenance of complex applications, aligning technical solutions with business objectives, and ensuring coding practices and quality comply with software development standards. The position requires managing multiple software implementations and demonstrating expertise across several technical competencies.

Responsibilities
- 10+ years of team leadership experience with a strategic mindset, preferably in Agile/Scrum environments.
- Strong hands-on development experience in React, JavaScript, HTML, and CSS.
- Experience in enterprise-level architecture and solution-based development.
- Architectural design and design thinking skills (desirable).
- Experience with OpenShift containers: creating and deploying images, configuring services/routes, persistent volumes, and secret management.
- Configuring reverse proxies, whitelisting services, forwarding headers, and managing CORS.
- Experience with OAuth or SSO.
- Python FastAPI development and database proficiency.
- Stakeholder management, primarily with US-based leadership and teams.

Requirements

Education
- B.E. / B.Tech / M.E. / M.Tech / MCA.
- Certifications (preferred): React, Python with SQL.

Experience Range
- 6 to 12 years.

Foundational Skills
- In-depth knowledge of the Systems Development Life Cycle (SDLC).
- Proficient in Windows and Linux systems.
- Knowledge of database systems (MySQL or any RDBMS).
- Systems engineering and deployment experience.
- Strong problem-solving skills with the ability to minimize risk and negative impact.
- Ability to work independently with minimal oversight.
- Motivated and eager to learn.
- Broad knowledge of information security principles, including identity, access, and authorization.
- Strong analytical and conceptual thinking skills.

Desired Skills
- Effective communication across a wide range of technical audiences.
- Comfortable with CI/CD processes and tools (e.g., Ansible, Jenkins, JIRA).

Work Timings: 11:30 AM - 8:30 PM (IST)
Job Location: Chennai, GIFT

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 10 Lacs

Chennai

Work from Office

Job Title: AI Engineer
Location: Chennai
Experience: 3 to 5 years
Employment Type: Full-Time

Job Summary: We are seeking an AI Engineer with strong proficiency in Python and hands-on experience in building AI-powered applications. The ideal candidate should have experience working with FastAPI, LangChain, and Pydantic, along with a solid understanding of Generative AI concepts, Large Language Models (LLMs), prompt engineering, and Retrieval-Augmented Generation (RAG).

Key Responsibilities:
- Develop, integrate, and optimize AI-powered applications using FastAPI and LangChain.
- Design and deploy APIs using FastAPI with Pydantic for data validation (a minimal sketch follows this listing).
- Implement and fine-tune solutions leveraging Large Language Models (LLMs).
- Build and optimize prompt engineering pipelines for LLM interactions.
- Design and implement Retrieval-Augmented Generation (RAG) solutions for enhanced contextual responses.
- Collaborate with data scientists, product managers, and engineers to deliver high-quality AI applications.
- Continuously research and experiment with advancements in Generative AI technologies.

Key Skills & Technologies:
- Programming: Python (intermediate to advanced).
- Frameworks and libraries: FastAPI (API development), LangChain (LLM integrations and workflows), Pydantic (data validation and settings management).
- AI/ML concepts: Generative AI basics; Large Language Models (LLMs), including their architecture, capabilities, and limitations; prompt engineering techniques for effective LLM interactions; Retrieval-Augmented Generation (RAG) for building hybrid retrieval-generation systems.
- Version control: Git, GitHub/GitLab.
- CI/CD: Familiarity with deployment pipelines (optional).

Preferred/Optional (Good to Have):
- Experience with AI agent frameworks such as LangGraph or CrewAI.
- Knowledge of vector databases (e.g., FAISS, Chroma).
- Cloud platforms (AWS, Azure, GCP).
- Containerization (Docker).
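The listing above pairs FastAPI with Pydantic for data validation. Below is a minimal sketch of that combination using Pydantic v2 field constraints; the route, fields, and limits are assumptions for illustration, and the LLM call is a placeholder.

```python
# Pydantic-validated request/response models in a FastAPI route (illustrative only).
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str = Field(min_length=1, max_length=2000)
    temperature: float = Field(default=0.2, ge=0.0, le=1.0)

class PromptResponse(BaseModel):
    answer: str
    tokens_used: int

@app.post("/ask", response_model=PromptResponse)
def ask(req: PromptRequest) -> PromptResponse:
    # Placeholder for an LLM call; validation errors return HTTP 422 automatically.
    answer = f"You asked: {req.prompt[:50]}"
    return PromptResponse(answer=answer, tokens_used=len(req.prompt.split()))
```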

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position: Fullstack Engineer – Frontend-Focused
Experience: 3–5 years
Location: Chennai
Tech Stack: React.js, Python, FastAPI, Golang, LangGraph, Redis

About the Role: We're seeking a skilled fullstack engineer with a strong frontend focus to help us build engaging, high-performance web applications. You'll spend the majority of your time developing intuitive and responsive user interfaces using React.js, while also collaborating on backend APIs using Python (FastAPI) and Golang when needed.

Key Responsibilities:
● Design and develop responsive, reusable UI components with React.js.
● Optimize frontend performance and ensure cross-browser compatibility.
● Implement state management using modern React hooks and patterns.
● Build and maintain RESTful APIs and microservices with FastAPI and Golang.
● Integrate the frontend with backend services, ensuring a seamless user experience.
● Use Redis for caching and performance improvements where needed (a caching sketch follows this listing).
● Collaborate closely with designers and backend engineers.
● Work with LangGraph for AI agent integration (if applicable).

Requirements:
● 3–5 years of experience in frontend development, with strong React.js expertise.
● Solid understanding of HTML5, CSS3, JavaScript ES6+, and component-based architecture.
● Practical experience with Python (FastAPI) and Golang for backend tasks.
● Familiarity with Redis and caching strategies.
● Basic understanding of AI agent frameworks like LangGraph is a plus.
● Good grasp of REST API integration.
● Strong communication and teamwork skills.

Nice to Have:
● Familiarity with design systems, accessibility, or animation libraries.
● Experience with containerization (Docker) or cloud services.
● Knowledge of CI/CD pipelines.
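The responsibilities above include Redis caching behind FastAPI endpoints. A hedged cache-aside sketch follows; the key scheme, TTL, and the slow backend call are invented, and a local Redis instance is assumed.

```python
# Cache-aside pattern with Redis in front of a slow backend call (illustration only).
import json
import time

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_profile_from_db(user_id: int) -> dict:
    time.sleep(0.5)                       # stand-in for a slow database/API call
    return {"user_id": user_id, "name": f"user-{user_id}"}

@app.get("/profiles/{user_id}")
def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)         # cache hit
    profile = fetch_profile_from_db(user_id)
    cache.setex(key, 300, json.dumps(profile))   # cache for 5 minutes
    return profile
```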

Posted 1 week ago

Apply

0 years

3 - 3 Lacs

Kollam

On-site

Amrita Vishwa Vidyapeetham, Amritapuri Campus is inviting applications from qualified candidates for the post of ML/DL R&D Engineer. For details, contact: hr-cybersecurity@am.amrita.edu

Job Title: ML/DL R&D Engineer
Location: Kollam, Kerala

Job Description
Join our research-driven team to develop tools for robustness and fairness testing in AI models, integrating ML/DL research with practical backend development for scalable deployments at the Center for Cybersecurity Systems and Networks.

Key Responsibilities
- Research and implement adversarial ML techniques (e.g., FGSM, PGD, DeepFool) to test model robustness (a toy FGSM sketch follows this listing).
- Design and run evaluation pipelines that probe AI models for toxic outputs, bias, or misinformation propagation, using established LLM auditing frameworks or evaluation methods.
- Build scalable, production-ready APIs and tools for model evaluation using FastAPI and Docker.
- Integrate with libraries and frameworks like IBM ART, Foolbox, and CleverHans.
- Automate ML testing pipelines with continuous integration/deployment.
- Maintain reproducible, well-documented code and experiments in a collaborative Git-based environment.
- Ensure robustness tools are optimized and compatible with Linux-based systems.

Required Skills

Machine Learning / Deep Learning
- Strong grasp of ML/DL fundamentals and hands-on experience with PyTorch and TensorFlow.
- Familiarity with adversarial ML concepts: evasion, poisoning, extraction, etc.
- Familiarity with the fundamentals of LLMs and with LLM safety evaluations and auditing.

Backend Engineering
- Proficient in building and deploying APIs with FastAPI or equivalent.
- Hands-on experience with Docker, Git, and Linux shell environments.
- Familiarity with relational databases such as PostgreSQL, MySQL, or SQLite, including basic querying and integration with backend systems.

Others
- Strong analytical and problem-solving skills.
- Ability to work independently in a hybrid research and engineering environment.
- Strong written communication for documentation and reporting.

Last date to apply: July 31, 2025
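The first responsibility above names FGSM among the adversarial techniques to implement. A toy PyTorch sketch of the Fast Gradient Sign Method follows; the model is an untrained stand-in and the data is random, so this only illustrates the gradient-sign perturbation step.

```python
# Fast Gradient Sign Method (FGSM): perturb an input in the direction of the loss gradient.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
    x = torch.rand(4, 1, 28, 28)                                  # fake image batch
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y, eps=0.1)
    print((x_adv - x).abs().max())   # perturbation magnitude bounded by eps
```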

Posted 1 week ago

Apply

5.0 years

11 - 15 Lacs

Hyderābād

On-site

Job Title: Python Backend Engineer – AWS | GenAI & ML
Experience: 5 years
Employment Type: Full-Time

Job Summary: We are seeking an experienced Python backend engineer with strong AWS expertise and a background in AI/ML to build and scale intelligent backend systems and GenAI-driven applications. The ideal candidate should have hands-on experience building REST APIs using Django or FastAPI and implementing AI/LLM applications using LangChain. Experience with LangGraph is a strong plus.

Key Responsibilities:
- Design, develop, and maintain Python-based backend systems and AI-powered services.
- Build and manage RESTful APIs using Django or FastAPI for AI/ML model integration (a hedged sketch follows this listing).
- Develop and deploy machine learning and GenAI models using frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Implement GenAI pipelines using LangChain; LangGraph experience is a plus.
- Utilize AWS services including EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment.
- Collaborate with data scientists, DevOps, and architects to integrate models and workflows into production.
- Build and manage CI/CD pipelines for backend and model deployments.
- Ensure performance, scalability, and security of applications in cloud environments.
- Monitor production systems, troubleshoot issues, and optimize model and API performance.

Required Skills and Experience:
- 5 years of hands-on experience in Python backend development.
- Strong experience building RESTful APIs using Django or FastAPI.
- Proficiency in AWS cloud services: EC2, S3, Lambda, SageMaker, CloudFormation, etc.
- Solid understanding of ML/AI concepts and model deployment practices.
- Experience with ML libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
- Hands-on experience with LangChain for building GenAI applications.
- Familiarity with DevOps tools: Docker, Kubernetes, Git, Jenkins, Terraform.
- Good understanding of microservices architecture and CI/CD workflows.
- Agile development experience.

Nice to Have:
- Experience with LangGraph for agentic workflows or graph-based orchestration.
- Knowledge of LLMs, embeddings, and vector databases (e.g., Pinecone, FAISS).
- Exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms.
- Understanding of MLOps tools and practices for model monitoring, versioning, and retraining.

Job Types: Full-time, Permanent
Pay: ₹1,100,000.00 - ₹1,500,000.00 per year
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday, morning shift
Work Location: In person
Speak with the employer: +91 9966550640
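The listing combines FastAPI-based REST APIs with AWS SageMaker deployments. Below is a hedged sketch of one common wiring: a FastAPI route forwarding requests to a SageMaker endpoint via boto3. The endpoint name and payload schema are placeholders, and AWS credentials/region are assumed to be configured in the environment.

```python
# FastAPI route proxying inference requests to a hypothetical SageMaker endpoint.
import json

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
runtime = boto3.client("sagemaker-runtime")   # assumes AWS credentials and region are configured

class ScoreRequest(BaseModel):
    features: list[float]

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    response = runtime.invoke_endpoint(
        EndpointName="my-model-endpoint",      # hypothetical deployed model endpoint
        ContentType="application/json",
        Body=json.dumps({"instances": [req.features]}),
    )
    return json.loads(response["Body"].read())
```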

Posted 1 week ago

Apply

0 years

0 Lacs

Haryana

Remote

About The Flex:
The Flex is on a mission to transform the rental sector globally. We believe that renting a home should be as easy as buying an item from Amazon: giving tenants the option to easily rent anywhere in the world, and giving landlords simple, hassle-free property management without excessive management fees. We are building a small and dynamic team of A-players who are committed to growth and ready to scale The Flex into a global powerhouse in its sector. We believe in rewarding ambition and promoting from within.

Position Summary
As a Software Developer, you will be instrumental in designing, developing, and maintaining robust digital solutions to support The Flex's operations and customer experiences. You will work on a variety of projects spanning front-end and back-end development, cloud deployment, and automation. This role requires strong problem-solving skills, adaptability, and a proactive approach to driving innovation and efficiency in our software systems.

Key Responsibilities:
- Full-stack development: Design, develop, and maintain scalable web applications using Node.js and React.
- Deployment and cloud management: Deploy and manage applications on AWS Cloud, utilizing serverless architecture.
- API development: Design, implement, and optimize RESTful APIs using FastAPI (Python optional) and other modern frameworks.
- Automation and scripting: Build automation tools to streamline development processes.
- Problem-solving and debugging: Analyze complex problems, identify root causes, and implement efficient solutions.
- Collaboration and communication: Work closely with cross-functional teams to ensure seamless integration and execution of key projects.
- Code quality and best practices: Implement CI/CD pipelines, conduct code reviews, and ensure best practices in Git, testing, and software quality assurance.

What We're Looking For:
- Proficiency in Node.js, React, and AWS Cloud.
- Experience with serverless applications and cloud infrastructure.
- Strong problem-solving skills and the ability to quickly learn new technologies.
- Familiarity with FastAPI, Python, and scripting is a plus.
- Understanding of modern software development practices (CI/CD, testing, Git).
- Excellent communication and collaboration skills.
- Adaptability and a proactive, solution-oriented mindset.

Why Join The Flex?
- Be part of an innovative and fast-growing company revolutionizing the real estate industry.
- Opportunity to build a team and establish a long-term presence in one of Europe's most vibrant cities.
- Competitive salary and performance-based incentives.
- A chance to grow professionally in a hands-on, entrepreneurial role.

You should not apply if:
- You are looking for a corporate 9-to-5 job.
- You are political and enjoy gossiping and talking about people behind their backs.
- You are looking for a stable and slow dead-end job.
- You do not aim to be one of the best in the world at what you do.

#LI-Remote

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
Gartner is seeking a talented and passionate MLOps Engineer to join our growing team. In this role, you will be responsible for building Python and Spark-based ML solutions that ensure the reliability and efficiency of our machine learning systems in production. You will collaborate closely with data scientists to operationalize existing models and optimize our ML workflows. Your expertise in Python, Spark, model inferencing, and AWS services will be crucial in driving our data-driven initiatives.

What you'll do:
- Develop ML inferencing and data pipelines with AWS tools (S3, EMR, Glue, Athena).
- Develop Python APIs using frameworks like FastAPI and Django.
- Deploy and optimize ML models on SageMaker and EKS.
- Implement infrastructure as code with Terraform and CI/CD for seamless deployments.
- Ensure the quality, scalability, and performance of APIs.
- Collaborate with product managers, data scientists, and other engineers for smooth operations.
- Communicate technical insights clearly and support production troubleshooting when needed.

What you'll need:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Must have: 0-2 years of experience building data and MLOps pipelines using Python and Spark.
- Strong proficiency in Python; exposure to Spark is good to have.
- Hands-on RESTful development experience using Python frameworks like FastAPI and Django.
- Experience with Docker and Kubernetes (EKS) or SageMaker.
- Experience with CloudFormation or Terraform for deploying and managing AWS resources.
- Strong problem-solving and analytical skills.
- Ability to work effectively within an agile environment.

Who you are:
- Bachelor's degree or foreign equivalent degree in Computer Science or a related field required.
- Excellent communication and prioritization skills.
- Able to work independently or within a team proactively in a fast-paced Agile-Scrum environment.
- Owns success: takes responsibility for successful delivery of the solutions.
- Strong desire to improve upon their skills in software testing and technologies.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE: IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101728

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 week ago

Apply

8.0 - 12.0 years

9 - 15 Lacs

Chennai

Work from Office

Responsibilities:
* Design, develop, test & maintain Power Automated solutions using Python, FastAPI & Azure DevOps, LangChain, and OOP.
* Collaborate with cross-functional teams on project delivery involving vector DBs.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
· Azure Cloud & Databricks:
  o Design and build efficient data pipelines using Azure Databricks (PySpark).
  o Implement business logic for data transformation and enrichment at scale.
  o Manage and optimize Delta Lake storage solutions.
· API Development:
  o Develop REST APIs using FastAPI to expose processed data.
  o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
  o Develop and manage Airflow DAGs to orchestrate ETL processes.
  o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
  o Handle data storage and access using PostgreSQL and MongoDB.
  o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
  o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
  o Follow best practices in code quality, version control, and documentation.

Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
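As an illustration of the Airflow orchestration this role calls for, below is a minimal DAG sketch; the DAG name, schedule, and the placeholder transform (which would hand off to Azure Databricks/PySpark in practice) are assumptions for the example.

```python
# Illustrative Airflow 2.x DAG: one scheduled task standing in for a Databricks/PySpark ETL step.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_transform(**context):
    # Placeholder for a PySpark/Delta Lake job submitted to Azure Databricks.
    print("running transformation for", context["ds"])

with DAG(
    dag_id="daily_ingest_example",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_transform",
        python_callable=extract_and_transform,
    )
```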

Posted 1 week ago

Apply

3.0 - 5.0 years

7 - 17 Lacs

Pune

Hybrid

We are looking for a skilled Senior Data Scientist with experience in building cutting-edge AI solutions, particularly in the domains of Generative AI, Agentic AI, and LLM-based architectures. If you're passionate about developing intelligent, autonomous systems, this role offers an exciting opportunity to work on impactful, next-gen AI initiatives.

Responsibilities
Architect and implement multi-agent AI systems leveraging LLM frameworks such as LangChain, LangGraph, and AutoGen.
Develop context-aware RAG pipelines to power grounded, relevant AI outputs.
Design and deploy scalable Generative AI services using FastAPI and integrate them with Azure AI pipelines.
Orchestrate complex agent workflows involving memory, planning, and decision-making tools.
Partner closely with engineers, product teams, and legal domain experts to identify automation opportunities within legal and compliance workflows.
Apply NLP techniques to process and extract insights from unstructured legal data.
Manage and optimize structured and unstructured data pipelines using tools like PostgreSQL, Solr, and CosmosDB.

Key Requirements
Strong command of Python and relevant libraries such as LangChain, LangGraph, Transformers, spaCy, and scikit-learn.
Proven experience working with Large Language Models (GPT, Claude, LLaMA, etc.), including fine-tuning and customization.
Familiarity with agentic AI concepts and hands-on experience developing tool-using, reasoning agents.
Skilled in building APIs using FastAPI and managing ML workflows with MLflow.
Experience working with cloud-based AI platforms such as Azure AI, OpenAI APIs, or AWS Bedrock.
Deep understanding of RAG architectures, semantic search, and vector databases.
Proficient in SQL, comfortable working on Linux, and experienced with containerized and distributed environments.
Strong grasp of data structures, statistical modelling, and backend database architecture.
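For context on the RAG pipelines mentioned above, here is a minimal retrieval sketch using FAISS; the embed() function stands in for a real embedding model, the documents are invented, and the final LLM call is left as a placeholder.

```python
# Minimal RAG retrieval sketch: embed documents, index them, retrieve context for a question.
import numpy as np
import faiss

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder: in practice this calls an embedding model or API.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

documents = ["Clause A ...", "Clause B ...", "Clause C ..."]
index = faiss.IndexFlatL2(384)
index.add(embed(documents))

def build_prompt(question: str, k: int = 2) -> str:
    _, ids = index.search(embed([question]), k)          # top-k nearest documents
    context = "\n".join(documents[i] for i in ids[0])
    # In practice, this prompt would be sent to an LLM and its reply returned.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```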

Posted 1 week ago

Apply

3.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

For over four decades, PAR Technology Corporation (NYSE: PAR) has been a leader in restaurant technology, empowering brands worldwide to create lasting connections with their guests. Our innovative solutions and commitment to excellence provide comprehensive software and hardware that enable seamless experiences and drive growth for over 100,000 restaurants in more than 110 countries. Embracing our "Better Together" ethos, we offer Unified Customer Experience solutions, combining point-of-sale, digital ordering, loyalty and back-office software solutions as well as industry-leading hardware and drive-thru offerings. To learn more, visit partech.com or connect with us on LinkedIn, X (formerly Twitter), Facebook, and Instagram.

Position Description
We are seeking a Machine Learning Engineer to join our growing AI team at PAR. This role will focus on developing and scaling GenAI-powered services, recommender systems, and ML infrastructure that fuel personalized customer engagement. You will work across teams to drive technical excellence and real-world ML application impact.

Position Location: Jaipur / Gurgaon
Reports To: [Hiring Manager Title – e.g., Head of AI or Senior Director, AI Engineering]

Entrees (Requirements)
What We’re Looking For:
Master’s or PhD in Computer Science, Machine Learning, or a related field
3+ years of experience delivering production-ready machine learning solutions
Deep understanding of ML algorithms, recommender systems, and NLP
Experience with LLM frameworks (Hugging Face Transformers, LangChain, OpenAI API, Cohere)
Strong proficiency in Python, including object-oriented design and scalable architecture
Advanced expertise in Databricks: notebooks, MLflow tracking, data pipelines, job orchestration
Hands-on experience with cloud-native technologies – preferably AWS (S3, Lambda, ECS/EKS, SageMaker)
Experience working with modern data platforms: Delta Lake, Redis, Elasticsearch, NoSQL, BigQuery
Strong verbal and written communication skills to translate technical work into business impact
Flexibility to collaborate with global teams in PST/EST time zones when required

With a Side Of (Additional Skills)
Familiarity with vector databases (FAISS, ChromaDB, Pinecone, Weaviate)
Experience with retrieval-augmented generation (RAG) and hybrid search systems
Skilled in deploying ML APIs using FastAPI or Flask
Background in text-to-SQL applications or domain-specific LLMs
Knowledge of ML Ops practices: model versioning, automated retraining, monitoring
Familiarity with CI/CD for ML pipelines via Databricks Repos, GitHub Actions, etc.
Contributions to open-source ML or GenAI projects
Experience in the restaurant/hospitality tech or digital marketing domain

Unleash Your Potential: What You Will Be Doing and Owning:
Build and deploy GenAI-powered microservices and personalized recommendation engines
Design and manage Databricks data pipelines for training, feature engineering, and inference
Develop high-performance ML APIs and integrate with frontend applications
Implement retrieval pipelines with vector DBs and search engines
Define and maintain ML Ops workflows for versioning, retraining, and monitoring
Drive strategic architectural decisions for LLM-powered, multi-model systems
Collaborate across product and engineering teams to embed intelligence in customer experiences
Enable CI/CD for ML systems with modern orchestration tools
Advocate for scalability, performance, and clean code in all deployed solutions

Interview Process
Interview #1: Phone Screen with Talent Acquisition Team
Interview #2: Technical Interview – Round 1 with AI/ML Team (via MS Teams / F2F)
Interview #3: Technical Interview – Round 2 with AI/ML Team (via MS Teams / F2F)
Interview #4: Final Round with Hiring Manager and Cross-functional Stakeholders (via MS Teams / F2F)

PAR is proud to provide equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. We also provide reasonable accommodations to individuals with disabilities in accordance with applicable laws. If you require reasonable accommodation to complete a job application, pre-employment testing, a job interview or to otherwise participate in the hiring process, or for your role at PAR, please contact accommodations@partech.com. If you’d like more information about your EEO rights as an applicant, please visit the US Department of Labor's website.
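As a small illustration of the MLflow tracking named in the requirements, here is a hedged sketch; the run name, parameters, and toy model are assumptions for the example, not PAR's actual workflow.

```python
# Sketch of MLflow experiment tracking: log params, metrics, and a model artifact for one run.
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)

with mlflow.start_run(run_name="example_run"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stores the fitted model as a run artifact
```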

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Credgenics is a leading SaaS-based debt resolution and legal automation platform. We help financial institutions improve their collections, reduce delinquencies, and enhance customer relationships using data-driven insights and advanced technology.

Key Responsibilities:
● Lead and mentor a team of DevOps engineers; drive execution and delivery.
● Architect and maintain robust CI/CD pipelines for Python/FastAPI-based services.
● Manage cloud infrastructure (AWS/GCP) using Terraform/Pulumi.
● Oversee Kubernetes-based deployments, observability, and incident response.
● Collaborate with engineering teams to improve build, release, and monitoring processes.
● Drive best practices in security, scalability, and automation.

Requirements:
● 5+ years of DevOps experience with at least 1–2 years in a leadership role.
● Strong knowledge of cloud platforms (preferably AWS), Kubernetes, and IaC.
● Experience with CI/CD (GitHub Actions, GitLab CI), observability (Prometheus, Grafana), and Postgres.
● Familiarity with Python-based applications and microservices architecture.
● Ability to balance hands-on execution with team management.

Good to Have:
● Exposure to fintech or SaaS product environments.
● Understanding of security and compliance practices in cloud deployments.

Why Join Credgenics? Be part of a rapidly growing SaaS organization revolutionizing debt collections. Work in a dynamic and collaborative environment with opportunities for career growth. Contribute to a meaningful mission that impacts financial institutions and their customers.
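For reference, Kubernetes-deployed FastAPI services of the kind this role oversees commonly expose liveness and readiness endpoints for probes. This is a minimal sketch; the endpoint paths and the dependency check are placeholder assumptions.

```python
# Minimal FastAPI liveness/readiness endpoints for Kubernetes probes.
from fastapi import FastAPI, Response, status

app = FastAPI()

@app.get("/healthz")
async def liveness():
    # Liveness: the process is up and able to serve requests.
    return {"status": "ok"}

@app.get("/readyz")
async def readiness(response: Response):
    # Readiness: in practice, check Postgres/queue connectivity here.
    db_ready = True
    if not db_ready:
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
        return {"status": "not ready"}
    return {"status": "ready"}
```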

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
Design, implement, and optimize LLM pipelines for NLP tasks such as summarization, classification, and entity recognition.
Build and deploy AI models to production with scalable, testable code.
Develop and fine-tune models using domain-specific datasets, integrating them into live systems.
Create retrieval-augmented systems using vector stores and embeddings for contextual Q&A.
Support ASR-based NLP systems with diarization, tagging, and post-processing.
Work closely with senior leadership to ensure technical alignment with product goals.
Contribute to AI/ML best practices and help standardize development workflows across teams.

Primary Skills (Must have):
Strong Python development with OOP principles and ML/AI best practices.
Expertise in machine learning, deep learning, and NLP using libraries like scikit-learn, PyTorch, and TensorFlow.
Practical experience with LLM fine-tuning, LoRA, PEFT, and embedding-based models.
Experience building RAG pipelines for question answering, search, and knowledge summarization.
Hands-on with vector stores (FAISS, Pinecone, ChromaDB), transformers, and Hugging Face models.
Experience deploying models via FastAPI, Flask, and Docker.
Good knowledge of speech-to-text tools like Whisper and AWS/GCP STT for transcription.
Familiarity with prompt engineering, LangChain, and retriever models.
Experience with cloud ML platforms (AWS, Azure, GCP) for model training and inference.
Exposure to MLOps practices like versioning, monitoring, and automated retraining.

Secondary Skills:
Domain knowledge of NLP use cases in financial services, marketplaces, healthcare, pharma, and life sciences.
Experience with knowledge graphs, graph neural networks, and embeddings-based reasoning.
Understanding of privacy-preserving ML (e.g., differential privacy, federated learning).
Exposure to LangChain, LlamaIndex, Haystack, or other GenAI orchestration tools.
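To illustrate the LoRA/PEFT fine-tuning mentioned above, here is a minimal setup sketch using Hugging Face PEFT; the base model (GPT-2) and target modules are illustrative assumptions and vary by architecture.

```python
# Sketch of attaching LoRA adapters to a causal LM with Hugging Face PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small model chosen only for illustration

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection; differs per model family
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```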

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Python Developer
Location: Hyderabad (On-Site)
Job Type: Full-Time
Experience: 4 to 7 Years
Notice Period: Immediate to 15 Days

Job Summary
We are looking for a talented and motivated Python Developer with strong experience in building APIs using FastAPI and Flask. The ideal candidate will possess excellent problem-solving and communication skills and a passion for delivering high-quality, scalable backend solutions. You will play a key role in developing robust backend services, integrating APIs, and collaborating with frontend and QA teams to deliver production-ready software.

Key Responsibilities
Design, develop, and maintain backend services using FastAPI and Flask.
Write clean, reusable, and efficient Python code following best practices.
Work with Large Language Models (LLMs) and contribute to building advanced AI-driven solutions.
Collaborate with cross-functional teams to gather requirements and translate them into technical implementations.
Optimize applications for maximum speed, scalability, and reliability.
Implement secure API solutions and ensure compliance with data protection standards.
Develop and maintain unit tests, integration tests, and documentation for code, APIs, and system architecture.
Participate in code reviews and contribute to continuous improvement of development processes.

Required Skills & Qualifications
Strong programming skills in Python with hands-on experience in backend development.
Proficiency in developing RESTful APIs using the FastAPI and Flask frameworks.
Solid understanding of REST principles and asynchronous programming in Python.
Good communication skills and the ability to troubleshoot and solve complex problems effectively.
Experience with version control tools like Git.
Eagerness to learn and work with LLMs, vector databases, and other modern AI technologies.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Nice to Have
Experience with LLMs, prompt engineering, and vector databases.
Understanding of Transformer architecture, embeddings, and Retrieval-Augmented Generation (RAG).
Familiarity with data processing libraries like NumPy and Pandas.
Knowledge of Docker for containerized application development and deployment.

Skills
Python, FastAPI, Flask, REST APIs, Asynchronous Programming, Git, API Security, Data Protection, LLMs, Vector DBs, Transformers, RAG, NumPy, Pandas, Docker.

If you are passionate about backend development and eager to work on innovative AI solutions, we would love to hear from you!
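As a brief illustration of the asynchronous programming this posting asks for, here is a sketch of an async FastAPI endpoint fanning out concurrent HTTP calls with httpx; the URLs and endpoint name are hypothetical.

```python
# Async FastAPI endpoint issuing concurrent outbound requests with httpx and asyncio.gather.
import asyncio
import httpx
from fastapi import FastAPI

app = FastAPI()
URLS = ["https://example.com/a", "https://example.com/b"]  # placeholder upstream services

@app.get("/aggregate")
async def aggregate():
    async with httpx.AsyncClient(timeout=5.0) as client:
        # Both requests run concurrently instead of sequentially.
        responses = await asyncio.gather(*(client.get(u) for u in URLS))
    return {"status_codes": [r.status_code for r in responses]}
```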

Posted 1 week ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad, Bengaluru

Hybrid

Experience: 4+ Years
Employment Type: Full-time

Job Summary:
We are looking for a skilled Python Backend Developer with hands-on experience in building scalable and high-performance backend applications. The ideal candidate will have strong expertise in Python, Flask, Django, FastAPI, and RESTful API development. A solid understanding of Object-Oriented Programming (OOPs), Data Structures and Algorithms (DSA), Method Resolution Order (MRO), and ORMs is essential. Experience with both SQL and MongoDB is required.

Key Responsibilities:
- Design, develop, and maintain backend services and REST APIs using Flask, Django, and FastAPI.
- Build efficient, reusable, and reliable Python code.
- Implement secure, scalable, and high-performing APIs and backend logic.
- Work with SQL databases (MySQL/PostgreSQL) and NoSQL databases (MongoDB).
- Leverage strong understanding of OOPs, DSA, and MRO to write clean, maintainable code.
- Integrate third-party APIs and services.
- Collaborate with cross-functional teams including frontend developers, DevOps, and QA engineers.
- Perform unit testing and participate in code reviews.
- Write clear documentation and follow coding standards.

Required Technical Skills:
- Languages: Strong expertise in Python
- Frameworks: Flask, Django, FastAPI
- API Development: RESTful API design and implementation
- OOPs: Solid understanding of principles like inheritance, polymorphism, encapsulation, abstraction, and MRO
- DSA: Strong knowledge of core algorithms and data structures
- ORMs: Experience with SQLAlchemy, Django ORM
- Databases: MySQL/PostgreSQL (RDBMS), MongoDB (NoSQL)
- Version Control: Git
- Testing: Experience with PyTest/unittest

Nice to Have:
- Familiarity with Docker, containerization, and cloud platforms (AWS/GCP).
- Experience with CI/CD tools and pipeline setup.
- Exposure to Agile/Scrum methodologies.

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Self-driven and able to take ownership of tasks.
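Since the posting explicitly calls out Method Resolution Order (MRO), here is a quick self-contained example of how Python's C3 linearization resolves a diamond-inheritance lookup; the class names are invented for illustration.

```python
# Diamond inheritance: D(B, C) with both B and C overriding A.who().
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

print(D().who())                              # "B" — B precedes C in D's MRO
print([cls.__name__ for cls in D.__mro__])    # ['D', 'B', 'C', 'A', 'object']
```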

Posted 1 week ago

Apply