6.0 - 10.0 years
45 - 50 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role
We are looking for a hands-on and passionate Full Stack Lead to join our team and take a leading role in the development of cutting-edge applications, including those involving machine learning and AI. You will be responsible for designing, developing, and deploying full stack solutions, guiding and mentoring other developers, and ensuring the delivery of high-quality, scalable, and maintainable code. You will have a deep understanding of front-end and back-end technologies, with proven expertise in React, Node.js, and Python, as well as some hands-on experience with ML engineering principles.

Responsibilities:
Technical Leadership:
- Provide technical leadership and guidance to a team of full stack developers.
- Design and architect full stack solutions for complex applications, including those with AI/ML components.
- Define coding standards and best practices, and ensure adherence across the team.
- Conduct code reviews and provide constructive feedback.
- Stay up to date with the latest technologies and trends in full stack development and AI/ML.
Full Stack Development:
- Develop and maintain web applications using React, Node.js, and Python.
- Build reusable components and libraries for future use.
- Design and implement RESTful APIs for communication between front-end and back-end.
- Optimize applications for maximum speed and scalability.
- Write clean, efficient, and well-documented code.
AI/ML Integration:
- Collaborate with data scientists and ML engineers to integrate machine learning models into applications.
- Build and maintain the infrastructure for deploying and serving ML models.
- Implement data pipelines for processing and feeding data to ML models.
- Monitor the performance of ML models in production and identify areas for improvement.
Project Management:
- Participate in sprint planning and task estimation.
- Track progress and report on development activities.
- Identify and mitigate potential risks and issues.
- Ensure timely delivery of high-quality software.

Skills:
Essential:
- JavaScript (Mastery): Deep understanding of JavaScript, including ES6+ features and asynchronous programming. Mastery of the React ecosystem, including state management (e.g., Redux, Context API), routing, and testing.
- Node.js (Proficiency): Proficient in building scalable and performant back-end applications with Node.js and Express.js. Experience with middleware, authentication, and authorization.
- Python (Proficiency): Proficient in Python and at least one web framework (e.g., Flask, Django). Experience with data structures, algorithms, and object-oriented programming.
- Databases (Experience): Experience with both relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB). Ability to design database schemas and optimize queries.
- API Design (Experience): Solid understanding of RESTful API design principles. Experience with API documentation tools (e.g., Swagger).
- Version Control (Mastery): Mastery of Git and version control best practices, including branching, merging, and conflict resolution.
- Containers and Orchestration (Experience): Hands-on experience with Docker for containerizing applications. Familiarity with Kubernetes for orchestrating and managing containers.
- Testing (Proficiency): Proficient in writing unit tests and integration tests. Experience with testing frameworks for both front-end and back-end.
- Communication & Collaboration (Mastery): Excellent written and verbal communication skills. Proven ability to collaborate effectively with cross-functional teams.
- Leadership & Mentorship (Proficiency): Strong leadership and mentoring skills. Ability to guide and motivate a team of developers.
Desirable:
- Cloud Platforms (Experience): Experience with cloud platforms like AWS or Azure. Knowledge of cloud services relevant to application deployment and scaling.
- DevOps (Experience): Experience with DevOps practices and CI/CD pipelines. Familiarity with tools like Jenkins or Azure DevOps.
- Machine Learning (Experience): Familiarity with machine learning algorithms and libraries (e.g., scikit-learn, TensorFlow, PyTorch). Experience with data processing and visualization tools.

Qualifications:
Essential:
- Bachelor's degree in Computer Science or related field.
- 6+ years of experience in full stack development, with at least 2 years in a lead role.
- Proven experience in developing and deploying applications with AI/ML components.

Location: Remote, Hyderabad, Ahmedabad, Pune, Chennai, Kolkata, Delhi / NCR, Mumbai, Bengaluru
Posted 1 day ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Key Responsibilities: Backend Service Development: Design and implement robust, scalable, and maintainable backend services using Python. Utilize appropriate frameworks and libraries to streamline development and enhance productivity. Integrate AI models and algorithms into backend services, ensuring efficient and reliable communication. AI Model Integration: Collaborate with data scientists and AI engineers to understand AI model requirements and specifications. Develop APIs and interfaces to facilitate seamless integration of AI models into backend services. Cloud Infrastructure Management: Deploy and manage backend services on cloud platforms (e.g., AWS, Azure & GCP). Leverage cloud-native technologies and services to optimize infrastructure costs and performance. Ensure the security and compliance of cloud infrastructure. Collaboration and Mentorship: Work collaboratively with a cross-functional team of engineers, data scientists, and project stakeholders. Provide technical guidance and mentorship to junior engineers. Qualifications and Skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in Python programming, with a focus on backend development. Strong understanding of object-oriented programming (OOP) principles and design patterns. Experience with Python web frameworks (e.g., Django, Flask) and RESTful API development. Proficiency in cloud technologies (e.g., AWS, Azure & GCP) and containerization (e.g., Docker & Kubernetes). Familiarity with AI principles, machine learning algorithms, and deep learning frameworks (e.g., TensorFlow, PyTorch). Preferred Qualifications: Experience with large-scale distributed systems and microservices architectures. Knowledge of data engineering principles and big data technologies (e.g., Apache Spark). Locations : Mumbai, Delhi / NCR, Bengaluru , Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote Work Timings 2.30 pm -11.30 pm
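To illustrate the kind of AI model integration this role describes, here is a minimal, hedged sketch of a Python backend endpoint that serves a pre-trained model over a REST API; the framework (FastAPI), the "model.joblib" artifact, and the payload shape are illustrative assumptions, not requirements taken from the posting.

```python
# Minimal sketch only: serving a pre-trained model behind a REST endpoint.
# FastAPI, the "model.joblib" artifact, and the payload schema are assumed
# for illustration and would differ in a real deployment.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # artifact assumed to be produced by the data science team

class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model

@app.post("/predict")
def predict(features: Features) -> dict:
    # Synchronous inference; heavier models are typically served behind a
    # queue or a dedicated inference server instead.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with `uvicorn app:app --reload` and POST a JSON body such as {"values": [0.1, 0.2]} to /predict.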
Posted 1 day ago
0.0 - 2.0 years
1 - 2 Lacs
Nagercoil
Work from Office
We are seeking a skilled and passionate Python Machine Learning Developer to join our team. In this role, you will be responsible for designing, developing, and implementing machine learning models and data pipelines using Python. You will work closely with our data science and product teams to turn data into actionable insights.
Posted 1 day ago
6.0 - 8.0 years
1 - 5 Lacs
Bengaluru, Karnataka
Work from Office
We are seeking an experienced professional in AI and machine learning with a strong focus on large language models (LLMs) for a 9-month project. The role involves hands-on expertise in developing and deploying agentic systems and working with transformer architectures, fine-tuning, prompt engineering, and task adaptation. Responsibilities include leveraging reinforcement learning or similar methods for goal-oriented autonomous systems, deploying models using MLOps practices, and managing large datasets in production environments. The ideal candidate should excel in Python, ML libraries (Hugging Face Transformers, TensorFlow, PyTorch), data engineering principles, and cloud platforms (AWS, GCP, Azure). Strong analytical and communication skills are essential to solve complex challenges and articulate insights to stakeholders.
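As a small, hedged illustration of the Hugging Face Transformers workflow mentioned above, the sketch below loads an open instruction-tuned model and runs a single prompt; the model name and generation settings are examples only, not requirements of the role.

```python
# Illustrative only: loading an open LLM with Hugging Face Transformers and
# generating a response to one prompt. Model choice and settings are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # small example checkpoint, not a project requirement
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain prompt engineering in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```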
Posted 1 day ago
2.0 - 4.0 years
2 - 6 Lacs
Thiruvananthapuram
Work from Office
Key Responsibilities: Write clean, efficient, and maintainable code based on specifications. Collaborate with cross-functional teams to implement new features. Troubleshoot and debug applications to optimize performance. Stay updated with emerging technologies and industry trends. About the Company: At Nuchange, we're at the forefront of healthcare innovation with our flagship solution, Nuacare. We are dedicated to revolutionizing hospital operations worldwide, making patient care smarter, faster, and more efficient. Join us to be part of this transformative journey and make a meaningful impact on healthcare technology. Product Overview: Nuacare is more than just software; it's a leap forward in healthcare advancement. Our all-in-one automation platform empowers hospitals to enhance patient care, streamline operations, and reduce costs. With Nuacare, innovation becomes a game-changing reality.
Posted 1 day ago
2.0 - 4.0 years
2 - 6 Lacs
Karnataka
Work from Office
Key Responsibilities: Write clean, efficient, and maintainable code based on specifications. Collaborate with cross-functional teams to implement new features. Troubleshoot and debug applications to optimize performance. Stay updated with emerging technologies and industry trends. About the Company: At Nuchange, we're at the forefront of healthcare innovation with our flagship solution, Nuacare. We are dedicated to revolutionizing hospital operations worldwide, making patient care smarter, faster, and more efficient. Join us to be part of this transformative journey and make a meaningful impact on healthcare technology. Product Overview: Nuacare is more than just software; it's a leap forward in healthcare advancement. Our all-in-one automation platform empowers hospitals to enhance patient care, streamline operations, and reduce costs. With Nuacare, innovation becomes a game-changing reality.
Posted 1 day ago
2.0 - 4.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Key Responsibilities: Write clean, efficient, and maintainable code based on specifications. Collaborate with cross-functional teams to implement new features. Troubleshoot and debug applications to optimize performance. Stay updated with emerging technologies and industry trends. About the Company: At Nuchange, we're at the forefront of healthcare innovation with our flagship solution, Nuacare. We are dedicated to revolutionizing hospital operations worldwide, making patient care smarter, faster, and more efficient. Join us to be part of this transformative journey and make a meaningful impact on healthcare technology. Product Overview: Nuacare is more than just software; it's a leap forward in healthcare advancement. Our all-in-one automation platform empowers hospitals to enhance patient care, streamline operations, and reduce costs. With Nuacare, innovation becomes a game-changing reality.
Posted 1 day ago
2.0 - 4.0 years
2 - 6 Lacs
Thrissur
Work from Office
Key Responsibilities: Write clean, efficient, and maintainable code based on specifications. Collaborate with cross-functional teams to implement new features. Troubleshoot and debug applications to optimize performance. Stay updated with emerging technologies and industry trends. About the Company: At Nuchange, we're at the forefront of healthcare innovation with our flagship solution, Nuacare. We are dedicated to revolutionizing hospital operations worldwide, making patient care smarter, faster, and more efficient. Join us to be part of this transformative journey and make a meaningful impact on healthcare technology. Product Overview: Nuacare is more than just software; it's a leap forward in healthcare advancement. Our all-in-one automation platform empowers hospitals to enhance patient care, streamline operations, and reduce costs. With Nuacare, innovation becomes a game-changing reality.
Posted 1 day ago
10.0 - 15.0 years
15 - 20 Lacs
Pune
Work from Office
Company Overview: At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

About the Role: We are seeking a highly experienced and visionary AI Senior Technical Lead to drive the technical direction and execution of our AI initiatives. This role is pivotal in shaping our AI strategy, leading a team of talented AI engineers, and ensuring the delivery of innovative and impactful AI solutions. As a technical leader, you will be responsible for defining the architectural vision, driving best practices, and fostering a culture of technical excellence across diverse AI domains.

Responsibilities:
Technical Leadership & Strategy: Define and communicate the technical vision and strategy for AI projects. Lead the architectural design and implementation of complex AI systems. Evaluate and recommend emerging AI technologies and methodologies. Drive the adoption of best practices for AI development, deployment, and maintenance. Contribute to the strategic planning of AI initiatives, aligning them with business goals.
Team Leadership & Mentorship: Lead and mentor a team of AI engineers, providing technical guidance and support. Foster a collaborative and innovative team environment. Conduct code reviews and ensure code quality. Identify and address skill gaps within the team. Facilitate knowledge sharing and continuous learning.
AI System Development & Implementation: Oversee the development and deployment of scalable and robust AI solutions. Ensure the performance, reliability, and security of AI systems. Drive the development of AI pipelines and infrastructure. Lead the integration of AI models with existing systems and applications. Troubleshoot and resolve complex technical issues.
Research & Innovation: Stay abreast of the latest advancements in AI and machine learning. Identify opportunities to apply AI to solve complex business challenges, including opportunities within Generative AI. Drive research and development efforts to explore new AI technologies. Evaluate and prototype new AI solutions.
Collaboration & Communication: Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Communicate complex technical concepts to both technical and non-technical audiences. Provide technical presentations and demonstrations. Document technical designs, code, and processes.

Required Skills and Qualifications:
Education: Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience: 8+ years of experience in AI/ML development, with a proven track record of leading complex AI projects. Extensive experience in designing and implementing scalable AI systems. Proven experience in leading and mentoring technical teams. Deep understanding of machine learning and deep learning algorithms and architectures. Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
Technical Skills: Expertise in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn). Strong understanding of AI infrastructure and deployment strategies. Experience with MLOps best practices. Experience with vector databases and LLM implementations. Proficiency in software development principles and best practices. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Soft Skills: Exceptional leadership and communication skills. Strong problem-solving and analytical skills. Ability to think strategically and drive innovation. Excellent interpersonal and collaboration skills. Strong presentation skills.

Preferred Qualifications: Experience with Generative AI technologies and methodologies (e.g., GANs, Diffusion Models, Transformers for generation). Experience with natural language processing (NLP) or computer vision. Contributions to open-source AI projects. Experience with distributed systems and big data technologies. Experience with prompt engineering.
Posted 1 day ago
5.0 - 8.0 years
7 - 11 Lacs
Pune
Work from Office
About The Role: Senior Computer Vision Machine Learning Engineer.

About Us: At Codvo, software and people transformations go together. We are a global empathy-led technology services company with a core DNA of product innovation and mature software engineering. We uphold the values of Respect, Fairness, Growth, Agility, and Inclusiveness in everything we do.

Job Overview: We are looking for a Senior Computer Vision Machine Learning Engineer to lead the development of real-time CV/ML systems, with an emphasis on deploying models on edge platforms like the NVIDIA IGX Orin. The ideal candidate will have experience in designing robust vision pipelines, training and optimizing deep learning models, and working closely with hardware platforms for deployment.

Responsibilities: Lead the design, development, and deployment of end-to-end computer vision and deep learning models. Optimize and deploy CV/ML pipelines on edge platforms, particularly NVIDIA IGX (Orin preferred). Work with cross-functional teams to integrate models into real-time applications (e.g., robotics, safety systems, industrial inspection). Develop and maintain datasets, perform data augmentation, and ensure quality training inputs. Leverage NVIDIA SDKs (e.g., DeepStream, TensorRT, TAO Toolkit, CUDA) for performance and acceleration. Collaborate with hardware engineers to fine-tune models for power, latency, and throughput constraints. Stay up to date with the latest research and techniques in computer vision, edge AI, and embedded ML.

Requirements: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. 5+ years of experience in Computer Vision and Machine Learning (deep learning emphasis). Proficiency in Python, C++, TensorFlow, and PyTorch. Strong understanding of model optimization techniques for edge deployment. Hands-on experience with NVIDIA platforms (IGX, Jetson, or Xavier; IGX Orin highly preferred). Experience with NVIDIA SDKs (e.g., DeepStream, TensorRT, CUDA, TAO Toolkit). Solid knowledge of vision tasks: object detection, tracking, classification, segmentation. Familiarity with containerization (Docker), CI/CD pipelines, and version control (Git).

Preferred Qualifications: Experience in industrial AI, medical imaging, or robotics. Exposure to RTOS, safety-critical systems, or IEC 61508/ISO 26262 environments. Familiarity with ONNX, OpenCV, ROS, or GStreamer.

What We Offer: Opportunity to work on cutting-edge AI/edge technology with real-world impact. Collaborative and fast-paced engineering culture. Flexible working hours and remote work options. Competitive salary and benefits package.
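As a hedged sketch of the edge-deployment workflow this role touches on, the snippet below exports a stand-in PyTorch model to ONNX, a common intermediate step before TensorRT optimization on IGX/Jetson-class hardware; the torchvision backbone and tensor shape are purely illustrative.

```python
# Sketch: export a stand-in PyTorch model to ONNX as a precursor to TensorRT
# optimization. The torchvision backbone and input shape are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input expected by the backbone

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["images"],
    output_names=["logits"],
    dynamic_axes={"images": {0: "batch"}},  # allow variable batch size at runtime
)
```

The resulting model.onnx would then typically be converted with trtexec or the TensorRT APIs and profiled for latency and throughput on the target device.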
Posted 1 day ago
6.0 - 10.0 years
11 - 15 Lacs
Pune
Work from Office
About The Role: Senior AI Engineer.

At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company where product innovation and mature software engineering are embedded in our core DNA. Our core values of Respect, Fairness, Growth, Agility, and Inclusiveness guide everything we do. We continually expand our expertise in digital strategy, design, architecture, and product management to offer measurable results and outside-the-box thinking.

About the Role: We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions.

Responsibilities:
Generative AI Pipeline Development: Design and implement efficient and scalable pipelines for data ingestion, processing, and transformation, tailored for Generative AI workloads. Orchestrate the flow of data between various AI services, databases, and backend systems within the Generative AI context. Build and maintain CI/CD pipelines for deploying and updating Generative AI services and pipelines.
Data and Document Ingestion: Develop and manage systems for ingesting diverse data sources (text, images, code, etc.) used in Generative AI applications. Implement OCR and other preprocessing techniques to prepare data for use in Generative AI pipelines. Ensure data quality, consistency, and security throughout the ingestion process.
AI Service Integration: Integrate and manage external AI services (e.g., cloud-based APIs for image generation, text generation, LLMs) into our Generative AI applications. Develop and maintain APIs for seamless communication between AI services and backend systems. Monitor and optimize the performance of integrated AI services within the Generative AI pipeline.
Retrieval Augmented Generation (RAG) Pipelines: Design and implement RAG pipelines to enhance Generative AI capabilities with external knowledge sources. Develop and optimize data retrieval and indexing strategies for RAG systems used in conjunction with Generative AI. Evaluate and improve the accuracy and relevance of RAG-generated responses in the context of Generative AI applications.
Large Language Model (LLM) Integration: Develop and manage interactions with LLMs through APIs and SDKs within Generative AI pipelines. Implement prompt engineering strategies to optimize LLM performance for specific Generative AI tasks. Analyze and debug LLM outputs to ensure quality and consistency in Generative AI applications.
Backend Services Ownership: Design, develop, and maintain backend services that support Generative AI applications. Ensure the scalability, reliability, and security of backend infrastructure for Generative AI workloads. Implement monitoring and logging systems for backend services and pipelines supporting Generative AI. Troubleshoot and resolve backend-related issues impacting Generative AI applications.

Required Skills and Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience: 5+ years of experience in AI/ML development with a focus on building and deploying AI pipelines and backend systems. Proven experience in designing and implementing data ingestion and processing pipelines. Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
Technical Skills: Expertise in Python and relevant AI/ML libraries. Strong understanding of AI infrastructure and deployment strategies. Experience with data engineering and data processing techniques. Proficiency in software development principles and best practices. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Experience with version control (Git). Experience with RESTful APIs and API development. Experience with vector databases and their application in AI/ML, particularly for similarity search and retrieval.
Generative AI Specific Skills: Familiarity with Generative AI concepts and techniques (e.g., GANs, Diffusion Models, VAEs, LLMs). Experience with integrating and managing Generative AI services. Understanding of RAG pipelines and their application in Generative AI. Experience with prompt engineering for LLMs in Generative AI contexts.
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work in a fast-paced environment.

Preferred Qualifications: Experience with OCR and document processing technologies. Experience with MLOps practices for Generative AI. Contributions to open-source AI projects. Strong experience with vector databases and their optimization for Generative AI applications.

Experience: 5+ years. Shift Time: 2:30 PM to 11:30 PM.
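To make the RAG responsibilities above concrete, here is a minimal, hedged sketch of the retrieval-and-prompt-assembly step, assuming sentence-transformers for embeddings and a tiny in-memory corpus; the final call to an LLM service is deliberately left out because that integration would be platform-specific.

```python
# Minimal RAG retrieval sketch (illustrative assumptions throughout): embed a
# small corpus, retrieve the closest documents for a query, and assemble the
# augmented prompt that would be sent to an LLM service.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
documents = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Invoices can be downloaded from the billing portal.",
]
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

def build_prompt(query: str, k: int = 2) -> str:
    # Cosine similarity reduces to a dot product on normalized vectors.
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    top = [documents[i] for i in np.argsort(scores)[::-1][:k]]
    context = "\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
# In production, the returned prompt would be passed to the chosen LLM API.
```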
Posted 1 day ago
5.0 - 10.0 years
6 - 10 Lacs
Pune
Work from Office
Job Description: We, at Jet2 (UK’s third largest airline and the largest tour operator), have set up a state-of-the-art Technology and Innovation Centre in Pune, India. The Data Visualisation Developer will join our growing Data Visualisation team to deliver impactful data visualisation projects (using Tableau). The team currently works with a range of departments including Pricing & Revenue, Overseas Operations and Contact Centre. This new role provides a fantastic opportunity to influence key business decisions through data visualisation. You will work closely with the Jet2 Travel Technology visualisation team, whilst working alongside Data Engineers, Data Scientists and Business Analysts to help business users get the most insight out of their data. You will also support our growing internal community of Tableau users through engagement activities and support inbox queries that develop their visualisation knowledge and data fluency. Roles and Responsibilities - What you’ll be doing: The successful candidate will work independently on data visualisation projects with guidance from the Jet2TT Data Visualisation Team Lead. The incumbent is expected to operate out of the Pune location and collaborate with various stakeholders in Pune, Leeds, and Sheffield. Creating impactful data visualisations and dashboards using Tableau Desktop / Cloud. Working with Business Analysts and Product Owners to understand requirements. Presenting visualisations to stakeholders. Teaching colleagues about new Tableau features and visualisation best practices. Governance and monitoring of users and content on Tableau Cloud, including permissions management. Management of the Tableau Support inbox via Outlook. What you’ll have: Extensive experience in the use of Tableau, preferably evidenced by a strong Tableau Public portfolio. Comfortable presenting data visualisations and dashboards. Strong communication skills (written & verbal). Knowledge of data visualisation best practices. SQL experience is desirable, but not essential. Working with cloud-based data technologies (e.g. Snowflake, Google BigQuery or similar) is desirable, but not essential. Experience of working in an Agile Scrum framework to deliver high-quality solutions. Experience of working with people from different geographies, particularly the UK & US.
Posted 1 day ago
10.0 - 15.0 years
11 - 15 Lacs
Pune
Work from Office
Job Description: We, at Jet2 (UK’s third largest airline and the largest tour operator), have set up a state-of-the-art Technology and Innovation Centre in Pune, India. The Lead Visualisation Developer will join our growing Data Visualisation team to deliver impactful data visualisation projects (using Tableau) whilst leading the Jet2TT visualisation function. The team currently works with a range of departments including Pricing & Revenue, Overseas Operations and Contact Centre. This new role provides a fantastic opportunity to represent visualisation and influence key business decisions. As part of the wider Data function, you will be working alongside Data Engineers, Data Scientists and Business Analysts to understand and gather requirements. In the role, you will be scoping visualisation projects to deliver or delegate to members of the team, ensuring they have everything they need to start development whilst guiding them through visualisation delivery. You will also support our visualisation Enablement team by supporting the release of new Tableau features. Roles and Responsibilities - What you’ll be doing: The successful candidate will work independently on data visualisation projects with zero or minimal guidance. The incumbent is expected to operate out of the Pune location and collaborate with various stakeholders in Pune, Leeds, and Sheffield. Representing visualisation during project scoping. Working with Business Analysts and Product Owners to understand and scope requirements. Working with Data Engineers and Architects to ensure data models are fit for visualisation. Developing Tableau dashboards from start to finish, using Tableau Desktop / Cloud, from gathering requirements and designing dashboards to presenting to internal stakeholders. Presenting visualisations to stakeholders. Supporting and guiding members of the team through visualisation delivery. Supporting feature releases for Tableau. Teaching colleagues about new Tableau features and visualisation best practices. What you’ll have: Extensive experience in the use of Tableau, evidenced by a strong Tableau Public portfolio. Expertise in the delivery of data visualisation. Experience in requirements gathering and presenting visualisations to internal stakeholders. Strong understanding of data visualisation best practices. Experience of working in an Agile Scrum framework to deliver high-quality solutions. Strong communication skills (written & verbal). Knowledge of the delivery of Data Engineering and Data Warehousing to Cloud Platforms. Knowledge of or exposure to Cloud Data Warehouse platforms (Snowflake preferred). Knowledge and experience of working with a variety of databases (e.g., SQL).
Posted 1 day ago
5.0 - 10.0 years
12 - 19 Lacs
Bengaluru
Hybrid
Role & responsibilities: 12+ years of experience in Python, statistical analysis, machine learning, and deep learning. End-to-end architecture design for machine learning solutions. Deployment of AI models into scalable services. Staying at the forefront of generative AI advancements. Optimizing models for performance and scalability. Collaborating with cross-functional teams for successful project delivery. Degree in Computer Science, Information Technology, Electrical Engineering, or a related field. Experience with or understanding of machine learning, deep learning, generative AI, and MLOps. Knowledge of deep learning techniques and frameworks (TensorFlow, PyTorch). Familiarity with cloud platforms, preferably Microsoft Azure. A certification in AI or machine learning is advantageous. Strong analytical and problem-solving skills. Excellent communication and team collaboration abilities. Only female candidates can apply.
Posted 1 day ago
3.0 - 5.0 years
15 - 25 Lacs
Faridabad
Work from Office
We’re a forward-thinking team building cutting-edge AI/ML solutions that power intelligent systems and drive real-world impact. We value curiosity, innovation, and a passion for artificial intelligence. Roles and Responsibilities Design, develop, and deploy machine learning and artificial intelligence models into production systems. Build and maintain scalable data pipelines, feature stores, and model training workflows. Implement MLOps best practices: containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring. Optimize performance and reliability of AI systems, including model versioning, performance monitoring, and automated retraining. Collaborate with cross-functional teams—data scientists, software engineers, DevOps—to integrate models into products. Ensure code quality, reproducibility, security, and compliance across the ML lifecycle. Required Experience & Skills 3–5 years in software engineering, ML engineering, or AI-focused roles. Proficiency in Python for ML development, scripting, and automation. Strong grasp of machine learning concepts, algorithms, and evaluation methodologies. Practical experience with popular ML frameworks: TensorFlow, PyTorch, scikit-learn. Hands-on with MLOps technologies: CI/CD (e.g., Jenkins, GitLab CI), containerization (Docker), orchestration (Kubernetes). Familiarity with cloud platforms (AWS, GCP, or Azure) and MLOps tools (MLflow, Kubeflow, Airflow). Skilled in monitoring solutions (Prometheus, Grafana) and version control (Git, DVC, MLflow). Excellent problem-solving, communication, and collaboration skills.
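For the MLOps tooling named above, here is a brief, hedged sketch of MLflow experiment tracking and model logging; the dataset, model, and metric are synthetic stand-ins rather than a prescribed workflow.

```python
# Sketch of MLflow tracking against a local store; data, model, and metric
# names are illustrative stand-ins only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```

Logged runs can then be compared in the MLflow UI, and the logged model artifact registered for deployment or automated retraining.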
Posted 1 day ago
10.0 - 15.0 years
20 - 35 Lacs
Noida, Gurugram, Greater Noida
Hybrid
Role & responsibilities: Machine Learning, Data Science, and Model Customization [4+ years], with experience performing the above on cloud services (e.g., AWS SageMaker) and other tools. AI / Gen AI skills [1 or 2 years]: MCP, RAG pipelines, A2A, agentic / AI agent frameworks (AutoGen, LangGraph, LangChain), codeless workflow builders, etc. Preferred candidate profile: Build working POCs and prototypes rapidly. Build and integrate AI-driven solutions to solve the identified opportunities and challenges. Lead cross-functional teams in identifying and prioritizing key business areas in which AI solutions can deliver benefits. Present proposals to executives and business leaders on a broad range of technology, strategy, standards, and governance for AI. Work on functional design, process design (flow mapping), prototyping, testing, and defining the support model in collaboration with Engineering and business leaders. Articulate and document the solution architecture and lessons learned for each exploration and accelerated incubation. Relevant IT experience: 10+ years of relevant IT experience in the given technology.
Posted 1 day ago
2.0 - 3.0 years
9 - 10 Lacs
Hyderabad
Work from Office
Key Responsibilities: Engage with clients during discovery sessions to understand business needs and identify AI opportunities. Design and propose AI/ML solutions tailored to client use cases (e.g., NLP, computer vision, predictive analytics). Build and present solution architectures, prototypes, and supporting technical documentation. Collaborate with sales, product, and delivery teams to ensure AI project success. Deliver technical demos, proof-of-concepts (POCs), and presentations to CXO-level stakeholders. Draft proposals, respond to RFPs, and create detailed technical documents. Stay updated on AI advancements and identify relevant applications for business use. Required Skills & Experience: 2-3 years of experience in AI/ML solutioning, presales, or technical consulting. Strong knowledge of ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Hands-on experience with Python, APIs, and cloud-based AI services (AWS, Azure, or GCP). Experience building or supporting Generative AI, LLM-based solutions, or chatbot architectures is a strong plus. Excellent communication, presentation, and stakeholder engagement skills. Ability to translate complex business problems into effective AI solutions.
Posted 1 day ago
4.0 - 6.0 years
10 - 20 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled and innovative Data Scientist with a strong focus on Generative AI, NLP, and Large Language Models (LLMs). The ideal candidate will design, develop, and deploy end-to-end data science solutions that harness the power of advanced ML, deep learning, and Gen AI technologies to drive real-world impact. Key Responsibilities: Design and implement scalable data science solutions using Generative AI, NLP, and ML techniques. Develop, fine-tune, and evaluate Large Language Models (LLMs) such as GPT, BERT, and similar architectures. Analyze structured and unstructured data to generate actionable insights for business problems. Collaborate cross-functionally with engineering, product, and business teams to integrate AI models into production systems. Conduct cutting-edge research in Gen AI and deep learning; evaluate and apply recent advancements. Build and maintain robust pipelines for model training, evaluation, monitoring, and deployment. Communicate complex technical findings clearly to technical and non-technical stakeholders. Required Skills & Qualifications: Strong hands-on experience with Generative AI, LLMs (e.g., OpenAI, Hugging Face Transformers, etc.). Proven expertise in core NLP tasks: text classification, summarization, named entity recognition, sentiment analysis, etc. Proficient in Python and related libraries: NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch. Experience developing, validating, and deploying ML models in production environments. Familiarity with vector databases (e.g., FAISS, Pinecone), embeddings, and semantic search. Exposure to cloud platforms like AWS, Azure, or GCP, and MLOps tools/workflows (CI/CD, model monitoring, etc.). Preferred Qualifications: Experience in prompt engineering, Retrieval-Augmented Generation (RAG), or fine-tuning/customizing LLMs. Contributions to AI research papers, open-source projects, or participation in ML competitions (e.g., Kaggle) is a plus. Knowledge of responsible AI practices, model interpretability, and bias mitigation techniques.
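As a hedged illustration of the embeddings and semantic-search stack referenced above, the sketch below indexes a few sentences with FAISS; the encoder model, index type, and corpus are example choices, not the team's actual stack.

```python
# Illustrative semantic search with FAISS: inner-product search over
# normalized embeddings is equivalent to cosine similarity.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "Quarterly churn increased in the retail segment.",
    "The new onboarding flow reduced drop-off by 12 percent.",
    "Fraud alerts spiked after the pricing change.",
]
embeddings = encoder.encode(corpus, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])  # exact inner-product index
index.add(embeddings)

query = encoder.encode(["What happened to customer churn?"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[i]}")
```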
Posted 1 day ago
2.0 - 7.0 years
6 - 16 Lacs
Noida
Hybrid
AI Engineer JD (2+ years) Job Description: We're searching for a hands-on AI engineer who can take modern LLMs from prototype to production, orchestrating multi-agent workflows using libraries such as LlamaIndex Workflows, LangGraph, and structured function calling. You should bring a solid foundation in classical ML (e.g., XGBoost) and deep learning with transformer-based models, especially LLaMA and Qwen-family models, along with experience in Retrieval-Augmented Generation (RAG) pipelines. A track record of building reliable, scalable systems is essential. Responsibilities: Design, build, and deploy ML/DL models for vision (YOLO), tabular (XGBoost), and NLP / GenAI use cases (function calling, RAG). Work on fine-tuning and deploying LLMs using platforms like Hugging Face and PyTorch. Engineer agent-based LLM solutions: compose multi-step, tool-calling workflows with LlamaIndex or LangGraph. Implement structured function calling and dynamic tool selection for Retrieval-Augmented Generation (RAG) pipelines. Orchestrate agent state, memory, and conversation context to solve complex user tasks. Fine-tune and serve LLMs on Hugging Face / PyTorch, including efficient-tuning methods (LoRA, QLoRA, PEFT). Operate on cloud (Azure preferred, AWS acceptable): set up training jobs, GPU/ACI deployments, CI/CD, observability, and cost governance. Collaborate cross-functionally with product, data, and frontend teams to turn fuzzy ideas into measurable impact. Build FastAPI micro-services for low-latency inference, streaming responses, and secure integration with downstream systems. Requirements: Proficiency in Python, with exposure to FastAPI and/or Java. Solid understanding and practical experience in Machine Learning with models like XGBoost. Experience with Deep Learning using YOLO, OCR frameworks, and PyTorch. NLP / GenAI: LLM fine-tuning, prompt engineering, RAG design, Hugging Face Transformers, PEFT, vector databases. Experience implementing agent frameworks (LlamaIndex, LangGraph, LangChain Agents) and function-calling patterns. MLOps: Docker, CI/CD, experiment tracking, model versioning, monitoring, and rollback strategies. Cloud: Azure ML / Azure Functions / AKS (preferred) or AWS SageMaker / Lambda basics. Bonus: experience with Triton / vLLM, streaming websockets, or GPU cost-optimization. Benefits of Working with Us: Best of Both Worlds: Enjoy the enthusiasm and learning curve of a startup combined with the deliveries and performance of an enterprise service provider. Flexible Working Hours: We offer a delivery-oriented approach with flexible working hours to help you maintain a healthy work-life balance. Limitless Growth Opportunities: The sky is not the limit when it comes to learning, growth, and sharing ideas. We encourage continuous learning and personal development. Flat Organizational Structure: We don't follow the typical corporate hierarchy ladder, fostering an open and collaborative work environment where everyone's voice is heard. As part of our dedication to an inclusive and diverse workforce, TechChefz Digital is committed to Equal Employment Opportunity without regard to race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, or religion. If you need assistance, you may contact us at joinus@techchefz.com
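To ground the LoRA/PEFT fine-tuning mentioned in this posting, here is a short, hedged sketch of attaching LoRA adapters to a small LLaMA-style checkpoint with Hugging Face PEFT; the base model and target modules are assumptions for demonstration, not a production choice.

```python
# Hedged sketch: wrap a small causal LM with LoRA adapters via PEFT. The base
# checkpoint and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small LLaMA-style example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA-style blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights should be trainable
```

Training would then typically proceed with a standard Trainer or a supervised fine-tuning loop over the task dataset, after which only the adapter weights need to be stored and loaded (or merged) at serving time.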
Posted 1 day ago
0 years
1 - 3 Lacs
Hyderabad, Telangana
On-site
Hiring: Male Candidates – M.Sc Organic Chemistry (Fresher/Experienced) Work Location: Kukatpally, Hyderabad Position Details: Role: [Specify if it's R&D, QC, or Production – Optional] Qualification: M.Sc Organic Chemistry Experience: Freshers or Experienced Candidates Salary: Freshers: ₹15,000 per month Experienced: Based on Current CTC Requirements: Sound knowledge in Organic Chemistry Willingness to work in pharmaceutical/chemical environment Good communication & learning attitude Male candidates only Interested candidates can Call/WhatsApp: 7396423749 Job Types: Full-time, Permanent, Fresher Pay: ₹15,000.00 - ₹30,000.00 per month Benefits: Health insurance Provident Fund Work Location: In person
Posted 1 day ago
5.0 - 10.0 years
14 - 24 Lacs
Pune, Bengaluru, Greater Noida
Work from Office
Role & responsibilities: Looking for an ML Engineer with 5 to 8 years of experience, strong Azure Cloud and Azure DevOps skills, and strong Databricks Asset Bundles (DABs) implementation knowledge. The role spans three areas: (1) Azure Cloud Engineering, (2) Azure DevOps CI/CD with DABs experience, and (3) ML Engineering for deployment. Translate business requirements into technical solutions. Implement scalable MLOps solutions using AI/ML to reduce the risk of fraud and other financial crime. Create an MLOps architecture and implement it for multiple models in a scalable and automated way. Design and implement end-to-end ML solutions. Operationalize and monitor machine learning models using high-end tools and technologies. Design and implement DevOps principles in machine learning and data science quality assurance and testing. Collaborate with data scientists, engineers, and other key stakeholders. Preferred candidate profile: Azure Cloud Engineering: Design, implement, and manage scalable cloud infrastructure on Microsoft Azure. Ensure high availability, performance, and security of cloud-based applications. Collaborate with cross-functional teams to define and implement cloud solutions. Azure DevOps CI/CD: Develop and maintain CI/CD pipelines using Azure DevOps. Automate deployment processes to ensure efficient and reliable software delivery. Monitor and troubleshoot CI/CD pipelines to ensure smooth operation. DABs (Databricks Asset Bundles) Implementation: Lead the implementation and management of Databricks Asset Bundles. Optimize data workflows and ensure seamless integration with existing systems. Provide expertise in DABs to enhance data processing and analytics capabilities. Machine Learning Deployment: Deploy machine learning models into production environments. Monitor and maintain ML models to ensure optimal performance. Collaborate with data scientists and engineers to integrate ML solutions into applications.
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are looking for an AI/ML Specialist with expertise in clustering algorithms, dimension reduction algorithms, and anomaly detection techniques on large datasets. The ideal candidate will be well-versed in unsupervised learning techniques and have solid proficiency in Python and various ML modules such as PyTorch, TensorFlow, or Scikit-learn. Additionally, the candidate should have extensive knowledge of Snowflake, Databricks, and Azure services like Azure ML and AKS. Knowledge of healthcare and FHIR standards. Primary Responsibilities: Develop and implement clustering algorithms to analyze large datasets and detect anomalies Apply dimension reduction techniques to enhance data processing and model performance Utilize unsupervised learning techniques to uncover patterns and insights from data Select, write, train, and test AI/ML models to ensure optimal accuracy and performance Collaborate with cross-functional teams to integrate AI/ML solutions into existing systems Utilize Python and ML modules (PyTorch, TensorFlow, Scikit-learn) for model development and deployment Leverage Snowflake and Databricks for data warehousing and management Utilize Azure services such as Azure ML and AKS for model deployment and management Continuously monitor and refine models to improve their effectiveness Document processes and results, and present findings to stakeholders Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, or a related field Proven experience in developing and implementing clustering algorithms, dimension reduction techniques, and anomaly detection Experience with Azure services (Azure ML, AKS) Knowledge of Snowflake and Databricks Proficiency in Python and ML modules (PyTorch, TensorFlow, Scikit-learn) Solid understanding of unsupervised learning techniques Proven excellent problem-solving skills and attention to detail Proven ability to work independently and as part of a team. Proven solid communication skills to convey complex technical concepts to non-technical stakeholders At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
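A compact, hedged sketch of the unsupervised workflow this role centers on (dimension reduction, clustering, and anomaly detection with scikit-learn) is shown below, using synthetic data in place of real healthcare records.

```python
# Illustrative only: PCA for dimension reduction, KMeans for clustering, and
# IsolationForest for anomaly scoring, run on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # stand-in for a wide feature table

X_scaled = StandardScaler().fit_transform(X)
X_reduced = PCA(n_components=10, random_state=0).fit_transform(X_scaled)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_reduced)
scores = IsolationForest(random_state=0).fit(X_reduced).decision_function(X_reduced)

print("cluster sizes:", np.bincount(clusters))
print("most anomalous rows:", np.argsort(scores)[:5])  # lower score = more anomalous
```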
Posted 1 day ago
10.0 years
0 Lacs
Kolkata, West Bengal
On-site
Responsibilities: About Lexmark: Founded in 1991 and headquartered in Lexington, Kentucky, Lexmark is recognized as a global leader in print hardware, service, software solutions and security by many of the technology industry’s leading market analyst firms. Lexmark creates cloud-enabled imaging and IoT solutions that help customers in more than 170 countries worldwide quickly realize business outcomes. Lexmark’s digital transformation objectives accelerate business transformation, turning information into insights, data into decisions, and analytics into action. Lexmark India, located in Kolkata, is one of the research and development centers of Lexmark International Inc. The India team works on cutting-edge technologies and domains like cloud, AI/ML, data science, IoT, and cyber security, creating innovative solutions for our customers and helping them minimize the cost and IT burden of providing a secure, reliable, and productive print and imaging environment. At our core, we are a technology company, deeply committed to building our own R&D capabilities, leveraging emerging technologies and partnerships to bring together a library of intellectual property that can add value to our customer's business. Caring for our communities and creating growth opportunities by investing in talent are woven into our culture. It’s how we care, grow, and win together. Job Description/Responsibilities: We are looking for a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. This role requires a strong command over Azure Databricks, Azure Data Lake, Azure Data Factory, data warehouse design, SQL optimization, and AI/ML integration. The Data Architect will design and oversee robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. Qualification: BE/ME/MCA with 10+ years of IT experience. Must-Have Skills / Skill Requirements: Define and drive the overall Azure-based data architecture strategy aligned with enterprise goals. Architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Provide technical leadership on Azure Databricks (Spark, Delta Lake, Notebooks, MLflow, etc.) for large-scale data processing and advanced analytics use cases. Integrate AI/ML models into data pipelines and support the end-to-end ML lifecycle (training, deployment, monitoring). Collaborate with cross-functional teams including data scientists, DevOps engineers, and business analysts. Evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure. Mentor data engineers and junior architects on best practices and architectural standards. Strong experience with data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficient in SQL, Python, and PySpark. Solid understanding of AI/ML workflows and tools. Exposure to Azure DevOps. Excellent communication and stakeholder management skills. How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
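For the Azure Databricks / Delta Lake pipelines this role architects, here is a brief, hedged PySpark sketch of one pipeline step; the table paths and column names are hypothetical placeholders, and on Databricks the SparkSession is provided by the cluster.

```python
# Sketch of a Delta-based aggregation step in PySpark. Paths and columns are
# hypothetical placeholders for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.format("delta").load("/mnt/lake/raw/orders")  # hypothetical path
daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date")
          .agg(F.sum("amount").alias("daily_revenue"))
)
daily_revenue.write.format("delta").mode("overwrite").save("/mnt/lake/curated/daily_revenue")
```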
Posted 1 day ago
4.0 - 6.0 years
7 - 11 Lacs
Chennai
Remote
Location: Remote Timings: Full Time (As per company timings) Notice Period: (Immediate Joiner - Only) Experience: 4 Years Key Responsibilities: Fine-tune and optimize Large Language Models (LLMs) for mid-to-large-scale live production-quality applications. Host and deploy LLMs on custom infrastructure, ensuring high availability and performance. Conduct LLM evaluation following best practices as outlined in the Hugging Face LLM Evaluation Guide. Collaborate with cross-functional teams to design, develop, and implement AI-driven solutions tailored to business needs. Ensure model scalability, security, and compliance with industry standards. Required Skills and Experience : Experience: 4 years of hands-on experience with Generative AI and LLMs. (Total IT experience is not a priority.) Domain Expertise: Prior experience in the fintech or financial services domain is essential. LLM Fine-Tuning: Demonstrated expertise in fine-tuning LLMs for live production environments (academic or PoC projects are not relevant). Infrastructure Management: Experience with hosting and deploying LLMs on custom infrastructure. LLM Evaluation: Proficiency in conducting LLM evaluations using industry-recognized methodologies and frameworks. Technical Skills : Proficiency in Python and relevant AI/ML libraries (e.g., PyTorch, TensorFlow, Hugging Face). Strong understanding of LLM architectures and their optimization techniques. Familiarity with cloud-based or on-premise infrastructure for AI deployments.
Posted 1 day ago
2.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Grade: M1 to M4. Reports to: Lead/Principal Data Scientist. Is a team leader? No. Team Size: -. Role: Data Scientist / Sr Data Scientist. Business: Not Applicable. Department: Analytics. Sub-Department: Not Applicable. Location: Mumbai. Role: As a Data Scientist specializing in building analytical models for the Banks/NBFCs/Insurance industry, your primary responsibility is to utilize machine learning algorithms or statistical models to optimize business processes across the customer life journey. You will be required to build the model pipeline, work on its deployment, and provide the required support until its last leg of adoption. Your role involves collaborating with cross-functional teams to develop predictive models that assess credit risk and/or personalize customer experience. You will also be responsible for identifying the relevant base for targeting (cross-sell/up-sell) and running these campaigns from time to time. Key Responsibilities: 1. Writing Optimized Code: Develop and maintain efficient, scalable code for model pipelines. Implement best practices in coding to ensure performance and resource optimization. 2. Version Control and Lifecycle Configuration: Be familiar with best practices to manage codebase changes with available resources. Maintain and update configuration files to manage the lifecycle of data science models. 3. Extracting Usable Information from Data: Translate business needs into data-driven solutions and actionable insights. Conduct thorough exploratory data analysis to uncover trends and patterns. Create and maintain feature repositories to document and manage features used in models. 4. Building Robust Models: Develop models that are reliable and generalize well across different time periods and customer segments. Continuously monitor and improve model performance to adapt to changing data and business conditions. 5. Publishing Model Outcomes: Communicate model results and insights effectively to stakeholders. Facilitate the adoption of models by ensuring they meet business needs and are user-friendly. Qualifications: Master's/PGDM in Statistics/Economics, or MCA/IIT B.Tech/Bachelor's in Statistics, plus up to 2 years of relevant experience. Role Proficiencies - Must-Have Skills: MS Excel & PowerPoint, SQL, Python, supervised & unsupervised machine learning algorithms, strong communication skills, a collaborative mindset, and a proactive approach to problem solving & stakeholder engagement. Good-to-Have Skills: SAS, AWS S3/SageMaker, version control.
Posted 1 day ago