0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Airtel, we're not just scaling connectivity: we're redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. Behind every experience is an opportunity to make it smarter. We're looking for a Product Manager - AI to drive next-generation intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will leverage AI to power everything from our digital platforms to customer experience. You'll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact, fast.

What You'll Do
- Lead and contribute to AI-powered product strategy: define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms.
- Translate business problems into AI opportunities: partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation.
- Build and scale ML-driven products: define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt.
- Own product execution end-to-end: drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI.

What You Need to Be Successful
- Influential communication: craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike, translating AI into business value.
- Strategic prioritisation: balance business urgency with technical feasibility. You can decide what not to build, and defend those decisions with data and a narrative.
- Systems thinking: you see the big picture and how decisions in one area ripple across the business, the tech stack, and the user experience.
- High ownership and accountability: operate with a founder mindset. You don't wait for direction; you rally teams, remove blockers, handle tough stakeholders, and drive outcomes.
- Adaptability: you thrive in ambiguity and pivot quickly without losing sight of the long-term vision, which is key in fast-moving digital organisations.

Skills You'll Need

AI/ML fundamentals
- Understanding of ML model types: supervised, unsupervised, reinforcement learning
- Common algorithms: linear/logistic regression, decision trees, clustering, neural networks
- Model lifecycle: training, validation, testing, tuning, deployment, monitoring
- Understanding of LLMs, transformers, diffusion models, vector search, etc.
- Familiarity with GenAI product architecture: Retrieval-Augmented Generation (RAG), prompt tuning, fine-tuning
- Awareness of real-time personalisation, recommendation systems, ranking algorithms, etc.

Data fluency
- Understanding of data pipelines
- Working knowledge of SQL and Python for analysis
- Understanding of data annotation, labeling, and versioning
- Ability to define data requirements and assess data readiness

AI product development
- Defining ML problem scope: classification vs. regression vs. ranking vs. generation
- Model evaluation metrics: precision, recall, etc.
- A/B testing and online experimentation for ML-driven experiences

ML infrastructure awareness (at a product level)
- Know what it takes to take a model from prototype to production
- Model deployment techniques: batch vs. real-time inference, APIs, model serving
- Monitoring and drift detection: how to ensure models continue performing over time
- Familiarity with ML platforms/tools: TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc.
- Understanding of the latency, cost, and resource implications of ML choices

AI ethics and safety (we care deeply about our customers, their privacy, and compliance with regulation)
- Bias and fairness in models: how to detect and mitigate them
- Explainability and transparency: their importance for user trust and regulation
- Privacy and security: understanding the implications of sensitive or PII data in AI
- Alignment and guardrails in generative AI systems

Preferred Qualifications
- Experienced machine learning / artificial intelligence PMs
- Experience building 0-to-1 products, scaled platform/ecosystem products, or e-commerce
- Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics
- Master's degree in Business

Why Airtel Digital?
- Massive scale: your products will impact 400M+ users across sectors
- Real-world relevance: solve meaningful problems for our customers, including protecting our customers, spam and fraud prevention, personalised experiences, and connecting homes
- Agility meets ambition: work like a startup with the resources of a telecom giant
- AI that ships: we don't just run experiments; we deploy models and measure real-world outcomes
- Leadership access: collaborate closely with CXOs and gain mentorship from India's top product and tech leaders
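As a concrete illustration of the evaluation metrics named above, precision and recall for a binary classifier can be computed in a few lines of plain Python. This is an illustrative sketch with made-up labels, not code or data from any Airtel system.

```python
# Illustrative sketch: precision and recall for a binary classifier.
# The label lists below are invented example data.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]  # ground-truth labels
y_pred = [1, 0, 0, 1, 1, 1]  # model predictions
p, r = precision_recall(y_true, y_pred)
print(p, r)  # → 0.75 0.75
```

In practice these would come from a library such as scikit-learn; the point is only that precision asks "of what we flagged, how much was right?" while recall asks "of what was there, how much did we catch?"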
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As a SAP FI Consultant at Sarus Inc., you will be responsible for configuring indirect tax per SAP standards. Your core responsibilities will include analyzing clients' requirements, completing customizing/developments, and transporting them to the production environment. You will also coordinate with users to replicate issues in test systems and provide quick resolutions related to indirect tax, GL, AP, and AR. Additionally, you will provide end-user training and OneSource Indirect Tax Determination (Sabrix) configuration, covering VAT, Sales Tax, Use Tax, and GST.

Your role will involve applying SAP FI and Sabrix tax engine functionality to support business needs. You will resolve assigned tickets, provide timely resolutions to customers, and deliver detailed analysis of issues related to SAP and the Sabrix tax engine. You will also provide solutions for issues related to FI and OneSource/Sabrix in support projects, and work on SAP Global Next and OneSource/Sabrix support along with enhancements. Coordinating with team members to find the root cause of problems and queries raised by clients on tax-related issues, and making the relevant adjustments in the tax engine and SAP ERP, will be essential.

SAP configuration within your scope includes tax procedures, tax condition types, tax code configuration, and print program logic design. For Global Next configuration, you will handle flex field mapping, additional settings on Global Next route configuration, freight condition configuration, US AP tax code configuration, and IDT table data for reporting. OneSource configuration tasks include company data setup, Transediter, custom rates, rules and authority setups, and workbench testing.

Sarus Inc. is a preferred tax technology partner to Fortune 500 companies, delivering indirect tax automation and tax support services for over 10 years. We assist global corporations in addressing the complex world of global business and indirect tax automation by selecting and implementing the right tax technology platform. Headquartered in Atlanta, Georgia, USA, with offices in McKinney, Texas; Dubai; and Bangalore, India, we encourage creativity and innovative thinking to design and implement cost-effective solutions that meet client needs.
Posted 2 days ago
10.0 - 12.0 years
0 Lacs
Delhi, India
On-site
Sr/Lead ML Engineer

Placement type (FTE/C/CTH): C/CTH
Duration: 6 months, with extension
Location: Phoenix, AZ; must be onsite 5 days a week
Start date: 2 weeks from the offer
Interview process: one and done
Reason for position: integrating ML into the Grafana observability platform
Team overview: onshore and offshore

Project Description
AI/ML for Observability (AIOps): develop machine learning and deep learning solutions for observability data to enhance IT operations. Implement time series forecasting, anomaly detection, and event correlation models. Integrate LLMs using prompt engineering, fine-tuning, and RAG for incident summarization. Build an MCP client-server architecture for seamless integration with the Grafana ecosystem.

Duties / Day-to-Day Overview

Machine learning and model development
- Design and develop ML/DL models for time series forecasting (e.g., system load, CPU/memory usage), anomaly detection in logs, metrics, or traces, and event classification and correlation to reduce alert noise
- Select, train, and tune models using frameworks like TensorFlow, PyTorch, or scikit-learn
- Evaluate model performance using metrics like precision, recall, F1-score, and AUC

ML pipeline engineering
- Build scalable data pipelines for training and inference (batch or streaming)
- Preprocess large observability datasets from tools like Prometheus, Kafka, or BigQuery
- Deploy models using cloud-native services (e.g., GCP Vertex AI, Azure ML, Docker/Kubernetes)
- Maintain retraining pipelines and monitor for model drift

LLM integration for observability intelligence
- Implement LLM-based workflows for summarizing incidents or logs
- Develop and refine prompts for GPT, LLaMA, or other large language models
- Integrate Retrieval-Augmented Generation (RAG) with vector databases (e.g., FAISS, Pinecone)
- Control latency, hallucinations, and cost in production LLM pipelines

Grafana and MCP ecosystem integration
- Build or extend MCP client/server components for Grafana
- Surface ML model outputs (e.g., anomaly scores, predictions) in observability dashboards
- Collaborate with observability engineers to integrate ML insights into existing monitoring tools

Collaboration and agile delivery
- Participate in daily stand-ups, sprint planning, and retrospectives
- Collaborate with data engineers on pipeline performance and data ingestion, with frontend developers on real-time data visualizations, and with SRE and DevOps teams on alert tuning and feedback loop integration
- Translate model outputs into actionable insights for platform teams

Testing, documentation, and version control
- Write unit, integration, and regression tests for ML code and pipelines
- Maintain documentation on models, data sources, assumptions, and APIs
- Use Git, CI/CD pipelines, and model versioning tools (e.g., MLflow, DVC)

Top Requirements (Must-Haves)
- Design and develop machine learning algorithms and deep learning applications and systems for observability data (AIOps)
- Hands-on experience with time series forecasting/prediction and anomaly detection ML algorithms
- Hands-on experience with event classification and correlation ML algorithms
- Hands-on experience integrating with LLMs via prompting, fine-tuning, and RAG for effective summarization
- Working knowledge of implementing MCP clients and servers for the Grafana ecosystem, or similar exposure

Key Skills
- Programming languages: Python, R
- ML frameworks: TensorFlow, PyTorch, scikit-learn
- Cloud platforms: Google Cloud, Azure
- Front-end frameworks/libraries: experience with frameworks like React, Angular, or Vue.js, and libraries like jQuery
- Design tools: proficiency in design software like Figma, Adobe XD, or Sketch
- Databases: knowledge of database technologies like MySQL, MongoDB, or PostgreSQL
- Server-side languages: familiarity with server-side languages like Python, Node.js, or Java
- Version control: experience with Git and other version control systems
- Testing: knowledge of testing frameworks and methodologies
- Agile development: experience with agile development methodologies
- Communication and collaboration: strong communication and collaboration skills

Experience: Lead, 10 to 12 years (onshore and offshore); developers/engineers, 6 to 8 years
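To make the anomaly-detection duty above concrete, one of the simplest approaches is a rolling z-score over a metric series: flag any point that deviates from the trailing window's mean by more than a threshold number of standard deviations. The series, window, and threshold below are made-up illustrative values, not anything from a production observability pipeline.

```python
# Minimal sketch of metric anomaly detection via a rolling z-score.
# Window size and threshold are illustrative assumptions.
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

cpu = [50, 51, 49, 50, 52, 51, 50, 95, 50, 49]  # toy CPU-usage samples
print(zscore_anomalies(cpu))  # → [7]  (the spike at index 7)
```

Real AIOps systems typically use seasonal decomposition or learned forecasting models rather than a fixed window, but the flag-on-deviation structure is the same.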
Posted 2 days ago
3.0 years
0 Lacs
Haryana, India
On-site
Job Description

About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect, and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States. It started with one ridiculously good idea: to create a different breed of Business Process Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.

Software Engineer - AI Solutions
TaskUs | Full-Time

About the Job
Join our AI Solutions team to build cutting-edge applications powered by Large Language Models and other AI technologies for enterprise clients. You'll work closely with Technical Product Managers to transform client requirements into production-ready AI solutions. This role combines software engineering excellence with practical AI implementation. You'll integrate LLMs into client systems, optimize performance, and deliver scalable solutions across various industries and use cases.

Responsibilities
- Design and implement AI-powered features using LLMs and other machine learning models
- Develop proof-of-concepts and MVPs to demonstrate AI capabilities to stakeholders
- Optimize LLM performance, including prompt engineering, response latency, and cost efficiency
- Implement evaluation frameworks and monitoring systems for deployed AI solutions
- Collaborate with Product Managers to translate client requirements into technical specifications
- Ensure code quality, security, and scalability in production environments
- Document technical implementations and create deployment guides for client teams

Minimum qualifications
- Bachelor's degree in Computer Science or equivalent practical experience
- 3+ years of software engineering experience
- Strong proficiency in Python and modern web frameworks (FastAPI, Flask, Django)
- Experience with REST APIs and microservices architecture
- Understanding of cloud platforms (AWS, Azure, or GCP)
- Excellent problem-solving and debugging skills

Preferred qualifications
- Experience integrating LLM APIs (OpenAI, Anthropic, Google Vertex AI)
- Familiarity with vector databases (Pinecone, Weaviate, ChromaDB)
- Knowledge of prompt engineering and LLM fine-tuning techniques
- Experience with containerization (Docker, Kubernetes)
- Background in client-facing or professional services roles
- Understanding of ML model deployment and monitoring

Work Mode: Onsite
Work Schedule: 8am to 5pm IST

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL: https://www.taskus.com/careers/. TaskUs is proud to be an equal opportunity workplace and an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community.

Req Id: R_2507_10186
Posted At: Tue Jul 29 2025 00:00:00 GMT+0000 (Coordinated Universal Time)
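The "evaluation frameworks" responsibility above can be sketched as a minimal harness that scores a model's answers against expected outputs. `fake_generate` below is a hypothetical stand-in for a real LLM API call (an OpenAI or Anthropic client, say); the cases are invented, and this is only one possible shape for such a harness, not TaskUs's actual tooling.

```python
# Minimal sketch of an LLM evaluation harness: run a set of
# (prompt, expected) cases through a generate function and report
# the exact-match accuracy after normalization.

def evaluate(generate, cases):
    """Return the fraction of cases whose normalized answer matches."""
    hits = 0
    for prompt, expected in cases:
        answer = generate(prompt).strip().lower()
        hits += answer == expected.strip().lower()
    return hits / len(cases)

def fake_generate(prompt):
    # Hypothetical stand-in for a real LLM call.
    canned = {"capital of france?": "Paris", "2+2?": "4"}
    return canned.get(prompt.lower(), "unknown")

cases = [("Capital of France?", "paris"),
         ("2+2?", "4"),
         ("Capital of Peru?", "Lima")]
print(evaluate(fake_generate, cases))  # 2 of 3 cases match
```

Production frameworks add semantic similarity scoring, rubric-based LLM judges, and latency/cost tracking on top of this basic loop.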
Posted 2 days ago
3.0 years
0 Lacs
Haryana, India
On-site
Job Description

About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect, and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States. It started with one ridiculously good idea: to create a different breed of Business Process Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.

Software Engineer - AI Solutions
TaskUs | Full-Time

About the Job
Join our AI Solutions team to build cutting-edge applications powered by Large Language Models and other AI technologies for enterprise clients. You'll work closely with Technical Product Managers to transform client requirements into production-ready AI solutions. This role combines software engineering excellence with practical AI implementation. You'll integrate LLMs into client systems, optimize performance, and deliver scalable solutions across various industries and use cases.

Responsibilities
- Design and implement AI-powered features using LLMs and other machine learning models
- Develop proof-of-concepts and MVPs to demonstrate AI capabilities to stakeholders
- Optimize LLM performance, including prompt engineering, response latency, and cost efficiency
- Implement evaluation frameworks and monitoring systems for deployed AI solutions
- Collaborate with Product Managers to translate client requirements into technical specifications
- Ensure code quality, security, and scalability in production environments
- Document technical implementations and create deployment guides for client teams

Minimum qualifications
- Bachelor's degree in Computer Science or equivalent practical experience
- 3+ years of software engineering experience
- Strong proficiency in Python and modern web frameworks (FastAPI, Flask, Django)
- Experience with REST APIs and microservices architecture
- Understanding of cloud platforms (AWS, Azure, or GCP)
- Excellent problem-solving and debugging skills

Preferred qualifications
- Experience integrating LLM APIs (OpenAI, Anthropic, Google Vertex AI)
- Familiarity with vector databases (Pinecone, Weaviate, ChromaDB)
- Knowledge of prompt engineering and LLM fine-tuning techniques
- Experience with containerization (Docker, Kubernetes)
- Background in client-facing or professional services roles
- Understanding of ML model deployment and monitoring

Work Mode: Onsite
Work Schedule: 8am to 5pm IST

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL: https://www.taskus.com/careers/. TaskUs is proud to be an equal opportunity workplace and an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community.

Req Id: R_2507_10183
Posted At: Tue Jul 29 2025 00:00:00 GMT+0000 (Coordinated Universal Time)
Posted 2 days ago
7.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue, and can interpret and design a resolution based on deep product knowledge.
Must-have skills: Python (Programming Language)
Good-to-have skills: Generative AI
Minimum 7.5 Years of Experience Is Required
Educational Qualification: 15 years of full-time education

Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.

Roles & Responsibilities:
- Design, test, and optimize prompts for LLMs to support use cases that benefit the infrastructure and application managed services.
- Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
- Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimization, and schema transformation.
- Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
- Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
- Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
- Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.

Professional & Technical Skills:
- Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
- Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
- Familiarity with data modeling, SQL, and RDBMS concepts.
- Experience with agentic workflows, token optimization, and schema chunking.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- 15 years of full-time education is required.
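The "chunking logic" mentioned in the responsibilities above usually means splitting long documents into overlapping pieces that fit within an LLM's context window. A minimal sketch follows; the chunk size and overlap are arbitrary example values (real pipelines often chunk by tokens rather than characters).

```python
# Illustrative sketch of document chunking with overlap, as used
# before sending long text to an LLM. Sizes are example values.

def chunk_text(text, chunk_size=100, overlap=20):
    """Split `text` into chunks of at most `chunk_size` characters,
    each overlapping the previous chunk by `overlap` characters so
    that no sentence is lost at a boundary."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "x" * 250  # stand-in for a long document
print([len(p) for p in chunk_text(doc)])  # → [100, 100, 90]
```

The overlap is the key design choice: it trades a little token cost for the guarantee that content straddling a boundary appears intact in at least one chunk.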
Posted 2 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue, and can interpret and design a resolution based on deep product knowledge.
Must-have skills: Python (Programming Language)
Good-to-have skills: Generative AI
Minimum 5 Years of Experience Is Required
Educational Qualification: 15 years of full-time education

Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.

Roles & Responsibilities:
- Design, test, and optimize prompts for LLMs to support use cases that benefit the infrastructure and application managed services.
- Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
- Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimization, and schema transformation.
- Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
- Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
- Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
- Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.

Professional & Technical Skills:
- Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
- Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
- Familiarity with data modeling, SQL, and RDBMS concepts.
- Experience with agentic workflows, token optimization, and schema chunking.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- 15 years of full-time education is required.
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Who we are
We are Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development, from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world. The role is open to candidates based in Gurugram, India.

About the role
As a Senior Software Engineer at Fluxon, you'll have the opportunity to bring products to market while learning, contributing, and growing with our team. You'll be responsible for:
- Driving end-to-end implementations all the way to the user, collaborating with your team to build and iterate in a dynamic environment
- Engaging directly with clients to understand business goals, give demos, and debug production issues
- Informing product requirements and identifying appropriate technical designs in partnership with our Product and Design teams
- Proactively communicating progress and challenges in your work, and seeking help when you need it
- Performing code reviews and cross-feature validations
- Providing mentorship in your areas of expertise

You'll work with a diversity of technologies, including:
- Languages: TypeScript/JavaScript, Java, .NET, Python, Golang, Rust, Ruby on Rails, Kotlin, Swift
- Frameworks: Next.js, React, Angular, Spring, Expo, FastAPI, Django, SwiftUI
- Cloud service providers: Google Cloud Platform, Amazon Web Services, Microsoft Azure
- Cloud services: Compute Engine, AWS Amplify, Fargate, Cloud Run, Apache Kafka, SQS, GCP CMS, S3, GCS
- Technologies: AI/ML, LLMs, crypto, SPAs, mobile apps, architecture redesign, Google Gemini, OpenAI ChatGPT, Vertex AI, Anthropic Claude, Hugging Face
- Databases: Firestore (Firebase), PostgreSQL, MariaDB, BigQuery, Supabase, Redis, Memcache

Qualifications
- 3+ years of industry experience in software development
- Experienced with the full product lifecycle, including CI/CD, testing, release management, deployment, monitoring, and incident response
- Fluent in software design patterns, scalable system architectures, tooling, and the fundamentals of data structures and algorithms

What we offer
- Exposure to high-profile SV startups and enterprise companies
- Competitive salary
- Fully remote work with flexible hours
- Flexible paid time off
- Profit-sharing program
- Healthcare
- Parental leave, including adoption and fostering
- Gym membership and tuition reimbursement
- Hands-on career development
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy AiSensy is a WhatsApp based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp. Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing 400 Crores+ WhatsApp Messages exchanged between Businesses and Users via AiSensy per year Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more High Impact as Businesses drive 25-80% Revenues using AiSensy Platform Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors Role Overview We are looking for a Senior Machine Learning Engineer to lead the development and deployment of cutting-edge AI/ML systems with a strong focus on LLMs, Retrieval-Augmented Generation (RAG), AI agents , and intelligent automation. You will work closely with cross-functional teams to translate business needs into AI solutions, bringing your expertise in building scalable ML infrastructure, deploying models in production, and staying at the forefront of AI innovation. Key Responsibilities AI & ML System Development Design, develop, and optimize end-to-end ML models using LLMs, transformer architectures, and custom RAG frameworks. Fine-tune and evaluate generative and NLP models for business-specific applications such as conversational flows, auto-replies, and intelligent routing. Lead prompt engineering and build autonomous AI agents capable of executing multi-step reasoning. Infrastructure, Deployment & MLOps Architect and automate scalable training, validation, and deployment pipelines using tools like MLflow, SageMaker, or Vertex AI. Integrate ML models with APIs, databases (vector/graph), and production services ensuring performance, reliability, and traceability. 
Monitor model performance in real time, and implement A/B testing, drift detection, and re-training pipelines.
Data & Feature Engineering
Work with structured, unstructured, and semi-structured data (text, embeddings, chat history).
Build and manage vector databases (e.g., Pinecone, Weaviate) and graph-based retrieval systems.
Ensure high-quality data ingestion, feature pipelines, and scalable pre-processing workflows.
Team Collaboration & Technical Leadership
Collaborate with product managers, software engineers, and stakeholders to align AI roadmaps with product goals.
Mentor junior engineers and establish best practices in experimentation, reproducibility, and deployment.
Stay updated on the latest in AI/ML (LLMs, diffusion models, multi-modal learning), and drive innovation in applied use cases.
Required Qualifications
Bachelor's/Master's degree in Computer Science, Engineering, AI/ML, or a related field from a Tier 1 institution (IIT, NIT, IIIT or equivalent).
2+ years of experience building and deploying machine learning models in production.
Expertise in Python and frameworks such as TensorFlow, PyTorch, Hugging Face, and Scikit-learn.
Hands-on experience with transformer models, LLMs, LangChain, LangGraph, OpenAI API, or similar.
Deep knowledge of machine learning algorithms, model evaluation, hyperparameter tuning, and optimization.
Experience working with cloud platforms (AWS, GCP, or Azure) and MLOps tools (MLflow, Airflow, Kubernetes).
Strong understanding of SQL, data engineering concepts, and working with large-scale datasets.
Preferred Qualifications
Experience with prompt tuning, agentic AI systems, or multi-modal learning.
Familiarity with vector search systems (e.g., Pinecone, FAISS, Milvus) and knowledge graphs.
Contributions to open-source AI/ML projects or publications in AI journals/conferences.
Experience in building conversational AI or smart assistants using WhatsApp or similar messaging APIs.
Why Join AiSensy?
Build AI that directly powers growth for 100,000+ businesses. Work on cutting-edge technologies like LLMs, RAG, and AI agents in real production environments. High ownership, fast iterations, and impact-focused work. Ready to build intelligent systems that redefine communication? Apply now and join the AI revolution at AiSensy .
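The RAG stack this role centres on — embed documents, retrieve the closest matches from a vector store, and ground the LLM prompt in them — can be sketched in miniature. A hypothetical pure-Python sketch, using a toy bag-of-words similarity as a stand-in for real embeddings and for stores like Pinecone or Weaviate:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production systems use model embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM call in retrieved context (the "A" part of RAG).
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "WhatsApp templates must be approved before broadcast.",
    "Refunds are processed within 5 business days.",
    "Broadcast campaigns support up to 100000 recipients.",
]
print(build_prompt("How long do refunds take?", docs))
```

The assembled prompt would then be sent to the model; swapping the toy `embed` for a real embedding API and the list for a vector database keeps the same control flow.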
Posted 2 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This role is for one of Weekday's clients
Min Experience: 4 years
Location: Chennai
JobType: full-time
Requirements
Responsibilities:
Design, develop, and implement AI/ML models and algorithms
Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions
Write clean, efficient, and well-documented code
Collaborate with data engineers to ensure data quality and availability for model training and evaluation
Work closely with senior team members to understand project requirements and contribute to technical solutions
Troubleshoot and debug AI/ML models and applications
Stay up-to-date with the latest advancements in AI/ML
Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models
Develop and deploy AI solutions on Google Cloud Platform (GCP)
Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy
Utilize Vertex AI for model training, deployment, and management
Integrate and leverage Google Gemini for specific AI functionalities
Qualifications:
Bachelor's degree in Computer Science, Artificial Intelligence, or a related field
3+ years of experience in developing and implementing AI/ML models
Strong programming skills in Python
Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn
Good understanding of machine learning concepts and techniques
Ability to work independently and as part of a team
Strong problem-solving skills
Good communication skills
Experience with Google Cloud Platform (GCP) is preferred
Familiarity with Vertex AI is a plus
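The preprocessing and feature-engineering work described above boils down to transforms such as standardization and one-hot encoding. A minimal illustrative sketch — stdlib only for self-containment, though the role would do this with Pandas/NumPy:

```python
from statistics import mean, pstdev

def zscore(values):
    # Standardize a numeric feature to zero mean and unit variance.
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values] if sigma else [0.0] * len(values)

def one_hot(values):
    # Encode a categorical feature as 0/1 indicator columns,
    # one column per category in sorted order.
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

ages = [20, 30, 40]
cities = ["chennai", "pune", "chennai"]
print(zscore(ages))     # centred, scaled copy of the age column
print(one_hot(cities))  # indicator columns: [chennai, pune]
```

In Pandas the same transforms are one-liners (`(s - s.mean()) / s.std()`, `pd.get_dummies`), but the arithmetic is exactly this.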
Posted 2 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Spyne
At Spyne, we’re reimagining how cars are bought and sold globally with cutting-edge Generative AI. What started as a bold idea — using AI to help auto dealers sell faster online — has evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A from Accel, Vertex Ventures, and other top investors, we’re shaping the future of car retail:
AI-powered image, video, and 360° solutions for automotive dealers
GenAI Retail Suite for Inventory, Marketing, and CRM across global markets
1,500+ dealers onboarded across the US, EU, and key markets
Team of 150+ passionate individuals, equally split across R&D and GTM
Learn more about our products: Spyne AI Products - StudioAI, RetailAI
Series A Announcement - CNBC-TV18, Yourstory
More about us - ASOTU
Location: Gurugram | On-site | Full-time
Role Overview
We’re looking for a hands-on Lead – Demand Generation to own the growth funnel — from acquiring quality traffic to converting Marketing Qualified Leads (MQLs) into revenue. You will be responsible for planning, executing, and optimizing demand generation campaigns across organic, paid, and account-based marketing efforts. If you have deep experience in SEO, CRO, and funnel management, and are passionate about driving revenue impact, we’d love to meet you!
What Will You Do?
⚡️ SEO & Organic Growth Lead end‑to‑end SEO strategy — site audits, keyword research, backlink planning, and site optimization Collaborate closely with Product, Marketing, and Engineering to implement and measure impact Stay updated with search engine trends , best practices, and algorithm changes 📈 Conversion Rate Optimization (CRO) Build, test, and optimize landing pages and conversion flows for higher traffic‑to‑lead conversion Lead A/B testing, user experience reviews, and site optimization efforts Analyze site behavior and implement recommendations for better conversion metrics 🎯 Demand Generation & Funnel Management Plan, launch, and manage multi‑channel demand generation campaigns across paid and organic platforms Measure campaign effectiveness across metrics like CPL, CAC, MQL‑to‑Revenue conversion, and pipeline velocity Lead nurturing efforts to drive high‑quality MQLs and enable seamless conversion to revenue 💡 Analytics & Optimization Maintain and manage dashboards for campaign performance across traffic, lead, MQL, and revenue metrics Identify trends, bottlenecks, and growth opportunities within the marketing funnel Leverage marketing automation platforms (HubSpot, Marketo, or equivalent) for nurturing and conversion optimization What We’re Looking For 5–8 years experience in Demand Generation roles, with a focus on SEO, CRO, and MQL‑to‑Revenue funnel optimization Strong knowledge of SEO best practices , site analytics, and conversion optimization techniques Hands‑on experience managing platforms like Google Analytics, SEMrush/Ahrefs, HubSpot, Marketo, Salesforce , or similar A data‑driven approach with a proven track record of optimizing marketing funnel metrics Strong communication and stakeholder management abilities across Product, Sales, and Marketing teams An agile mindset and comfort working in a high‑growth, fast‑paced SaaS environment Why Join Spyne? 
🚀 High‑Growth Company – We’ve scaled revenues 5X in 15 months and are poised for another 3–4X growth in the coming year 👥 High Ownership Culture – Autonomy, accountability, and room to make an impact every day 💻 Best‑in‑Class Tools – Laptop of your choice + access to premium marketing platforms and analytics tools 🌍 Global Impact – Build marketing strategies that drive results across the US, EU, and beyond 🥇 Learning Culture – We hire sharp minds and foster a space for them to learn, experiment, and evolve 📈 Exponential Growth – Join at the best time and scale your role, expertise, and career 10X with the company If you’re passionate about scaling growth , optimizing conversion , and making a global impact, we’d love to have you on the journey!
Posted 2 days ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We seek an experienced Principal Data Scientist to lead our data science team and drive innovation in machine learning, advanced analytics, and Generative AI. This role blends strategic leadership with deep technical expertise across ML engineering, LLMs, deep learning, and multi-agent systems. You will be at the forefront of deploying AI-driven solutions, including agentic frameworks and LLM orchestration, to tackle complex, real-world problems at scale.
Primary Stack:
Languages: Python, SQL
Cloud Platforms: AWS or GCP preferred
ML & Deep Learning: PyTorch, TensorFlow, Scikit-learn
GenAI & LLM Toolkits: Hugging Face, LangChain, OpenAI APIs, Cohere, Anthropic
Agentic & Orchestration Frameworks: LangGraph, CrewAI, Agno, Autogen, AutoGPT
Vector Stores & Retrieval: FAISS, Pinecone, Weaviate
MLOps & Deployment: MLflow, SageMaker, Vertex AI, Kubeflow, Docker, Kubernetes, FastAPI
Key Responsibilities:
Lead and mentor a team of 10+ data scientists and ML engineers, promoting a culture of innovation, ownership, and cross-functional collaboration.
Drive the development, deployment, and scaling of advanced machine learning, deep learning, and GenAI applications across the business.
Build and implement agentic architectures and multi-agent systems using tools like LangGraph, CrewAI, and Agno to solve dynamic workflows and enhance LLM reasoning capabilities.
Architect intelligent agents capable of autonomous planning, decision-making, tool use, and collaboration.
Leverage LLMs and transformer-based models to power solutions in NLP, conversational AI, information retrieval, and decision support.
Develop and scale ML pipelines on cloud platforms, ensuring performance, reliability, and reproducibility.
Establish and maintain MLOps processes (CI/CD for ML, monitoring, governance) and ensure best practices in responsible AI.
Collaborate with product, engineering, and business teams to align AI initiatives with strategic goals.
Stay ahead of the curve on AI/ML trends, particularly in the multi-agent and agentic systems landscape, and advocate for their responsible adoption. Present results, insights, and roadmaps to senior leadership and non-technical stakeholders in a clear, concise manner. Qualifications: 9+ years of experience in data science, business analytics, or ML engineering, with 3+ years in a leadership or principal role. Demonstrated experience in architecting and deploying LLM-based solutions in production environments. Deep understanding of deep learning, transformers, and modern NLP. Proven hands-on experience building multi-agent systems using LangGraph, CrewAI, Agno, or related tools. Strong grasp of agent design principles, including memory management, planning, tool selection, and self-reflection loops. Expertise in cloud-based ML platforms (e.g., AWS SageMaker, GCP Vertex AI) and MLOps best practices. Familiarity with retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone). Excellent communication, stakeholder engagement, and cross-functional leadership skills.
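Drift monitoring, one of the MLOps duties listed above, is commonly done with the Population Stability Index (PSI) over model-score distributions. A minimal sketch — the bin count and the conventional 0.2 alert threshold are illustrative choices, not part of this posting:

```python
import math

def psi(expected, actual, bins=4):
    # Population Stability Index between a reference (training) sample
    # and live traffic. Rule of thumb: PSI > 0.2 suggests meaningful drift.
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def freqs(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)

    e, a = freqs(expected), freqs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(psi(train_scores, live_scores))  # identical distributions: no drift
```

A retraining pipeline would run this comparison on a schedule and trigger alerts or retraining jobs when the index crosses the chosen threshold.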
Posted 2 days ago
7.0 years
0 Lacs
Hyderabad
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency. 
Responsibilities
Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
Build frameworks to support deployment, configuration, and management across diverse cloud environments
Develop and manage service catalog components, ensuring integration with platforms like Backstage
Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
Prepare and label data for generative AI models, ensuring scalability and integrity
Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
Evaluate AI model performance and ensure continuous optimization over time
Develop and maintain MLOps pipelines to monitor and mitigate model decay
Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
Research and recommend new tools and best practices to enhance operational efficiency
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
7+ years of experience in cloud infrastructure automation, scripting, and DevOps
Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
Demonstrated background in developing and deploying AI models such as RAG or transformers
Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
Competency in preparing and labeling datasets for AI models and optimizing data inputs
Familiarity with cloud platforms including AWS, Google Cloud, or Azure
Capability to implement MLOps pipelines and monitor AI system performance
Nice to have
Knowledge of agentic architectures such as ReAct and flow engineering techniques
Background in using Bedrock Agents or LangGraph for workflow creation
Understanding of integrating generative AI into legacy or complex operational systems
We offer
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn learning solutions
Possibility to relocate to any EPAM office for short and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
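The agentic workflows mentioned above (LangGraph, Bedrock Agents) reduce, at their core, to a routed loop over nodes that read and update shared state until a terminal node is reached. A toy AIOps-flavoured sketch with hypothetical node names — real frameworks add LLM-driven planning, persistence, and branching graphs on top of this pattern:

```python
# Each node inspects/updates shared state and returns the next node name,
# or None when the workflow is finished.

def check_disk(state):
    state["disk_ok"] = state["disk_used_pct"] < 90
    return "report" if state["disk_ok"] else "remediate"

def remediate(state):
    state["disk_used_pct"] = 40  # pretend we cleared old logs
    state["remediated"] = True
    return "report"

def report(state):
    state["summary"] = (
        f"disk at {state['disk_used_pct']}%, "
        f"remediated={state.get('remediated', False)}"
    )
    return None  # terminal node

NODES = {"check_disk": check_disk, "remediate": remediate, "report": report}

def run(state, start="check_disk", max_steps=10):
    node = start
    for _ in range(max_steps):  # hard stop so a bad route cannot loop forever
        node = NODES[node](state)
        if node is None:
            return state
    raise RuntimeError("agent did not terminate")

print(run({"disk_used_pct": 95})["summary"])
```

The `max_steps` guard mirrors the recursion limits real agent frameworks impose for the same reason.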
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
Hyderabad
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven 
automation
Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows
Collaborate with cross-functional teams to design and refine automation and AI-driven processes
Research emerging tools and technologies to enhance operational efficiency and scalability
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
Expertise in Python and generative AI frameworks like RAG and agent-based workflows
Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions
Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments
Nice to have
Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools
Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
We offer
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn learning solutions
Possibility to relocate to any EPAM office for short and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 2 days ago
7.0 years
0 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency. 
Responsibilities
Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
Build frameworks to support deployment, configuration, and management across diverse cloud environments
Develop and manage service catalog components, ensuring integration with platforms like Backstage
Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
Prepare and label data for generative AI models, ensuring scalability and integrity
Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
Evaluate AI model performance and ensure continuous optimization over time
Develop and maintain MLOps pipelines to monitor and mitigate model decay
Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
Research and recommend new tools and best practices to enhance operational efficiency
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
7+ years of experience in cloud infrastructure automation, scripting, and DevOps
Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
Demonstrated background in developing and deploying AI models such as RAG or transformers
Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
Competency in preparing and labeling datasets for AI models and optimizing data inputs
Familiarity with cloud platforms including AWS, Google Cloud, or Azure
Capability to implement MLOps pipelines and monitor AI system performance
Nice to have
Knowledge of agentic architectures such as ReAct and flow engineering techniques
Background in using Bedrock Agents or LangGraph for workflow creation
Understanding of integrating generative AI into legacy or complex operational systems
We offer
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn learning solutions
Possibility to relocate to any EPAM office for short and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven 
automation
Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows
Collaborate with cross-functional teams to design and refine automation and AI-driven processes
Research emerging tools and technologies to enhance operational efficiency and scalability
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
Expertise in Python and generative AI frameworks like RAG and agent-based workflows
Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions
Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments
Nice to have
Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools
Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
We offer
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn learning solutions
Possibility to relocate to any EPAM office for short and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled and hands-on AI Architect to lead the design and deployment of next-generation AI systems for our cutting-edge platform. You will be responsible for architecting scalable GenAI and machine learning solutions, establishing MLOps best practices, and ensuring robust security and cost-efficient operations across our AI-powered modules.
Primary Skills:
• System architecture for GenAI: design scalable pipelines using LLMs, RAG, and multi-agent orchestration (LangGraph, CrewAI, AutoGen).
• Machine-learning engineering: PyTorch or TensorFlow, Hugging Face Transformers.
• Retrieval & vector search: FAISS, Weaviate, Pinecone, pgvector; embedding selection and index tuning.
• Cloud infra: AWS production experience (GPU instances, Bedrock / Vertex AI, EKS, IAM, KMS).
• MLOps & DevOps: MLflow / Kubeflow, Docker + Kubernetes, CI/CD, Terraform.
• Security & compliance: data encryption, RBAC, PII redaction in LLM prompts.
• Cost & performance optimisation: token-usage budgeting, caching, model routing.
• Stakeholder communication: ability to defend architectural decisions to CTO, product, and investors.
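Token-usage budgeting and caching, listed under cost optimisation above, can be combined in a few lines: identical prompts are answered from a cache, and uncached calls draw down a budget before reaching the model. Everything below is a placeholder sketch — the budget figure, the 4-characters-per-token estimate, and the stubbed model call are illustrative, not provider figures:

```python
import hashlib
from functools import lru_cache

BUDGET = {"tokens_left": 1000}  # illustrative per-period allowance

def estimate_tokens(prompt: str) -> int:
    # Crude heuristic; real systems use the provider's tokenizer.
    return max(1, len(prompt) // 4)

@lru_cache(maxsize=256)
def cached_completion(prompt: str) -> str:
    # Only uncached prompts reach this body, so only they spend budget.
    cost = estimate_tokens(prompt)
    if BUDGET["tokens_left"] < cost:
        raise RuntimeError("token budget exhausted")
    BUDGET["tokens_left"] -= cost
    # Stub standing in for a real model call (Bedrock, OpenAI, ...).
    return "echo:" + hashlib.sha256(prompt.encode()).hexdigest()[:8]

a = cached_completion("summarise ticket 123")
b = cached_completion("summarise ticket 123")  # cache hit: no extra spend
assert a == b
```

Model routing extends the same idea: a dispatcher sends cheap, common prompts to a small model and reserves the large one for requests that need it.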
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position:
We are conducting an in-person hiring drive for the position of MLOps / Data Science in Pune & Bengaluru on 2nd August 2025. Interview locations are mentioned below:
Pune – Persistent Systems, Veda Complex, Rigveda-Yajurveda-Samaveda-Atharvaveda Plot No. 39, Phase I, Rajiv Gandhi Information Technology Park, Hinjawadi, Pune, 411057
Bangalore – Persistent Systems, The Cube at Karle Town Center Rd, DadaMastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024
We are looking for an experienced and talented Data Scientist to join our growing data competency team. The ideal candidate will have a strong background in working with GenAI, ML, LangChain, LangGraph, MLOps architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.
Role: MLOps, Data Science
Job Location: All PSL Locations
Experience: 5+ Years
Job Type: Full Time Employment
What You'll Do:
Design, build, and manage scalable ML model deployment pipelines (CI/CD for ML).
Automate model training, validation, monitoring, and retraining workflows.
Implement model governance, versioning, and reproducibility best practices.
Collaborate with data scientists, engineers, and product teams to operationalize ML solutions.
Ensure robust monitoring and performance tuning of deployed models.
Expertise You'll Bring:
Strong experience with MLOps tools & frameworks (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
Proficiency in containerization (Docker, Kubernetes).
Good knowledge of cloud platforms (AWS, Azure, or GCP).
Expertise in Python and familiarity with ML libraries (TensorFlow, PyTorch, scikit-learn).
Solid understanding of CI/CD, infrastructure as code, and automation tools.
Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
Posted 3 days ago
0 years
0 Lacs
Kozhikode, Kerala, India
On-site
Pfactorial Technologies is a fast-growing AI/ML/NLP company at the forefront of innovation in Generative AI, voice technology, and intelligent automation. We specialize in building next-gen solutions using LLMs, agent frameworks, and custom ML pipelines. Join our dynamic team to work on real-world challenges and shape the future of AI-driven systems and smart automation.
We are looking for an AI/ML Engineer – LLMs, Voice Agents & Workflow Automation (0–3 years' experience):
Experience with LLM integration pipelines (OpenAI, Vertex AI, Hugging Face models)
Hands-on experience working with voice agents, TTS, STT, caching mechanisms, and ElevenLabs voice technology
Strong understanding of vector databases like Qdrant or Milvus
Hands-on experience with LangChain, LlamaIndex, or agent frameworks (e.g., AutoGen, CrewAI)
Knowledge of FastAPI, Celery, and orchestration of ML/AI services
Familiarity with cloud deployment on GCP, AWS, or Azure
Ability to build and fine-tune matching, ranking, or retrieval-based models
Developing agentic workflows for automation
Implementing NLP pipelines for parsing, summarizing, and communication (e.g., email bots, script generators)
Comfortable working with graph-based data representation and integrating with the frontend
Experience in multi-agent collaboration frameworks like Google Agent2Agent
Practical experience in data scraping and enrichment for ML training datasets
Understanding of compliance in AI applications
👉 For more updates, follow us on our LinkedIn page! https://in.linkedin.com/company/pfactorial
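The vector-database retrieval this listing asks for reduces to nearest-neighbour search over embeddings. A minimal sketch in pure Python — the 3-dimensional vectors are toy stand-ins for real embeddings, and engines like Qdrant or Milvus add approximate indexing, filtering, and persistence on top of exactly this scoring:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, corpus, k=2):
    """Return the k corpus documents most similar to the query vector."""
    scored = [(cosine_similarity(query_vec, vec), doc) for doc, vec in corpus]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

# Toy 3-d "embeddings" — a real system would use a sentence-embedding model.
corpus = [
    ("reset your password", [0.9, 0.1, 0.0]),
    ("update billing info", [0.1, 0.9, 0.1]),
    ("change account password", [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], corpus))
# the two password-related documents rank first
```

In production the brute-force scan in `top_k` is replaced by an approximate index (HNSW or similar), which is what makes vector databases scale past a few million documents.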
Posted 3 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities
3+ years of experience implementing analytical solutions using Palantir Foundry, preferably with PySpark and hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
Team management: Must have experience mentoring and managing large teams (20 to 30 people) for complex engineering programs, and experience hiring and nurturing talent in Palantir Foundry.
Training: Should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus. Palantir AIP experience is a plus.
Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
Hands-on experience working and building on Ontology (especially demonstrable experience building semantic relationships).
Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary. Experience in MLOps is a plus.
Experience developing and managing scalable architecture, and working experience managing large data sets.
Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
Experience developing GenAI applications is a plus.
Mandatory Skill Sets
At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
At least 3 years of experience with Foundry services.
Preferred Skill Sets
Palantir Foundry
Years of Experience Required
4 to 7 years (3+ years relevant)
Education Qualification
Bachelor's degree in computer science, data science, or any other engineering discipline. Master’s degree is a plus.
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 3 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
" Responsibilities Position responsibilities and expectations Designing and building analytical /DL/ ML algorithms using Python, R and other statistical tools. Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, Power Point, Excel etc. Ability to learn new scripting language or analytics platform. Technical Skills required (must have) HandsOn Exposure to Generative AI (Design, development of GenAI application in production) Strong understanding of RAG, Vector Database, Lang Chain and multimodal AI applications. Strong understanding of deploying and optimizing AI application in production. Strong knowledge of statistical and data mining techniques like Linear & Logistic Regression analysis, Decision trees, Bagging, Boosting, Time Series and Non-parametric analysis. Strong knowledge of DL & Neural Network Architectures (CNN, RNN, LSTM, Transformers etc.) Strong knowledge of SQL and R/Python and experience with distribute data/computing tools/IDEs. Experience in advanced Text Analytics (NLP, NLU, NLG). Strong hands-on experience of end-to-end statistical model development and implementation Understanding of LLMOps, ML Ops for scalable ML development. Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow etc.). Expert level proficiency algorithm building languages like SQL, R and Python and data visualization tools like Shiny, Qlik, Power BI etc. Exposure to Cloud Platform (Azure or AWS or GCP) technologies and services like Azure AI/ Sage maker/Vertex AI, Auto ML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling etc. 
Technical skills required (any one or more)
Experience in video/image analytics (computer vision)
Experience in IoT/machine-logs data analysis
Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx, or KNIME
Expertise in cloud analytics platforms (Azure, AWS, or Google)
Experience in process mining, with expertise in Celonis or other tools
Proven capability in using Generative AI services like OpenAI and Google (Gemini)
Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.)
Understanding of fine-tuning for pre-trained models like GPT, LLaMA, and Claude using LoRA, QLoRA, and PEFT techniques
Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion
Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3–6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc.
(Stats / Maths) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Doctor of Philosophy, Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Chatbots, Data Structures, Generative AI Optional Skills Accepting Feedback, Active Listening, AI Implementation, Analytical Thinking, C++ Programming Language, Communication, Complex Data Analysis, Creativity, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Embracing Change, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Learning Agility, Machine Learning {+ 25 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
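The RAG pattern this listing asks for can be sketched minimally: retrieve the most relevant documents, build an augmented prompt, then generate. This is pure Python; `fake_llm` is a hypothetical stand-in for a real model call (OpenAI, Vertex AI, etc.), and word-overlap scoring stands in for embedding similarity:

```python
def score(query, doc):
    """Word-overlap relevance — a toy stand-in for embedding similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=2):
    """Pick the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, context_docs):
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def fake_llm(prompt):
    # Hypothetical stand-in for an actual LLM API call.
    return "[model answer grounded in the supplied context]"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests require the original invoice.",
]
prompt = build_prompt("How long do refunds take?",
                      retrieve("How long do refunds take", docs))
print(fake_llm(prompt))
```

The key design point of RAG is that grounding happens in `build_prompt`: the model is constrained to the retrieved context rather than its parametric memory, which is what makes answers auditable.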
Posted 3 days ago
10.0 years
0 Lacs
Chandigarh, India
On-site
Job Description: 7–10 years of industry experience, with at least 5 years in machine learning roles. Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn. Experience with distributed training, model optimization (quantization, pruning), and inference at scale. Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML. Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam. Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay). Excellent problem-solving, communication, and documentation skills.
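Post-deployment monitoring for data drift, which the listing above calls out, is often done with a population stability index (PSI) comparing training and live feature distributions. A minimal sketch in pure Python — the 0.2 alert threshold is a common rule of thumb, not a universal standard, and real tooling typically uses quantile bins rather than the equal-width bins used here:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.

    Buckets both samples into equal-width bins derived from the expected
    (training) sample, then compares bucket proportions.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clamp live values outside the training range
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

train = [i / 100 for i in range(100)]               # uniform on [0, 1)
live_same = [i / 100 for i in range(100)]           # no drift
live_shifted = [0.5 + i / 200 for i in range(100)]  # mass shifted to the upper half

print(round(psi(train, live_same), 4))   # ~0.0 → stable
print(psi(train, live_shifted) > 0.2)    # True → drift alert (rule-of-thumb threshold)
```

In a real pipeline this check would run on a schedule against production feature logs, with a PSI above threshold triggering retraining or an alert rather than a print statement.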
Posted 3 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
" Responsibilities Position responsibilities and expectations Designing and building analytical /DL/ ML algorithms using Python, R and other statistical tools. Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, Power Point, Excel etc. Ability to learn new scripting language or analytics platform. Technical Skills required (must have) HandsOn Exposure to Generative AI (Design, development of GenAI application in production) Strong understanding of RAG, Vector Database, Lang Chain and multimodal AI applications. Strong understanding of deploying and optimizing AI application in production. Strong knowledge of statistical and data mining techniques like Linear & Logistic Regression analysis, Decision trees, Bagging, Boosting, Time Series and Non-parametric analysis. Strong knowledge of DL & Neural Network Architectures (CNN, RNN, LSTM, Transformers etc.) Strong knowledge of SQL and R/Python and experience with distribute data/computing tools/IDEs. Experience in advanced Text Analytics (NLP, NLU, NLG). Strong hands-on experience of end-to-end statistical model development and implementation Understanding of LLMOps, ML Ops for scalable ML development. Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow etc.). Expert level proficiency algorithm building languages like SQL, R and Python and data visualization tools like Shiny, Qlik, Power BI etc. Exposure to Cloud Platform (Azure or AWS or GCP) technologies and services like Azure AI/ Sage maker/Vertex AI, Auto ML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling etc. 
Technical skills required (any one or more)
Experience in video/image analytics (computer vision)
Experience in IoT/machine-logs data analysis
Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx, or KNIME
Expertise in cloud analytics platforms (Azure, AWS, or Google)
Experience in process mining, with expertise in Celonis or other tools
Proven capability in using Generative AI services like OpenAI and Google (Gemini)
Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.)
Understanding of fine-tuning for pre-trained models like GPT, LLaMA, and Claude using LoRA, QLoRA, and PEFT techniques
Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion
Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3–6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc.
(Stats / Maths) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Doctor of Philosophy, Bachelor of Engineering, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Chatbots, Data Structures, Generative AI Optional Skills Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
Posted 3 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Do
The Sales and Use Tax Analyst role within the Indirect Tax Center of Excellence, Pune, will focus on critical areas such as: US sales and use tax compliance activities, tax research, account reconciliations, and MIS reporting to tax leadership. This is an individual contributor (IC) role.
Perform thorough analysis of the returns and their workings with accuracy.
Data retrieval and reconciliation of the returns with the respective ERP (Oracle, SAP) and Vertex.
Audit defense, including communication with external auditors.
Communication with business stakeholders for documentation and evidence.
Creation of SOPs for any new takeovers and support in formalizing the process.
Keep updated with law and rule changes for each state relevant to Eaton.
Tax applicability analysis and review for all MarkView invoices (non-PO invoices).
Monthly updates and review of tax applicability through Checkpoint (services and their tax treatments).
Contribute to initiatives for automation within the compliance process.
Meet the targets as per the KPIs for sales and use tax.
Research and respond to sales and use tax questions with prompt and accurate replies.
Perform activities related to other projects.
Qualifications
Post-graduation in Accounting / MBA from a recognized university.
3+ years of experience in US sales and use tax.
Skills
Working knowledge of Oracle, SAP, and other ERP systems preferred. Working knowledge of Vertex Returns or similar compliance software. Good accounting concepts. Good verbal and written communication skills. Self-starter. Ability to manage multiple priorities and responsibilities while still meeting time-sensitive deadlines. Attention to detail. Works independently and within a team setting. Ability to deliver quality output under limited supervision. Strong analytical skills and quick learning in ERPs like SAP and Oracle for data retrieval.
Posted 3 days ago