
1489 Vertex Jobs - Page 4

JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
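Purely as an illustration of the AIOps work this posting describes (not part of the listing), below is a minimal anomaly-detection sketch in Python using scikit-learn's IsolationForest. The metric names and contamination rate are assumed placeholders, and a real workflow would read live operational metrics rather than synthetic data.

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for real operational metrics (cpu_util, mem_util, error_rate).
rng = np.random.default_rng(0)
metrics = pd.DataFrame(rng.normal(size=(500, 3)), columns=["cpu_util", "mem_util", "error_rate"])

model = IsolationForest(contamination=0.02, random_state=0)
metrics["anomaly"] = model.fit_predict(metrics)   # -1 marks points flagged as anomalous

print(metrics[metrics["anomaly"] == -1].head())   # candidates for root-cause investigation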

Posted 4 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency. Responsibilities Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform Build frameworks to support deployment, configuration, and management across diverse cloud environments Develop and manage service catalog components, ensuring integration with platforms like Backstage Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions Prepare and label data for generative AI models, ensuring scalability and integrity Create agentic workflows using frameworks like Langraph or cloud GenAI platforms such as Bedrock Agents Integrate generative AI models with operational systems and AIOps platforms for enhanced automation Evaluate AI model performance and ensure continuous optimization over time Develop and maintain MLOps pipelines to monitor and mitigate model decay Collaborate with cross-functional teams to drive innovation and improve cloud automation processes Research and recommend new tools and best practices to enhance operational efficiency Requirements Bachelor's or Master's degree in Computer Science, Engineering, or a related field 7+ years of experience in cloud infrastructure automation, scripting, and DevOps Strong proficiency in IaC tools like Terraform, CloudFormation, or similar Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows Demonstrated background in developing and deploying AI models such as RAG or transformers Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra Competency in preparing and labeling datasets for AI models and optimizing data inputs Familiarity with cloud platforms including AWS, Google Cloud, or Azure Capability to implement MLOps pipelines and monitor AI system performance Nice to have Knowledge of agentic architectures such as React and flow engineering techniques Background in using Bedrock Agents or Langraph for workflow creation Understanding of integrating generative AI into legacy or complex operational systems
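As a hedged illustration of the retrieval step in the RAG workflows mentioned above (not part of the posting), the sketch below ranks documents by cosine similarity in plain NumPy. The embed() function is a stand-in for whichever embedding service (Bedrock, Vertex AI, OpenSearch, etc.) a team would actually use.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call Bedrock, Vertex AI, OpenAI, etc.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    doc_vectors = np.stack([embed(d) for d in docs])
    q = embed(query)
    # Cosine similarity between the query vector and every document vector.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

docs = ["Restart the payment service after deploys.",
        "Rotate API keys quarterly.",
        "Scale the search cluster at 80% CPU."]
print(top_k("How do I handle high CPU on search?", docs))

The retrieved chunks would then be inserted into the LLM prompt as grounding context.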

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
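To make the MLOps responsibility above concrete, here is a small illustrative sketch (not from the posting) that logs a training run's parameters, metrics, and model with MLflow so later runs can be compared against a baseline and decay can be spotted. The dataset and hyperparameters are placeholders; the same tracking pattern applies whether the artifact is a classical model or a fine-tuned generative model.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-classifier"):
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("f1", score)            # later runs can be compared against this baseline
    mlflow.sklearn.log_model(model, "model")  # stored alongside the run for reproducible rollbacks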

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create and integrate service catalog components with automation platforms like Backstage Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation Architect and implement CI/CD pipelines for automated build, test, and deployment processes Build and maintain deployment automation scripts using technologies such as Python or Bash Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines for streaming real-time operational insights to support AI-driven automation Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows Collaborate with cross-functional teams to design and improve automation and AI-driven processes Continuously research emerging tools and technologies to improve operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions
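As a rough, illustrative sketch of the real-time data-pipeline responsibility above (not part of the listing), the snippet below shows a generator-based transformation over operational events. In production the same logic would typically sit behind Kafka, Kinesis, or Pub/Sub; the event fields are assumed.

import json
from typing import Iterable, Iterator

def parse_events(lines: Iterable[str]) -> Iterator[dict]:
    for line in lines:
        event = json.loads(line)
        # Keep only the fields the downstream AI-driven automation needs.
        yield {"service": event["service"], "latency_ms": event["latency_ms"]}

sample = ['{"service": "checkout", "latency_ms": 182}',
          '{"service": "search", "latency_ms": 95}']
for record in parse_events(sample):
    print(record)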

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create and integrate service catalog components with automation platforms like Backstage Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation Architect and implement CI/CD pipelines for automated build, test, and deployment processes Build and maintain deployment automation scripts using technologies such as Python or Bash Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines for streaming real-time operational insights to support AI-driven automation Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows Collaborate with cross-functional teams to design and improve automation and AI-driven processes Continuously research emerging tools and technologies to improve operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description As a Staff Machine Learning Engineer, you will drive AI programs, lead engagements, and independently develop innovative solutions that enhance decision-making, automate workflows, and create growth. You will own the end-to-end development of AI-powered applications, from solution design to deployment, leveraging pre-trained machine learning and generative AI models. You will work closely with cross-functional teams, proactively identifying opportunities to integrate AI capabilities into Experian's products and services while optimizing performance and scalability. Qualifications Experience working in a cloud environment with one of Databricks, Azure or AWS 8+ years of experience building data-driven products and solutions Experience leading AI engagements Strong experience with AI APIs (OpenAI, Hugging Face, Google Vertex AI, AWS Bedrock) and fine-tuning models for production use. Deep understanding of machine learning, natural language processing (NLP), and generative AI evaluation techniques. Key Responsibilities Assist in Developing and Deploying Machine Learning Models: Support the development and deployment of machine learning models, including data preprocessing and performance evaluation in Python using sklearn, numpy and other standard libraries. Build and Maintain ML Pipelines: Help build and maintain scalable ML pipelines, and assist in automating model training workflows in Python using MLflow, Databricks, SageMaker or equivalent. Collaborate with Cross-Functional Teams: Work with product and data teams to align ML solutions with business needs and objectives. Write Clean and Documented Code: Write clean, well-documented code, following best practices for testing and version control. Use Sphinx and other auto-documentation solutions to automate document generation. Support Model Monitoring and Debugging: Assist in monitoring and debugging models to improve their reliability and performance. Participate in Technical Discussions and Knowledge Sharing: Engage in technical discussions, code reviews, and knowledge-sharing sessions to learn and grow within the team. Day-to-Day Activities On a daily basis, you will work closely with senior ML engineers and data scientists to support various stages of the machine learning lifecycle. Your day-to-day activities will include: Data Preprocessing: Cleaning and preparing data for model training, ensuring data quality and consistency. Model Training: Assisting in the training of machine learning models, experimenting with different algorithms and hyperparameters.
Performance Evaluation: Evaluating model performance using appropriate metrics and techniques, and identifying areas for improvement. Pipeline Maintenance: Building and maintaining ML pipelines, ensuring they are scalable and efficient. Code Development: Writing and maintaining clean, well-documented code, following best practices for testing and version control. Model Monitoring: Monitoring deployed models to ensure they are performing as expected, and assisting in debugging any issues that arise. Collaboration: Participating in team meetings, sprint planning, and daily stand-ups to stay aligned with project goals and timelines. Additional Information Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award-winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Benefits Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off. Experian Careers - Creating a better tomorrow together Find out what it's like to work for Experian by clicking here
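For context on the day-to-day loop described above, here is an illustrative scikit-learn sketch (not part of the posting) that combines preprocessing and a classifier in one pipeline and reports standard evaluation metrics. The synthetic dataset and hyperparameters are placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2_000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Preprocessing and model live in one pipeline so the same steps run at inference time.
pipeline = make_pipeline(StandardScaler(),
                         RandomForestClassifier(n_estimators=200, random_state=1))
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))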

Posted 4 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Do Design, develop and deploy agent-based AI systems using LLMs Build and scale Retrieval-Augmented Generation pipelines for real-time and offline inference. Develop and optimize training workflows for fine-tuning and adapting models to domain-specific tasks. Collaborate with cross-functional teams to integrate knowledge bases into agent frameworks. Drive best practices in AI Engineering, model lifecycle management, and production deployment on Google Cloud (GCP) Implement version control strategies using Git, manage code repositories and ensure best practices in code management. Develop and manage CI/CD pipelines using Jenkins or other relevant tools to streamline deployment and updates. Monitor, evaluate, and improve model performance post-deployment on Google Cloud. Communicate technical findings and insights to non-technical stakeholders. Participate in technical discussions and contribute to strategic planning. What Experience You Need Master's or Bachelor's degree in Computer Science, Artificial Intelligence, Machine Learning, or related field. 7+ years of experience in AI/ML engineering, with a strong focus on LLM-based applications. At least 10 years of experience in IT overall. Proven experience in building agent-based applications using Gemini, OpenAI or similar models. Deep understanding of RAG systems, vector databases, and knowledge retrieval strategies. Hands-on experience with LangChain and LangGraph frameworks. Solid background in model training, fine-tuning, evaluation and deployment. Strong coding skills in Python and experience with modern MLOps practices. What Could Set You Apart Familiarity with frontend integration of AI agents (e.g., using Angular, Mesop or similar frameworks). Experience with Google Cloud services like BigQuery, Vertex AI, Agent Builder. Exposure to the Angular framework
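As a framework-agnostic illustration of the agent pattern this role centres on (not part of the posting), the sketch below shows one agent step: the model proposes a tool call, the runtime executes it, and the observation is returned. call_llm() and the lookup_order tool are assumed stand-ins for a real Gemini/OpenAI client and a real business API.

import json

TOOLS = {"lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"}}

def call_llm(messages: list[dict]) -> str:
    # Placeholder: a real implementation would call the chosen model API here.
    return json.dumps({"tool": "lookup_order", "args": {"order_id": "A-123"}})

def run_agent(user_query: str) -> dict:
    messages = [{"role": "user", "content": user_query}]
    decision = json.loads(call_llm(messages))
    tool = TOOLS[decision["tool"]]
    observation = tool(**decision["args"])   # execute the proposed tool call
    return observation

print(run_agent("Where is my order A-123?"))

Frameworks like LangGraph wrap this loop with state management, retries, and multi-step planning, but the underlying propose-execute-observe cycle is the same.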

Posted 4 days ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Remote ML/AI Developer Location: Remote / Hybrid (Preferred regions: EMEA, USA, UK, Japan) Experience: 2–7 years Type: Full-time / Contract Your Role As an ML/AI Developer , you will design, develop, and deploy machine learning models that solve real-world problems across industries. You’ll collaborate with product teams, data engineers, and backend developers to build smart, scalable solutions—ranging from predictive analytics and recommendation systems to generative AI and natural language processing. Responsibilities Build, train, and optimize machine learning and deep learning models Work with structured and unstructured data for classification, regression, clustering, or NLP tasks Develop APIs and pipelines to deploy models into production environments Collaborate with data engineers to ensure clean, scalable, and usable datasets Conduct model evaluation, tuning, and experimentation Translate business problems into technical ML/AI solutions Stay updated with the latest advancements in AI/ML frameworks and tools Document experiments, model results, and deployment processes Tech Skills We Value Strong knowledge of Python and ML libraries such as scikit-learn , TensorFlow , Keras , PyTorch , or XGBoost Experience with NLP , computer vision , recommendation systems , or generative AI models Familiarity with data preprocessing , feature engineering , and model evaluation techniques Experience with SQL , Pandas , NumPy , and data manipulation tools Exposure to ML Ops , Docker , REST APIs , and CI/CD for model deployment Familiarity with AWS Sagemaker , Google Vertex AI , or Azure ML is a plus Understanding of data privacy, fairness, and ethical AI practices What We’re Looking For 2–5 years of experience in AI/ML development or applied data science A strong foundation in statistics, machine learning theory, and model development Proven experience building and deploying end-to-end ML solutions Excellent problem-solving and analytical skills Ability to work independently and in distributed remote teams Passion for learning and applying new AI technologies What You’ll Get Work on cutting-edge AI projects with innovative startups and global clients 100% remote flexibility with freelance or long-term contract options Access to real-world problems, high-quality datasets, and modern tech stacks Join a collaborative global network of AI/ML engineers and data professionals Growth opportunities in ML Ops, AI product development, and tech leadership
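Purely as an illustration of deploying a model behind a REST API, one of the responsibilities listed above (not part of the posting), here is a minimal FastAPI sketch. The inline iris model is a toy stand-in for a real versioned artifact.

from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# A tiny stand-in model trained at startup; a real service would load a versioned artifact instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = FastAPI()

class Features(BaseModel):
    values: list[float]   # the 4 iris features in this toy setup

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": int(prediction)}

# Run locally with: uvicorn main:app --reload  (assuming this file is named main.py)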

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Type Full time Type Of Hire Experienced (relevant combo of work and education) Education Desired Bachelor of Technology Travel Percentage 0% What you will be doing Consulting & Strategy Assess client needs and identify opportunities for agentic AI integration Develop strategic roadmaps for deploying autonomous AI agents across business functions Educate stakeholders on the capabilities, risks, and ethical considerations of agentic AI Lead end-to-end delivery of RPA and Gen AI projects across BFS clients Strong domain knowledge of BFS processes (e.g., Lending, Deposits, KYC, AML, loan processing, risk management) Collaborate with business stakeholders to gather requirements, define project scope, and translate them into technical specifications Manage cross-functional teams including developers, data scientists, business analysts, and QA Drive project planning, execution, risk management, and stakeholder communication Evaluate and recommend RPA tools (e.g., UiPath, Automation Anywhere) and Gen AI platforms (e.g., Azure OpenAI, AWS Bedrock, Google Vertex AI) Ensure compliance with BFS industry regulations and data privacy standards Monitor KPIs and ROI of automation and AI initiatives Stay updated with emerging trends in AI, RPA, and BFS technologies Solution Design & Implementation Assist in the design of agentic AI architectures using LLMs, multi-agent systems, and tool-augmented reasoning frameworks Collaborate with data scientists, engineers, and product teams to prototype and deploy AI agents Integrate agents with APIs, databases, and enterprise tools to enable real-world action-taking Evaluation & Optimization Define KPIs and evaluation metrics for agent performance Continuously monitor, test, and refine agent behavior to ensure alignment with business goals Stay current with the latest research and tools in agentic AI, LLMs, and autonomous systems What You Bring Bachelor's or Master's degree in Computer Science, Information Technology, or related field 10+ years of experience in IT project management, with at least 3–5 years in RPA and/or Gen AI Strong domain knowledge of BFS processes (e.g., KYC, AML, loan processing, risk management) Hands-on experience with RPA tools (UiPath, Blue Prism, Automation Anywhere) Familiarity with Gen AI models and platforms (LLMs, prompt engineering, fine-tuning) Familiarity with APIs, cloud platforms (AWS, Azure, GCP), and data integration pipelines Proven ability to manage large-scale digital transformation projects Excellent communication, stakeholder management, and leadership skills Preferred Qualifications Experience integrating AI solutions with legacy BFS systems Exposure to cloud platforms (Azure, AWS, GCP) Understanding of data governance and AI ethics in BFS Experience with autonomous agents in real-world applications (e.g., customer support, research automation, workflow orchestration) Knowledge of reinforcement learning, planning algorithms, or cognitive architectures Understanding of AI safety, alignment, and interpretability challenges Added Bonus If You Have 1 year of customer service experience 1 year of experience working in a high-volume call center Excellent customer service skills that build high levels of customer satisfaction What We Offer You A competitive salary with attractive benefits including private medical and dental insurance coverage A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities A modern work environment and a dedicated and motivated team A broad range of professional 
education and personal development opportunities A work environment built on collaboration and respect Privacy Statement FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice. Sourcing Model Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Spyne: At Spyne , we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea—using AI-powered visuals to help auto dealers sell faster online—has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we’re scaling at breakneck speed: Launched industry-first AI-powered Image, Video & 360° solutions for Automotive dealers Launching Gen AI powered Automotive Retail Suite to power Inventory, Marketing, CRM for dealers Onboarded 1500+ dealers across US, EU and other key markets in the past 2 years of launch Gearing up to onboard 10K+ dealers across global market of 200K+ dealers 150+ members team with near equal split on R&D and GTM Learn more about our products: Spyne AI Products - StudioAI , RetailAI Series A Announcement - CNBC-TV18 , Yourstory More about us - ASOTU , CNBC Awaaz What You Will Do As a Senior Content Writer, you will own the creation, optimization, and management of content that drives awareness, engagement, and conversions. Your words will shape Spyne’s brand voice and strengthen our leadership in AI-powered automotive retail. Key Responsibilities: Write long-form content (blogs, articles, whitepapers, e-books) and short-form content (product descriptions, ad copy, social media captions) tailored to our target audience Revamp and optimize existing content (blogs, website copy, email campaigns) to enhance SEO, engagement, and relevance Collaborate with marketing and product teams to produce content aligned with brand guidelines, industry trends, and campaign objectives Experiment with content formats and approaches, track performance metrics, and continuously optimize for impact What Makes You a Great Fit Strong command of English with exceptional writing, editing, and storytelling skills 3–5 years of experience in content writing or content marketing, preferably in B2B or SaaS Familiarity with SEO principles, keyword research, and content optimization Experience with Google Workspace (Docs, Sheets, Slides) and basic analytics tools Detail-oriented, punctual, and able to thrive in a fast-paced, high-growth environment Why Join Spyne? Be part of a fast-growing AI startup trusted by top global investors High-ownership, zero-politics culture with meritocracy at its core Best-in-class tools and resources—choose the machine and software that enable your best work Work with a team obsessed with excellence for both our customers and our Spynians! 🚀 If you are a content storyteller ready to make an impact in a high-growth AI startup, Spyne is the perfect place for you. Apply now!

Posted 4 days ago

Apply

2.0 years

5 - 9 Lacs

Hyderābād

On-site

DESCRIPTION Amazon is seeking a Senior Tax Analyst to join its income tax provision and reporting team in India. The Amazon tax department is a fast-paced, team-focused, dynamic environment that leverages industry-leading technology to scale with business growth and manage complexity. This position will contribute to Amazon’s worldwide income tax accounting process for interim and annual reporting periods. Preferred Qualifications CPA and MST or equivalent preferred International tax reporting and compliance experience ASC 740 income tax accounting knowledge and experience required Experience working with stock-based compensation arrangements, including ASC 718 and FRS2 income tax accounting Minimum required experience with Microsoft Excel should be intermediate Big 4 and/or combination with technology industry experience preferred Excellent interpersonal and presentation skills to liaise with cross-functional teams and business partners Excellent written and verbal communication skills Ability to operate in a fast paced, ever-changing environment Strong organization skills, able to multitask and meet deadlines Self-starter and team player Experience with Corptax Provision, Oracle, Alteryx, Python, Hyperion Essbase and Cognos are a plus Key job responsibilities Exposure to challenging tax issues facing Amazon from a worldwide perspective Prepare income tax provision calculations for subsidiaries of Amazon’s worldwide group Maintain income tax provision model and supporting schedules Prepare worldwide tax account reconciliations and roll forward analysis Prepare analytics that are communicated to external auditors and finance management team Participate in streamlining and improving income tax reporting processes through automation and standardization, including the opportunity to develop foundational skills/know-how in leading-class tax technology solutions Cross functional collaboration with tax and accounting business partners as well as external auditors BASIC QUALIFICATIONS Bachelor's degree Knowledge of Microsoft Office products and applications at an advanced level Experience working in a large public accounting firm or multi-national corporate tax department PREFERRED QUALIFICATIONS 2+ years of maintaining and operating transaction tax calculation software (e.g. Vertex) experience 4+ years of tax, finance or a related analytical field experience Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TS, Hyderabad Tax Finance and Global Business Services

Posted 4 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics. About the Role: You would be part of the FCSD analytics team. As a Data Scientist on the team, you will collaborate within the team and work with business partners to understand business problems, explore data from various sources in GCP-Data Factory, and wrangle it to develop solutions using AI/ML algorithms that provide actionable insights and deliver key results to Ford. The candidate should have hands-on experience in building statistical/machine learning models adhering to the best practices of development and deployment in a cloud environment. This role requires solid problem-solving skills, business acumen, and a passion for leveraging data science/AI skills to drive business results. Job Responsibilities Build an in-depth understanding of the business domain and data sources. Extract and analyse data from databases/data warehouses to gain insights and discover trends and patterns with clear objectives in mind. Design and implement scalable analytical solutions in the Google Cloud environment. Work closely with Product Owners, Product Managers, Software Engineers, and Data Engineers to build products in an agile environment. Operationalize AI/ML/LLM models by integrating with upstream and downstream business processes. Communicate results to business teams through effective presentations. Work with business partners through problem formulation, data management, solutions development, operationalization, and solutions management. Identify opportunities to build analytical solutions driving business value, leveraging various data sources. Qualifications: At least 2 years of relevant work experience in solving business problems using data science Bachelor's/Master's degree in a quantitative domain such as Statistics, Computer Science, Mathematics, or Engineering, with an MBA from a premier institute (BE, MS, MBA, BSc/MSc in Computer Science/Statistics) or any other equivalent 2+ years of experience with SQL and Python delivering analytical solutions in a production environment.
At least 1 year of experience working in a cloud environment (GCP, AWS, or Azure) 2+ years of experience in conducting statistical data analysis (EDA, forecasting, clustering, etc.) and machine learning techniques (classification/regression, NLP) Technical Skills: Proficient in BigQuery/SQL and Python Advanced SQL knowledge to handle large data and optimize queries Working knowledge of the GCP environment (BigQuery, Vertex AI) to develop and deploy machine learning models Nice to have: Exposure to Gen AI/LLMs Functional Skills: Understanding and formulating business problem statements and converting them into data science problems Self-motivated with excellent verbal and written skills Strong organizational, time management, and decision-making skills Excellent problem-solving and interpersonal skills
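As an illustrative sketch of the BigQuery-to-Python workflow described above (not part of the posting), the snippet below pulls an aggregate into a pandas DataFrame. The project, dataset, table, and column names are assumed, and it presumes google-cloud-bigquery and default GCP credentials are available.

from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")   # hypothetical project id
sql = """
    SELECT vehicle_line, AVG(days_to_repair) AS avg_days
    FROM `my-gcp-project.fcsd.warranty_claims`        -- hypothetical dataset and table
    GROUP BY vehicle_line
"""
df = client.query(sql).to_dataframe()   # hands the result to pandas for EDA or modeling
print(df.head())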

Posted 4 days ago

Apply

3.0 years

2 - 6 Lacs

Gurgaon

On-site

We are seeking a highly skilled AI Engineer with a strong foundation in machine learning, deep learning, cloud platforms , and computer vision to join our innovative tech team. You’ll design and implement scalable AI/ML pipelines, automate workflows, train and optimize models, and deploy solutions on cloud infrastructure. This is an opportunity to shape the future of intelligent systems across industries. Key Responsibilities: Design, develop, and deploy ML/DL models for various applications, including computer vision and predictive analytics. Build data pipelines and model training workflows on cloud platforms such as AWS, Azure, or GCP. Automate model retraining, evaluation, and deployment processes using MLOps best practices. Collaborate with cross-functional teams (data engineers, product managers, developers) to define project requirements and deliver AI-powered features. Develop and fine-tune custom algorithms tailored to specific domain problems. Integrate AI solutions into existing systems using APIs, containers, and cloud-native tools. Conduct data preprocessing, exploratory data analysis, and feature engineering. Research and evaluate the latest AI trends, tools, and frameworks to recommend enhancements. Write clear, maintainable, and efficient code with documentation for reproducibility and scaling. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related fields. 3+ years of hands-on experience in building and deploying machine learning/deep learning models. Strong programming skills in Python and frameworks like TensorFlow, PyTorch, OpenCV, Scikit-learn. Experience with computer vision libraries (OpenCV, YOLO, Detectron2, etc.). Proficiency in cloud platforms (AWS SageMaker, GCP Vertex AI, or Azure ML Studio). Experience with Docker, Kubernetes, or other orchestration tools. Familiarity with MLOps tools like MLflow, DVC, Kubeflow, or Airflow. Solid understanding of algorithms, data structures, and model optimization techniques. Exposure to RESTful APIs and real-time inference systems. Strong analytical, problem-solving, and communication skills. Nice to Have: Experience with NLP models and transformers (e.g., Hugging Face). Experience deploying models at scale in production environments. Knowledge of CI/CD pipelines for AI applications. Publications or contributions to open-source AI projects. Why Join Us? Job Type: Permanent Pay: ₹20,000.00 - ₹50,000.00 per month Work Location: In person
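As a small, illustrative computer-vision preprocessing sketch related to the responsibilities above (not part of the posting), the snippet below normalises a frame with OpenCV before it is fed to a model. The frame is synthetic and the target size is a placeholder.

import cv2
import numpy as np

def preprocess(image: np.ndarray, size: tuple[int, int] = (224, 224)) -> np.ndarray:
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # OpenCV loads images as BGR
    image = cv2.resize(image, size)
    return image.astype(np.float32) / 255.0          # scale to [0, 1] for most DL frameworks

# Synthetic BGR frame standing in for cv2.imread("frame.jpg") or a video capture.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
batch = np.expand_dims(preprocess(frame), axis=0)    # shape (1, 224, 224, 3), ready for a model
print(batch.shape)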

Posted 4 days ago

Apply

0 years

1 - 2 Lacs

India

On-site

Customer Assistance: Greeting customers and assessing their eyewear needs. Guiding customers in selecting frames and lenses that match their prescription, facial features, and style. Providing expert advice on lens options, coatings, and frame materials. Ensuring a comfortable and accurate fit of eyewear. Technical Skills: Performing basic adjustments and minor repairs on frames. Interpreting prescriptions written by optometrists and ophthalmologists. Measuring patients' pupillary distance, vertex distance, and other measurements for accurate lens placement. Sales and Inventory Management: Processing sales transactions and handling payments. Maintaining accurate records of customer purchases and prescriptions. Managing inventory and restocking merchandise. Communication and Collaboration: Communicating effectively with customers and other optical staff. Collaborating with optometrists and other professionals. Providing exceptional customer service and resolving customer inquiries. Professional Development: Staying up-to-date on the latest eyewear trends and technologies. Participating in ongoing training and development. Job Type: Full-time Pay: ₹15,000.00 - ₹22,000.00 per month Schedule: Day shift Work Location: In person

Posted 4 days ago

Apply

0 years

0 Lacs

Noida

On-site

Role Title: AI Platform Engineer Location: Bangalore (In Person in office when required) Part of the GenAI COE Team Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM. Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models and Use Cases: Have extensive knowledge of different LLMs. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Data Warehouse: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery Cloud Knowledge: AWS/Azure/GCP DevOps (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, JavaScript Job Type: Full-time Work Location: In person
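As a hedged illustration of the A/B testing of models mentioned above (not part of the posting), the sketch below splits traffic deterministically by user id so each user consistently hits one variant. The endpoint names are placeholders for whatever MLflow/SageMaker/Vertex AI deployments are actually in use.

import hashlib

VARIANTS = {"A": "model-v1-endpoint", "B": "model-v2-endpoint"}   # hypothetical endpoints

def choose_variant(user_id: str, traffic_to_b: float = 0.1) -> str:
    # Hash the user id into a 0-99 bucket so assignment is stable across requests.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "B" if bucket < traffic_to_b * 100 else "A"

def route(user_id: str) -> str:
    variant = choose_variant(user_id)
    return VARIANTS[variant]   # the caller would invoke this endpoint and log the variant used

print(route("user-42"))

Logging the chosen variant alongside request outcomes is what makes the later comparison of model metrics possible.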

Posted 4 days ago

Apply

5.0 years

0 Lacs

Noida

On-site

About Attentive.ai: Attentive.ai is a fast-growing vertical SaaS start-up, funded by PeakXV (Surge), Infoedge and Vertex Ventures, that provides innovative software solutions for the construction, landscape, and paving industries in the United States. Our mission is to help businesses in this space improve their operations and grow their revenue through our simple & easy-to-use software platform. We're looking for a dynamic and driven leader to lead our Sales Development efforts and build a high-performance outbound engine. Job Description: As the GTM Strategy & Ops, you will be at the heart of our sales, marketing, and customer success motion—designing and driving execution plans that accelerate revenue growth. You will define go-to-market strategies, build operational models, run analytics to measure performance, and collaborate with cross-functional teams to align execution with business goals. This is a high-impact role for someone who enjoys working across growth, operations, data, and product in a fast-paced, high-growth SaaS environment. Roles & Responsibilities : 1. Define and execute GTM strategies across sales, marketing, and customer success. 2. Build dashboards and models to track key metrics across the revenue funnel. 3. Identify growth levers and run data-driven experiments to improve performance. 4. Optimize tools, processes, and workflows across the GTM stack (e.g., CRM, automation). 5. Collaborate with cross-functional teams on strategic initiatives and special projects. 6. Support sales enablement through insights, playbooks, and performance analysis. 7. Drive alignment on ICP, messaging, and lead qualification across GTM functions. Requirements for the Role: 1. 5+ years of experience in B2B SaaS GTM / revenue operations / strategy roles. 2. Experience in founders’ office, strategy consulting, or VC-backed tech startups. 3. Strong understanding of sales, marketing, and CS workflows and tooling. 4. Excellent analytical and problem-solving skills; highly data-driven. 5. Structured communicator with the ability to influence senior stakeholders. 6. Comfortable with ambiguity, bias for action, and a hustle mindset. Why work with us? 1. Opportunity to work directly with founders and leadership on strategic problems. 2. High ownership role with visibility and impact on company-wide decisions. 3. Be part of a rocket ship startup that’s transforming a large, underserved industry. 4. Backed by top-tier VCs: Peak XV (Sequoia), Vertex, InfoEdge.

Posted 4 days ago

Apply

1.0 - 2.0 years

2 - 3 Lacs

Gaya

On-site

Full job description Job Title: LGSF Machine Operator Location: Gaya, Bihar Experience Required: 1-2 Years Industry: Real Estate Employment Type: Full-Time I. Essential Skills & Experience: Machine Operation: o Minimum 1-2 years of experience operating CNC-controlled machinery, preferably in metal fabrication, sheet metal, or roll-forming. o Direct experience with LGSF machines (e.g., Howick, FrameCAD, Scottsdale, Vertex) is highly desirable. o Proficiency in reading and understanding machine operation manuals and technical specifications. Technical Aptitude: o Strong understanding of mechanical and electrical components of machinery. o Ability to perform basic troubleshooting and preventative maintenance on the machine (e.g., checking fluid levels, calibrating sensors, minor adjustments). o Familiarity with various types of steel coils, gauges, and their properties. Computer Literacy: o Proficient in using machine control software (HMI - Human Machine Interface). o Ability to interpret and load CAD/CAM files (e.g., .LGS, .BIM, .DXF) into the machine. o Basic data entry and record-keeping skills. Quality Control: o Experience with precision measurement tools (calipers, tape measures, micrometers). o Ability to perform in-process quality checks and identify deviations from specifications. o Understanding of tolerance limits for LGSF components. Safety Consciousness: o Thorough understanding and adherence to industrial safety regulations, especially regarding machinery operation (e.g., lockout/tagout procedures, machine guarding). o Proficiency in using Personal Protective Equipment (PPE) such as safety glasses, hearing protection, gloves, and safety shoes. Job Type: Full-time Pay: ₹20,000.00 - ₹25,000.00 per month Benefits: Health insurance Experience: total work: 2 years (Preferred) Work Location: In person

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Spyne: At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea—using AI-powered visuals to help auto dealers sell faster online—has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we’re scaling at breakneck speed: Launched industry-first AI-powered Image, Video & 360° solutions for Automotive dealers Launching Gen AI powered Automotive Retail Suite to power Inventory, Marketing, CRM for dealers Onboarded 1500+ dealers across US, EU and other key markets in the past 2 years of launch Gearing up to onboard 10K+ dealers across global market of 200K+ dealers 150+ members team with near equal split on R&D and GTM Learn more about our products: Spyne AI Products - StudioAI , RetailAI Series A Announcement - CNBC-TV18 , Yourstory More about us - ASOTU , CNBC Awaaz What We’re Looking For: We are seeking a strategic and creative Product Marketing Associate with 2–4 years of B2B SaaS experience. In this role, you will craft product messaging, manage GTM content, and support product launches to drive adoption and sales. You will work closely with Product, Sales, and Customer Success teams to ensure our AI-powered solutions are clearly understood and effectively marketed across channels. What You Will Do 1. Product Marketing & GTM Support Assist in developing product positioning and messaging that resonate with auto and e-commerce businesses Support product launches and feature GTMs, ensuring alignment across Sales, Product, and Marketing Create sales enablement content – pitch decks, case studies, and one-pagers 2. Content Strategy & Execution Manage website content, blogs, and email campaigns to drive traffic and conversions Maintain brand voice consistency across sales collateral and digital channels Collaborate on lead-nurturing campaigns to engage and convert prospects 3. Market & Customer Insights Conduct customer and competitive research to refine messaging and positioning Partner with Sales and Product to gather feedback and improve GTM effectiveness Track industry trends to highlight Spyne’s leadership in automotive SaaS 4. Sales Enablement & Collaboration Work with Sales to equip teams with updated collateral and product narratives Adapt content for different geographies and buyer personas Close the loop by refining messaging based on sales feedback What Will Make You Successful: 2–4 years in Product Marketing, Content Marketing, or B2B SaaS GTM Strong storytelling and writing skills for technical and business audiences Experience with GTM planning, product launches, and sales content creation Hands-on with marketing automation tools (HubSpot, Marketo) and CMS Collaborative, detail-oriented, and data-driven Bonus: Experience in AI, SaaS, or Automotive tech Why Join Spyne? Culture : High ownership, zero politics, execution-first Growth : Be part of a startup scaling from $5M to $20M ARR Learning : Hands-on exposure with GTM leaders and AI-first products 🚀 If you’re an ambitious marketer who loves storytelling, SaaS, and AI-powered innovation, Spyne is your next big move. Apply now!

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

About the job
What makes Techjays an inspiring place to work?
At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises.
Be part of a company that's pushing the boundaries of digital transformation. At Techjays, you'll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.
We are looking for a detail-oriented and curious AI QA Engineer to join our growing QA team. You will play a critical role in ensuring the quality, safety, and reliability of our AI-powered products and features. If you're passionate about AI, testing complex systems, and driving high standards of quality, this role is for you!
Primary Skills: QA Automation, Python, API Testing, AI/ML Testing, Prompt Evaluation, Adversarial Testing, Risk-Based Testing, LLM-as-a-Judge, Model Metrics Validation, Test Strategy
Secondary Skills: CI/CD Integration, Git, Cloud Platforms (AWS/GCP/Azure ML), MLflow, Postman, Testim, Applitools, Collaboration Tools (Jira, Confluence), Synthetic Data Generation, AI Ethics & Bias Awareness
Experience: 3 - 8 Years
Work Location: Coimbatore / Remote
Must-Have Skills
Foundational QA Skills
Strong knowledge of test design, defect management, and the QA lifecycle
Experience with risk-based testing and QA strategy
AI/ML Knowledge
Basic understanding of machine learning workflows and training/inference cycles
Awareness of AI quality challenges: bias, fairness, transparency
Familiarity with AI evaluation metrics: accuracy, precision, recall, F1-score
Hands-on experience with prompt testing, synthetic data generation, and non-deterministic behavior validation
Technical Capabilities
Python programming for test automation and data validation
Hands-on experience with API testing tools (Postman, Swagger, REST clients)
Knowledge of test automation tools (e.g., PyTest, Playwright, Selenium)
Familiarity with Git and version control best practices
Understanding of CI/CD pipelines and integration testing
Tooling (Preferred)
Tools like Diffblue, Testim, Applitools, Kolena, Galileo, MLflow, Weights & Biases
Basic understanding of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI)
Soft Skills
Excellent analytical thinking and attention to detail
Strong collaboration and communication skills to work across cross-functional teams
Proactive, self-starting work ethic and a sense of ownership
Passion for learning new technologies and contributing to AI quality practices
Roles & Responsibilities
Design, write, and execute test plans and test cases for AI/ML-based applications
Collaborate with data scientists, ML engineers, and developers to understand model behavior and expected outcomes
Perform functional, regression, and exploratory testing on AI components and APIs
Validate model outputs for accuracy, fairness, bias, and explainability
Implement and run adversarial testing, edge cases, and out-of-distribution data scenarios
Conduct prompt testing and evaluation for LLM (Large Language Model)-based applications
Use LLM-as-a-Judge approaches and AI tools to automate evaluation of AI responses where possible
Validate data pipelines, datasets, and ETL workflows
Track model performance metrics such as precision, recall, and F1-score, and flag potential degradation (a minimal sketch follows this listing)
Document defects and inconsistencies, and raise risks proactively with the team
What we offer
Best-in-class packages
Paid holidays and flexible paid time away
Casual dress code and flexible working environment
Medical insurance covering self and family up to 4 lakhs per person
Work in an engaging, fast-paced environment with ample opportunities for professional development
Diverse and multicultural work environment
Be part of an innovation-driven culture that provides the support and resources needed to succeed
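For illustration only, not part of the posting: a minimal sketch of the metric-threshold style of check that the model metrics validation and degradation-tracking responsibilities describe, written as a PyTest-discoverable test using scikit-learn. The thresholds, the load_eval_set helper, and the labels are hypothetical placeholders, not Techjays code.

from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical regression baselines agreed with the ML team.
THRESHOLDS = {"precision": 0.80, "recall": 0.75, "f1": 0.77}

def load_eval_set():
    # Placeholder: a real test would load a frozen, labelled evaluation set.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    return y_true, y_pred

def test_model_metrics_do_not_degrade():
    y_true, y_pred = load_eval_set()
    metrics = {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
    for name, value in metrics.items():
        assert value >= THRESHOLDS[name], f"{name} dropped to {value:.2f}"

Run with pytest; any metric falling below its agreed floor fails the build, which is one common way to flag model degradation in CI.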

Posted 4 days ago

Apply

10.0 years

15 - 20 Lacs

Jaipur, Rajasthan, India

On-site

We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale.
🎯 Key Responsibilities
🧩 Product Management
Define product vision, roadmap, and AI use cases aligned with business goals
Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features
Translate ambiguous problem statements into clear, prioritized product requirements
⚙️ AI/ML Engineering & Model Development
Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.)
Build pipelines for data preprocessing, model training, evaluation, and deployment
Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc.
Contribute to R&D for cutting-edge GenAI models (text, vision, code, multimodal)
🤖 AI Agents & LLM Tooling
Design and implement autonomous or semi-autonomous AI agents using tools like AutoGen, LangGraph, CrewAI, etc.
Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG); a minimal retrieval sketch follows this listing
Continuously monitor, test, and improve LLM behavior, safety, and output quality
📊 Data Science & Analytics
Explore and analyze large datasets to generate insights and inform model development
Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis
Work with structured, unstructured, and multimodal data (text, audio, image, etc.)
🧰 Preferred Tech Stack / Tools
Languages: Python, SQL, optionally Rust or TypeScript
Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI
Platforms: AWS, Azure, GCP, Vertex AI, SageMaker
MLOps: MLflow, Weights & Biases, DVC, Kubeflow
Data: Pandas, NumPy, Spark, Airflow, Databricks
Vector DBs: Pinecone, Weaviate, FAISS
Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral
Tools: Git, Docker, Kubernetes, REST, GraphQL
🧑‍💼 Qualifications
Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field
10+ years of experience in core ML, AI, or Data Science roles
Proven experience building and shipping AI/ML products
Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation
Strong product thinking and the ability to work closely with both technical and non-technical stakeholders
Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision
🌟 Bonus Skills
Experience with autonomous agents and multi-agent orchestration
Open-source contributions to ML/AI projects
Prior startup or high-growth tech company experience
Knowledge of reinforcement learning, diffusion models, or multimodal AI
Skills: Python, SQL, Rust, TypeScript, machine learning, data science, generative AI, LLM architectures, transformers, embeddings, prompt engineering, prompt testing, hallucination reduction, GenAI safety, retrieval-augmented generation (RAG), vector databases (Pinecone, Weaviate, ChromaDB, FAISS), LangChain, LangGraph, AutoGen, CrewAI, Hugging Face Transformers, PyTorch, TensorFlow, Ray, FastAPI, MLflow, Weights & Biases, DVC, Kubeflow, Pandas, NumPy, Spark, Airflow, Databricks, Docker, Kubernetes, Git, REST, GraphQL, AWS, Azure, GCP, Vertex AI, SageMaker, OpenAI, Anthropic, Google Gemini, Cohere, Mistral, A/B testing, model evaluation (F1, BLEU), computer vision, multimodal AI, reinforcement learning, diffusion models, autonomous agents, multi-agent orchestration, product management
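Illustrative only, and not any employer's implementation: a bare-bones sketch of the retrieval step in a RAG pipeline using FAISS, one of the vector stores named above. The embed function, the toy documents, and the 8-dimension vectors are made-up stand-ins, so retrieval quality here is meaningless; the point is the mechanics. A real system would use a transformer embedding model and a managed vector database.

import numpy as np
import faiss  # assumes the faiss-cpu package is installed

DIM = 8  # toy dimensionality; real embedding models use hundreds of dimensions

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a deterministic pseudo-random vector per string.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(DIM, dtype=np.float32)

documents = [
    "Return policy: items can be returned within 30 days.",
    "Shipping: standard delivery takes 3-5 business days.",
    "Warranty: electronics carry a one-year manufacturer warranty.",
]

index = faiss.IndexFlatL2(DIM)                      # exact L2 similarity index
index.add(np.stack([embed(d) for d in documents]))  # index all document vectors

query = "How long do I have to send something back?"
_, ids = index.search(embed(query).reshape(1, -1), 2)  # top-2 nearest documents
context = "\n".join(documents[i] for i in ids[0])

# The retrieved context would then be inserted into the LLM prompt
# ("answer using only the context below"), completing the RAG loop.
print(context)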

Posted 4 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth
The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center's work is at the heart of Mastercard's objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.
Key Responsibilities
Data Analysis & Insight Generation
Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science
Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity (a minimal sketch follows this listing)
Translate analytical findings into insights through compelling visualizations and dashboards that inform policy, program design, and strategic decision-making
Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences
Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners
Data Integration & Operationalization
Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data)
Ensure data quality, consistency, and compliance with privacy and ethical standards
Collaborate with data engineers and AI developers to support backend infrastructure and model deployment
Team Operations
Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals
Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports
Support team budgeting, financial tracking, and process optimization
Support grantees and grants management as needed
Develop briefs, talking points, and presentation materials for leadership and external engagements
Translate strategic objectives into actionable data initiatives and track progress against milestones
Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing
Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center
Provide administrative support as needed
Manage ad-hoc projects and event organization
Qualifications
Bachelor's degree in Data Science, Statistics, Computer Science, Public Policy, or a related field
2-4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting
Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js)
Familiarity with unstructured data processing and a solid grounding in machine learning concepts
Excellent communication skills and ability to work across technical and non-technical teams
Technical Skills & Tools
Data Wrangling & Processing
Data cleaning, transformation, and normalization techniques
Pandas, NumPy, Dask, Polars
Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)
Machine Learning & Modeling
Scikit-learn, XGBoost, LightGBM
Proficiency in supervised/unsupervised learning, clustering, classification, regression
Familiarity with LLM workflows and tools like Hugging Face Transformers and LangChain (a plus)
Visualization & Reporting
Power BI, Tableau, Looker
Python libraries: Matplotlib, Seaborn, Plotly, Altair
Dashboarding tools: Streamlit, Dash
Storytelling with data and stakeholder-ready reporting
Cloud & Collaboration Tools
Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
Git/GitHub, Jupyter Notebooks, VS Code
Experience with APIs and data integration tools (e.g., Airflow, dbt)
Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You're excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices
Ensure the confidentiality and integrity of the information being accessed
Report any suspected information security violation or breach
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines
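Illustrative only, with entirely fabricated data (not Mastercard data): a small sketch of the clean-then-aggregate step behind analyzing structured datasets for trends, using Pandas from the tool list above. Column names and values are hypothetical.

import pandas as pd

# Messy, made-up survey extract: inconsistent casing, stray whitespace,
# thousands separators, and missing values.
raw = pd.DataFrame({
    "region": ["North", "north ", "South", None, "South"],
    "monthly_income": ["1,200", "980", "1,450", "1,100", None],
    "owns_account": ["Yes", "no", "YES", "No", "yes"],
})

clean = (
    raw.assign(
        region=raw["region"].str.strip().str.title(),
        monthly_income=pd.to_numeric(
            raw["monthly_income"].str.replace(",", "", regex=False), errors="coerce"
        ),
        owns_account=raw["owns_account"].str.lower().eq("yes"),
    )
    .dropna(subset=["region"])
)

# Account ownership rate and median income by region, ready for a dashboard.
summary = clean.groupby("region").agg(
    account_rate=("owns_account", "mean"),
    median_income=("monthly_income", "median"),
)
print(summary)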

Posted 4 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Over 12 years of extensive experience in AI/ML, with a proven track record of architecting and delivering enterprise-scale machine learning solutions across the Retail and FMCG domains. Demonstrated ability to align AI strategy with business outcomes in areas such as customer experience, dynamic pricing, demand forecasting, assortment planning, and inventory optimization.
Deep expertise in Large Language Models (LLMs) and Generative AI, including OpenAI's GPT family, ChatGPT, and emerging models like DeepSeek. Adept at designing domain-specific use cases such as intelligent product search (a toy sketch follows this listing), contextual recommendation engines, conversational commerce assistants, and automated customer engagement using Retrieval-Augmented Generation (RAG) pipelines.
Strong hands-on experience developing and deploying advanced ML models using modern data science stacks, including:
Python (advanced programming with a focus on clean, scalable codebases)
TensorFlow and Scikit-learn (for deep learning and classical ML models)
NumPy and Pandas (for data wrangling, transformation, and statistical analysis)
SQL (for structured data querying, feature engineering, and pipeline optimization)
Expert-level understanding of deep learning architectures (CNNs, RNNs, Transformers, BERT/GPT) and Natural Language Processing (NLP) techniques such as entity recognition, text summarization, semantic search, and topic modeling, with practical application in retail-focused scenarios like product catalog enrichment, personalized marketing, and voice/text-based customer interactions.
Strong data engineering proficiency, with experience designing robust data pipelines, building scalable ETL workflows, and integrating structured and unstructured data from ERP, CRM, POS, and social media platforms. Proven ability to operationalize ML workflows through automated retraining, version control, and model monitoring.
Significant experience deploying AI/ML solutions at scale on cloud platforms such as AWS (SageMaker, Bedrock), Google Cloud Platform (Vertex AI), and Azure Machine Learning. Skilled in designing cloud-native architectures for low-latency inference, high-volume batch scoring, and streaming analytics. Familiar with containerization (Docker), orchestration (Kubernetes), and CI/CD for ML (MLOps).
Ability to lead cross-functional teams, translating technical concepts into business impact and collaborating with marketing, supply chain, merchandising, and IT stakeholders. Comfortable engaging with executive leadership to influence digital and AI strategies at an enterprise level.
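A toy illustration, not the candidate profile's actual work: the intelligent product search use case reduced to TF-IDF plus cosine similarity with scikit-learn. The catalog entries and query are invented; a production retail system would typically use transformer embeddings and a vector database instead.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up product catalog.
catalog = [
    "Organic whole wheat atta, 5 kg bag",
    "Cold-pressed sunflower cooking oil, 1 litre",
    "Stainless steel pressure cooker, 3 litre",
    "Instant coffee powder, 200 g jar",
]

vectorizer = TfidfVectorizer()
catalog_vectors = vectorizer.fit_transform(catalog)

query = "oil for cooking"
scores = cosine_similarity(vectorizer.transform([query]), catalog_vectors)[0]

# Rank catalog items by similarity to the shopper's query.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {catalog[idx]}")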

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Remote (India), Bengaluru, Chennai
JobType: full-time
We are seeking a skilled ML (Data) Platform Engineer to help scale a next-generation AutoML platform. This role sits at the critical intersection of machine learning, data infrastructure, and platform engineering. You will work on systems central to feature engineering, data management, and time series forecasting at scale. This is not your typical ETL role: the position involves building powerful data platforms that support automated model development, experimentation workflows, and high-reliability data lineage systems. If you're passionate about building scalable systems for both ML and analytics use cases, this is a high-impact opportunity.
Requirements
Key Responsibilities:
Design, build, and scale robust data management systems that power AutoML and forecasting platforms
Own and enhance feature stores and associated engineering workflows (a minimal point-in-time lookup sketch follows this listing)
Establish and enforce strong data SLAs and build lineage systems for time series pipelines
Collaborate closely with ML engineers, infrastructure, and product teams to ensure platform scalability and usability
Drive key architectural decisions related to data versioning, distribution, and system composability
Contribute to designing reusable platforms to address diverse supply chain challenges
Must-Have Qualifications:
Strong experience with large-scale and distributed data systems
Hands-on expertise in ETL workflows, data lineage, and reliability tooling
Solid understanding of ML feature engineering and experience building or maintaining feature stores
Exposure to time series forecasting systems or AutoML platforms
Strong analytical and problem-solving skills, with the ability to deconstruct complex platform requirements
Good-to-Have Qualifications:
Familiarity with modern data infrastructure tools such as Apache Iceberg, ClickHouse, or data lakes
Product-oriented mindset with an ability to anticipate user needs and build intuitive systems
Experience building composable, extensible platform components
Previous exposure to AutoML frameworks such as SageMaker, Vertex AI, or equivalent internal ML platforms
Skills: MLOps, Data Engineering, Big Data, ETL, Feature Store, Feature Engineering, AutoML, Forecasting Pipelines, Data Management
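Illustrative sketch only, under assumed requirements: a toy in-memory feature store showing the point-in-time lookup that feature stores provide so AutoML and forecasting pipelines avoid data leakage. Class and method names are invented; production systems (Feast, cloud feature stores, etc.) add storage backends, SLAs, and lineage metadata on top of this idea.

from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class FeatureValue:
    value: float
    event_time: datetime  # when the fact became true in the real world

class ToyFeatureStore:
    def __init__(self):
        # (entity_id, feature_name) -> values appended in ingestion order
        self._rows = defaultdict(list)

    def ingest(self, entity_id: str, feature: str, value: float, event_time: datetime):
        self._rows[(entity_id, feature)].append(FeatureValue(value, event_time))

    def get_as_of(self, entity_id: str, feature: str, as_of: datetime):
        # Point-in-time join: latest value whose event_time is <= as_of.
        candidates = [fv for fv in self._rows[(entity_id, feature)] if fv.event_time <= as_of]
        return max(candidates, key=lambda fv: fv.event_time).value if candidates else None

store = ToyFeatureStore()
store.ingest("sku-42", "weekly_sales", 120.0, datetime(2024, 1, 7))
store.ingest("sku-42", "weekly_sales", 95.0, datetime(2024, 1, 14))

# A training row stamped 10 Jan must only see the 7 Jan value, never the later one.
print(store.get_as_of("sku-42", "weekly_sales", datetime(2024, 1, 10)))  # 120.0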

Posted 4 days ago

Apply
