
1489 Vertex Jobs - Page 3

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency.

Responsibilities
- Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
- Build frameworks to support deployment, configuration, and management across diverse cloud environments
- Develop and manage service catalog components, ensuring integration with platforms like Backstage
- Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
- Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
- Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
- Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
- Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
- Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
- Prepare and label data for generative AI models, ensuring scalability and integrity
- Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
- Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
- Evaluate AI model performance and ensure continuous optimization over time
- Develop and maintain MLOps pipelines to monitor and mitigate model decay
- Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
- Research and recommend new tools and best practices to enhance operational efficiency

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in cloud infrastructure automation, scripting, and DevOps
- Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
- Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
- Demonstrated background in developing and deploying AI models such as RAG or transformers
- Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
- Competency in preparing and labeling datasets for AI models and optimizing data inputs
- Familiarity with cloud platforms including AWS, Google Cloud, or Azure
- Capability to implement MLOps pipelines and monitor AI system performance

Nice to have
- Knowledge of agentic architectures such as ReAct and flow engineering techniques
- Background in using Bedrock Agents or LangGraph for workflow creation
- Understanding of integrating generative AI into legacy or complex operational systems

We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
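For illustration only, here is a minimal, framework-free Python sketch of the RAG retrieval step this posting describes; embed() and call_llm() are hypothetical stand-ins for a real embedding model and an LLM endpoint (e.g., one hosted on Bedrock, Vertex AI, or Azure AI):

```python
# Minimal retrieval-augmented generation (RAG) sketch, framework-free.
# embed() and call_llm() are hypothetical placeholders, not real service calls.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted LLM with this prompt.
    return "[model response would appear here]"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "Terraform provisions cloud infrastructure declaratively.",
    "Backstage provides a service catalog for internal platforms.",
    "Anomaly detection flags unusual metrics in AIOps pipelines.",
]
doc_vectors = [embed(d) for d in documents]

def answer(question: str, k: int = 2) -> str:
    q_vec = embed(question)
    ranked = sorted(range(len(documents)),
                    key=lambda i: cosine(q_vec, doc_vectors[i]), reverse=True)
    context = "\n".join(documents[i] for i in ranked[:k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What does Backstage provide?"))
```

In production, the document vectors would live in a vector store such as OpenSearch or Amazon Kendra rather than an in-memory list.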

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities
- Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
- Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms
- Create service catalog components compatible with automation platforms like Backstage
- Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation
- Architect CI/CD pipelines for automated build, test, and deployment processes
- Maintain deployment automation scripts utilizing technologies such as Python or Bash
- Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis
- Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions
- Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
- Engineer data pipelines to stream real-time operational insights that support AI-driven automation
- Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
- Select appropriate LLMs for specific AIOps use cases and integrate them effectively into workflows
- Collaborate with cross-functional teams to design and refine automation and AI-driven processes
- Research emerging tools and technologies to enhance operational efficiency and scalability

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
- Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
- Expertise in Python and generative AI patterns like RAG and agent-based workflows
- Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
- Familiarity with vector databases and search services like Amazon Kendra, OpenSearch, or custom solutions
- Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
- Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have
- Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools
- Understanding of comprehensive AIOps processes to refine cloud-based automation solutions

We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
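As a small illustration of the "avoiding model decay" responsibility above, the sketch below computes a Population Stability Index (PSI) between training-time and production score distributions; the data, bin strategy, and 0.2 threshold are illustrative assumptions, not part of the posting:

```python
# PSI drift check: one simple way to watch for model decay in production.
# Uses equal-width bins over the combined range for simplicity.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # scores at training time
live_scores = rng.normal(0.3, 1.1, 10_000)    # scores in production (shifted)
drift = psi(train_scores, live_scores)
print(f"PSI = {drift:.3f} -> {'investigate retraining' if drift > 0.2 else 'stable'}")
```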

Posted 3 days ago

Apply

7.0 years

0 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency.

Responsibilities
- Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
- Build frameworks to support deployment, configuration, and management across diverse cloud environments
- Develop and manage service catalog components, ensuring integration with platforms like Backstage
- Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
- Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
- Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
- Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
- Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
- Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
- Prepare and label data for generative AI models, ensuring scalability and integrity
- Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
- Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
- Evaluate AI model performance and ensure continuous optimization over time
- Develop and maintain MLOps pipelines to monitor and mitigate model decay
- Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
- Research and recommend new tools and best practices to enhance operational efficiency

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in cloud infrastructure automation, scripting, and DevOps
- Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
- Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
- Demonstrated background in developing and deploying AI models such as RAG or transformers
- Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
- Competency in preparing and labeling datasets for AI models and optimizing data inputs
- Familiarity with cloud platforms including AWS, Google Cloud, or Azure
- Capability to implement MLOps pipelines and monitor AI system performance

Nice to have
- Knowledge of agentic architectures such as ReAct and flow engineering techniques
- Background in using Bedrock Agents or LangGraph for workflow creation
- Understanding of integrating generative AI into legacy or complex operational systems

We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
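To illustrate the "agentic workflows" idea in this posting without tying it to LangGraph or Bedrock Agents specifically, here is a framework-free Python sketch of an agent loop; plan_next_step() and the two tools are hypothetical stubs standing in for an LLM planner and real operational actions:

```python
# Framework-free sketch of an agentic loop: a planner (stubbed) picks a tool,
# the loop runs it, and the observation is fed back until the goal is met.
def check_disk_usage(host: str) -> str:
    return f"{host}: 91% used"            # stubbed operational tool

def restart_service(host: str) -> str:
    return f"{host}: service restarted"   # stubbed operational tool

TOOLS = {"check_disk_usage": check_disk_usage, "restart_service": restart_service}

def plan_next_step(goal: str, history: list) -> dict:
    # Placeholder policy; a real agent would ask an LLM to choose the next tool.
    if not history:
        return {"tool": "check_disk_usage", "args": {"host": "web-01"}}
    if "91%" in history[-1]:
        return {"tool": "restart_service", "args": {"host": "web-01"}}
    return {"tool": None, "args": {}}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] is None:
            break
        history.append(TOOLS[step["tool"]](**step["args"]))
    return history

print(run_agent("remediate high disk usage on web-01"))
```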

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities
- Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
- Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms
- Create service catalog components compatible with automation platforms like Backstage
- Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation
- Architect CI/CD pipelines for automated build, test, and deployment processes
- Maintain deployment automation scripts utilizing technologies such as Python or Bash
- Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis
- Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions
- Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
- Engineer data pipelines to stream real-time operational insights that support AI-driven automation
- Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
- Select appropriate LLMs for specific AIOps use cases and integrate them effectively into workflows
- Collaborate with cross-functional teams to design and refine automation and AI-driven processes
- Research emerging tools and technologies to enhance operational efficiency and scalability

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
- Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
- Expertise in Python and generative AI patterns like RAG and agent-based workflows
- Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
- Familiarity with vector databases and search services like Amazon Kendra, OpenSearch, or custom solutions
- Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
- Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have
- Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools
- Understanding of comprehensive AIOps processes to refine cloud-based automation solutions

We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
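As a hedged sketch of the deployment-automation scripting this posting mentions, the Python below wraps the Terraform CLI with subprocess; it assumes the terraform binary is installed, that ./infra holds valid .tf configuration, and that an "environment" variable exists in that module (all assumptions, not details from the posting):

```python
# Sketch of scripting cloud deployment orchestration around the Terraform CLI.
import subprocess

def run(cmd: list[str], cwd: str) -> None:
    print(f"$ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)  # raise if any step fails

def deploy(workdir: str, env: str) -> None:
    run(["terraform", "init", "-input=false"], workdir)
    run(["terraform", "plan", "-input=false",
         f"-var=environment={env}", "-out=tfplan"], workdir)
    run(["terraform", "apply", "-input=false", "tfplan"], workdir)

if __name__ == "__main__":
    deploy("./infra", env="staging")  # hypothetical module path and variable
```

In a real pipeline these steps would typically run inside CI/CD jobs rather than an ad-hoc script, with the plan reviewed before apply.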

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and hands-on AI Architect to lead the design and deployment of next-generation AI systems for our cutting-edge platform. You will be responsible for architecting scalable GenAI and machine learning solutions, establishing MLOps best practices, and ensuring robust security and cost-efficient operations across our AI-powered modules.

Primary Skills:
• System architecture for GenAI: design scalable pipelines using LLMs, RAG, multi-agent orchestration (LangGraph, CrewAI, AutoGen).
• Machine-learning engineering: PyTorch or TensorFlow, Hugging Face Transformers.
• Retrieval & vector search: FAISS, Weaviate, Pinecone, pgvector; embedding selection and index tuning.
• Cloud infra: AWS production experience (GPU instances, Bedrock / Vertex AI, EKS, IAM, KMS).
• MLOps & DevOps: MLflow / Kubeflow, Docker + Kubernetes, CI/CD, Terraform.
• Security & compliance: data encryption, RBAC, PII redaction in LLM prompts.
• Cost & performance optimisation: token-usage budgeting, caching, model routing.
• Stakeholder communication: ability to defend architectural decisions to CTO, product, and investors.
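For the retrieval and vector-search skill listed above, a minimal FAISS example follows; the vectors are random stand-ins for real embeddings, and the flat L2 index is the simplest choice (IVF, HNSW, or a managed store like Weaviate or Pinecone would be tuned to corpus size and latency):

```python
# Minimal exact nearest-neighbour search with FAISS (pip install faiss-cpu).
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(42)
corpus = rng.standard_normal((1_000, dim)).astype("float32")  # document embeddings
query = rng.standard_normal((1, dim)).astype("float32")       # query embedding

index = faiss.IndexFlatL2(dim)   # exact L2 search, no tuning required
index.add(corpus)
distances, ids = index.search(query, 5)
print("nearest document ids:", ids[0])
print("distances:", distances[0])
```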

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: We are conducting an in-person hiring drive for the position of MLOps / Data Science in Pune and Bengaluru on 2nd August 2025. Interview locations are mentioned below:
Pune – Persistent Systems, Veda Complex, Rigveda-Yajurveda-Samaveda-Atharvaveda, Plot No. 39, Phase I, Rajiv Gandhi Information Technology Park, Hinjawadi, Pune, 411057
Bangalore – Persistent Systems, The Cube at Karle Town Center Rd, DadaMastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024

We are looking for an experienced and talented Data Science professional to join our growing data competency team. The ideal candidate will have a strong background in working with GenAI, ML, LangChain, LangGraph, MLOps architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.

Role: MLOps / Data Science
Job Location: All PSL locations
Experience: 5+ years
Job Type: Full-time employment

What You'll Do:
- Design, build, and manage scalable ML model deployment pipelines (CI/CD for ML).
- Automate model training, validation, monitoring, and retraining workflows.
- Implement model governance, versioning, and reproducibility best practices.
- Collaborate with data scientists, engineers, and product teams to operationalize ML solutions.
- Ensure robust monitoring and performance tuning of deployed models.

Expertise You'll Bring:
- Strong experience with MLOps tools and frameworks (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- Proficiency in containerization (Docker, Kubernetes).
- Good knowledge of cloud platforms (AWS, Azure, or GCP).
- Expertise in Python and familiarity with ML libraries (TensorFlow, PyTorch, scikit-learn).
- Solid understanding of CI/CD, infrastructure as code, and automation tools.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
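As a small illustration of the experiment tracking side of the MLOps tooling this role lists (MLflow among others), the sketch below logs parameters, a metric, and a model artifact to a local MLflow store; the dataset and hyperparameters are synthetic assumptions:

```python
# Minimal MLflow tracking sketch: one run, with params, a metric, and a model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```

By default this writes to a local ./mlruns directory; a team setup would point MLflow at a shared tracking server and model registry.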

Posted 3 days ago

Apply

0 years

0 Lacs

Kozhikode, Kerala, India

On-site

Pfactorial Technologies is a fast-growing AI/ML/NLP company at the forefront of innovation in generative AI, voice technology, and intelligent automation. We specialize in building next-gen solutions using LLMs, agent frameworks, and custom ML pipelines. Join our dynamic team to work on real-world challenges and shape the future of AI-driven systems and smart automation.

We are looking for an AI/ML Engineer – LLMs, Voice Agents & Workflow Automation (0–3 years' experience):
- Experience with LLM integration pipelines (OpenAI, Vertex AI, Hugging Face models)
- Hands-on experience with voice agents, TTS, STT, caching mechanisms, and ElevenLabs voice technology
- Strong understanding of vector databases like Qdrant or Milvus
- Hands-on experience with LangChain, LlamaIndex, or agent frameworks (e.g., AutoGen, CrewAI)
- Knowledge of FastAPI, Celery, and orchestration of ML/AI services
- Familiarity with cloud deployment on GCP, AWS, or Azure
- Ability to build and fine-tune matching, ranking, or retrieval-based models
- Experience developing agentic workflows for automation
- Experience implementing NLP pipelines for parsing, summarizing, and communication (e.g., email bots, script generators)
- Comfort working with graph-based data representation and integrating with frontends
- Experience with multi-agent collaboration frameworks like Google Agent2Agent
- Practical experience in data scraping and enrichment for ML training datasets
- Understanding of compliance in AI applications

👉 For more updates, follow us on our LinkedIn page! https://in.linkedin.com/company/pfactorial
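For the FastAPI service-orchestration skill above, here is a hedged sketch of exposing a ranking model as an HTTP endpoint; the service name, route, and rank_candidates() scoring are illustrative placeholders, not a real Pfactorial service:

```python
# Minimal FastAPI service wrapping a (stubbed) matching/ranking model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ranking-service")

class RankRequest(BaseModel):
    query: str
    candidates: list[str]

def rank_candidates(query: str, candidates: list[str]) -> list[str]:
    # Placeholder scoring by word overlap; real code would embed the texts
    # and query a vector database such as Qdrant or Milvus.
    q_words = set(query.lower().split())
    return sorted(candidates,
                  key=lambda c: -len(q_words & set(c.lower().split())))

@app.post("/rank")
def rank(req: RankRequest) -> dict:
    return {"ranked": rank_candidates(req.query, req.candidates)}

# Run with: uvicorn service:app --reload   (assuming this file is service.py)
```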

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs, and in hiring and nurturing Palantir Foundry talent.
- Training: should have experience in creating training programs on Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus.
- Palantir AIP experience is a plus.
- Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
- Hands-on experience working on and building Ontology (especially demonstrable experience in building semantic relationships).
- Proficiency in SQL, Python and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well.
- Hands-on experience with DevOps on hyperscaler platforms and Palantir Foundry is necessary; experience in MLOps is a plus.
- Experience in developing and managing scalable architecture and working with large data sets.
- Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
- Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX, etc.) is a plus.
- A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
- Experience in developing GenAI applications is a plus.

Mandatory Skill Sets
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services.

Preferred Skill Sets: Palantir Foundry
Years of Experience Required: 4 to 7 years (3+ years relevant)
Education Qualification: Bachelor's degree in computer science, data science or any other engineering discipline. Master's degree is a plus.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Science
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
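For the data transformation and data-quality work described above, here is a generic PySpark sketch written against open-source Spark; inside Palantir Foundry the same logic would live in a transform bound to Foundry datasets, and the table and columns here are made-up examples:

```python
# Generic PySpark refinement step: filter out bad rows, then aggregate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-refinement").getOrCreate()

orders = spark.createDataFrame(
    [("o1", "acme", 120.0), ("o2", "acme", None), ("o3", "globex", 75.5)],
    ["order_id", "customer", "amount"],
)

refined = (
    orders
    .filter(F.col("amount").isNotNull())          # simple data-quality check
    .groupBy("customer")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("order_id").alias("order_count"))
)
refined.show()
```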

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (position responsibilities and expectations)
- Designing and building analytical/DL/ML algorithms using Python, R, and other statistical tools.
- Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
- Ability to learn new scripting languages or analytics platforms.

Technical Skills Required (must have)
- Hands-on exposure to generative AI (design and development of GenAI applications in production).
- Strong understanding of RAG, vector databases, LangChain, and multimodal AI applications.
- Strong understanding of deploying and optimizing AI applications in production.
- Strong knowledge of statistical and data mining techniques like linear and logistic regression analysis, decision trees, bagging, boosting, time series, and non-parametric analysis.
- Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
- Strong knowledge of SQL and R/Python and experience with distributed data/computing tools and IDEs.
- Experience in advanced text analytics (NLP, NLU, NLG).
- Strong hands-on experience of end-to-end statistical model development and implementation.
- Understanding of LLMOps and MLOps for scalable ML development.
- Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
- Expert-level proficiency in algorithm-building languages like SQL, R, and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
- Exposure to cloud platform (Azure, AWS, or GCP) technologies and services like Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.

Technical Skills Required (any one or more)
- Experience in video/image analytics (computer vision).
- Experience in IoT/machine logs data analysis.
- Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME.
- Expertise in cloud analytics platforms (Azure, AWS or Google).
- Experience in process mining with expertise in Celonis or other tools.
- Proven capability in using generative AI services like OpenAI and Google (Gemini).
- Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
- Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA and PEFT techniques.
- Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.

Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3-6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Doctor of Philosophy, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Chatbots, Data Structures, Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, Analytical Thinking, C++ Programming Language, Communication, Complex Data Analysis, Creativity, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Embracing Change, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Learning Agility, Machine Learning {+ 25 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
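To illustrate the classical "boosting" and model-evaluation skills listed above, here is a small scikit-learn example on synthetic data; the dataset and hyperparameters are arbitrary choices for demonstration:

```python
# Gradient boosting classifier on synthetic data, with a standard evaluation report.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=15,
                           n_informative=6, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```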

Posted 3 days ago

Apply

10.0 years

0 Lacs

Chandigarh, India

On-site

Job Description:
- 7–10 years of industry experience, with at least 5 years in machine learning roles.
- Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, scikit-learn.
- Experience with distributed training, model optimization (quantization, pruning), and inference at scale.
- Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML.
- Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam.
- Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay).
- Excellent problem-solving, communication, and documentation skills.
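As a minimal sketch of the "quantization" model-optimization technique mentioned above, the PyTorch snippet below applies dynamic int8 quantization to a toy network; the layer sizes are illustrative, and real gains depend on the model and hardware:

```python
# Dynamic quantization of Linear layers to int8 with PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
with torch.no_grad():
    print("fp32 output:", model(x)[0, :3])
    print("int8 output:", quantized(x)[0, :3])  # close to fp32, smaller and often faster on CPU
```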

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (position responsibilities and expectations)
- Designing and building analytical/DL/ML algorithms using Python, R, and other statistical tools.
- Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
- Ability to learn new scripting languages or analytics platforms.

Technical Skills Required (must have)
- Hands-on exposure to generative AI (design and development of GenAI applications in production).
- Strong understanding of RAG, vector databases, LangChain, and multimodal AI applications.
- Strong understanding of deploying and optimizing AI applications in production.
- Strong knowledge of statistical and data mining techniques like linear and logistic regression analysis, decision trees, bagging, boosting, time series, and non-parametric analysis.
- Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
- Strong knowledge of SQL and R/Python and experience with distributed data/computing tools and IDEs.
- Experience in advanced text analytics (NLP, NLU, NLG).
- Strong hands-on experience of end-to-end statistical model development and implementation.
- Understanding of LLMOps and MLOps for scalable ML development.
- Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
- Expert-level proficiency in algorithm-building languages like SQL, R, and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
- Exposure to cloud platform (Azure, AWS, or GCP) technologies and services like Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.

Technical Skills Required (any one or more)
- Experience in video/image analytics (computer vision).
- Experience in IoT/machine logs data analysis.
- Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME.
- Expertise in cloud analytics platforms (Azure, AWS or Google).
- Experience in process mining with expertise in Celonis or other tools.
- Proven capability in using generative AI services like OpenAI and Google (Gemini).
- Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
- Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA and PEFT techniques.
- Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.

Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3-6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Doctor of Philosophy, Bachelor of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Chatbots, Data Structures, Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
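For the text analytics (NLP) skills listed above, a short Hugging Face pipeline example follows; it uses the library's default sentiment model (downloaded on first run), and the sample sentences are invented:

```python
# Sentiment analysis with the Hugging Face transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
reviews = [
    "The quarterly report was delivered ahead of schedule.",
    "The data pipeline keeps failing and nobody can explain why.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```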

Posted 3 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You'll Do: The Sales and Use Tax Analyst role within the Indirect Tax Center of Excellence, Pune will focus on critical areas such as US Sales and Use Tax compliance activities, tax research, account reconciliations, and MIS reporting to tax leadership. This is an individual contributor (IC) role.
- Perform thorough analysis of the returns and their workings with accuracy.
- Retrieve data and reconcile the returns with the respective ERP (Oracle, SAP) and Vertex.
- Support audit defense, including communication with external auditors.
- Communicate with business stakeholders for documentation and evidence.
- Create SOPs for any new takeovers and support in formalizing the process.
- Keep up to date with law and rule changes for each state relevant to Eaton.
- Analyse and review tax applicability for all Mark View invoices (non-PO invoices).
- Provide monthly updates and reviews of tax applicability through Checkpoint (services and their tax treatments).
- Contribute to initiatives for automation within the compliance process.
- Meet the targets as per the KPIs for Sales and Use Tax.
- Research and respond to sales and use tax questions with prompt and accurate replies.
- Perform activities related to other projects.

Qualifications
- Post-graduation in Accounting/MBA from a recognized university
- 3+ years of experience in the area of US Sales and Use Tax

Skills
- Working knowledge of Oracle, SAP, and other ERP systems preferred.
- Working knowledge of Vertex Returns or similar compliance software.
- Good accounting concepts.
- Good verbal and written communication skills.
- Self-starter; ability to manage multiple priorities and responsibilities while meeting time-sensitive deadlines.
- Attention to detail.
- Ability to work independently and within a team setting.
- Ability to deliver quality output under limited supervision.
- Strong analytical skills and quick learning in ERPs like SAP and Oracle for data retrieval.

Posted 3 days ago

Apply

10.0 - 16.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in SAP finance at PwC will specialise in providing consulting services for SAP finance applications. You will analyse client requirements, implement software solutions, and offer training and support for seamless integration and utilisation of SAP finance applications. Working in this area, you will enable clients to optimise financial processes, improve financial reporting, and achieve their strategic objectives.

Enhancing your leadership style, you motivate, develop and inspire others to deliver quality. You are responsible for coaching, leveraging team members' unique strengths, and managing performance to deliver on client expectations. With your growing knowledge of how business works, you play an important role in identifying opportunities that contribute to the success of our Firm. You are expected to lead with integrity and authenticity, articulating our purpose and values in a meaningful way. You embrace technology and innovation to enhance your delivery and encourage others to do the same.

Skills: Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Analyse and identify the linkages and interactions between the component parts of an entire system.
- Take ownership of projects, ensuring their successful planning, budgeting, execution, and completion.
- Partner with team leadership to ensure collective ownership of quality, timelines, and deliverables.
- Develop skills outside your comfort zone, and encourage others to do the same.
- Effectively mentor others.
- Use the review of work as an opportunity to deepen the expertise of team members.
- Address conflicts or issues, engaging in difficult conversations with clients, team members and other stakeholders, escalating where appropriate.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Job Summary: A career in our Managed Services team will give you an opportunity to collaborate with many teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Application Evolution Services team will provide you with the opportunity to help organizations harness the power of their enterprise applications by optimizing the technology while driving transformation and innovation to increase business performance. We assist our clients in capitalizing on technology improvements, implementing new capabilities, and achieving operational efficiencies by managing and maintaining their application ecosystems. We help our clients maximize the value of their SAP investment by managing the support and continuous transformation of their solutions in the areas of sales, finance, supply chain, engineering, manufacturing and human capital.

Minimum Degree Required (BQ): Bachelor's Degree
Required Field(s) of Study (BQ):
Preferred Field(s) of Study:
Minimum Year(s) of Experience (BQ): US minimum of 10-16 years of experience

Preferred Skills/Certification(s):
- SAP Certification in FICO/CFIN
- Experience in S/4HANA (Public Cloud)
- Exposure to interfaces like ALE/IDOC or EDI/IDOC with little technical knowledge

Preferred Knowledge: As a manager, you'll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to:
- Proactively assist in the management of a portfolio of clients, while reporting to Senior Managers and above; be involved in the financial metrics.
- Be actively involved in business development activities to help identify and research opportunities on new/existing clients.
- Contribute to the development of your own and the team's technical acumen.
- Use data and insights to inform conclusions and support decision-making.
- Adhere to SLAs, with experience in incident management, change management and problem management.
- Develop new skills and strategies to solve complex technical challenges.
- Assist in the management and delivery of large projects.
- Train, coach, and supervise staff to recognize their strengths and encourage them to take ownership of their personal development.
- Act to resolve issues which prevent the team working effectively.
- Keep up to date with local and national business and economic issues.
- Continue to develop internal relationships and the PwC brand.
- Build a strong team environment that includes client interactions, workstream management, and cross-team collaboration.
- Actively engage in cross-competency work and contribute to COE activities.
- Demonstrate project management skills, including the ability to manage multiple projects simultaneously while being detail oriented.

Technical Skills
- Responsible for planning and executing SAP implementation/development/support activities for SAP Finance and Controlling (FI-CO) along with Central Finance (CFIN).
- Understand client requirements, provide solutions and functional specifications, and configure the system accordingly.
- Ability to configure SAP FI-CO and CFIN and deliver work products/packages conforming to the client's standards and requirements.
- Integration of the FI-CO module with other SAP modules and with external applications.

Hands-on experience in configuring/defining the following in FICO/CFIN:
- SAP FI – General Ledger Accounting
- SAP FI – Accounts Receivable & Accounts Payable
- SAP FI – Asset Accounting
- SAP FI – Fixed Assets
- SAP CO – Cost Centers and Profit Centers
- SAP CO – Internal Orders
- SAP CO – Product Costing
- Master Data – GL, FA, CO, Consolidations
- Treasury Process – Master Data and Transactions
- Month End Close – Activities and foreign currency valuations
- Cost Management and Profitability Analysis – Financial Plan Data Upload File, FP&A, Margin Analysis, Overhead Cost Accounting, Universal Allocation
- Central Finance – Initial Loads
- Central Finance – Error Cockpit
- ICMR – Configuration, Matching Methods, Reconciliation Case, Matching Rules/Matching Expressions
- TAX – Indirect Tax – Tax Engine Vertex, US Sales and Tax Reporting, Exemption Certificate Management, S4 ProCo Alignment Vertex/Alteryx/SAP S4; VAT: Transactional Tax Determination and Tax Accounting in S4 (non-US VAT in different countries), Electronic Tax Invoicing using SAP DRC (for India and Mexico), Indirect Tax Reporting using SAP DRC (non-US VAT), SAP deferred tax transfer program
- TAX – Direct Tax – Income Tax Accounting (Provision), Income Tax Compliance, Withholding Tax, Tax Technology/Operations
- Transfer Pricing – Intercompany Cost Allocations, Intercompany Services, Cost Sharing, Reporting and Analytics
- Cash Basis Ledger – Data Transfer, Transaction Posting, Reports
- Interfaces

Delivery Lead Experience
- Constantly looks to identify impediments early, actively works to resolve them, and escalates when needed.
- Manages and tracks cross-team/squad dependencies.
- Hands-on experience working on reporting and preparing presentations as part of WSR & MSR.
- Manages and tracks all high-integrity commitments.
- Provides proactive visibility and effectively communicates delivery targets, commitments and progress.
- Works to minimize meetings and ceremonies, but when they are needed, they are well-run and efficient.
- Encourages a culture of team-driven decision making and commitment.
- Encourages team trust and facilitates team-building events.
- Where appropriate, coaches the teams to improve collaboration and outcomes (coaching is the primary responsibility of teams' managers).

Qualification
- Proficiency with SAP BTP (Business Technology Platform)
- Strong understanding of architecture considerations for SAP (cloud, on-premises, hybrid)
- Experience with SAP BTP security and authorization
- Ability to design new architectural frameworks and influence their execution
- Good knowledge of SAP S/4HANA architecture and functionality
- ITIL 4 Certification

Soft Skills
- Self-driven with a can-do attitude and excellent communication and client-facing skills
- Problem-solving mindset and ability to work in a collaborative environment
- Strong relationship builder within the organization and with external partners

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Summary: This role is responsible for designing and executing AI-enabled digitization initiatives within HR. You will work closely with HR leaders and cross-functional tech teams to translate manual and semi-automated processes into efficient, data-driven, AI-supported workflows. The person need not have an HR background, but a passion for process improvement, product thinking, and technical fluency are key.

Key Responsibilities:
- Understand current HR processes and identify areas for automation, AI adoption, and digitization.
- Collaborate with HR teams to gather requirements and design AI-first process maps (e.g., onboarding, talent acquisition, performance reviews).
- Build and prototype automation tools using low-code/no-code or custom-built solutions (ChatGPT API, workflow bots, etc.).
- Partner with internal tech teams to deploy and scale digitized HR solutions.
- Ensure successful implementation, adoption, and performance tracking of digitized processes.
- Maintain documentation of architecture, workflows, and use cases.
- Manage end-to-end HR tech projects with strong stakeholder communication and timelines.
- Benchmark best practices in HR tech and AI and bring relevant innovation ideas to the table.

Ideal Candidate Profile:
- Educational background: B.Tech / B.E / MCA or equivalent in Computer Science or a related technical field.
- Experience: 3–5 years in tech or product roles with direct exposure to AI/machine learning/automation projects.
- Strong grasp of AI tools and frameworks, e.g., OpenAI API, Python scripts, RPA (e.g., UiPath), Zapier, Typeform, etc.
- Proven experience working with cross-functional stakeholders and managing projects end-to-end.
- Excellent analytical and problem-solving skills; ability to work with ambiguity.
- Strong interest in improving people-related processes and employee experience.

Preferred:
- Exposure to HR or People Operations systems like ATS, HRMS, and L&D platforms is a bonus.
- Prior experience in a fast-paced product company/startup environment.
- Understanding of data privacy, compliance, and security best practices.

Tool exposure required (by category):
- AI & NLP: OpenAI API (ChatGPT), LangChain, Azure OpenAI, Google Vertex AI
- Automation (low-code/no-code): Zapier, Make (Integromat), Microsoft Power Automate, Workato
- Form & workflow builders: Typeform, Jotform, Google Forms + AppSheet, Airtable
- RPA & workflow engines: UiPath, Automation Anywhere, Robocorp
- Programming & scripting: Python (for automation, API integration), JavaScript (optional)
- Project management: Jira, Notion, Asana, Trello
- HR tech (optional but good to have): Darwinbox, SAP SuccessFactors, Keka, Zoho People, Freshteam
- API integration: REST APIs, Webhooks, Postman
- Data handling: Excel (advanced), Google Sheets, Pandas (Python), SQL basics
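As a hedged sketch of the kind of ChatGPT-API prototype this role describes, the snippet below drafts an onboarding checklist with the OpenAI Python client; the model name, prompt, and function are illustrative assumptions, and an OPENAI_API_KEY environment variable is required:

```python
# Prototype HR automation: draft an onboarding checklist via the OpenAI API
# (openai>=1.x client). Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_onboarding_checklist(role_title: str, department: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You write concise HR onboarding checklists."},
            {"role": "user",
             "content": f"Draft a 7-item onboarding checklist for a {role_title} joining {department}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_onboarding_checklist("Data Analyst", "People Analytics"))
```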

Posted 3 days ago

Apply

2.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Role: Machine Learning Engineer - MLOps Job Overview As a Senior Software Development Engineer, Machine Learning (ML) Operations in the Technology & Engineering division, you will be responsible for enabling PitchBook’s Machine Learning teams and practitioners by providing tools that optimize all aspects of the Machine Learning Development Life Cycle (MLDLC). Your work will support projects in a variety of domains, including Generative AI (GenAI), Large Language Models (LLMs), Natural Language Processing (NLP), Classification, and Regression. Team Overview Your team’s goal will be to reduce friction and time-to-business-value for teams building Artificial Intelligence (AI) solutions at PitchBook. You will be essential in helping to build exceptional AI solutions relied upon and used by thousands of PitchBook customers every day. You will work with PitchBook professionals around the world with the collective goal of delighting our customers and growing our business. While demonstrating a growth mindset, you will be expected to continuously develop your expertise in a way that enhances PitchBook’s AI capabilities in a scalable and repeatable manner. You will be able to solve various common challenges faced in the MLDLC while providing technical guidance to less experienced peers. Outline Of Duties And Responsibilities Serve as a force multiplier for development teams by creating golden paths that remove roadblocks and improve ideation and innovation Collaborate with other engineers, product managers, and internal stakeholders in an Agile environment Design and deliver on projects end-to-end with little to no guidance Provide support to teams building and deploying AI applications by addressing common painpoints in the MLDLC Learn constantly and be passionate about discovering new tools, technologies, libraries, and frameworks (commercial and open source), that can be leveraged to improve PitchBook’s AI capabilities Support the vision and values of the company through role modeling and encouraging desired behaviors. Participate in various cross-functional company initiatives and projects as requested. Contribute to strategic planning in a way that ensures the team is building exceptional products that bring real business value. Evaluate frameworks, vendors, and tools that can be used to optimize processes and costs with minimal guidance. Experience, Skills And Qualifications Degree in Computer Science, Information Systems, Machine Learning, or a similar field preferred (or commensurate experience) +2 years of experience in hands-on development of Machine Learning algorithms +2 years of experience in hands-on deployment of Machine Learning services +2 years of experience supporting the entire MLDLC, including post-deployment operations such as monitoring and maintenance +2 years of experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP) Experience with at least 80%: PyTorch, Tensorflow, LangChain, scikit-learn, Redis, Elasticsearch, Amazon SageMaker, Google Vertex AI, Weights & Biases, FastAPI, Prometheus, Grafana, Apache Kafka, Apache Airflow, MLflow, KubeFlow Ability to break large, complex problems into well-defined steps, ensuring iterative development and continuous improvement Experience in cloud-native delivery, with a deep practical understanding of containerization technologies such as Kubernetes and Docker, and the ability to manage these across different regions. 
Proficiency in GitOps and creation/management of CI/CD pipelines Demonstrated experience building and using SQL/NoSQL databases Demonstrated experience with Python (Java is a plus) and other relevant programming languages and tools. Excellent problem-solving skills with a focus on innovation, efficiency, and scalability in a global context. Strong communication and collaboration skills, with the ability to engage effectively with internal customers across various cultures and regions. Ability to be a team player who can also work independently Experience working across multiple development teams is a plus Working Conditions The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. Limited corporate travel may be required to remote offices or other business meetings and events. Morningstar India is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues. Legal Entity: Morningstar India Private Ltd. (Delhi)
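
For illustration only: the posting names FastAPI, Prometheus, and scikit-learn among the expected tools, so here is a hedged sketch of a minimal model-serving endpoint with a Prometheus latency histogram. The model path and feature schema are placeholders, not details of PitchBook's stack.

```python
# Hedged sketch: serve a pre-trained scikit-learn model behind FastAPI and
# expose a Prometheus /metrics endpoint. Run with: uvicorn app:app
import time

import joblib
import numpy as np
from fastapi import FastAPI
from prometheus_client import Histogram, make_asgi_app
from pydantic import BaseModel

app = FastAPI()
app.mount("/metrics", make_asgi_app())          # Prometheus scrape endpoint
LATENCY = Histogram("inference_latency_seconds", "Model inference latency")
model = joblib.load("model.joblib")             # placeholder artifact path


class Features(BaseModel):
    values: list[float]                         # placeholder feature vector


@app.post("/predict")
def predict(features: Features) -> dict:
    start = time.perf_counter()
    pred = model.predict(np.array([features.values]))[0]
    LATENCY.observe(time.perf_counter() - start)  # record per-request latency
    return {"prediction": float(pred)}
```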

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site

You might be a fit if you have:
● 5+ yrs production ML / data-platform engineering (Python or Go/Kotlin).
● Deployed agentic or multi-agent systems (e.g., micro-policy nets, bandit ensembles) and reinforcement-learning pipelines at scale (ad budget, recommender, or game AI); see the bandit sketch below.
● Fluency with BigQuery / Snowflake SQL & ML plus streaming (Kafka / Pub/Sub).
● Hands-on LLM fine-tuning using LoRA/QLoRA and proven prompt-engineering skills (system / assist hierarchies, few-shot, prompt compression).
● Comfort running GPU & CPU model serving on GCP (Vertex AI, GKE, or bare-metal K8s).
● Solid causal-inference experience (CUPED, diff-in-diff, synthetic control, uplift).
● CI/CD, IaC (Terraform or Pulumi) & observability chops (Prometheus, Grafana).
● Bias toward shipping working software over polishing research papers.
Bonus points for:
● Postal/geo datasets, ad-tech, or martech domain exposure.
● Packaging RL models as secure micro-services.
● VPC-SC, NIST, or SOC-2 controls in a regulated data environment.
● Green-field impact: architect the learning stack from scratch.
● Moat-worthy data: a 260M+ US consumer graph tying offline & online behavior.
● Tight feedback loops: your models go live in weeks, optimizing large amounts of marketing spend daily.
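
As referenced above, a minimal Thompson-sampling bandit sketch for budget allocation, assuming Bernoulli conversions and synthetic conversion rates; it shows the pattern, not the employer's production system.

```python
# Hedged sketch: Bernoulli Thompson-sampling bandit allocating ad budget
# across 3 creatives. The "true" conversion rates are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_arms = 3
true_cvr = np.array([0.02, 0.035, 0.05])   # unknown to the learner (synthetic)
alpha = np.ones(n_arms)                     # Beta posterior: successes + 1
beta = np.ones(n_arms)                      # Beta posterior: failures + 1

for _ in range(10_000):
    theta = rng.beta(alpha, beta)           # sample a CVR belief per arm
    arm = int(np.argmax(theta))             # spend the next unit of budget here
    reward = rng.random() < true_cvr[arm]   # observe a (simulated) conversion
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior mean CVR per arm:", alpha / (alpha + beta))
print("share of spend per arm:", (alpha + beta - 2) / 10_000)
```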

Posted 3 days ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

Remote

AI / Generative AI Engineer Location: Remote (Pan India). Job Type: Full-time. NOTE: Only immediate joiners or candidates with a notice period of 15 days or less will be considered. We are seeking a highly skilled and motivated AI/Generative AI Engineer to join our innovative team. The ideal candidate will have a strong background in designing, developing, and deploying artificial intelligence and machine learning models, with a specific focus on cutting-edge Generative AI technologies. This role requires hands-on experience with one or more major cloud platforms (Google Cloud Platform (GCP), Amazon Web Services (AWS)) and/or modern data platforms (Databricks, Snowflake). You will be instrumental in building and scaling AI solutions that drive business value and transform user experiences. Key Responsibilities Design and Development: Design, build, train, and deploy scalable and robust AI/ML models, including traditional machine learning algorithms and advanced Generative AI models (e.g., Large Language Models (LLMs), diffusion models). Develop and implement algorithms for tasks such as natural language processing (NLP), text generation, image synthesis, speech recognition, and forecasting. Work extensively with LLMs, including fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and evaluating their performance. Develop and manage data pipelines for data ingestion, preprocessing, feature engineering, and model training, ensuring data quality and integrity. Platform Expertise Leverage cloud AI/ML services on GCP (e.g., Vertex AI, AutoML, BigQuery ML, Model Garden, Gemini), AWS (e.g., SageMaker, Bedrock, S3), Databricks, and/or Snowflake to build and deploy solutions. Architect and implement AI solutions ensuring scalability, reliability, security, and cost-effectiveness on the chosen platform(s). Optimize data storage, processing, and model serving components within the cloud or data platform ecosystem. MLOps And Productionization Implement MLOps best practices for model versioning, continuous integration/continuous deployment (CI/CD), monitoring, and lifecycle management. Deploy models into production environments and ensure their performance, scalability, and reliability. Monitor and optimize the performance of AI models in production, addressing issues related to accuracy, speed, and resource utilization. Collaboration And Innovation Collaborate closely with data scientists, software engineers, product managers, and business stakeholders to understand requirements, define solutions, and integrate AI capabilities into applications and workflows. Stay current with the latest advancements in AI, Generative AI, machine learning, and relevant cloud/data platform technologies. Lead and participate in the ideation and prototyping of new AI applications and systems. Ensure AI solutions adhere to ethical standards, responsible AI principles, and regulatory requirements, addressing issues like data privacy, bias, and fairness. Documentation And Communication Create and maintain comprehensive technical documentation for AI models, systems, and processes. Effectively communicate complex AI concepts and results to both technical and non-technical audiences. Required Qualifications 8+ years of experience with software development in one or more programming languages, and with data structures/algorithms/data architecture. 3+ years of experience with state-of-the-art GenAI techniques (e.g., LLMs, Multi-Modal, Large Vision Models) or with GenAI-related concepts (language modeling, computer vision).
3+ years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging). Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related technical field. Proven experience as an AI Engineer, Machine Learning Engineer, or a similar role. Strong programming skills in Python. Familiarity with other languages like Java, Scala, or R is a plus. Solid understanding of machine learning algorithms (supervised, unsupervised, reinforcement learning), deep learning concepts (e.g., CNNs, RNNs, Transformers), and statistical modeling. Hands-on experience with developing and deploying Generative AI models and techniques, including working with Large Language Models (LLMs like GPT, BERT, LLaMA, etc.). Proficiency in using common AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Keras, Hugging Face Transformers, LangChain, etc. Demonstrable experience with at least one of the following cloud/data platforms: GCP: Experience with Vertex AI, BigQuery ML, Google Cloud Storage, and other GCP AI/ML services. AWS: Experience with SageMaker, Bedrock, S3, and other AWS AI/ML services. Databricks: Experience building and scaling AI/ML solutions on the Databricks Lakehouse Platform, including MLflow. Snowflake: Experience leveraging Snowflake for data warehousing, data engineering for AI/ML workloads, and Snowpark. Experience with data engineering, including data acquisition, cleaning, transformation, and building ETL/ELT pipelines. Knowledge of MLOps tools and practices for model deployment, monitoring, and management. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. (ref:hirist.tech)
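
Purely as a hedged illustration of the RAG pattern this posting emphasises: embeddings from the OpenAI SDK, a brute-force cosine-similarity retriever in NumPy, and a grounded completion. The document texts, model names, and toy index are assumptions; a production system would typically use the managed platforms listed above (Vertex AI, Bedrock, Databricks, Snowflake) and a real vector store.

```python
# Hedged sketch of retrieval-augmented generation (RAG) with a toy in-memory index.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Invoices are approved by the finance team within three business days.",
    "Refunds above $500 require a director's sign-off.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


doc_vecs = embed(docs)  # tiny brute-force "vector store"


def answer(question: str, k: int = 1) -> str:
    q_vec = embed([question])[0]
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(docs[i] for i in np.argsort(sims)[-k:])  # top-k passages
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
    )
    return chat.choices[0].message.content


if __name__ == "__main__":
    print(answer("Who must approve a $700 refund?"))
```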

Posted 4 days ago

Apply

9.0 - 13.0 years

0 Lacs

haryana

On-site

The role of Tax + L9 (Consultant) in the Strategy & Consulting Global Network CFO & Enterprise Value team requires a seasoned professional with over 9 years of experience in the domain. As a Team Lead/Consultant, you will be responsible for driving strategic initiatives and managing business transformations to deliver value-driven solutions leveraging your expertise. Your primary responsibilities will include providing strategic advisory services, conducting market research, and developing data-driven recommendations to enhance business performance. Collaborating with CFOs and finance organizations, you will help craft and implement strategies focused on digital disruption, new operating models, and best practices to drive market differentiation. In this role, you are expected to lead by example and drive solutions independently. Your proficiency in Microsoft PowerPoint, spreadsheets, and Power BI applications will be crucial, along with your ability to work effectively with multiple business process stakeholders. Strong analytical and problem-solving skills, excellent communication abilities, and cross-cultural competence are essential qualities to excel in this dynamic consulting environment. You will be required to have relevant experience in the domain, with hands-on experience in integration and tool implementations across platforms such as VAT, GST, SUT, WHT, Digital Compliance Reporting, SAP or Oracle ERP, and various tax technologies like Vertex O Series, OneSource, and SOVOS. Your deep understanding of the tax technology landscape, trends, and architecture, along with experience in transformation projects, will be instrumental in achieving project success. Joining our team offers you an opportunity to work on transformative projects with key G2000 clients, collaborate with industry experts, and shape innovative solutions using emerging technologies. You will have access to personalized training modules to enhance your consulting acumen, industry knowledge, and capabilities, while contributing to a culture committed to equality and boundaryless collaboration. This role presents a unique opportunity for career growth, leadership exposure, and the chance to work on innovative projects in a fast-paced environment. If you are a driven professional with a passion for strategy and consulting, we invite you to be a part of our dynamic team at Accenture.

Posted 4 days ago

Apply

15.0 years

0 Lacs

India

Remote

About Us QuillBot is an AI-powered writing platform on a mission to reimagine writing. QuillBot provides over 50 million monthly active users, including students, professionals, and educators, with free online writing and research tools to help them become more effective, productive, and confident. The QuillBot team was built on the idea that learning how to write and use that knowledge is empowering. They want to automate the more time-consuming parts of writing so that users can focus on their craft. Whether you're writing essays, social media posts, or emails, QuillBot has your back. It has an array of productivity-enhancing tools that are already changing the way the world writes. More recently, we were acquired by Course Hero, a 15-year-old Ed-Tech unicorn based out of California, now known as Learneo. Overview QuillBot is looking for a hands-on MLOps Manager to lead and scale our AI Engineering & MLOps function. This role blends deep technical execution (60%) with team and cross-functional collaboration (40%), and is ideal for someone who thrives in a dual IC + strategic lead position. You'll work closely with Research, Platform, Infra, and Product teams — not only to deploy models reliably, but also to accelerate experimentation, training, and iteration cycles. From infra support for large-scale model training to scaling low-latency inference systems in production, you'll be at the heart of how AI ships at QuillBot. Responsibilities Own the full ML lifecycle: from training infra and experiment tracking to deployment, observability, and optimization. Work closely with researchers to remove friction in training, evaluation, and finetuning workflows. Guide and mentor a small, mature team of engineers (3–4), while still contributing as an individual contributor. Drive performance optimization (latency, throughput, cost efficiency), model packaging, and runtime reliability. Build robust systems for CI/CD, versioning, rollback, A/B testing, monitoring, and alerting. Ensure scalable, secure, and compliant AI infrastructure across training and inference environments. Collaborate with cloud and AI providers (e.g., AWS, GCP, OpenAI) as needed to integrate tooling, optimize costs, and unlock platform capabilities. Contribute to other GenAI and cross-functional AI initiatives as needed, beyond core MLOps responsibilities. Contribute to architectural decisions, roadmap planning, and documentation of our AI engineering stack. Champion automation, DevOps/MLOps best practices, and technical excellence across the ML lifecycle. Qualifications 5+ years of strong experience in MLOps, ML/AI Engineering. Solid understanding of ML/DL fundamentals and applied experience in model deployment and training infra. Proficient with cloud-native ML tooling (e.g., GCP, Vertex AI, Kubernetes). Comfortable working on both training-side infra and inference-side systems. Good to have experience with model optimization techniques (e.g., quantization, distillation, FasterTransformer, TensorRT-LLM). Proven ability to lead complex technical projects end-to-end with minimal oversight. Strong collaboration and communication skills — able to work cross-functionally and drive technical clarity. Ownership mindset — comfortable making decisions and guiding others in ambiguous problem spaces. Benefits & Perks Competitive salary, stock options & annual bonus Medical coverage Life and accidental insurance Vacation & leaves of absence (menstrual, flexible, special, and more!)
Developmental opportunities through education & developmental reimbursements & professional workshops Maternity & parental leave Hybrid & remote model with flexible working hours On-site & remote company events throughout the year Tech & WFH stipends & new hire allowances Employee referral program Premium access to QuillBot Benefits and benefit amounts differ by region. A comprehensive list applicable to your region will be provided in your interview process. Research shows that candidates from underrepresented backgrounds often don't apply for roles if they don't meet all the criteria. We strongly encourage you to apply if you're interested: we'd love to learn how you can amplify our team with your unique experience! This role is eligible for hire in India. We are a virtual-first company and have employees dispersed throughout the United States, Canada, India and the Netherlands. We have a market-based pay structure that varies by location. The base pay for this position is dependent on multiple factors, including candidate experience and expertise, and may vary from the amounts listed. You may also be eligible to participate in our bonus program and may be offered benefits, and other types of compensation. #QuillBot Equal Employment Opportunity Statement (EEO) We are an equal opportunity employer and value diversity and inclusion within our company. We will consider all qualified applicants without regard to race, religion, color, national origin, sex, gender identity, gender expression, sexual orientation, age, marital status, veteran status, or ability status. We will ensure that individuals who are differently abled are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment as provided to other applicants or employees. Please contact us to request accommodation.
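
As a hedged illustration of the model-optimization work mentioned above (quantization), this sketch applies PyTorch dynamic int8 quantization to a toy classifier head and measures output drift; the architecture and sizes are assumptions, not QuillBot's serving stack.

```python
# Hedged sketch: dynamic int8 quantization of the Linear layers in a toy model,
# then a quick check of how much the outputs drift versus the float baseline.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2)).eval()

quantized = torch.quantization.quantize_dynamic(
    head, {nn.Linear}, dtype=torch.qint8  # quantize only Linear weights to int8
)

with torch.no_grad():
    x = torch.randn(8, 768)               # (batch, hidden) dummy activations
    drift = (head(x) - quantized(x)).abs().max().item()

print(f"max logit drift after int8 quantization: {drift:.4f}")
```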

Posted 4 days ago

Apply

15.0 years

0 Lacs

India

Remote

About Us QuillBot is an AI-powered writing platform on a mission to reimagine writing. QuillBot provides over 50 million monthly active users, including students, professionals, and educators, with free online writing and research tools to help them become more effective, productive, and confident. The QuillBot team was built on the idea that learning how to write and use that knowledge is empowering. They want to automate the more time-consuming parts of writing so that users can focus on their craft. Whether you're writing essays, social media posts, or emails, QuillBot has your back. It has an array of productivity-enhancing tools that are already changing the way the world writes. More recently, we were acquired by Course Hero, a 15-year-old Ed-Tech unicorn based out of California, now known as Learneo. Overview QuillBot is looking for a hands-on MLOps Manager to lead and scale our AI Engineering & MLOps function. This role blends deep technical execution (60%) with team and cross-functional collaboration (40%), and is ideal for someone who thrives in a dual IC + strategic lead position. You'll work closely with Research, Platform, Infra, and Product teams — not only to deploy models reliably, but also to accelerate experimentation, training, and iteration cycles. From infra support for large-scale model training to scaling low-latency inference systems in production, you'll be at the heart of how AI ships at QuillBot. Responsibilities Own the full ML lifecycle: from training infra and experiment tracking to deployment, observability, and optimization. Work closely with researchers to remove friction in training, evaluation, and finetuning workflows. Guide and mentor a small, mature team of engineers (3–4), while still contributing as an individual contributor. Drive performance optimization (latency, throughput, cost efficiency), model packaging, and runtime reliability. Build robust systems for CI/CD, versioning, rollback, A/B testing, monitoring, and alerting. Ensure scalable, secure, and compliant AI infrastructure across training and inference environments. Collaborate with cloud and AI providers (e.g., AWS, GCP, OpenAI) as needed to integrate tooling, optimize costs, and unlock platform capabilities. Contribute to other GenAI and cross-functional AI initiatives as needed, beyond core MLOps responsibilities. Contribute to architectural decisions, roadmap planning, and documentation of our AI engineering stack. Champion automation, DevOps/MLOps best practices, and technical excellence across the ML lifecycle. Qualifications 5+ years of strong experience in MLOps, ML/AI Engineering. Solid understanding of ML/DL fundamentals and applied experience in model deployment and training infra. Proficient with cloud-native ML tooling (e.g., GCP, Vertex AI, Kubernetes). Comfortable working on both training-side infra and inference-side systems. Good to have experience with model optimization techniques (e.g., quantization, distillation, FasterTransformer, TensorRT-LLM). Proven ability to lead complex technical projects end-to-end with minimal oversight. Strong collaboration and communication skills — able to work cross-functionally and drive technical clarity. Ownership mindset — comfortable making decisions and guiding others in ambiguous problem spaces. Benefits & Perks Competitive salary, stock options & annual bonus Medical coverage Life and accidental insurance Vacation & leaves of absence (menstrual, flexible, special, and more!)
Developmental opportunities through education & developmental reimbursements & professional workshops Maternity & parental leave Hybrid & remote model with flexible working hours On-site & remote company events throughout the year Tech & WFH stipends & new hire allowances Employee referral program Premium access to QuillBot Benefits and benefit amounts differ by region. A comprehensive list applicable to your region will be provided in your interview process. Research shows that candidates from underrepresented backgrounds often don't apply for roles if they don't meet all the criteria. We strongly encourage you to apply if you're interested: we'd love to learn how you can amplify our team with your unique experience! This role is eligible for hire in India. We are a virtual-first company and have employees dispersed throughout the United States, Canada, India and the Netherlands. We have a market-based pay structure that varies by location. The base pay for this position is dependent on multiple factors, including candidate experience and expertise, and may vary from the amounts listed. You may also be eligible to participate in our bonus program and may be offered benefits, and other types of compensation. #Learneo Equal Employment Opportunity Statement (EEO) We are an equal opportunity employer and value diversity and inclusion within our company. We will consider all qualified applicants without regard to race, religion, color, national origin, sex, gender identity, gender expression, sexual orientation, age, marital status, veteran status, or ability status. We will ensure that individuals who are differently abled are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment as provided to other applicants or employees. Please contact us to request accommodation. About Learneo Learneo is a platform of builder-driven businesses, including Course Hero, CliffsNotes, LitCharts, Quillbot, Symbolab, and Scribbr, all united around a shared mission of supercharging productivity and learning for everyone. We attract and scale high growth businesses built and run by visionary entrepreneurs. Each team innovates independently but has a unique opportunity to collaborate, experiment, and grow together, and they are supported by centralized corporate operations functions, including HR, Finance and Legal.

Posted 4 days ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
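
A hedged sketch of one responsibility above, monitoring deployed models to avoid decay: a Population Stability Index (PSI) check between training-time and recent production score distributions. The synthetic data and the 0.2 alert threshold are common heuristics and assumptions, not values from the posting.

```python
# Hedged sketch: PSI-based drift check an MLOps pipeline could run on a schedule.
import numpy as np


def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_recent - p_baseline) * ln(p_recent / p_baseline)) over bins."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip both samples into the baseline range so every point falls in a bin.
    p_base = np.histogram(np.clip(baseline, edges[0], edges[-1]), edges)[0] / len(baseline)
    p_new = np.histogram(np.clip(recent, edges[0], edges[-1]), edges)[0] / len(recent)
    p_base, p_new = np.clip(p_base, 1e-6, None), np.clip(p_new, 1e-6, None)
    return float(np.sum((p_new - p_base) * np.log(p_new / p_base)))


rng = np.random.default_rng(7)
baseline_scores = rng.normal(0.0, 1.0, 50_000)   # scores captured at training time
recent_scores = rng.normal(0.3, 1.1, 5_000)      # slightly shifted production scores

value = psi(baseline_scores, recent_scores)
print(f"PSI={value:.3f} ->", "alert / consider retraining" if value > 0.2 else "ok")
```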

Posted 4 days ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Chennai

Work from Office

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create and integrate service catalog components with automation platforms like Backstage Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation Architect and implement CI/CD pipelines for automated build, test, and deployment processes Build and maintain deployment automation scripts using technologies such as Python or Bash Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines for streaming real-time operational insights to support AI-driven automation Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows Collaborate with cross-functional teams to design and improve automation and AI-driven processes Continuously research emerging tools and technologies to improve operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions

Posted 4 days ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Coimbatore

Work from Office

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create and integrate service catalog components with automation platforms like Backstage Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation Architect and implement CI/CD pipelines for automated build, test, and deployment processes Build and maintain deployment automation scripts using technologies such as Python or Bash Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines for streaming real-time operational insights to support AI-driven automation Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows Collaborate with cross-functional teams to design and improve automation and AI-driven processes Continuously research emerging tools and technologies to improve operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a Data Scientist with a strong background in enterprise-scale machine learning, deep expertise in LLMs and Generative AI, and a clear understanding of the evolving Agentic AI ecosystem. The ideal candidate has hands-on experience developing predictive models, recommendation systems, and LLM-powered solutions, and is passionate about leveraging cutting-edge AI to solve complex enterprise challenges. This role will involve working closely with product, engineering, and business teams to design, build, and deploy impactful AI solutions that are both technically robust and business-aligned. The Core Responsibilities For The Job Include The Following ML and Predictive Systems Development: Design, develop, and deploy enterprise-grade machine learning models for recommendations, predictions, and personalization use cases. Work on problems such as churn prediction, intelligent routing, anomaly detection, and behavior modeling. Leverage techniques in supervised, unsupervised, and reinforcement learning as needed based on business context. LLMs And Generative AI Build and fine-tune LLM-based solutions (e.g., GPT, LLaMA, Claude, or open-source models) for tasks such as summarization, semantic search, document understanding, and copilots. Deliver production-ready GenAI projects, applying techniques like RAG (Retrieval-Augmented Generation), prompt engineering, fine-tuning, and vector search (e.g., FAISS, Pinecone, Weaviate). Collaborate with engineering to embed LLM workflows into enterprise applications, ensuring scalability and performance. Agentic AI And Ecosystem Engagement Contribute thought leadership and experimentation around Agentic AI architectures, task orchestration, memory management, tool integration, and decision autonomy. Stay ahead of trends in the open-source and commercial LLM/AI space, including LangChain, AutoGen, DSPy, and ADK-based systems. Develop internal PoCs or evaluate frameworks to assess viability for enterprise use. Collaboration And Delivery Work with cross-functional teams to identify AI opportunities and define technical roadmaps. Translate business needs into data science problems, define success metrics, and communicate results to stakeholders. Ensure model governance, monitoring, and explainability for AI systems in production. Requirements Master's or PhD in Computer Science, Data Science, Statistics, or related field. 5-8 years of experience in data science and ML, with strong enterprise project delivery experience. Proven success in building and deploying ML models and recommendation systems at scale. 2+ projects delivered involving LLMs and Generative AI, with hands-on experience in one or more of: OpenAI, Hugging Face Transformers, LangChain, Vector DBs, or model fine-tuning. Advanced Python programming skills and experience with ML libraries (e.g., Scikit-learn, XGBoost, PyTorch, TensorFlow). Experience with cloud-based ML/AI platforms (e.g., Vertex AI, AWS SageMaker, Azure ML). Strong understanding of system architecture, APIs, data pipelines, and model integration patterns. Preferred Qualifications Experience with Agentic AI frameworks and orchestration systems (LangChain, AutoGen, ADK, CrewAI). Familiarity with prompt optimization, tool chaining, task planning, and autonomous agents. Working knowledge of MLOps best practices, including model versioning, CI/CD for ML, and model monitoring. Strong communication skills and ability to advocate for AI-driven solutions across technical and non-technical teams.
Regular follower of AI research, open-source trends, and GenAI product developments. This job was posted by Akshay Kumar Arumulla from Softility.
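
For illustration of the enterprise ML work described above (e.g., churn prediction), a minimal scikit-learn sketch on synthetic data; the feature names, label logic, and model choice are assumptions rather than the employer's actual pipeline.

```python
# Hedged sketch: toy churn-prediction model on synthetic data with scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(1, 60, n),   # tenure_months (assumed feature)
    rng.poisson(3, n),        # support_tickets_90d (assumed feature)
    rng.normal(50, 20, n),    # monthly_spend (assumed feature)
])
# Synthetic label: churn is more likely with short tenure and many tickets.
logit = -0.05 * X[:, 0] + 0.4 * X[:, 1] - 0.01 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("holdout AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```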

Posted 4 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Attentive.ai Attentive.ai is a fast-growing vertical SaaS start-up, funded by PeakXV (Surge), Infoedge, and Vertex Ventures, that provides innovative software solutions for the construction, landscape, and paving industries in the United States. Our mission is to help businesses in this space improve their operations and grow their revenue through our simple & easy-to-use software platform. We're looking for a dynamic and driven leader to lead our Sales Development efforts and build a high-performance outbound engine. Job Description As the GTM Strategy & Ops lead, you will be at the heart of our sales, marketing, and customer success motion, designing and driving execution plans that accelerate revenue growth. You will define go-to-market strategies, build operational models, run analytics to measure performance, and collaborate with cross-functional teams to align execution with business goals. This is a high-impact role for someone who enjoys working across growth, operations, data, and product in a fast-paced, high-growth SaaS environment. Roles & Responsibilities Define and execute GTM strategies across sales, marketing, and customer success. Build dashboards and models to track key metrics across the revenue funnel. Identify growth levers and run data-driven experiments to improve performance. Optimize tools, processes, and workflows across the GTM stack (e.g., CRM, automation). Collaborate with cross-functional teams on strategic initiatives and special projects. Support sales enablement through insights, playbooks, and performance analysis. Drive alignment on ICP, messaging, and lead qualification across GTM functions. Requirements For The Role 5+ years of experience in B2B SaaS GTM / revenue operations / strategy roles. Experience in founders’ office, strategy consulting, or VC-backed tech startups. Strong understanding of sales, marketing, and CS workflows and tooling. Excellent analytical and problem-solving skills; highly data-driven. Structured communicator with the ability to influence senior stakeholders. Comfortable with ambiguity, bias for action, and a hustle mindset. Why work with us? Opportunity to work directly with founders and leadership on strategic problems. High ownership role with visibility and impact on company-wide decisions. Be part of a rocket ship startup that’s transforming a large, underserved industry. Backed by top-tier VCs: Peak XV (Sequoia), Vertex, InfoEdge.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies