6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description
Job Title: MLOps Engineer
Company: Aaizel International Technologies Pvt. Ltd.
Location: Gurugram
Experience Required: 6+ Years
Employment Type: Full-Time

About Aaizeltech
Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines.

Role Overview
This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines—from data ingestion to deployment and drift monitoring.

Key Responsibilities
MLOps Responsibilities:
Collaborate with data scientists to operationalize ML workflows.
Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow.
Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving.
Package models into Docker containers and expose them as APIs using Flask, FastAPI, or Django.
Automate dataset versioning and model tracking via DVC and MLflow.
Set up model registries and ensure reproducibility and audit trails.
Implement model monitoring for: (i) data drift and schema validation (using tools like Evidently AI, Alibi Detect); (ii) performance metrics (accuracy, precision, recall); (iii) infrastructure metrics (latency, throughput, memory usage).
Implement event-driven retraining workflows triggered by drift alerts or data freshness.
Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs.
Design and manage secure, scalable infrastructure using AWS, GCP, or Azure.
Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation.
Automate configuration management with Ansible, Chef, or SaltStack.
Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets).
Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki.

Must-Have Skills
Advanced expertise in Linux administration, networking, and shell scripting.
Strong knowledge of Docker, Kubernetes, and container security.
Hands-on experience with IaC tools like Terraform and configuration management tools like Ansible.
Proficiency in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, Load Balancing, Secrets Manager.
Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions).
Familiarity with SaaS architecture, distributed systems, and multi-environment deployments.
Proficiency in Python for scripting and ML-related deployments.
Experience integrating monitoring, alerting, and incident management workflows.
Strong understanding of DevSecOps, security scans (e.g., Trivy, SonarQube, Snyk), and secrets management tools (Vault, SOPS).
Experience with GPU orchestration and hybrid on-prem + cloud environments.

Nice-to-Have Skills
Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD).
Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server.
Familiarity with Knative, Cloud Run, or serverless ML deployments.
Exposure to cost estimation, rightsizing, and usage-based autoscaling.
Understanding of ISO 27001, SOC 2, or GDPR-compliant ML deployments.
Knowledge of RBAC for Kubernetes and ML pipelines.
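For the model-packaging responsibility above (exposing a trained model behind Flask or FastAPI inside a Docker container), a minimal FastAPI sketch follows. The artifact path, feature shape, and endpoint names are illustrative assumptions, not Aaizeltech's actual stack.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical artifact path; in practice the model would be baked into the
# image or pulled from a registry such as MLflow at startup.
MODEL_PATH = "model.joblib"
model = joblib.load(MODEL_PATH)

app = FastAPI(title="model-service")

class PredictRequest(BaseModel):
    features: list[float]  # e.g. [5.1, 3.5, 1.4, 0.2]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

@app.get("/healthz")
def healthz() -> dict:
    # Simple liveness-probe target for Kubernetes.
    return {"status": "ok"}
```

Containerizing this service is then a matter of a standard Python base image that installs fastapi, uvicorn, joblib, and scikit-learn and launches uvicorn against the module.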
Who You'll Work With
AI/ML Engineers, Backend Developers, Frontend Developers, QA Team, Product Owners, Project Managers, and external Government or Enterprise Clients.

How to Apply
If you are passionate about embedded systems and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com (Contact No.: 7302201247).
Posted 2 months ago
5.0 years
15 Lacs
Calicut
On-site
Key Responsibilities
● Prompt Design: Craft and continuously improve prompts for OpenAI, Anthropic, and other foundation models using few-shot, chain-of-thought, context-tuning, and other techniques for text-analytics and reasoning use cases.
● Evals & Experiments: Develop an evals strategy and build automated eval suites (precision, recall, cost, downstream impact) that run in CI and in production. Build and maintain data sets.
● Prompt Library Management: Stand up a versioned prompt repo, integrate context-injection patterns, and automate rollout/rollback.
● Drift & Performance Monitoring: Detect and guard against context or model shifts.
● Context Injection: Use retrieval-augmented generation (RAG) and vector search to inject context information and generate contextually accurate, grounded model responses. Use MCP to manage the way context is assembled, updated, and passed to the LLM.

Qualifications
● 5+ years of AI software experience. Experience with pre-LLM AI tech counts.
● Proven success using LLMs for text analytics or reasoning (not chatbots, style transfer, or safety tuning alone).
● Mastery of prompt-engineering techniques (few-shot, CoT, context, etc.) and hands-on experience with OpenAI / Anthropic APIs.
● Experience with fine-tuning or adapting foundation models via RLHF, instruction tuning, or domain-specific datasets.
● Proven experience using LLMs via APIs or local deployment (including OpenAI, Claude, Llama).
● Experience building evaluation pipelines for production-scale implementations using toolkits like AWS Bedrock.
● Strong eval chops—comfortable building custom benchmarks or using tools like OpenAI Evals, Braintrust, etc.
● Solid Python, plus the engineering rigor to wire up automated eval pipelines, data viewers, and model-selection logic.
● Strong written and verbal communicator who can explain trade-offs to both engineers and product leaders.

Job Type: Full-time
Pay: Up to ₹1,500,000.00 per year
Benefits: Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus
Experience: LLM: 4 years (Preferred); LLMs for text analytics or reasoning: 3 years (Preferred); AI: 2 years (Preferred)
Work Location: In person
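To make the "automated eval suites (precision, recall, cost, downstream impact)" bullet concrete, here is a minimal sketch of a few-shot prompt plus eval loop, assuming the openai>=1.0 Python SDK; the prompt, labels, model name, and sample data are hypothetical placeholders rather than the team's actual assets.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_PROMPT = """Classify the sentiment of the text as POSITIVE or NEGATIVE.

Text: "The rollout went smoothly." -> POSITIVE
Text: "Latency doubled after the release." -> NEGATIVE
Text: "{text}" ->"""

def classify(text: str, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": FEW_SHOT_PROMPT.format(text=text)}],
    )
    return resp.choices[0].message.content.strip().upper()

def evaluate(dataset: list[tuple[str, str]]) -> dict:
    # Treat POSITIVE as the positive class and compute precision / recall.
    tp = fp = fn = 0
    for text, expected in dataset:
        predicted = classify(text)
        if predicted == "POSITIVE" and expected == "POSITIVE":
            tp += 1
        elif predicted == "POSITIVE" and expected == "NEGATIVE":
            fp += 1
        elif predicted == "NEGATIVE" and expected == "POSITIVE":
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

if __name__ == "__main__":
    sample = [("Great response quality.", "POSITIVE"),
              ("The model hallucinated a citation.", "NEGATIVE")]
    print(evaluate(sample))
```

In practice such a harness would also log per-call token cost and run in CI against a versioned dataset, which is the pattern the responsibilities above describe.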
Posted 2 months ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Core Purpose @ Tyroo: To create a positive impact in the world by helping businesses scale anywhere, be more successful, compounding wealth and generating resources for the communities they serve to experience dignity and respect.

Problem Statement: APAC is poised to contribute 30% of the global digital ad market, while digital media businesses struggle to scale it beyond 5% of their global revenue due to a lack of effective in-market sales and ad product localisation capabilities. We at Tyroo are building the Largest Monetisation Platform of Digital Media in APAC by 2027.

Tyroo is the preferred APAC market entry & expansion partner for global internet companies looking to grow in Asia. Currently we are partners to large internet companies (Commission Junction, Outbrain, Criteo, Zemanta, TCL, Phillips, Hisense, etc.) through exclusive monetisation or technology relationships. Are you ready to join a fast-growing, hyper-focused company?

Core Customers:
- Digital media businesses we help monetize and make successful, i.e. our partners or publishers, are our core customers.
- Publisher NPS is our north star metric, which we aim to enhance and improve every single day.
- Key markets for our growth are spread across Greater China, Korea, Japan, ANZ, South East Asia, the Middle East (KSA & UAE) and India, with active, focused teams managed from the Singapore HQ.

Reporting to: CEO (with dotted line to COO)
Function: Cross-functional Strategy, Ops Cadence, Revenue Program Management
Level: 3–6 years of experience

About The Role: As Strategic Ops Lead, you’ll work directly with the CEO and leadership team to drive business rhythm, ensure execution of key strategic initiatives, and translate high-level goals into measurable outcomes across teams. You’ll be the connective tissue between strategy and execution—bringing structure, clarity, and accountability to how Tyroo scales. This role is ideal for someone who has worked in strategy consulting, business operations, or founder’s office roles—and is now looking to build and scale high-leverage commercial initiatives in a fast-moving adtech business.

Key Responsibilities
1. Strategic Cadence + Board Alignment
- Own the KPI dashboard tied to the board-approved AOP (input/output tracking)
- Run weekly/monthly business reviews with leadership and BD teams
- Prepare strategic content, analysis, and talking points for CEO/Board updates
2. Initiative Execution & PMO
- Drive cross-functional execution of strategic priorities (e.g. India GTM, Tyroo.TV launch, CJ setup)
- Track, escalate, and unblock key projects with clear owners + timelines
- Build systems to reduce execution drift and increase internal accountability
3. Commercial Programs + Rev Ops
- Work with revenue teams (CJ, Tyroo, TV) to identify performance gaps and interventions
- Partner with Finance/RevOps to turn reporting into insights and actions
- Coordinate sales & publisher BD priorities against AMJ and JAS targets
4. Strategic Support to CEO & COO
- Be a thought partner in refining GTM models, market entry plans, and strategic bets
- Support in modeling, narrative-building, and external communication
- Operate like a 1-person SWAT team when needed for high-impact initiatives

Ideal Profile
- 3–6 years in strategy consulting, business operations, or founder’s office roles
- Strong analytical + communication skills; high comfort with dashboards & decks
- Structured thinker who loves ambiguity but knows how to bring clarity
- Proven ability to run cadences, cross-team alignment, and internal PMO
- High trust profile—confidentiality, speed, and CEO-level proximity experience
- Bonus: Experience in adtech, affiliate, or B2B SaaS environments

Why Would You Apply?
- High visibility: Work directly with the founder/CEO + exec team
- Strategic exposure: Own projects that move revenue, product, and market entry
- Career growth: Step into a leadership pipeline role in one of the fastest-growing adtech businesses in Asia

Interview Process
First Round - HR - Culture fitment, skills evaluation
Project Round - Strategy note, problem statement modelling
Final Round - Founders Round - Interaction/discussion on the Project Round
Posted 2 months ago
4.0 years
0 Lacs
India
Remote
Job Post: AI/ML Engineer
Experience: 4+ years
Location: Remote

Key Responsibilities:
Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
Work with Data Scientists to productionize ML models and support experimentation workflows.
Implement model monitoring and alerting for drift, performance degradation, and data quality issues.
Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker.
Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar.
Ensure proper security, versioning, and compliance across the ML lifecycle.
Maintain documentation, artifacts, and reusable templates for reproducibility and auditability.
Having a GCP MLE certification is a plus.
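As a sketch of the "ML pipelines using Kubeflow Pipelines, TFX, or Vertex AI Pipelines" responsibility, the snippet below assumes the KFP v2 SDK; component bodies, names, and the output path are placeholders. The compiled spec could then be submitted to Vertex AI Pipelines (for example via google-cloud-aiplatform's PipelineJob), although that submission step is omitted here.

```python
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def preprocess(rows: int) -> int:
    # Placeholder transformation standing in for real data preparation.
    return rows * 2

@dsl.component(base_image="python:3.11")
def train(rows: int) -> str:
    # Placeholder training step; a real component would fit and export a model.
    return f"trained on {rows} rows"

@dsl.pipeline(name="toy-training-pipeline")
def pipeline(rows: int = 100):
    prep = preprocess(rows=rows)
    train(rows=prep.output)  # wire the preprocessing output into training

if __name__ == "__main__":
    # Emit a pipeline spec that a KFP or Vertex AI Pipelines backend can run.
    compiler.Compiler().compile(pipeline, "pipeline.json")
```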
Posted 2 months ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Working as an AI/ML Engineer at Navtech, you will:
Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
Work closely with data engineers to develop scalable and reliable data pipelines.
Experiment with different algorithms and techniques to improve model performance.
Monitor and maintain production ML models, including retraining and model drift detection.
Collaborate with software engineers to integrate ML models into applications and services.
Document processes, experiments, and decisions for reproducibility and transparency.
Stay current with the latest research and trends in machine learning and AI.

Who Are We Looking for, Exactly?
2-4 years of hands-on experience in building and deploying ML models in real-world applications.
Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
Familiarity with version control (Git) and basic software engineering practices.
Strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
A Master's or Bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education in English medium throughout.

We'll REALLY Love You If You:
Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
Have knowledge of GenAI prompting and hosting of LLMs.
Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
Have exposure to deep learning and neural network architectures.
Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
Performance review and appraisal twice a year.
Competitive pay package with additional bonus & benefits.
Work with US, UK & Europe-based, industry-renowned clients for exponential technical growth.
Medical insurance cover for self & immediate family.
Work with a culturally diverse team.

About Us: Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner.
You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. (ref:hirist.tech)
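As a minimal illustration of the build-train-evaluate loop this role describes, here is a scikit-learn sketch on a bundled toy dataset; a real project would substitute the product's own features, preprocessing, and evaluation criteria.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy binary-classification dataset shipped with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scaling + regularized model in one Pipeline keeps preprocessing
# reproducible at inference time.
clf = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)

# Precision, recall, and F1 per class on the held-out split.
print(classification_report(y_test, clf.predict(X_test)))
```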
Posted 2 months ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Mathematics & Statistics: Advanced knowledge of probability, statistics, and linear algebra. Expertise in statistical modelling, hypothesis testing, and experimental design.
Machine Learning and AI: 4+ years of hands-on experience with GenAI applications using the RAG approach, vector databases, and LLMs. Hands-on experience with LLMs (Google Gemini, OpenAI, Llama, etc.), LangChain, LlamaIndex for context-augmented generative AI, Hugging Face Transformers, knowledge graphs, and vector databases. Advanced knowledge of RAG techniques is required, including expertise in hybrid search methods, multi-vector retrieval, Hypothetical Document Embeddings (HyDE), self-querying, query expansion, re-ranking, and relevance filtering.
Strong proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, scikit-learn, SciPy, and pandas, and high-level APIs like Keras, is essential.
Advanced NLP skills, including Named Entity Recognition (NER), dependency parsing, text classification, and topic modeling.
In-depth experience with supervised, unsupervised, and reinforcement learning algorithms.
Proficiency with machine learning libraries and frameworks (e.g. scikit-learn, TensorFlow, PyTorch, etc.).
Knowledge of deep learning and natural language processing (NLP).
Hands-on experience with feature engineering and exploratory data analysis.
Familiarity and experience with explainable AI, model monitoring, and data/model drift.
Proficiency in programming languages such as Python.
Experience with relational (SQL) and vector databases.
Skilled in data wrangling, cleaning, and preprocessing of large datasets.
Experience with natural language processing (NLP) and natural language generation (NLG).
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
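To illustrate the retrieval step that underpins the RAG techniques listed above, here is a minimal dense-retrieval sketch assuming the sentence-transformers package; the embedding model name, corpus, and query are placeholders, and a production system would add the hybrid search, re-ranking, and relevance filtering the posting mentions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny illustrative corpus; real systems index documents in a vector database.
corpus = [
    "The facility agreement was amended in March.",
    "Quarterly covenant testing is due within 45 days of period end.",
    "The pricing grid is linked to the leverage ratio.",
]
query = "When must covenants be tested?"

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vecs = model.encode(corpus, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ query_vec          # cosine similarity (vectors are normalized)
top_k = np.argsort(-scores)[:2]        # indices of the best-matching passages

# Inject the retrieved passages as grounding context for the generator.
context = "\n".join(corpus[i] for i in top_k)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```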
Posted 2 months ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who is Forcepoint? Forcepoint simplifies security for global businesses and governments. Forcepoint’s all-in-one, truly cloud-native platform makes it easy to adopt Zero Trust and prevent the theft or loss of sensitive data and intellectual property no matter where people are working. 20+ years in business. 2.7k employees. 150 countries. 11k+ customers. 300+ patents. If our mission excites you, you’re in the right place; we want you to bring your own energy to help us create a safer world. All we’re missing is you!

Marketing Automation Specialist

Core Job Duties & Responsibilities

Marketing Automation
Define and prioritize optimizations for HubSpot including campaign setup, execution, tracking, lead management, scoring and routing, nurture, targeting, and audience segmentation.
Build and monitor the backend automation of landing pages, emails, webinars, segmentation, ABM, nurture programs, UTM parameters, events, reporting, workflows, and digital programs.
Ensure successful integration and strategic utilization of tools and technologies.
Support marketing campaigns with effective data segmentation, audience analysis, and list building.
Understand and implement lead routing rules in LeanData.
Familiarity with Allocadia or another budgeting tool, and its configuration and integration with SF & SAP, is a plus.
The webchat tool Drift is an important lead generation tool; knowledge of it and the ability to tune it regularly are important.
The BDR team uses Outreach, an outbound cadence and calendar tool.
Bizible, although currently in house, will be replaced by HubSpot analytics.
Knowledge of the 6sense ABM platform, to automate in-profile prospect identification and refine our outbound efforts, would be helpful.

Database Management
Manage the marketing database including data cleansing, augmentation, and normalization.
Understand and report on the health of the HubSpot database (life cycle, lead status, industry, titles, personas, gaps/areas for growth).
Perform list imports and identify areas of improvement.
Understand and ensure compliance in accordance with global privacy standards/laws.

Process and Documentation
Provide training to marketing and sales teams on marketing operations processes, tools, and technologies.
Optimize, scale, and document processes following marketing and sales best practices.

Job Qualifications
Bachelor’s degree required.
Marketing experience with 3-5 years of marketing automation experience, preferably HubSpot.
HubSpot certified associate or expert status is a plus.
Experience establishing and maintaining system documentation and training materials.
Demonstrated Excel experience (v-lookups, pivot tables, deduping, basic data analysis).
Comprehensive understanding of the end-to-end process of the B2B marketing funnel.
Experience working with Salesforce or another CRM (preferred).
Strong team player who works well with counterparts from various functions/departments (regional and global marketing teams, sales operations, business intelligence, and IT).
Strong analytical and problem-solving skills.
Demonstrated ability to build new processes and workflows to scale.
Effective time management skills and the ability to juggle multiple projects and tight deadlines.
Proactive, takes initiative, and asks the right questions.
Maintains a data-driven approach through continual analysis and optimization.

Don’t meet every single qualification? Studies show people are hesitant to apply if they don’t meet all requirements listed in a job posting.
Forcepoint is focused on building an inclusive and diverse workplace – so if there is something slightly different about your previous experience, but it otherwise aligns and you’re excited about this role, we encourage you to apply. You could be a great candidate for this or other roles on our team. The policy of Forcepoint is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to affirmatively seek to advance the principles of equal employment opportunity. Forcepoint is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by sending an email to recruiting@forcepoint.com. Applicants must have the right to work in the location to which you have applied.
Posted 2 months ago
5.0 years
4 - 7 Lacs
Cochin
On-site
5 - 7 Years
1 Opening
Kochi

Role Description
UST is looking for an Adobe Marketo Engineer (Marketing Automation Specialist) with the below requirements: We are seeking a highly motivated and detail-oriented Marketing Automation Specialist with hands-on experience in Adobe Marketo and Salesforce CRM integration. You will be responsible for designing, executing, and optimizing multi-channel marketing campaigns, managing lead lifecycles, and ensuring seamless data flow between Marketo and Salesforce to drive business growth and marketing ROI.

Key Responsibilities:
Marketo Campaign Management: Design, build, and execute email campaigns, nurture programs, landing pages, and forms within Adobe Marketo.
Integration & Data Management: Maintain and optimize the integration between Marketo and Salesforce, ensuring accurate and timely data synchronization, lead scoring, and campaign attribution.
Lead Lifecycle Management: Build and manage lead scoring models, lead routing rules, and workflows that align marketing and sales efforts.
Reporting & Analytics: Collaborate with stakeholders to track campaign performance, provide insights on funnel metrics, and recommend data-driven improvements.

Qualifications:
3+ years of experience in marketing automation, preferably in B2B SaaS or tech environments.
Proven experience with Adobe Marketo (certification a plus).
Strong working knowledge of Salesforce CRM and how it integrates with Marketo.
Familiarity with campaign attribution, lead scoring models, and lifecycle stages.
Ability to troubleshoot sync issues and perform data hygiene tasks.
Proficient in using tokens, segmentation, smart lists, and reporting in Marketo.
Understanding of HTML/CSS for email formatting (preferred).
Excellent communication, project management, and collaboration skills.

Preferred Tools & Skills:
Marketo Certified Expert (MCE)
Experience with Salesforce Process Builder / Flows
Familiarity with other MarTech tools like Bizible, Drift, ZoomInfo, or Salesloft
Knowledge of SQL or reporting tools (e.g., Tableau, Power BI) is a plus

Skills: Adobe Marketo and Salesforce CRM integration

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
Posted 2 months ago
2.0 years
0 Lacs
Hyderābād
On-site
We are seeking a highly skilled Data Scientist (LLM/Gen AI Engineer) to join our AI and Machine Learning team. In this role, you will focus on the research, development, and deployment of Large Language Models (LLMs) and Generative AI solutions to solve real-world problems and enhance intelligent applications. You will work closely with cross-functional teams including data scientists, machine learning engineers, and product managers to build scalable, production-ready Gen AI systems. Key Responsibilities: Research, fine-tune, and deploy Large Language Models (LLMs) such as GPT, LLaMA, Mistral, or similar. Design and implement end-to-end Gen AI pipelines for tasks such as summarization, question answering, code generation, and retrieval-augmented generation (RAG). Preprocess and curate large datasets for training and evaluation of language models. Optimize models for efficiency, accuracy, and scalability using techniques like quantization, distillation, and model pruning. Integrate LLM-based solutions with existing products, APIs, and user-facing applications. Evaluate model performance using metrics like BLEU, ROUGE, perplexity, and human evaluation. Stay up-to-date with the latest trends in AI research, LLM architecture, and Gen AI tools. Collaborate with engineers to scale models in production and monitor model drift or degradation. Document experiments, model behaviors, and results to guide reproducibility and future development. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or related field. 2–5+ years of experience working as a Data Scientist, ML Engineer, or AI Researcher. Hands-on experience with Large Language Models (OpenAI GPT, PaLM, Claude, LLaMA, Mistral, etc.). Proficient in Python and popular ML/NLP libraries (e.g., PyTorch, TensorFlow, Hugging Face Transformers, LangChain). Deep understanding of NLP, tokenization, embeddings, transformers, and attention mechanisms. Familiarity with prompt engineering and fine-tuning techniques (LoRA, PEFT, etc.). Experience deploying models using cloud platforms (AWS, GCP, Azure) and container tools (Docker, Kubernetes). Strong analytical, problem-solving, and communication skills. Job Types: Full-time, Permanent Pay: Up to ₹30,000.00 per month Benefits: Flexible schedule Schedule: Day shift Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Mention your current location Work Location: In person
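As a concrete reference for the "fine-tuning techniques (LoRA, PEFT, etc.)" requirement above, here is a minimal sketch of attaching LoRA adapters with the Hugging Face transformers and peft packages; the base model name and hyperparameters are illustrative assumptions, not a recommended configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "facebook/opt-350m"  # placeholder small causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # used later to build the training set
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, the wrapped model plugs into a standard transformers Trainer loop.
```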
Posted 2 months ago
0 years
0 Lacs
Haryana
On-site
Are you interested in driving the administrative HR processes and core tasks such as recruitment, salary negotiation, and management sparring? Are you also a good communicator who can serve and advise managers and support coherent, stable operations in a diverse organization under close political scrutiny? Then you may be the new colleague we are looking for as of 1 September 2025.

Your areas of responsibility
You will handle all parts of the personnel administration tasks in close cooperation with your colleagues in the HR unit. You will become part of a team where collaboration and ongoing peer sparring are a natural part of everyday work. Through your role, you will have a broad interface across the Danish Prison and Probation Service (Kriminalforsorgen). The tasks are many and varied, but may include:
Serving and advising managers in all phases of the hiring process, covering both collective-agreement appointments and civil-servant appointments. This includes, among other things, sparring on the content of job postings, handling salary negotiations with the organizations entitled to negotiate, interpreting collective agreements, and preparing letters of appointment.
A wide range of administrative HR tasks, such as transfer letters, senior arrangements, administration of maternity/parental and other leave, consultation procedures, and the like.
Contributing to the ongoing development and optimization of workflows and processes, and helping to ensure high data quality in the management information used for ongoing management sparring.
Depending on your competencies, interests, and ambitions, it will also be possible to take on other larger and smaller projects, deliverables, and tasks within the HR area. You may also become responsible for the administrative tasks relating to our trainees, who are in training to become, among other things, prison officers and workshop supervisors.

About you
You have a relevant education, preferably supplemented with several years of HR experience. It is a particular advantage if you have experience with interpreting collective agreements, salary negotiations, and cooperation with unions and shop stewards, as well as with pay and employment terms in the Danish state sector. You have a good head for numbers and a flair for adopting new IT systems and optimizing workflows. You can familiarize yourself with executive orders, collective agreements, and other sets of rules, and you approach new tasks with curiosity and interest. Providing targeted service and delivering high quality in your work should come naturally to you. You communicate clearly and constructively, both orally and in writing.

Pay and terms of employment
Employment is in accordance with the applicable collective agreement. We have flexible working hours, and you have the option of working from home when it suits the tasks, as we value a strong professional community and close collegial contact. The area of employment is Kriminalforsorgen, and the place of work is the Area Office in Aarhus, Mosevej 5B, 8240 Risskov. You should expect occasional visits to institutions throughout Kriminalforsorgen Vest. A number of positions are open, with an average working week of 37 hours. The positions are to be filled on 1 September 2025 or by further agreement.

Application
You apply for the position via the link "Søg stillingen" (Apply for the position). Your application with CV, documentation of education, and other relevant attachments must be received no later than 19 June 2025. Candidates will be selected immediately after the application deadline, and interviews will be held in week 25, 2025. The first interview is expected to take place on 24 June 2025 and the second interview in week 27, 2025.
It is a condition of employment that, in connection with the interview, you give your consent for Kriminalforsorgen to obtain and assess information about you from the Central Criminal Register. Before employment, you must also submit documentation for your CV, e.g. proof of previous employment. If you are invited to an interview, you must bring photo identification, e.g. a passport or driving licence. If you have further questions, you are welcome to contact Unit Manager Jeanne Dybbøl on tel. 7255 8258 or by e-mail: Jeanne.Dybbol@krfo.dk.

About Kriminalforsorgen Vest
Kriminalforsorgen Vest is one of two geographical prison and probation areas nationwide and has approximately 2,300 employees. The area office is a staff function for our prisons, remand prisons, community corrections units (including electronic-monitoring units), halfway-house prisons, and a departure centre, geographically located in Jutland and on Funen. The area office handles tasks in close contact with the institutions within, among other things, HR and personnel administration, financial management, daily capacity and occupancy management, client case processing, resocialization, and security. For further information about Kriminalforsorgen, see the website www.kriminalforsorgen.dk. All interested candidates, regardless of personal background, are encouraged to apply.
Posted 2 months ago
10.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
Urgent Hiring || Principal Thermal Engineer || Ghaziabad
Profile: Principal Thermal Engineer
Location: Sahibabad, next to Ghaziabad
Experience: 10+ years
Salary: Max 20 LPA (depends on the interview)

Key Responsibilities:
Thermal System Design & Optimization: Perform advanced thermal calculations to optimize heat exchangers, cooling towers, and energy recovery systems. Develop thermodynamic models (Rankine, Organic Rankine, Brayton, Refrigeration cycles) to enhance system efficiency. Utilize CFD and FEA simulations for heat transfer, pressure drop, and flow distribution analysis. Conduct real-time performance monitoring and diagnostics for industrial thermal systems. Drive continuous improvement initiatives in thermal management, reducing energy losses.
Waste Heat Recovery & Thermal Audits: Lead comprehensive thermal audits, evaluating waste heat potential and energy savings opportunities. Develop and implement waste heat recovery systems for industrial processes. Assess and optimize heat-to-power conversion strategies for enhanced energy utilization. Conduct feasibility studies for thermal energy storage and process integration.
Heat Exchangers & Cooling Tower Performance: Design and analyze heat exchangers (shell & tube, plate, finned, etc.) for optimal heat transfer efficiency. Enhance cooling tower performance, focusing on heat rejection, drift loss reduction, and water treatment strategies. Oversee component selection, performance evaluation, and failure analysis for industrial cooling systems. Troubleshoot thermal inefficiencies and recommend design modifications.
Material Selection & Engineering Compliance: Guide material selection for high-temperature and high-pressure thermal applications. Evaluate thermal conductivity, corrosion resistance, creep resistance, and mechanical properties. Ensure all designs adhere to TEMA, ASME, API, CTI (Cooling Technology Institute), and industry standards.
Leadership & Innovation: Lead multi-disciplinary engineering teams to develop cutting-edge thermal solutions. Collaborate with manufacturing, R&D, and operations teams for process improvement. Provide technical mentorship and training to junior engineers. Stay ahead of emerging technologies in heat transfer, renewable energy, and thermal system efficiency.

Required Skills & Qualifications:
Bachelor's/Master's/PhD in Mechanical Engineering, Thermal Engineering, or a related field.
10+ years of industry experience, specializing in thermal calculations, heat exchanger design, and waste heat recovery.
Expertise in heat transfer, mass transfer, thermodynamics, and fluid mechanics.
Hands-on experience with thermal simulation tools (ANSYS Fluent, Aspen Plus, MATLAB, COMSOL, EES).
Strong background in thermal audits, cooling tower performance enhancement, and process heat recovery.
Experience in industrial energy efficiency, power plant optimization, and heat recovery applications.
In-depth knowledge of high-temperature alloys, corrosion-resistant materials, and structural analysis.
Strong problem-solving skills with a research-driven and analytical mindset.
Ability to lead projects, manage teams, and drive technical innovation.

Preferred Qualifications:
Experience in power plants, ORC (Organic Rankine Cycle) systems, and industrial energy recovery projects.
Expertise in advanced material engineering for high-performance thermal systems.
Publications or patents in heat transfer, waste heat recovery, or energy efficiency technologies.
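For context on the heat-exchanger calculations this role centres on, the standard rating relation used in such sizing work is the log-mean temperature difference (LMTD) equation, reproduced below purely as a reference (a correction factor F multiplies the LMTD for multi-pass shell-and-tube arrangements):

```latex
% Heat-exchanger rating equation: duty Q from overall coefficient U,
% exchange area A, and the log-mean temperature difference.
Q = U \, A \, \Delta T_{\mathrm{lm}},
\qquad
\Delta T_{\mathrm{lm}} = \frac{\Delta T_1 - \Delta T_2}{\ln\!\left(\Delta T_1 / \Delta T_2\right)}
```

where Q is the heat duty, U the overall heat-transfer coefficient, A the heat-exchange area, and ΔT_1, ΔT_2 the terminal temperature differences at the two ends of the exchanger.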
Compensation & Benefits: Competitive salary based on expertise and industry standards. Performance-based incentives and growth opportunities. Health and insurance benefits. Opportunities for leadership and R&D involvement. (Expert in Thermal Calculations & Waste Heat Recovery)
Posted 2 months ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description
UST is looking for an Adobe Marketo Engineer (Marketing Automation Specialist) with the below requirements: We are seeking a highly motivated and detail-oriented Marketing Automation Specialist with hands-on experience in Adobe Marketo and Salesforce CRM integration. You will be responsible for designing, executing, and optimizing multi-channel marketing campaigns, managing lead lifecycles, and ensuring seamless data flow between Marketo and Salesforce to drive business growth and marketing ROI.

Key Responsibilities
Marketo Campaign Management: Design, build, and execute email campaigns, nurture programs, landing pages, and forms within Adobe Marketo.
Integration & Data Management: Maintain and optimize the integration between Marketo and Salesforce, ensuring accurate and timely data synchronization, lead scoring, and campaign attribution.
Lead Lifecycle Management: Build and manage lead scoring models, lead routing rules, and workflows that align marketing and sales efforts.
Reporting & Analytics: Collaborate with stakeholders to track campaign performance, provide insights on funnel metrics, and recommend data-driven improvements.

Qualifications
3+ years of experience in marketing automation, preferably in B2B SaaS or tech environments.
Proven experience with Adobe Marketo (certification a plus).
Strong working knowledge of Salesforce CRM and how it integrates with Marketo.
Familiarity with campaign attribution, lead scoring models, and lifecycle stages.
Ability to troubleshoot sync issues and perform data hygiene tasks.
Proficient in using tokens, segmentation, smart lists, and reporting in Marketo.
Understanding of HTML/CSS for email formatting (preferred).
Excellent communication, project management, and collaboration skills.

Preferred Tools & Skills
Marketo Certified Expert (MCE)
Experience with Salesforce Process Builder / Flows
Familiarity with other MarTech tools like Bizible, Drift, ZoomInfo, or Salesloft
Knowledge of SQL or reporting tools (e.g., Tableau, Power BI) is a plus

Skills: Adobe Marketo and Salesforce CRM integration
Posted 2 months ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About BNP Paribas India Solutions Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, European Union’s leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 10000 employees, to provide support and develop best-in-class solutions. About BNP Paribas Group BNP Paribas is the European Union’s leading bank and key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial & personal banking and several specialised businesses including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) to realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future, while ensuring the Group's performance and stability Commitment to Diversity and Inclusion At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit Discrimination and Harassment of any kind and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status etc. As a global Bank, we truly believe that inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in. About Business Line/Function The Credit Administration team in ISPL is responsible in Documentation control to ensure that terms & conditions of Facility Documents conform to credit decision and legal requirement. 
Validate fulfilment of conditions precedent, control limit availability in systems (Atlas II and CAT) in conformity with credit decisions and documentation, and follow up on conditions subsequent and documents/temporary waivers.
To certify/validate the risk data integrity in risk systems (Atlas II, RGM, CAT, and CRF) in conformity with the credit decision, and of risk data related to guarantors and guarantee records.
To regularly report and monitor any anomalies in legal documentation and the expiry dates of legal documentation.
To alert Risk Corporate, Business Head and Management on any indication of drift of risk profile during the control missions.
Documentation control to ensure that terms & conditions of Facility Documents conform to the credit decision and legal requirements.
Validate fulfilment of conditions precedent, control limit availability in systems (Atlas II and CAT) in conformity with credit decisions and documentation, and follow up on conditions subsequent and documents/temporary waivers.
To certify/validate the integrity of risk data for borrowers & guarantors in risk systems (Atlas II, RGM, CAT, MCA and CRF) in conformity with the credit decision & documentation.
To alert Risk Corporate, Business Head and Management on any indication of drift of risk profile during the control missions.
Contribute to successful delivery of systems and control reports enhancement with adequate UATs.
Promote and contribute to the implementation of a common culture and approach within CTM, and promote individual initiative, autonomy and versatility.

Job Title: Credit Documentation Officer
Date: 2025
Department: CTM
Location: Mumbai/Chennai
Business Line / Function: CIB ITO
Reports To (Direct): Manager
Grade (if applicable): (Functional)
Number Of Direct Reports: Nil
Directorship / Registration:

Position Purpose: The position is located in ISPL (India), reports to the Manager, CTM, and is responsible for ensuring that credit approvals and credit reviews are updated/documented in a timely manner.

Responsibilities
Preparation of credit and security documents in line with the credit approvals.
Documentation control to ensure that terms & conditions of Facility Documents conform to the credit decision and legal requirements.
Validate fulfilment of conditions precedent, control limit availability in systems (Atlas II and CAT) in conformity with credit decisions and documentation, and follow up on conditions subsequent and documents/temporary waivers.
Validate the framework of required covenants in the ‘Covenant Manager’ tool against executed credit agreements.
To certify/validate the risk data integrity in risk systems (Atlas II, RGM, CAT, and CRF) in conformity with the credit decision, and of risk data related to guarantors and guarantee records.
To regularly report and monitor any anomalies in legal documentation and the expiry dates of legal documentation.
To alert Risk Corporate, Business Head and Management on any indication of drift of risk profile during the control missions.
Contribute to successful delivery of systems and control reports enhancement with adequate UATs.
Promote and contribute to the implementation of a common culture and approach within CTM, and promote individual initiative, autonomy and versatility.
After reception of signed legal documents, where required, prepare additional documents (e.g. registration form for security documents, tenancy consent letter), an invoice for set-up fees, a memo to the relevant department to arrange retrocession payment, etc.
Collection and follow-up of various fees such as renewal, commitment and upfront fees.
Input of the guarantee data in the Received Guarantee Module and keep the data up to date for all credit facilities.
Input the covenant framework and testing results in the Covenant Manager tool.
Input required audited financial statements for both customers and guarantors and follow up with Business/CA to ensure receipt of the required audited financial statements.
Analyse credit committee decisions and ensure credit review updates are transposed and updated into the respective systems.
Ensure control checklists are adequately prepared.
Monitoring of genuine excesses or exceptions for financing métiers products.
Ensure all monitoring and data control reports (excess, covenants, utilisations, watchlist, etc.) are adequately prepared, tracked and monitored.

Data maintenance in the systems
Input and update credit risk data in the bank accounting system and FX limits based on credit approval output.
Input and update overdraft rates in the bank accounting system based on credit approval output.
Maintenance of monthly KPI reporting/dashboard.
Input required audited financial statements for both customers and guarantors and follow up with Business/CA to ensure receipt of the required audited financial statements.

Ad-hoc tasks
Contribute to successful delivery of systems and control reports enhancement with adequate UATs.
Participate in and contribute to local/regional/global projects.
Contribution to Regulatory/IG audits.
Participate in and contribute to BCP.

Contributing Responsibilities
To alert Risk Corporate, Business Head and Management on any indication of drift of risk profile during the control missions.
Promote and contribute to the implementation of a common culture and approach within ITO 3C, and promote individual initiative, autonomy and versatility.
Possess a culture of accountability and discipline for management of credit risk data quality.
Clear understanding of data definitions in order to secure the data quality.
Direct contribution to the BNPP operational permanent control framework.
Comply with regulatory requirements and internal guidelines.

Technical & Behavioral Competencies
Good understanding of risk concepts and methodologies.
Good understanding of financial products for banks/FIs.
Knowledge of credit processes.
Good understanding of transaction workflows, booking concepts and booking systems.
Familiarity with local regulations.
Commitment to the role and capacity to meticulously implement the ITO 3C Mission Statement.
Excellent written and verbal communication skills.
Excellent attention to detail.
Strong interpersonal skills.
Good time-management skills.
Autonomy and capacity to take initiative.
Capacity to remain objective and independent in order to fulfil the control role required.

Behavior
Work under pressure.
Accurate and attentive to detail/rigor.
Committed and motivated by a strong sense of accountability and care about delivering.
Team player and collaborative.
Ability to collaborate/teamwork.
Active listening.
Ability to deliver/results driven.

Specific Qualifications (if Required)
Degree in Banking & Finance or an equivalent qualification in the banking sector, with more than 5 years' relevant experience in the preparation and review of legal credit and security documentation on structured deals, and in the control and monitoring function of corporate credit.

Skills Referential
Behavioural Skills:
Attention to detail / rigor
Ability to collaborate / Teamwork
Ability to deliver / Results driven
Creativity & Innovation / Problem solving
Active Listening
Transversal Skills:
Analytical ability
Ability to develop others & improve their skills
Ability to manage / facilitate a meeting, seminar, committee, training…
Ability to develop and adapt a process
Ability to inspire others and generate people's commitment
Ability to set up relevant performance indicators
Ability to anticipate business / strategic evolution
Posted 2 months ago
5.0 years
0 Lacs
India
Remote
Job Title: MLOps Engineer (Remote) Experience: 5+ Years Location: Remote Type: Full-time About the Role: We are seeking an experienced MLOps Engineer to design, implement, and maintain scalable machine learning infrastructure and deployment pipelines. You will work closely with Data Scientists and DevOps teams to operationalize ML models, optimize performance, and ensure seamless CI/CD workflows in cloud environments (Azure ML/AWS/GCP). Key Responsibilities: ✔ ML Model Deployment: Containerize ML models using Docker and deploy on Kubernetes Build end-to-end ML deployment pipelines for TensorFlow/PyTorch models Integrate with Azure ML (or AWS SageMaker/GCP Vertex AI) ✔ CI/CD & Automation: Implement GitLab CI/CD pipelines for automated testing and deployment Manage version control using Git and enforce best practices ✔ Monitoring & Performance: Set up Prometheus + Grafana dashboards for model performance tracking Configure alerting systems for model drift, latency, and errors Optimize infrastructure for scalability and cost-efficiency ✔ Collaboration: Work with Data Scientists to productionize prototypes Document architecture and mentor junior engineers Skills & Qualifications: Must-Have: 5+ years in MLOps/DevOps, with 6+ years total experience Expertise in Docker, Kubernetes, CI/CD (GitLab CI/CD), Linux Strong Python scripting for automation (PySpark a plus) Hands-on with Azure ML (or AWS/GCP) for model deployment Experience with ML model monitoring (Prometheus, Grafana, ELK Stack) Nice-to-Have: Knowledge of MLflow, Kubeflow, or TF Serving Familiarity with NVIDIA Triton Inference Server Understanding of data pipelines (Airflow, Kafka) Why Join Us? 💻 100% Remote with flexible hours 🚀 Work on cutting-edge ML systems at scale 📈 Competitive salary + growth opportunities
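For the "Prometheus + Grafana dashboards for model performance tracking" item above, here is a minimal instrumentation sketch using the prometheus_client package; the metric names, port, and fake inference function are illustrative assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Counters and histograms that a Prometheus server can scrape and Grafana can chart.
PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency in seconds")

@LATENCY.time()
def predict(features: list[float]) -> float:
    # Stand-in for real model inference.
    time.sleep(random.uniform(0.01, 0.05))
    PREDICTIONS.inc()
    return sum(features)

if __name__ == "__main__":
    start_http_server(9100)  # metrics exposed at http://localhost:9100/metrics
    while True:              # simulate traffic so the dashboard has data
        predict([random.random() for _ in range(4)])
```

Alerting rules for drift or latency (the next bullet in the posting) would then be defined on top of these series in Prometheus or Grafana.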
Posted 2 months ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Trusted by many of the largest companies globally, Accertify is the leading digital platform assessing risk across the entire customer journey, from Account Monitoring and Payment Risk to Refund Fraud and Dispute Management. Accertify helps maximize revenues and user experience while minimizing loss and customer friction. We offer ultra-fast decision-making and precise control, ensuring our customers are always confidently in the driver's seat and ready to #MoveAtTheSpeedOfRight. Be in the driver's seat of your career as a Senior Machine Learning Engineer with the industry leader, and build a career you can trust.

Key Responsibilities
Design, develop, test, deploy, and maintain scalable, secure software for batch and real-time ML pipelines in a high-throughput, low-latency production environment.
Optimize pipelines to be distributed, parallel, and/or multithreaded.
Collaborate with cross-functional teams to troubleshoot and optimize data pipelines.
Demonstrate subject matter expertise and ownership of your team's services.
Present results to stakeholders and executives as needed.
Collaborate with the Data Science, Big Data, and DevOps teams.
Elevate team performance through mentoring, training, code reviews, and unit testing.
Take ownership of initiatives, propose solutions, and be accountable for projects from inception to completion.

Minimum Qualifications
Bachelor's degree in computer science or equivalent.
4+ years of software engineering experience in an enterprise setting.
Experience in an object-oriented language such as Python or Java.
Strong relational database and SQL skills (e.g., Oracle or PostgreSQL via JDBC).
Experience deploying ML models in production and familiarity with MLOps.
Experience working as part of a larger engineering team, documenting code, performing peer code reviews, writing unit tests, and following an SDLC.
2+ years of experience with Spark and/or NoSQL databases.

Preferred Qualifications
Experience in a high-volume transactional environment.
Understanding of statistics and machine learning concepts such as traditional ML algorithms, generative ML algorithms, concept drift and decay, and A/B testing.
Big Data components/frameworks such as HDFS and Spark 3.0.

Additional Details
Candidates based in Delhi NCR, India, would work in a hybrid capacity (3 days in-office per week) from the Accertify office located in Gurgaon, Haryana.
Visa Sponsorship: Employment eligibility to work for Accertify in India is required, as Accertify will not pursue visa sponsorship for this position.
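The posting pairs Spark experience with deploying ML models in production. As a purely illustrative example of what the batch leg of such a pipeline can look like (paths, column names, and the model artifact are assumptions, not Accertify specifics), the sketch below loads a saved Spark ML pipeline and scores a day of events.

```python
# Illustrative batch-scoring step with Spark ML; all names are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("batch-scoring").getOrCreate()

model = PipelineModel.load("s3://models/risk-pipeline")            # assumed fitted pipeline
events = spark.read.parquet("s3://data/events/dt=2024-01-01/")     # assumed daily input

scored = model.transform(events)  # applies feature stages and the model in one pass
(scored
    .select("event_id", "prediction", "probability")
    .write.mode("overwrite")
    .parquet("s3://data/scored-events/dt=2024-01-01/"))

spark.stop()
```

A real-time counterpart would serve the same fitted model behind a low-latency service; the batch job here is just the simplest end of the spectrum the responsibilities describe.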
Posted 2 months ago
3.0 years
16 - 20 Lacs
Cuttack, Odisha, India
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(*Note: This is a requirement for one of Uplers' clients – a Seed-Funded B2B SaaS Company in Procurement Analytics.)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:

Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it.
At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions.
If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
3–5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficient with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, and iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.

Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
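Because the role centers on RAG over vector stores such as Pinecone or FAISS, here is a minimal, framework-agnostic sketch of the retrieval step: embed a handful of documents, index them in FAISS, and fetch the best matches for a query whose text would then be passed to the LLM prompt. The embedding model and the toy procurement corpus are illustrative assumptions.

```python
# Retrieval-step sketch for a RAG pipeline; model name and corpus are assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Purchase orders above $50k require CFO approval.",
    "Preferred suppliers are reviewed every quarter.",
    "Invoices must reference a valid contract number.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine for normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "Who approves large purchase orders?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)  # top-2 matches

context = [docs[i] for i in ids[0]]  # passages to prepend to the LLM prompt
print(scores[0], context)
```

In a production agent the same pattern sits behind a managed vector store and an orchestration layer such as LangChain or LangGraph, but the retrieval contract stays the same: embed, search, and pass the top-k passages to the model.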
Posted 2 months ago