
666 Drift Jobs - Page 8

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Role Overview We are seeking a Senior Associate – AI Engineer / MLOps / LLMOps with a passion for building resilient, cloud-native AI systems. In this role, you’ll collaborate with data scientists, researchers, and product teams to build infrastructure, automate pipelines, and deploy models that power intelligent applications at scale. If you enjoy solving real-world engineering challenges at the convergence of AI and software systems, this role is for you. Key Responsibilities Architect and implement AI/ML/GenAI pipelines, automating end-to-end workflows from data ingestion to model deployment and monitoring. Develop scalable, production-grade APIs and services using FastAPI, Flask, or similar frameworks for AI/LLM model inference. Design and maintain containerized AI applications using Docker and Kubernetes. Operationalize Large Language Models (LLMs) and other GenAI models via cloud-native deployment (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Manage and monitor model performance post-deployment, applying concepts of MLOps and LLMOps including model versioning, A/B testing, and drift detection. Build and maintain CI/CD pipelines for rapid and secure deployment of AI solutions using tools such as GitHub Actions, Azure DevOps, GitLab CI. Implement security, governance, and compliance standards in AI pipelines. Optimize model serving infrastructure for speed, scalability, and cost-efficiency. 
Collaborate with AI researchers to translate prototypes into robust production-ready solutions. Required Skills & Experience 4 to 9 years of hands-on experience in AI/ML engineering, MLOps, or DevOps for data science products. Bachelor's degree in Computer Science, Engineering, or related technical field (BE/BTech/MCA). Strong software engineering foundation with hands-on experience in Python, Shell scripting, and familiarity with ML libraries (scikit-learn, transformers, etc.). Experience deploying and maintaining LLM-based applications, including prompt orchestration, fine-tuned models, and agentic workflows. Deep understanding of containerization and orchestration (Docker, Kubernetes, Helm). Experience with CI/CD pipelines, infrastructure-as-code tools (Terraform, CloudFormation), and automated deployment practices. Proficiency in cloud platforms: Azure (preferred), AWS, or GCP – including AI/ML services (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Experience managing and monitoring ML lifecycle (training, validation, deployment, feedback loops). Solid understanding of APIs, microservices, and event-driven architecture. Experience with model monitoring/orchestration tools (e.g, Kubeflow, MLflow). Exposure to LLMOps-specific orchestration tools such as LangChain, LangGraph, Haystack, or PromptLayer. Experience with serverless deployments (AWS Lambda, Azure Functions) and GPU-enabled compute instances. Knowledge of data pipelines using tools like Apache Airflow, Prefect, or Azure Data Factory. Exposure to logging and observability tools like ELK stack, Azure Monitor, or Datadog. Good to Have Experience implementing multi-model architecture, serving GenAI models alongside traditional ML models. Knowledge of data versioning tools like DVC, Delta Lake, or LakeFS. Familiarity with distributed systems and optimizing inference pipelines for throughput and latency. Experience with infrastructure cost monitoring and optimization strategies for large-scale AI workloads. It would be great if the candidate has exposure to full-stack ML/DL. Soft Skills & Team Expectations Strong communication and documentation skills; ability to clearly articulate technical concepts to both technical and non-technical audiences. Demonstrated ability to work independently as well as collaboratively in a fast-paced environment. A builder's mindset with a strong desire to innovate, automate, and scale. Comfortable in an agile, iterative development environment. Willingness to mentor junior engineers and contribute to team knowledge growth. Proactive in identifying tech stack improvements, security enhancements, and performance bottlenecks.
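For context, a minimal sketch of the kind of production-grade inference API this role describes, built with FastAPI; the model file, feature schema, and endpoint paths are illustrative assumptions rather than anything specific to this posting.

```python
# Minimal sketch of a FastAPI inference service for a pre-trained model.
# The model path and feature schema are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")
model = joblib.load("model.pkl")  # assumes a serialized scikit-learn model exists

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Run inference on a single feature vector and return the prediction.
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

@app.get("/health")
def health():
    # Liveness probe used by container orchestrators such as Kubernetes.
    return {"status": "ok"}
```

A service like this would typically be containerized with Docker and run with an ASGI server (for example, `uvicorn app:app`), matching the Docker/Kubernetes responsibilities listed above.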

Posted 3 weeks ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.
Responsibilities: Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these). Automate infrastructure provisioning using IaC tools (Terraform, Bicep, etc.) on any cloud platform - Azure or AWS. Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway. Write Python scripts for automation, deployment, and monitoring. Engage in the design, development, and maintenance of data pipelines for various AI use cases. Contribute actively to key deliverables as part of an agile development team. Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborate with others to source, analyse, test, and deploy data processes. Experience in GenAI projects.
Qualifications we seek in you! Minimum Qualifications: Experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience. Experience developing, testing, and deploying data pipelines. Strong Python programming skills. Hands-on experience in deploying 2-3 AI/GenAI models in AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders, and end users.
Preferred Qualifications/Skills: Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures.
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation.
Make an impact – Drive change for global enterprises and solve business challenges that matter.
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Principal Consultant. Primary Location: India-Hyderabad. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jul 10, 2025, 6:48:24 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time.
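As an illustration of the SageMaker/Lambda deployment pattern the responsibilities above mention, here is a minimal Lambda-style handler that forwards a request to an already-deployed SageMaker endpoint; the endpoint name and payload format are hypothetical.

```python
# Sketch of a Lambda-style handler that calls a deployed SageMaker endpoint.
# The endpoint name and request/response schema are hypothetical placeholders.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "genai-model-endpoint"  # hypothetical endpoint name

def lambda_handler(event, context):
    # Forward the caller's text to the model endpoint as a JSON payload.
    payload = json.dumps({"inputs": event.get("text", "")})
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    result = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(result)}
```

In the architecture described above, API Gateway would typically sit in front of this handler, with the CI/CD pipeline (GitHub Actions or CodePipeline) packaging and promoting the model and infrastructure changes.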

Posted 3 weeks ago

Apply

2.0 years

2 - 6 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a highly skilled and experienced Data Scientist to join our team based in Bangalore. As a Data Scientist, you will play a critical role in developing and implementing AI/ML solutions. You will be responsible for utilizing your expertise in Python, SQL, and various frameworks like Scikit Learn, TensorFlow, and PyTorch to deliver production-grade AI/ML projects. The ideal candidate will have 2+ years of experience in Data Science, AI/ML, and a solid understanding of mathematical and statistical concepts. Experience in the US Healthcare domain and knowledge of Big Data and data streaming technologies are desirable. Primary Responsibilities: Develop and implement AI/ML solutions using Python, pandas, numpy, and SQL Utilize frameworks such as Scikit Learn, TensorFlow, and PyTorch to build and deploy models Perform exploratory data analysis (EDA), statistical analysis, and feature engineering to uncover insights and patterns in data Build, tune, and evaluate machine learning models for predictive and prescriptive analytics Conduct drift analysis to monitor model performance and ensure accuracy over time Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions Work on the full life cycle of AI/ML projects, including data preparation, model development, tuning, and deployment Ensure the scalability, reliability, and efficiency of AI/ML solutions in production environments Stay updated with the latest advancements in AI/ML techniques and tools, and identify opportunities to apply them to enhance existing solutions Document and communicate findings, methodologies, and insights to technical and non-technical stakeholders Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field 10+ years of overall experience with 5+ years of experience in Data Science, AI/ML, or a similar role Hands-on experience in delivering production-grade AI/ML projects Experience with the full life cycle of AI/ML projects, including EDA, model development, tuning, and drift analysis Solid understanding of mathematical and statistical concepts Solid programming skills in Python, with experience in pandas, numpy, and SQL Proficiency in frameworks such as Scikit Learn, TensorFlow, and PyTorch Proven excellent problem-solving and analytical thinking skills Proven solid communication and collaboration skills to work effectively in a team environment Preferred Qualifications: Knowledge of Big Data technologies (PySpark, Hadoop) and data streaming tools (Kafka, etc.) Familiarity with the US Healthcare domain At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #nic
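For illustration, a minimal drift check of the kind the responsibilities above describe: comparing a feature's recent distribution against the training baseline with a two-sample Kolmogorov-Smirnov test. File paths, feature names, and the significance threshold are hypothetical.

```python
# Illustrative drift check: compare a feature's distribution in recent scoring
# data against the training baseline using a two-sample KS test.
# Column names, file names, and the alpha threshold are hypothetical.
import pandas as pd
from scipy.stats import ks_2samp

baseline = pd.read_csv("training_data.csv")
recent = pd.read_csv("recent_scoring_data.csv")

def feature_drift(feature: str, alpha: float = 0.05) -> bool:
    # Small p-values indicate the two samples likely come from different distributions.
    stat, p_value = ks_2samp(baseline[feature].dropna(), recent[feature].dropna())
    drifted = p_value < alpha
    print(f"{feature}: KS={stat:.3f}, p={p_value:.4f}, drifted={drifted}")
    return drifted

for col in ["age", "claim_amount"]:  # hypothetical numeric features
    feature_drift(col)
```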

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bhubaneshwar, Odisha, India

On-site

Company Description: DRIFT MEDIA is a design, digital marketing, and product video ad production company founded in 2022. We are dedicated to providing high-quality content, design, and digital marketing services that help clients grow their online presence effectively. Our services include graphic design, 2D/3D animation, digital marketing, app development, and website design. Client satisfaction and personalized interaction are our top priorities. DRIFT MEDIA was founded by Aditya Prasad Das and co-founded by Swati Smita Patra, both certified digital marketers with extensive experience.
Role Description: This is a full-time, on-site role for a Business Development Executive located in Bhubaneshwar. The Business Development Executive will be responsible for identifying new business opportunities, generating leads, managing client accounts, and maintaining strong communication with potential and existing clients. Daily tasks will include reaching out to prospective clients, presenting our services, and developing growth strategies to increase revenue.
Qualifications: Skills in new business development and lead generation. Strong communication and account management skills. Experience in business strategy and development. Excellent interpersonal and negotiation skills. Ability to work independently and within a team. Bachelor's degree in Business Administration, Marketing, or a related field. Experience in the digital marketing industry is a plus.
Hiring Creative Minds Only! Position: Executive Motion Designer. Experience: 1 year+. Salary: Industry standard + high incentives. Location: Patia, Bhubaneswar, Odisha. Work Mode: Work from Office (because we believe in team building). We are a growing team that encourages members to share their decisions and suggestions on projects to drive further growth for the business and clients. Disclaimer: We only appreciate super-creative people on the team. Apply for the job: send your CV to contact.driftmedia@gmail.com or call 7735664732.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: HubSpot Operations Specialist – Sales & Marketing Automation Location: Hybrid (NOIDA/Gurugram) Experience: 4+ Years (HubSpot Enterprise – Sales, Marketing, and Content Hub) Qualification: Graduate/Postgraduate in Marketing, Business, or IT Company: Brand Pipal (A subsidiary of NLB Services) About Brand Pipal Brand Pipal is the performance-driven, storytelling-powered marketing agency within NLB Services. From employer branding to full-funnel digital marketing and B2B growth strategy, we deliver scalable outcomes for some of the world’s fastest-growing organizations. Our edge lies in combining creative energy with data precision—and at the heart of it is our commitment to leveraging the best tools and platforms to unlock growth. Role Overview We’re seeking a Senior HubSpot Operations Specialist with hands-on experience on HubSpot Enterprise , particularly in aligning sales workflows, marketing automation, AI integrations, and content ops at scale. The ideal candidate will lead platform setup and optimization to support sales enablement, marketing efficiency, and intelligent automation. This is a strategic and executional role that blends technical HubSpot expertise with growth marketing thinking —you’ll own processes from pipeline configuration to AI-enhanced campaigns and reporting. Key Responsibilities Sales Enablement & CRM Optimization Map and mirror sales processes in HubSpot including deal stages, pipeline structuring, and team-based routing. Ensure clean, deduplicated, and accurate data migration and hygiene across lifecycle stages. Set up and optimize lead capture forms with smart fields and behavior-based triggers. Build automated sequences for follow-ups, lead nurturing, and internal notifications. Integrate email tracking and calendar tools for seamless outreach and meeting scheduling. Track sales agent performance through custom reports and dashboards , forecasting deal closures and pipeline health. Design and implement lead scoring models based on behavior, engagement, and source. Marketing Automation & Content Execution Create workflows for lifecycle emails, personalized campaigns, and AI-enhanced nurture flows. Lead the use of HubSpot AI (e.g., Breeze) to automate: Email generation Smart content recommendations Deal forecasting Lead prioritization Workflow decision branches Drive SEO and data insights with integrated reporting and analytics dashboards. Set up AI chatbots, content recommendation modules, and personalization tokens for better engagement. Collaborate with design, content, and performance teams to create scalable, multi-touch campaigns. AI Integration and Custom Workflow Engineering Use HubSpot’s AI Workflows and Breeze to define triggers, suggest actions, and personalize experiences. Build and test AI-enhanced automation for: Email generation and task creation Customer segmentation and tagging Smart decision trees based on user behavior Optional: integrate with third-party AI tools (ChatGPT via Zapier, Drift, Clearbit, Clay) to enrich workflows and automate growth ops. Analytics, Reporting & Insights Build custom dashboards across Sales and Marketing hubs for executive reporting and daily operations. Track metrics such as: Lead-to-deal velocity Email engagement Pipeline movement Campaign ROI Sales rep productivity Present insights and recommendations to improve efficiency, conversion, and alignment. Must-Have Skills 4+ years of deep hands-on experience with HubSpot Enterprise (Sales, Marketing, and Content Hub). 
Expertise in: Sales pipeline configuration Deal forecasting and sales enablement Workflow automation and lead scoring AI-powered content and marketing operations Strong understanding of inbound marketing, lead lifecycle management, and customer journey automation. Comfortable creating complex reports and collaborating cross-functionally with sales, content, and digital teams. Analytical mindset with a passion for clean data, scalable systems, and smart automation. Desirable Skills HubSpot Certifications: Marketing Software, Sales Hub Implementation, Workflow Automation, AI in Marketing. Experience working with global B2B teams, SaaS clients, or high-volume content operations. Familiarity with tools like Drift, Clay, Zapier, Clearbit, ChatGPT, or Salesforce integrations. Why Join Brand Pipal Work at the forefront of AI-led marketing operations. Help shape sales and marketing automation for global brands. Be part of a collaborative, fast-paced, performance-first team that blends creativity and tech. Build scalable systems that truly impact revenue, growth, and customer engagement.
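As a simple illustration of the behavior- and source-based lead scoring this role describes, here is a rule-based sketch; the attributes, weights, and point values are hypothetical, and a real implementation would live in HubSpot's native scoring properties and workflows.

```python
# Minimal rule-based lead-scoring sketch. Attributes, weights, and thresholds
# are hypothetical; HubSpot would normally store and compute the score itself.
from dataclasses import dataclass

@dataclass
class Lead:
    source: str          # e.g. "organic", "paid", "referral"
    email_opens: int
    pages_viewed: int
    demo_requested: bool

def score(lead: Lead) -> int:
    points = 0
    points += {"referral": 30, "organic": 20, "paid": 10}.get(lead.source, 0)
    points += min(lead.email_opens, 10) * 2      # cap the engagement contribution
    points += min(lead.pages_viewed, 20)
    points += 40 if lead.demo_requested else 0
    return points

lead = Lead(source="organic", email_opens=5, pages_viewed=8, demo_requested=True)
print(score(lead))  # 20 + 10 + 8 + 40 = 78
```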

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Sutherland is seeking an attentive and analytical person to join us as a Data Scientist. Interested candidates, please send your resume to priti.kumari@sutherlandglobal.com. Experience: 7+ years. Notice period: Immediate. Location: WFH/Hyderabad.
Job Description: Serve as the AI and data strategist, discovering and onboarding new data sources and shaping end-to-end pipelines that span traditional analytics, machine learning (ML), deep learning (DL), natural-language processing (NLP), generative AI (GenAI), and agentic AI. Partner with engineering to design, build, and harden production-grade data and AI products (e.g., RAG search, LLM-powered assistants, anomaly-detection services). Run analytical experiments—classification, forecasting, LLM fine-tuning, prompt-engineering A/B tests—to solve business problems across multiple domains. Collect, clean, and enrich large structured, semi-structured, and unstructured datasets (text, image, audio, sensor, graph). Design, train, and validate classical ML models, DL architectures (CNN, RNN, Transformers), and instruction-tuned LLMs; measure drift and conduct error analysis to drive iterative improvement. Implement retrieval-augmented generation and agentic task-planning pipelines that combine vector search, function calling, and tool invocation. Document findings, publish reusable components, and mentor junior data scientists and ML engineers.
Qualifications: 7+ years of experience delivering data-driven solutions, with demonstrable depth in at least two of: ML, DL, NLP, GenAI, agentic AI. Hands-on proficiency in Python (pandas, NumPy, scikit-learn), DL frameworks (TensorFlow or PyTorch), and LLM/GenAI toolkits (Hugging Face Transformers, LangChain/LangGraph, Google ADK). Proven track record in feature engineering, pattern recognition, predictive modeling, and LLM fine-tuning and evaluation. Experience optimizing GPU/TPU workloads and scaling distributed training or inference. Familiarity with vector databases (FAISS, Chroma, Pinecone) and distributed data systems (Spark, BigQuery, Snowflake). Nice to have: text analytics experience. Nice to have: an advanced degree or certification in Statistics.
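For illustration, a minimal retrieval step of the RAG pipelines described above: indexing a tiny corpus with sentence-transformer embeddings and FAISS, then fetching the most relevant passages for a query. The corpus, model choice, and query are placeholders.

```python
# Minimal retrieval sketch for a RAG pipeline: embed a small corpus, index it
# with FAISS, and fetch the most relevant passages for a query.
import faiss
from sentence_transformers import SentenceTransformer

corpus = [
    "Customer churn rose 4% in Q2 due to billing issues.",
    "The new onboarding flow cut average handle time by 18%.",
    "Sensor anomalies spike when ambient temperature exceeds 40C.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(corpus, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 search over embeddings
index.add(embeddings)

query = model.encode(["why is churn increasing?"], convert_to_numpy=True).astype("float32")
distances, ids = index.search(query, 2)
for i in ids[0]:
    print(corpus[i])  # top passages to place in the LLM prompt
```

In a full pipeline, the retrieved passages would be passed to an LLM together with the user question, with function calling or tool invocation layered on top for agentic task planning.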

Posted 3 weeks ago

Apply

25.0 years

0 Lacs

Kochi, Kerala, India

On-site

Company Overview Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture and for over 25 years, we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed. Job Overview The Endpoint Support Analyst will provide critical day to day support for Windows and Mac devices. The analyst will work on ServiceNow user tickets and basic project work. The candidate must be self-motivated and autonomous, proficient in communication both written and verbal, and experienced with ITIL processes. Troubleshoot issues with Windows and Mac device enrollment using Microsoft Autopilot and JAMF over the air provisioning. Support OS patching, Microsoft Office, and Google Chrome changes, and review and solve hardware driver issues. Monitor and remediate compliance and configuration drift using reports, proactive remediation scripts, and integrated analytics tools such as Log Analytics. Understanding of Group Policy Objects (GPOs) and Conditional Access policies Research and resolves systemic issues and problems with software and hardware on Windows and Mac systems. Follows escalation procedures when appropriate to resolve processing problems and user problems in a timely manner and meet service levels and other standards for the job Collaborate with the Service Desk and other L1 teams to identify systemic issues and coordinate investigation and solution implementation. Completes project assignments and ad-hoc project needs commensurate with job expectations. Basic Qualifications Bachelor’s degree and 2 years of Information Systems experience OR Associate’s degree and 4 years of Information Systems experience OR High school diploma / GED and 8 years of Information Systems experience Preferred Qualifications 4+ years providing end-user support in a multi-system environment including issue resolution, upgrades/patching, and general management across PC, Mac, Tablet, Smartphones, VDIs and peripherals Working knowledge of MS Office Suite and Browser management required. 
PowerShell, python or other scripting tools would be very helpful 2+ years working with Intune, JAMF, ServiceNow, and NextThink or 1e Tachyon. Working knowledge of Agile methodology Ability to address rapidly changing priorities in a fast-paced environment Familiar with ITIL-based processes and the use of ServiceNow or similar management platform Excellent communication, interpersonal skills, and writing skills with ability to understand customer needs Passionate about customer service and how it can transform businesses Excellent project management skills and ability to multitask with ease Compensation Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
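As a small illustration of the proactive drift-detection scripting mentioned above, here is a Python sketch that compares a device report against an expected baseline; the setting names, values, and the simplified version comparison are hypothetical.

```python
# Illustrative detection script in the "proactive remediation" style: compare a
# device's reported settings against an expected baseline and flag drift.
# Setting names and values are hypothetical; version compare is naive for brevity.
EXPECTED = {
    "bitlocker_enabled": True,
    "chrome_version_min": "126.0",
    "screen_lock_minutes": 10,
}

def check_drift(device_report: dict) -> list[str]:
    findings = []
    if not device_report.get("bitlocker_enabled", False):
        findings.append("BitLocker disabled")
    if device_report.get("chrome_version", "0") < EXPECTED["chrome_version_min"]:
        findings.append("Chrome below minimum version")
    if device_report.get("screen_lock_minutes", 999) > EXPECTED["screen_lock_minutes"]:
        findings.append("Screen lock timeout too long")
    return findings

print(check_drift({"bitlocker_enabled": True, "chrome_version": "118.0", "screen_lock_minutes": 30}))
# ['Chrome below minimum version', 'Screen lock timeout too long']
```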

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years. Salary: INR 1600000-2000000 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud). (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics.)
What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python.
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role: We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do: Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills: 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
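To illustrate the orchestration-layer idea behind the agent frameworks listed above, here is a schematic tool-dispatch loop in plain Python; the tools, planner logic, and data are hypothetical stand-ins for the decisions an LLM-driven agent would make.

```python
# Schematic sketch of the tool-use loop an agent framework orchestrates.
# The planner is a stub; a real agent would ask an LLM which tool to call.
def search_contracts(query: str) -> str:
    return f"3 contracts matched '{query}'"        # stand-in for a real data source

def get_spend(supplier: str) -> str:
    return f"Spend with {supplier}: $1.2M YTD"     # stand-in for a warehouse query

TOOLS = {"search_contracts": search_contracts, "get_spend": get_spend}

def fake_planner(question: str) -> tuple[str, str]:
    # Hypothetical routing logic standing in for an LLM's tool-selection step.
    if "spend" in question.lower():
        return "get_spend", "Acme Corp"
    return "search_contracts", question

def run_agent(question: str) -> str:
    tool_name, argument = fake_planner(question)
    observation = TOOLS[tool_name](argument)
    # A real agent would feed the observation back to the LLM for a final answer.
    return f"[{tool_name}] {observation}"

print(run_agent("How much spend do we have with Acme?"))
```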

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Noida, Uttar Pradesh, India

Remote

Experience: 3.00+ years. Salary: INR 1600000-2000000 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud). (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics.)
What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python.
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role: We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do: Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills: 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Agra, Uttar Pradesh, India

Remote

Experience: 3.00+ years. Salary: INR 1600000-2000000 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud). (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics.)
What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python.
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role: We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do: Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills: 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Business Analyst Lead – Generative AI Experience: 7–15 Years Location: Bangalore Designation Level: Lead Role Overview: We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes. Key Responsibilities: 1. GenAI Strategy & Opportunity Identification Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data). Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs like GPT-4 Enterprise). Evaluate ROI and adoption feasibility across departments. 2. Requirements Engineering for GenAI Projects Define and document both functional and non-functional requirements tailored to GenAI systems: Accuracy thresholds (e.g., hallucination rate under 5%) Ethical guardrails (e.g., PII redaction, bias mitigation) Latency SLAs (e.g., <2 seconds response time) Develop prompt engineering guidelines, testing protocols, and iteration workflows. 3. Stakeholder Collaboration & Communication Translate technical GenAI concepts into business-friendly language. Manage expectations on probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review). Use storytelling and outcome-driven communication (e.g., “Automated claims triage reduced handling time by 40%.”) 4. Business Analysis & Process Modeling Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI). Model current and future business processes using BPMN to reflect human-AI collaboration. 5. Tools & Technical Proficiency Hands-on experience with LangChain, LlamaIndex for LLM integration. Knowledge of vector databases, RAG architectures, LoRA-based fine-tuning. Experience using Azure OpenAI Studio, Google Vertex AI, Hugging Face. Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI). 6. Governance & Performance Monitoring Define KPIs for GenAI performance: Token cost per interaction User trust scores Automation rate and model drift tracking Support regulatory compliance with audit trails and documentation aligned with EU AI Act and other industry standards. Required Skills & Experience: 7–10 years of experience in business analysis or product ownership, with recent focus on Generative AI or applied ML. Strong understanding of the GenAI ecosystem and solution lifecycle from ideation to deployment. Experience working closely with data science, engineering, product, and compliance teams. Excellent communication and stakeholder management skills, with a focus on enterprise environments. Preferred Qualifications: Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.ai) Familiarity with compliance and AI regulations (GDPR, EU AI Act). Experience in BFSI, healthcare, telecom, or other regulated industries.
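As an illustration of the "token cost per interaction" KPI mentioned above, here is a small calculation sketch; the per-token rates and token counts are hypothetical placeholders, not vendor pricing.

```python
# Illustrative computation of a "token cost per interaction" KPI.
# The per-1K-token rates below are hypothetical, not real vendor pricing.
PRICE_PER_1K_INPUT = 0.005    # hypothetical USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015   # hypothetical USD per 1,000 output tokens

def cost_per_interaction(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

interactions = [(1200, 350), (800, 500), (2000, 150)]  # (input, output) token counts
costs = [cost_per_interaction(i, o) for i, o in interactions]
print(f"avg cost per interaction: ${sum(costs) / len(costs):.4f}")
```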

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures. Required Skills & Experience • 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks. • Hands-on experience with Azure networking, VNET integration, and firewall rules. • Strong knowledge of Databricks cluster management, job scheduling, and optimization. • Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions. • Experience with Databricks Asset Bundles (DAB) for packaging and deployment. • Proficiency in RBAC, Unity Catalog, and workspace access control. • Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep. • Strong scripting skills in Python, Bash, or PowerShell. • Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog). Preferred Qualifications • Databricks Certified Associate/Professional Administrator or equivalent certification. • Experience with AWS or GCP Databricks in addition to Azure. • Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow. • Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment. Roles & Responsibilities Key Responsibilities 1. Databricks Infrastructure Setup & Management • Configure and manage Azure Databricks workspaces, networking, and security. • Set up networking components like VNET integration, private endpoints, and firewall configurations. • Implement scalability strategies for efficient resource utilization. • Ensure high availability, resilience, and security of Databricks infrastructure. 2. Cluster & Capacity Management • Manage Databricks clusters, including autoscaling, instance selection, and performance tuning. • Optimize compute resources to minimize costs while maintaining performance. • Implement cluster policies and governance controls. 3. User & Access Management • Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services. • Manage Databricks Unity Catalog and enforce workspace-level access controls. • Define and enforce security policies across Databricks workspaces. 4. CI/CD Automation for Databricks & ML Models • Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins. • Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep. • Implement automated testing, version control, and rollback strategies for Databricks workloads. • Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments. 5. Databricks Asset Bundle Management • Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently. • Automate workspace configuration, job definitions, and dependencies using DAB. • Ensure traceability, rollback, and version control of deployed assets. • Integrate DAB with CI/CD pipelines for seamless deployment. 6. ML Model Deployment & Monitoring • Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS). 
• Optimize model performance and enable real-time inference. • Implement model monitoring, drift detection, and automated retraining pipelines. 7. Monitoring, Troubleshooting & Performance Optimization • Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus. • Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads. • Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
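As an illustration of the cluster-governance and cost-monitoring work described above, here is a small audit sketch against the Databricks REST API; the host and token environment variables and the no-auto-termination policy check are assumptions, not a prescribed implementation.

```python
# Illustrative audit script: list workspace clusters via the Databricks REST API
# and flag any without auto-termination, a common source of cost drift.
# Host and token come from hypothetical environment variables.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]   # e.g. https://adb-xxxx.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
headers = {"Authorization": f"Bearer {TOKEN}"}

resp = requests.get(f"{HOST}/api/2.0/clusters/list", headers=headers, timeout=30)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    if cluster.get("autotermination_minutes", 0) == 0:
        print(f"no auto-termination: {cluster['cluster_name']}")
```

Checks like this are typically wired into the same CI/CD or scheduled-job tooling used for Databricks Asset Bundle deployments, so governance violations surface alongside deployment logs.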

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins (a job-trigger sketch follows this listing).
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
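For the CI/CD automation responsibilities above, a common pattern is to have the release pipeline trigger a Databricks job once a bundle or notebook has been deployed. Below is a minimal sketch, assuming a workspace URL, a personal access token, and a job ID supplied by the pipeline; it calls the Databricks Jobs 2.1 REST API with plain `requests` rather than any particular SDK.

```python
# Minimal sketch: trigger a Databricks job from a CI/CD pipeline step.
# Assumptions: DATABRICKS_HOST, DATABRICKS_TOKEN, and DATABRICKS_JOB_ID are
# supplied by the pipeline (e.g. Azure DevOps variables or GitHub Actions secrets).
import os
import time

import requests

HOST = os.environ["DATABRICKS_HOST"]          # e.g. https://adb-1234.5.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = int(os.environ["DATABRICKS_JOB_ID"])
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def run_job_and_wait(job_id: int, poll_seconds: int = 30) -> str:
    """Start a job run via the Jobs 2.1 API and poll until it finishes."""
    resp = requests.post(f"{HOST}/api/2.1/jobs/run-now",
                         headers=HEADERS, json={"job_id": job_id}, timeout=30)
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    while True:
        status = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                              headers=HEADERS, params={"run_id": run_id}, timeout=30)
        status.raise_for_status()
        state = status.json()["state"]
        if state.get("life_cycle_state") in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return state.get("result_state", "UNKNOWN")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    result = run_job_and_wait(JOB_ID)
    if result != "SUCCESS":
        raise SystemExit(f"Databricks job {JOB_ID} finished with state {result}")
```

In practice the same step would sit after a `databricks bundle deploy` stage, so a failed validation run blocks the release.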

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Grid Dynamics Hiring GEN AI Architect
Experience: 10–16 Years
Notice period: Immediate – 30 Days
Location: Hyderabad, Bangalore

Job Description:
Key responsibilities
As a GEN AI Expert [4/6+ years of relevant experience in NLP, CV, and LLMs], you will be responsible for designing, building, and fine-tuning NLP models and large language model (LLM) agents to solve business challenges. You will play a key role in creating intuitive and efficient model designs that enhance user experiences and business processes. The position demands strong design skills, hands-on coding expertise, advanced proficiency in Python development, specialized knowledge in LLM agent design and development, and exceptional debugging capabilities.
Model & Agent Design: Conceptualize and design robust NLP solutions and LLM agents tailored to specific business needs, with a focus on user experience, interactivity, latency, failover, and functionality.
Hands-on Coding: Write, test, and maintain clean, efficient, and scalable code for NLP models and AI agents, with a strong emphasis on Python programming. Build high-quality multi-modal and multi-agent applications/frameworks. Knowledge of input/output token utilization, prioritization, and consumption with respect to AI agents.
Performance Monitoring: Monitor and optimize LLM agents, implementing model explainability, handling model drift, and ensuring robustness.
Research Implementation: Ability to read, comprehend, and implement AI agent research papers into practical solutions. Stay abreast of the latest academic and industry research to apply cutting-edge methodologies and techniques.
Debugging & Issue Resolution: Proactively identify, diagnose, and resolve issues related to AI agents, including model inaccuracies, performance bottlenecks, and system integration problems. Utilize debugging tools and techniques to troubleshoot complex problems in model behavior, data inconsistencies, and deployment errors.
Innovation and Research: Stay updated with the latest advancements in AI agent technologies, experimenting with new techniques and tools to enhance agent capabilities and performance.
Continuous Learning: Adaptability to unlearn outdated practices, patterns, and technologies, and to quickly learn and implement new technologies and papers as the ML world evolves. Maintain a proactive approach to staying current with emerging trends and technologies in agent-based solutions (text and multimodal).
Clear understanding of tool usage and structured outputs in agents (a minimal agent-loop sketch follows this listing).
Clear understanding of speculative decoding and AST-Code RAG.
Clear understanding of streaming and sync/async processing.
Clear understanding of embedding models and their limitations.

Tech stack required:
Programming languages: Python
Public Cloud: Azure
Frameworks: Vector databases such as Milvus, Qdrant, or ChromaDB, or use of CosmosDB or MongoDB as vector stores.
Knowledge of AI orchestration, AI evaluation, and observability tools.
Knowledge of guardrails strategies for LLMs.
Knowledge of Arize or any other ML/LLM observability tool.

Experience:
Experience in building functional platforms using ML, CV, and LLM platforms.
Experience in evaluating and monitoring AI platforms in production.

About Grid: Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, and advanced analytics services. Fusing technical vision with business acumen, we enable positive business outcomes for enterprise companies undergoing business transformation by solving their most pressing technical challenges. A key differentiator for Grid Dynamics is our 7+ years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India. Follow us on LinkedIn.
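As a hedged illustration of the "tool usage and structured outputs" requirement above, the sketch below shows the basic shape of a tool-calling agent loop. The `call_llm` function and the JSON action format are placeholders for whichever LLM API and schema a team standardizes on; only the control flow is the point.

```python
# Minimal sketch of a tool-calling agent loop (framework-agnostic).
# `call_llm` is a hypothetical wrapper around whatever LLM API is in use;
# it is assumed to return a JSON string describing either a tool call or a final answer.
import json
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "search_docs": lambda q: f"(stub) top passages for: {q}",
    "run_sql": lambda q: "(stub) query result rows",
}


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Azure OpenAI, etc.).
    Returns a canned final answer so the sketch runs without external services."""
    return json.dumps({"answer": "(stub) replace call_llm with a real model call"})


def run_agent(user_question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {user_question}\n"
    for _ in range(max_steps):
        # Ask the model for a structured action: {"tool": ..., "input": ...} or {"answer": ...}
        action = json.loads(call_llm(transcript + "\nRespond with a JSON action."))
        if "answer" in action:
            return action["answer"]
        tool_name, tool_input = action["tool"], action["input"]
        observation = TOOLS[tool_name](tool_input)
        transcript += f"\nTool {tool_name}({tool_input!r}) -> {observation}"
    return "Stopped: step budget exhausted."


print(run_agent("How many open invoices are overdue?"))
```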

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Panaji, Goa, India

On-site

About the Project
We are seeking a highly skilled and pragmatic AI/ML Engineer to join the team building "a Stealth Prop-tech startup," a groundbreaking digital real estate platform in Dubai. This is a complex initiative to build a comprehensive ecosystem integrating long-term sales, short-term stays, and advanced technologies including AI/ML, data analytics, Web3/blockchain, and conversational AI. You will be responsible for operationalizing the machine learning models that power our most innovative features, ensuring they are scalable, reliable, and performant. This is a crucial engineering role in a high-impact project, offering the chance to build the production infrastructure for cutting-edge AI in the PropTech space.

Job Summary
As an AI/ML Engineer, you will bridge the gap between data science and software engineering. You will be responsible for taking the models developed by our data scientists and deploying them into our production environment. Your work will involve building robust data pipelines, creating scalable training and inference systems, and developing the MLOps infrastructure to monitor and maintain our models. You will collaborate closely with data scientists, backend developers, and product managers to ensure our AI-driven features are delivered efficiently and reliably to our users.

Key Responsibilities
Design, build, and maintain scalable infrastructure for training and deploying machine learning models at scale.
Operationalize ML models, including the "TruValue UAE" AVM and the property recommendation engine, by creating robust, low-latency APIs for production use.
Develop and manage data pipelines (ETL) to feed our machine learning models with clean, reliable data for both training and real-time inference.
Implement and manage the MLOps lifecycle, including CI/CD for models, versioning, monitoring for model drift, and automated retraining.
Optimize the performance of machine learning models for speed and cost-efficiency in a cloud environment.
Collaborate with backend engineers to seamlessly integrate ML services with the core platform architecture.
Work with data scientists to understand model requirements and provide engineering expertise to improve model efficacy and feasibility.
Build the technical backend for the AI-powered chatbot, integrating it with NLP services and the core platform data.

Required Skills and Experience
3-5+ years of experience in a Software Engineering, Machine Learning Engineering, or related role.
A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
Strong software engineering fundamentals with expert proficiency in Python.
Proven experience deploying machine learning models into a production environment on a major cloud platform (AWS, Google Cloud, or Azure).
Hands-on experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
Experience building and managing data pipelines using tools like Apache Airflow, Kubeflow Pipelines, or cloud-native solutions (an orchestration sketch follows this listing).
Ability to collaborate with cross-functional teams to integrate AI solutions into products.
Experience with cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration (Kubernetes).

Preferred Qualifications
Experience in the PropTech (Property Technology) or FinTech sectors is highly desirable.
Direct experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, AWS SageMaker, Google AI Platform).
Familiarity with big data technologies (e.g., Spark, BigQuery, Redshift).
Experience building real-time machine learning inference systems.
Strong understanding of microservices architecture.
Experience working in a collaborative environment with data scientists.
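For the ETL responsibilities in this listing, a typical starting point is a scheduled Airflow DAG that extracts listing data, engineers features, and publishes a training dataset. The sketch below is illustrative only: the task bodies, dataset names, and schedule are assumptions, and it uses the standard Airflow DAG/PythonOperator API (the `schedule` argument assumes Airflow 2.4 or newer).

```python
# Illustrative sketch: a daily Airflow DAG feeding an ML training pipeline.
# Task bodies are stubs; names and the schedule are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_listings():
    print("pull raw property listings from the source database / API")


def build_features():
    print("clean, join, and engineer features for the valuation model")


def publish_training_set():
    print("write the versioned training dataset to cloud storage")


with DAG(
    dag_id="property_valuation_training_data",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # Airflow 2.4+ parameter; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_listings", python_callable=extract_listings)
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    publish = PythonOperator(task_id="publish_training_set", python_callable=publish_training_set)

    extract >> features >> publish
```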

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: AI Lead – Generative AI & ML Systems

Key Responsibilities
Generative AI Development: Design and implement LLM-powered solutions and generative AI models for use cases such as predictive analytics, automation workflows, anomaly detection, and intelligent systems.
RAG & LLM Applications: Build and deploy Retrieval-Augmented Generation (RAG) pipelines, structured generation systems, and chat-based assistants tailored to business operations (a retrieval sketch follows this listing).
Full AI Lifecycle Management: Lead the complete AI lifecycle, from data ingestion and preprocessing to model design, training, testing, deployment, and continuous monitoring.
Optimization & Scalability: Develop high-performance AI/LLM inference pipelines, applying techniques like quantization, pruning, batching, and model distillation to support real-time and memory-constrained environments.
MLOps & CI/CD Automation: Automate training and deployment workflows using Terraform, GitLab CI, GitHub Actions, or Jenkins, integrating model versioning, drift detection, and compliance monitoring.
Cloud & Deployment: Deploy and manage AI solutions on AWS, Azure, or GCP with containerization tools like Docker and Kubernetes.
AI Governance & Compliance: Ensure model/data governance and adherence to regulatory and ethical standards in production AI deployments.
Stakeholder Collaboration: Work cross-functionally with product managers, data scientists, and engineering teams to align AI outputs with real-world business goals.

Required Skills & Qualifications
Bachelor's degree (B.Tech or higher) in Computer Science, IT, or a related field is required.
8–12 years of overall experience in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) solution development.
Minimum 2+ years of hands-on experience in Generative AI and LLM-based solutions, including prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) pipelines with full CI/CD integration, monitoring, and observability, delivered with 100% independent contribution.
Proven expertise in both open-source and proprietary Large Language Models (LLMs), including LLaMA, Mistral, Qwen, GPT, Claude, and BERT.
Expertise in C/C++ and Python programming with relevant ML/DL libraries, including TensorFlow, PyTorch, and Hugging Face Transformers.
Experience deploying scalable AI systems in containerized environments using Docker and Kubernetes.
Deep understanding of the MLOps/LLMOps lifecycle, including model versioning, deployment automation, performance monitoring, and drift detection.
Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and DevOps for ML workflows.
Working knowledge of Infrastructure-as-Code (IaC) tools like Terraform for cloud resource provisioning and reproducible ML pipelines.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
Experience designing and documenting High-Level Design (HLD) and Low-Level Design (LLD) for ML/GenAI systems, covering data pipelines, model serving, vector search, and observability layers, including component diagrams, network architecture, CI/CD workflows, and tabulated system designs.
Experience provisioning and managing ML infrastructure using Terraform, including compute clusters, vector databases, and LLM inference endpoints across AWS, GCP, and Azure.
Experience beyond notebooks: shipped models with logging, tracing, rollback mechanisms, and cost control strategies.
Hands-on ownership of production-grade LLM workflows, not limited to experimentation.

Preferred Qualifications (Good to Have)
Experience with LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI APIs, or building modular LLM agent workflows.
Exposure to multi-agent orchestration, tool-augmented reasoning, or autonomous AI agents and agentic communication patterns with orchestration.
Experience deploying ML/GenAI systems in regulated environments, with established governance, compliance, and Responsible AI frameworks.
Familiarity with AWS data and machine learning services, including Amazon SageMaker, AWS Bedrock, ECS/EKS, and AWS Glue, for building scalable, secure data pipelines and deploying end-to-end AI/ML workflows.
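A minimal retrieval sketch for the RAG pipelines described above is shown below. It assumes an `embed` function wrapping whatever embedding model is in use (here replaced by a deterministic placeholder so the example runs standalone); FAISS stands in for the vector store, and Milvus, Qdrant, or a managed service would slot into the same place.

```python
# Minimal RAG retrieval sketch. `embed` is a placeholder for a real embedding model;
# it produces hash-seeded pseudo-embeddings so the example runs without external services.
import hashlib

import faiss
import numpy as np

DIM = 64


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash-seeded random vector, unit-normalized."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(DIM).astype("float32")
    return v / np.linalg.norm(v)


documents = [
    "Refunds are processed within five business days.",
    "Invoices can be exported as CSV from the billing page.",
    "Support is available 24/7 through the in-app chat.",
]

index = faiss.IndexFlatIP(DIM)                  # inner product == cosine on unit vectors
index.add(np.stack([embed(d) for d in documents]))


def retrieve(question: str, k: int = 2) -> list[str]:
    scores, ids = index.search(embed(question)[None, :], k)
    return [documents[i] for i in ids[0]]


context = "\n".join(retrieve("How long do refunds take?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"
print(prompt)   # the assembled prompt would then be sent to the LLM of choice
```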

Posted 3 weeks ago

Apply

0.0 - 2.0 years

2 - 5 Lacs

Pune, Maharashtra

Remote

AI/ML Engineer – Junior
Location: Pune, Maharashtra
Experience: 1–2 years
Employment: Full-time

Role Overview:
Join our AI/ML team to build and deploy generative and traditional ML models, from ideation and data preparation to production pipelines and performance optimization. You'll solve real problems, handle data end-to-end, navigate the AI development lifecycle, and contribute to both model innovation and operational excellence.

Key Responsibilities:
● Full AI/ML Lifecycle: Engage from problem scoping through data collection, modeling, deployment, monitoring, and iteration.
● Generative & ML Models: Build and fine-tune transformer-based LLMs (like GPT, BERT), both commercial and local, as well as GANs and diffusion models; also develop traditional ML models for classification, regression, etc. Experience with DL models for computer vision (CNN, R-CNN, etc.) is a plus.
● Data Engineering: Clean, label, preprocess, augment, and version datasets. Build ETL pipelines and features for model training. Experience with libraries like pandas, numpy, nltk, etc.
● Model Deployment & MLOps: Containerize models (Docker), deploy APIs/microservices, implement CI/CD for ML, monitor performance and drift (see the serving sketch after this listing).
● Troubleshooting & Optimization: Analyze errors; handle overfitting/underfitting, hallucinations, class imbalance, and latency concerns; tune model performance.
● Collaboration: Partner with project managers, DevOps, backend engineers, and senior ML staff to integrate AI features.
● Innovation & Research: Stay current with GenAI (prompt techniques, RAG, LangChain, LLM models), test new architectures, contribute insights.
● Documentation: Maintain reproducible experiments, write clear docs, follow best practices.

Required Skills:
● Bachelor's in CS, AI, Data Science, or a related field.
● 1–2 years in ML/AI roles; hands-on with both generative and traditional models.
● Proficient in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face, scikit-learn).
● Strong understanding of the AI project lifecycle and MLOps principles.
● Experience in data workflows: preprocessing, feature engineering, dataset management.
● Familiarity with Docker, REST APIs, Git, and cloud platforms (AWS/GCP/Azure).
● Sharp analytical and problem-solving skills, with the ability to debug and iterate on models.
● Excellent communication and teamwork abilities.

Preferred Skills:
● Projects involving ChatGPT, LLaMA, Stable Diffusion, or similar models.
● Experience with prompt engineering, RAG pipelines, and vector DBs (FAISS, Pinecone, Weaviate).
● Exposure to CI/CD pipelines and ML metadata/versioning.
● GitHub portfolio or publications in generative AI.
● Awareness of ethics, bias mitigation, privacy, and compliance in AI.

Job Type: Full-time
Pay: ₹200,000.00 - ₹500,000.00 per year
Benefits: Provident Fund
Work Location: Hybrid remote in Pune, Maharashtra
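The deployment responsibilities above usually boil down to wrapping a trained model behind a small HTTP service. Below is a hedged FastAPI sketch: the model object is a placeholder (any pickled scikit-learn or similar estimator would do), and the request schema is an assumption.

```python
# Minimal model-serving sketch with FastAPI. The model is a stand-in; in practice it
# would be loaded from a registry or artifact store at startup.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")


class DummyModel:
    """Stand-in for a real estimator (e.g. a scikit-learn pipeline loaded with joblib)."""
    def predict(self, rows):
        return [sum(r) for r in rows]   # toy scoring rule


model = DummyModel()


class PredictRequest(BaseModel):
    features: list[list[float]]


class PredictResponse(BaseModel):
    predictions: list[float]


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    return PredictResponse(predictions=model.predict(req.features))

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
# The service is then containerized with Docker for deployment.
```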

Posted 3 weeks ago

Apply

5.0 years

6 - 7 Lacs

Hyderābād

On-site

Our Company:
At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems.

We're building intelligent systems that go far beyond automation: they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you'll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes.

You'll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software, from scalable multi-agent coordination and fine-tuned LLM applications to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.

We are seeking a highly skilled Senior AI Engineer to drive the development and deployment of agentic AI systems with a strong emphasis on AI observability and data platform integration. You will work at the forefront of cutting-edge AI research and its practical application: designing, implementing, and monitoring intelligent agents capable of autonomous reasoning, decision-making, and continuous learning.

Ignite the Future of AI at Teradata!

What You'll Do: Shape the Way the World Understands Data
As a Senior Agentic AI Engineer at Teradata, you'll build cutting-edge intelligent agents that transform how users explore data, derive insights, and automate workflows across industries such as healthcare, finance, and telecommunications. You will:
Design and implement autonomous AI agents for semantic search, text-to-SQL translation, and analytical task execution.
Develop modular prompts, reasoning chains, and decision graphs tailored to complex enterprise use cases.
Enhance agent performance through experimentation with LLMs, prompt tuning, and advanced reasoning workflows.
Integrate agents with Teradata's Model Context Protocol (MCP) to enable seamless interaction with model development pipelines.
Build tools that allow agents to monitor training jobs, evaluate models, and interact with unstructured and structured data sources.
Work on retrieval-augmented generation (RAG) pipelines and extend agents to downstream ML systems.

Who You'll Work With: Join Forces with the Best
You'll collaborate with a world-class team of AI architects, ML engineers, and domain experts in Silicon Valley, working together to build the next generation of enterprise AI systems. You'll also work cross-functionally with:
Product managers and UX designers to craft agentic workflows that are intuitive and impactful.
Domain specialists to ensure solutions align with real-world business problems in regulated industries.
Infrastructure and platform teams responsible for training, evaluation, and scaling AI workloads.
This is a rare opportunity to shape foundational AI capabilities within a global, data-driven company.

This is a deeply collaborative environment where technical innovation meets real-world application, and where your ideas are not only heard but implemented to shape the next generation of data interaction.

What Makes You a Qualified Candidate: Skills in Action
5+ years of product engineering experience in AI/ML, with strong software development fundamentals.
Proficiency with LLM APIs (e.g., OpenAI, Claude, Gemini) and agent frameworks such as AutoGen, LangGraph, AgentBuilder, or CrewAI.
Experience designing multi-step reasoning chains, prompt pipelines, or intelligent workflows.
Familiarity with agent evaluation metrics: correctness, latency, failure modes (a small evaluation harness follows this listing).
Passion for building production-grade systems that bring AI to life.

What You Bring: Passion and Potential
Master's or Ph.D. in Computer Science, AI, or a related field, or equivalent industry experience.
Experience working with multimodal inputs, retrieval systems, or structured knowledge sources.
Deep understanding of enterprise data workflows and scalable AI architectures.
Prior exposure to MCP or similar orchestration/protocol systems.
#LI-VB1
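The evaluation metrics named above (correctness, latency, failure modes) are easy to prototype before adopting a full framework. The sketch below is an assumption-laden micro-harness: `run_agent` is a placeholder for the agent under test, and correctness is judged by naive substring matching, which a real harness would replace with graded or model-based scoring.

```python
# Micro-harness sketch for agent evaluation: correctness, latency, and failure modes.
# `run_agent` is a stand-in for the agent under test.
import time
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    expected_substring: str


def run_agent(question: str) -> str:
    """Placeholder agent; replace with a call into the real agent pipeline."""
    return "Total revenue last quarter was 4.2M USD."


def evaluate(cases: list[EvalCase]) -> dict:
    correct, failures, latencies = 0, [], []
    for case in cases:
        start = time.perf_counter()
        try:
            answer = run_agent(case.question)
            latencies.append(time.perf_counter() - start)
            if case.expected_substring.lower() in answer.lower():
                correct += 1
            else:
                failures.append((case.question, "wrong answer"))
        except Exception as exc:   # crashes are a failure mode worth counting separately
            failures.append((case.question, f"exception: {exc}"))
    return {
        "accuracy": correct / len(cases),
        "avg_latency_s": sum(latencies) / max(len(latencies), 1),
        "failures": failures,
    }


print(evaluate([EvalCase("What was revenue last quarter?", "4.2M")]))
```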

Posted 3 weeks ago

Apply

8.0 years

6 - 7 Lacs

Hyderābād

On-site

Our Company: Ignite the Future of AI at Teradata!
At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems.

We're building intelligent systems that go far beyond automation: they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you'll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes.

In this role, you'll architect foundational components for production-grade AI systems, from agent frameworks and LLM pipelines to observability and evaluation layers that ensure reliability, accountability, and performance. You'll be responsible not just for building models, but for making them measurable, debuggable, and trustworthy in real-world, high-stakes deployments.

You'll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software, from scalable multi-agent coordination and fine-tuned LLM applications to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.

What You'll Do: Shape the Way the World Understands Data
As a Staff Agentic AI Engineer at Teradata, you'll build cutting-edge intelligent agents that transform how users explore data, derive insights, and automate workflows across industries such as healthcare, finance, and telecommunications. You will:
Design and implement autonomous AI agents for semantic search, text-to-SQL translation, and analytical task execution.
Develop modular prompts, reasoning chains, and decision graphs tailored to complex enterprise use cases.
Enhance agent performance through experimentation with LLMs, prompt tuning, and advanced reasoning workflows.
Integrate agents with Teradata's Model Context Protocol (MCP) to enable seamless interaction with model development pipelines.
Build tools that allow agents to monitor training jobs, evaluate models, and interact with unstructured and structured data sources.
Work on retrieval-augmented generation (RAG) pipelines and extend agents to downstream ML systems.

Who You'll Work With: Join Forces with the Best
You'll collaborate with a world-class team of AI architects, ML engineers, and domain experts in Silicon Valley, working together to build the next generation of enterprise AI systems. You'll also work cross-functionally with:
Product managers and UX designers to craft agentic workflows that are intuitive and impactful.
Domain specialists to ensure solutions align with real-world business problems in regulated industries.
Infrastructure and platform teams responsible for training, evaluation, and scaling AI workloads.
This is a rare opportunity to shape foundational AI capabilities within a global, data-driven company.

This is a deeply collaborative environment where technical innovation meets real-world application, and where your ideas are not only heard but implemented to shape the next generation of data interaction.

What Makes You a Qualified Candidate: Skills in Action
8+ years of software engineering experience, with 5+ years focused on AI/ML, intelligent systems, or agent-based architectures.
Deep understanding of software design principles and scalable architecture patterns.
Strong experience with LLM APIs (e.g., OpenAI, Claude, Gemini) and agentic frameworks (e.g., AutoGen, LangGraph, AgentBuilder, CrewAI).
Proven ability to build complex multi-step workflows using prompt pipelines, tools, and adaptive reasoning.
Proficiency in Python and experience with vector databases, API integration, and orchestration tools.
Familiarity with agent evaluation metrics: correctness, latency, grounding, and tool use accuracy.
Experience leading AI projects from inception to deployment in a production setting.

What You Bring: Passion and Potential
Master's or Ph.D. in Computer Science, AI, or a related field, or equivalent industry experience.
Experience working with multimodal inputs, retrieval systems, or structured knowledge sources.
Hands-on experience with prompt engineering, function-calling agents, RAG patterns, and evaluation harnesses.
Prior work with Model Composition Protocol (MCP) or similar orchestration frameworks is a strong plus.
Excellent cross-team communication and stakeholder engagement skills.
Passion for shipping high-quality AI products that are safe, explainable, and valuable.
#LI-VB1

Posted 3 weeks ago

Apply

5.0 years

2 - 8 Lacs

Hyderābād

On-site

Key Responsibilities
MLOps Strategy & Implementation: Design, implement, and maintain end-to-end MLOps pipelines, ensuring seamless integration of machine learning models into production environments.
Model Deployment & Monitoring: Utilize Azure ML services to deploy models efficiently and monitor their performance, ensuring reliability and scalability.
CI/CD Pipeline Development: Develop and manage continuous integration and continuous deployment pipelines using Azure DevOps or similar tools to automate model training, testing, and deployment processes.
Collaboration & Consultation: Work closely with data scientists, engineers, and business stakeholders to understand requirements and translate them into robust MLOps solutions.
Performance Optimization: Implement strategies for model optimization, including hyperparameter tuning and resource management, to enhance model accuracy and efficiency.
Governance & Compliance: Ensure that deployed models adhere to organizational policies, security standards, and regulatory requirements.

Required Skills & Qualifications
Experience: Minimum of 5 years in machine learning roles, with at least 2–3 years focused on MLOps, specifically in deploying and managing models in production.
Technical Proficiency:
Strong programming skills in Python, including frameworks like Flask and FastAPI, and libraries such as Pandas and NumPy.
Hands-on experience with Azure Machine Learning, including model training, deployment, and monitoring (an MLflow tracking sketch follows this listing).
Familiarity with containerization technologies like Docker and orchestration tools such as Kubernetes.
Experience with CI/CD tools like Azure DevOps, GitLab CI, or GitHub Actions.
Knowledge of MLflow, Azure Databricks, and Azure Kubernetes Service (AKS).
Portfolio: Demonstrated experience with at least 2–3 production-level implementations of machine learning models, showcasing the ability to transition models from development to production environments effectively.
Soft Skills: Excellent communication and consulting skills, with the ability to collaborate across teams and present complex technical concepts to non-technical stakeholders.

Preferred Qualifications
Experience with model governance, drift detection, and performance monitoring in production settings.
Familiarity with Azure governance tools, cost management, and policy enforcement.
Exposure to Agile methodologies and project management tools like Azure Boards or JIRA.
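Several listings on this page, including this one, ask for MLflow experience for experiment tracking. A hedged sketch of the core workflow is below; the experiment name, parameters, and metric are assumptions, and it uses the standard `mlflow` and `mlflow.sklearn` logging APIs.

```python
# Minimal MLflow tracking sketch. Assumes an MLflow tracking destination is configured
# (MLFLOW_TRACKING_URI or the default local ./mlruns directory).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")        # experiment name is an assumption

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, artifact_path="model")
    # With a registry-backed tracking server, the model could also be registered:
    # mlflow.sklearn.log_model(model, artifact_path="model",
    #                          registered_model_name="demo-classifier")
```

A deployment pipeline (Azure ML, AKS, or similar) would then pick the logged artifact up from the tracking store.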

Posted 3 weeks ago

Apply

3.0 years

2 - 7 Lacs

Chennai

On-site

An Amazing Career Opportunity for an AI/ML Engineer
Location: Chennai, India (Hybrid)
Job ID: 39582

Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who will be responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption.

We are the trusted source for innovative products, solutions, and services that help millions of customers around the globe create, manage, and use secure identities.

Who are we?
HID powers the trusted identities of the world's people, places, and things, allowing people to transact safely, work productively and travel freely. We are a high-tech software company headquartered in Austin, TX, with over 4,000 worldwide employees. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/

About HID Global, Chennai
HID Global powers the trusted identities of the world's people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com.

HID Global is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two engineering centres (Bangalore and Chennai) with over 200 engineering staff. The global engineering team is based in Chennai, and one of the business unit engineering teams is based in Bangalore.

Physical Access Control Solutions (PACS)
HID's Physical Access Control Solutions Business Area: the HID PACS Business Unit focuses on the growth of new and existing clients, where we leverage the latest card and reader technologies to solve the security challenges of our clients. Other areas of focus include authentication, card subsystems, card encoding, biometrics, location services, and all other aspects of a physical access control infrastructure.

Qualifications:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Roles & Responsibilities:
Design, develop, and deploy robust and scalable AI/ML models in production environments.
Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics.
Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval.
Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions.
Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures.
Optimize models for performance, latency, and scalability.
Build data pipelines and workflows to support model training and evaluation.
Conduct research and experimentation on state-of-the-art techniques (DL, NLP, time series, CV).
Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning, and re-training (a drift-detection sketch follows this listing).
Lead code reviews and architecture discussions, and mentor junior and peer engineers.
Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency.
Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX.
Ensure data integrity, quality, and preprocessing best practices for AI/ML model development.
Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance.
Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems.
Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization.
Bring a strong analytical and problem-solving mindset.

Technical Requirements:
Strong expertise in AI/ML engineering and software development.
Strong experience with RAG architecture and vector databases.
Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.).
Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration.
Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation, and deployment.
Experience with production-grade ML systems (model serving, APIs, pipelines).
Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.).
Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection, and time series forecasting.
Hands-on software engineering experience with knowledge of version control, testing, and CI/CD.
Hands-on experience in deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow.
Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection.
Proficiency in scaling AI solutions in cloud environments (AWS, Azure, and GCP).
Experience in data preprocessing, feature engineering, and dimensionality reduction.
Exposure to data privacy, compliance, and secure ML practices.

Education and/or Experience:
Graduation or a master's degree in computer science, information technology, or AI/ML/data science.
3+ years of hands-on experience in AI/ML development, deployment, and optimization.
Experience in leading AI/ML teams and mentoring junior engineers.

Why apply?
Empowerment: You'll work as part of a global team in a flexible work environment, learning and enhancing your expertise. We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don't need to check all the boxes. If you have most of the skills and experience, we want you to apply.
Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing, or part-time job seekers.
Integrity: You are results-oriented, reliable, and straightforward, and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted.
This opportunity may be open to flexible working arrangements.
HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation.

We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it.
When you join our HID team, you'll also be part of the ASSA ABLOY Group, the global leader in access solutions. You'll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it's important to us to have diverse, inclusive teams, and we value different perspectives and experiences.
#LI-HIDGlobal
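Model drift detection comes up in this listing and several others on the page. A common lightweight approach is the Population Stability Index (PSI) between a training-time feature distribution and a recent production sample; the sketch below is a generic NumPy implementation with a conventional 0.2 alert threshold, not any particular vendor's method.

```python
# Generic drift check sketch: Population Stability Index (PSI) per feature.
# The 0.2 threshold is a common rule of thumb, not a universal standard.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a recent production sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf            # capture out-of-range production values
    exp_pct, _ = np.histogram(expected, bins=edges)
    act_pct, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_pct / exp_pct.sum(), 1e-6, None)
    act_pct = np.clip(act_pct / act_pct.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)    # training-time distribution
production = rng.normal(loc=0.4, scale=1.0, size=2_000)   # shifted production sample

score = psi(baseline, production)
print(f"PSI = {score:.3f}", "-> drift alert" if score > 0.2 else "-> stable")
```

A drift alert would typically feed the retraining pipeline mentioned in the responsibilities above.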

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

India

Remote

Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it.
At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions.
If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
3–5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.

Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

India

On-site

About the Company
Sparsa AI is a Singapore-based industrial-AI startup building the next generation of agentic AI platform to transform how physical industries, such as manufacturing and logistics, make decisions and optimize their operations. Our AI agents orchestrate complex workflows across business functions and enterprise applications, including ERP, MES, CRM, and supply chain environments, to resolve real-world constraints and unlock productivity.

About the Role
Sparsa AI is seeking an execution-focused Vice President of Engineering to balance vision, product strategy, and delivery of its agentic AI platform. This executive role blends customer-centric product leadership with technical oversight of a multi-tenant SaaS and ML/AI stack and the engineering teams that build and manage them. The ideal candidate brings solid experience in AI product development, cloud infrastructure, and cross-functional engineering team building, with a strong focus on delivering enterprise-grade solutions. This is a high-impact opportunity to shape a category-defining platform in one of the most ambitious AI startups focused on real-economy enterprises. This job can be located anywhere in India. Frequent travel within India as well as to locations in Europe and Asia is expected.

Core Responsibilities
Define, own, and continuously evolve the end-to-end product roadmap in alignment with company vision and market demand.
Translate output from the Chief AI Officer-led innovation stream into deployable, enterprise-ready products.
Prioritize product features and initiatives based on customer needs, business impact, and technical feasibility.

Product Lifecycle Leadership
Own the end-to-end product lifecycle: concept → MVP → iterative releases → scale.
Set and uphold delivery, quality, and performance standards across the product organization.

Infrastructure & Deployment Ownership
Own the architecture and operations of cloud-native infrastructure used to support product deployment and scaling.
Lead development and oversight of AI/MLOps systems, ensuring robust, automated, and secure model training, testing, and deployment pipelines.
Ensure alignment of product infrastructure with enterprise IT security, compliance, and integration requirements.

Product-Market Fit & GTM Alignment
Partner closely with the leadership team to align product strategy with GTM execution.
Drive delivery success for agentic AI solutions across our growing customer base, ensuring measurable outcomes and operational reliability.
Interface with key customers and partners to understand emerging needs and drive product-market fit in targeted (real economy) industries.

Qualifications
10+ years of experience in product or platform leadership, ideally in AI startups or SaaS environments.
Demonstrated success delivering customer-facing software and ML/AI solutions from zero to scale.
Strong ability to connect customer problems with technical solutions and manage trade-offs.
Experienced in building and leading cross-functional teams (product, engineering, cloud, MLOps) in agile environments.
Proven experience building and scaling multi-tenant SaaS platforms with strong observability, compliance, and performance.
Deep understanding of cloud-native ML architecture, MLOps best practices (CI/CD, versioning, drift detection), and integrating third-party tools across AWS, Azure, or GCP environments.
Fluent English & German language skills.
A high degree of mobility and flexibility in location is preferred.

Required Skills
Hands-on exposure to LLM agents and orchestration frameworks (LangChain, Semantic Kernel, etc.).
Experience with developer platforms, agent SDKs, or enterprise integration stacks (e.g., SAP, MES, RPA).
Experience with mainstream ERPs such as SAP, Oracle, etc.
Experience with ERP and MES products such as SAP, Siemens, etc.

Preferred Skills
You are a builder and executor, not just a strategist.
You thrive in ambiguity and are energized by both 0→1 and 1→N product challenges.
You have deep empathy for both internal dev teams and external enterprise users.
You share Sparsa's mission to provide a Digital Workforce as a Service (DWAAS) through agentic AI.

Pay range and compensation package
Executive-level role at a visionary AI company with presence in Asia and Europe.
High ownership, equity participation, and impact on product and company direction.
Direct collaboration with the founding team (CEO, CSO, CAIO).
A platform to build something transformative for global industries.

Equal Opportunity Statement
If you are passionate about building transformative products at the intersection of AI and industrial operations, we invite you to shape the future with us. This is your opportunity to lead product delivery in a fast-growing company that is redefining how the real economy works. At Sparsa AI, you'll work alongside an exceptional team, solve real-world problems, and leave a lasting impact on global industries. Let's build the future of industrial AI agents together. If you have the chops, let's connect!

Posted 3 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Bengaluru, Karnataka, India
Department: Data Engineering
Job posted on: Jul 09, 2025
Employment type: Full Time

About Us
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (a PySpark sketch follows this listing).
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Personal Data Protection Act:
By submitting your application for this job, you are authorizing MatchMove to: (a) collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and (b) retain your personal data for one year for consideration of future job opportunities (where applicable).
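For the partitioning and storage responsibilities above, a minimal PySpark batch job is sketched below. Paths, column names, and the partition key are assumptions; it writes partitioned Parquet to S3, and an Iceberg or Hudi table format would typically be layered on top of the same DataFrame logic.

```python
# Minimal PySpark batch-transform sketch: read raw events, derive a date partition,
# and write partitioned Parquet to S3. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate_transactions").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/transactions/")   # placeholder path

curated = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_time"))
    .withColumn("dt", F.to_date("event_ts"))                     # daily partition key
    .dropDuplicates(["transaction_id"])                          # idempotent re-runs
    .filter(F.col("amount").isNotNull())
)

(
    curated.write
    .mode("append")
    .partitionBy("dt")
    .parquet("s3://example-curated-bucket/transactions/")        # placeholder path
)

spark.stop()
```

The same job shape runs unchanged as an AWS Glue PySpark job, with Glue and Lake Formation providing the catalog and access control around it.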

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

1. AI Video Creator
Role Overview: Produce hyper-realistic, luxury-level videos using AI tools like MidJourney, RunwayML, Sora, Pika, and Kling AI.

Key Responsibilities:
Generate high-end, brand-driven video content for the real estate category.
Collaborate with creative and marketing teams on campaigns, launches, and editorials.
Author and refine prompts to ensure consistent, on-brand visual outputs.
Stay current on AI and industry trends to enhance workflows.

Requirements:
Proven portfolio in AI-assisted content within real estate.
Proficiency with AI tools (MidJourney, DALL·E, RunwayML) plus traditional editing software (Photoshop, After Effects, Blender, etc.).
Basic Python/LLM experience for creative ideation.
Produce Gen-AI video spots end-to-end, from prompt through compositing to final render.

Highlights:
Own quality control: ensure content is loop-free, flicker-free, and "cringe-free".
Design prompt workflows and shot lists; apply compositing, scripting, and VFX.
Collaborate to blend AI output seamlessly with the creative vision.

Desired Skills:
Skilled in Premiere, After Effects, etc.
Deep understanding of generative AI "quirks" (temporal drift, sync issues).
Bonus: Python scripting for automation, or dataset curation for brand consistency.
Create videos using tools like Synthesia, HeyGen, Pictory, Runway, etc.
Develop scripts/prompts/storyboards for AI-generated visuals.

Preferred Skills:
Proven portfolio in AI tool-based video production.
Familiarity with Adobe Premiere or motion graphics tools.
Basic generative AI knowledge; optional voice-cloning/sync experience.

Category overview:
Core Responsibilities: Use generative AI to produce videos; refine prompts/scripts; post-process outputs.
Collaborations: Tight integration with creative/marketing teams.
Essential Skills: Proficiency in AI video tools, standard editing software, prompt engineering.
Bonus Skills: Python, VFX, dataset management, voice-synthesis knowledge.
Formats & Types: Roles vary between freelance, remote, full-time, and regional (US/India).
Portfolios Required: Strong, relevant AI-driven video work is essential.

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies