Jobs
Interviews

44,412 GCP Jobs - Page 44

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

175.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

Function Description: Global Servicing (GS) brings together the company's external and internal servicing functions to provide best-in-class servicing to our customers and colleagues. Emerging as the enterprise Sales Operations & Business Enablement utility, SABE (Sales & Business Enablement) drives standardization and agility for the organization. Sales Operations includes pre-sales, acquisition and account management, while Business Enablement includes platforms and capabilities.

The Enterprise Data Platforms (EDP) team in Enterprise Digital & Data Solutions (EDDS) is responsible for leading Amex data and capabilities on the centralized enterprise platforms. The key data platforms, Lumi and Cornerstone, use a big-data environment to house all Amex data and serve as a key analytical environment for carrying out data analytics and supporting business decisioning. EDP also has product ownership of multiple BI products, DQ capabilities, and data streaming and decisioning platforms, including Sisense, nVision, HyperDrive, and Qalibrate. These products enable users across the enterprise to meet their analytical, reporting, BI and DQ needs. The EDP Business Support team in SABE will be responsible for providing centralized servicing for these platforms/products, ensuring timely resolution of queries raised by platform users, and driving and governing core platform functions while working closely with the EDP product owners.
Core Responsibilities:
- Lead, coach and mentor a team of business analysts to implement servicing standards and a data quality framework for the widely used EDP products
- Work with customers to identify, specify and document complex business requirements and provide appropriate solutions, specifically around prioritized product enhancement initiatives
- Manage customer expectations, including scope, schedule, changes, capacity, and problem resolution
- Drive high engagement with customers to ensure on-time, high-quality project deliverables
- Lead efforts to automate and standardize new product development to increase efficiency and accuracy
- Provide functional and technical guidance to the team
- Conduct deep analysis to uncover trends, recommend business solutions and implement strategic initiatives
- Drive partner and team engagement through governance calls, 1-on-1s and other connects
- Develop the team by evaluating and assessing proficiency on tools, technologies, and data
- Get involved in SABE initiatives to advance service offerings and support executive teams in strategy-building exercises

Servicing Responsibilities:
- Manage and enhance the servicing experience for end customers through high-quality resolutions and strong process adherence (governance, reporting, escalation)
- Take appropriate action to close the feedback loop by recommending solutions to unstructured challenges via the correct channels
- Identify and document standard process guidelines to build knowledge within the team and reduce overall product issues
- Be the subject matter expert for the platform/product

Product Operations (end-to-end ownership):
- Collaborate across platforms to deliver business value, e.g. data validations, data onboarding, data mapping, metadata organization
- Drive initiatives to monitor, analyze and improve data quality
- Formulate and communicate strategies in a clear and compelling way in the form of innovative reporting solutions
- Manage customer expectations, including scope, schedule, changes, and problem resolution
- Build plug-and-play products, capabilities and processes

How will you make an impact in this role?

Critical Factors to Success (Outcome Driven):

Business Outcomes:
- Identify and solve complex customer issues spanning data needs, access problems, query optimization, tool troubleshooting and more
- Derive insights from issues and partner with tech and business teams to provide product consultation that evolves the platform and the overall user experience

Leadership Outcomes:
- Put enterprise thinking first; connect the role's agenda to enterprise priorities and balance the needs of customers, partners, colleagues and shareholders
- Lead with an external perspective, challenge the status quo and bring continuous innovation to our existing offerings
- Demonstrate learning agility; make decisions quickly and with the highest level of integrity
- Deliver the world's best customer experiences every day

Past Experience: 8+ years' experience leading teams in a production support or data environment. Preferred: payments industry experience, big data platforms, servicing approach.

Academic Background: Bachelor's degree in a STEM field with work experience in information management, strategy, or the payments business.
Functional Skills:
- Customer service, prioritization, multitasking, communication and leadership skills
- Case management systems such as ServiceNow, JIRA, etc.

Technical Skills:
- Must have: PL/SQL, Hive
- Preferred: GCP, BigQuery, Python, Advanced Excel

Platforms:
- Big Data; knowledge of data management systems
- MS Office suite (Excel, PowerPoint, Word)
- ServiceNow/Rally/JIRA

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. An offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 3 days ago

Apply

1.0 - 6.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Role Purpose: The purpose of the role is to resolve, maintain and manage the client's software/hardware/network based on service requests raised by end users, per the defined SLAs, ensuring client satisfaction.

Do:
- Ensure timely response to all tickets raised by client end users
- Solution service requests while maintaining quality parameters
- Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment, keeping track of their proper functioning and upkeep
- Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring the right solutioning within the defined resolution timeframe
- Perform root cause analysis of the tickets raised and create an action plan to resolve the problem and ensure client satisfaction
- Provide acceptance and immediate resolution for high-priority tickets/service requests
- Install and configure software/hardware based on service requests
- Adhere 100% to timelines per the priority of each issue, to manage client expectations and ensure zero escalations
- Provide application/user access per client requirements and requests to ensure timely solutioning
- Track all tickets from acceptance to resolution per the resolution time defined by the customer
- Maintain timely backups of important data/logs and management resources to ensure the solution is of acceptable quality and maintains client satisfaction
- Coordinate with the on-site team for complex problem resolution and ensure timely client servicing
- Review the logs that chatbots gather and ensure all service requests/issues are resolved in a timely manner

Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform.
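The priority-based SLA handling described above can be sketched as a small deadline check per ticket. A minimal illustration in Python (the priority tiers, hour values, and threshold are hypothetical; real SLAs are client-defined):

```python
from datetime import datetime, timedelta

# Hypothetical SLA targets by priority, in hours (actual values are client-defined).
SLA_HOURS = {"P1": 4, "P2": 8, "P3": 24, "P4": 72}

def sla_status(priority, raised_at, resolved_at=None, now=None):
    """Return 'met', 'breached', 'at_risk', or 'on_track' for one ticket."""
    deadline = raised_at + timedelta(hours=SLA_HOURS[priority])
    if resolved_at is not None:
        return "met" if resolved_at <= deadline else "breached"
    now = now or datetime.utcnow()
    if now > deadline:
        return "breached"
    # Flag as at-risk when less than 25% of the SLA window remains (invented threshold).
    window = SLA_HOURS[priority] * 3600
    remaining = (deadline - now).total_seconds()
    return "at_risk" if remaining < 0.25 * window else "on_track"
```

In practice the ticketing tool (e.g. ServiceNow) enforces SLAs natively; a check like this is only useful for ad-hoc reporting on exported ticket data.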

Posted 3 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Overview
As a Lead Engineer, you will be responsible for architecting, developing, and scaling our core platform using React (frontend), Django (backend), and PostgreSQL (database). You will collaborate closely with cross-functional teams, mentor engineers, ensure best practices, and play a key role in shaping our technology roadmap.

Key Responsibilities

Technical Leadership:
- Own architecture and design decisions for scalable, secure, and high-performance applications
- Define coding standards, best practices, and development workflows
- Mentor and guide a team of developers

Full-Stack Development:
- Build responsive, user-friendly frontends with React
- Develop robust backend services and APIs using Django/DRF
- Design and optimize database schemas and queries in PostgreSQL

System Architecture & Scalability:
- Ensure applications are cloud-ready and optimized for scale
- Implement CI/CD pipelines and oversee deployment strategies
- Integrate third-party APIs, AI modules, and data pipelines as needed

Collaboration & Strategy:
- Work closely with product managers, designers, and stakeholders to deliver features
- Translate business needs into technical solutions
- Participate in strategic planning of the product roadmap and infrastructure

Required Qualifications
- 5–8 years of professional experience in full-stack development
- Strong expertise in React.js, Django/Django REST Framework, and PostgreSQL
- Proven experience designing scalable system architectures
- Solid understanding of cloud platforms (AWS, GCP, or Azure)
- Strong grasp of API design, authentication, and security best practices
- Experience with CI/CD, Docker, Kubernetes preferred
- Ability to lead and mentor engineering teams
- Excellent problem-solving and communication skills

Nice-to-Have Skills
- Experience in healthcare, telemedicine, or AI-driven applications
- Knowledge of microservices architecture
- Familiarity with mobile development (React Native or Flutter)
- Understanding of HIPAA/GDPR compliance and data security
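The backend responsibilities above center on API validation and serialization, which Django REST Framework serializers automate. As a toy illustration of that validate-then-whitelist flow (plain Python, not actual DRF code; the "patient" fields are invented to echo the healthcare nice-to-have):

```python
# Hand-rolled sketch of what a DRF ModelSerializer does: validate incoming
# payloads, then expose only whitelisted fields on the way out.
EXPOSED_FIELDS = ("id", "name", "age")  # hypothetical field whitelist

def validate_patient(payload):
    """Return a dict of field errors; empty dict means the payload is valid."""
    errors = {}
    if not payload.get("name"):
        errors["name"] = "required"
    if not isinstance(payload.get("age"), int) or payload["age"] < 0:
        errors["age"] = "must be a non-negative integer"
    return errors

def serialize_patient(record):
    # Mirrors a serializer's `fields` list: internal columns never leak out.
    return {k: record[k] for k in EXPOSED_FIELDS}
```

In real DRF this is a `serializers.ModelSerializer` with a `Meta.fields` list; the sketch only shows the shape of the contract the role would own.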

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Metro Global Solution Center (MGSC) is the internal solution partner for METRO, a €30.5 billion international wholesaler with operations in 31 countries through 625 stores and a team of 93,000 people globally. METRO operates in a further 10 countries with its Food Service Distribution (FSD) business and is thus active in a total of 34 countries. MGSC is located in Pune (India), Düsseldorf (Germany) and Szczecin (Poland). We provide HR, Finance, IT and business operations support to 31 countries, speak 24+ languages and process over 18,000 transactions a day. We are setting tomorrow's standards for customer focus, digital solutions, and sustainable business models. For over 10 years, we have been providing services and solutions from our two locations in Pune and Szczecin, gaining extensive experience in serving our internal customers with high quality and passion. We believe that we can add value, drive efficiency, and satisfy our customers.

Website: https://www.metro-gsc.in
Company Size: 600-650
Headquarters: Pune, Maharashtra, India
Type: Privately Held
Inception: 2011

Job Description

Who we are
At METRO, we drive technology for one of the world's leading international food wholesalers. From e-commerce to checkout and delivery software, we build products that make each day a success for our customers and colleagues. With passion and ownership, we shape the future of wholesale.

We are looking for:
- A senior full stack engineer with deep expertise in both frontend and backend technologies.
- Someone with strong leadership skills who can guide architectural discussions, mentor engineers, and drive technical excellence.
- An engineer with proven experience in complex, distributed systems and workflow-driven architectures (Camunda/CIB7, Istio, microservices).
This role matters to us
As a Senior Full Stack Engineer / Tech Lead, you will play a key role in METRO's global Quality Management System, which harmonizes and streamlines quality assurance processes across all entities. This solution is built upon a large-scale codebase that integrates backend services with a complex monolithic frontend. Your contribution will be pivotal in guiding the split into modular, scalable components while ensuring reliability and design consistency. You will also lead technical discussions, align with architects, and mentor other engineers.

Key Responsibilities
- Design, develop, and maintain both frontend (React, Redux, Material UI) and backend (Java, Spring Boot) components.
- Lead the modernization and modularization of a very large monolithic frontend and backend codebase.
- Collaborate with architects and product managers to align on long-term technical strategies and system design.
- Mentor and support mid-level engineers, fostering a culture of knowledge sharing and high code quality.
- Ensure system performance, security, and scalability across frontend and backend layers.
- Promote clean code practices, automated testing, and CI/CD pipelines to maintain development excellence.
- Work closely with DevOps and platform teams to ensure cloud-native deployments on GCP with Kubernetes and Istio.

Qualifications

Must-Have Qualifications

Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or equivalent practical experience.

Work Experience & Skills
- Proven hands-on experience with frontend frameworks (React, Redux, Material UI, HTML5, CSS3, JavaScript/TypeScript).
- Extensive backend experience with Java, Spring Boot, and microservices architectures.
- Strong experience working with Camunda (preferably CIB7) for workflow automation.
- Experience with Istio or other service mesh technologies.
- Experience with relational and NoSQL databases (PostgreSQL, MongoDB).
- Hands-on experience with Docker, Kubernetes, and CI/CD pipelines (GitHub Actions, Jenkins X, or similar).
- Proficiency in automated testing across frontend and backend components.
- Excellent English communication skills (written and spoken), with the ability to collaborate across roles and cultures.

Other Requirements
- Ability to balance frontend user experience with backend scalability and performance.
- Leadership skills, with proven experience mentoring and guiding engineering teams.
- Strong problem-solving mindset with a process-oriented approach.

Nice-to-Have
- Experience splitting monolithic systems into modular architectures.
- Familiarity with cloud observability tools (Prometheus, Grafana, DataDog, GCP Monitoring).
- Knowledge of security best practices in distributed architectures (OAuth2, RBAC, mTLS).
- Experience participating in UI/UX design discussions and collaborating with designers or users.
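Splitting a monolith of this kind is often done with a strangler-style migration: traffic for already-extracted features is routed to the new modular services (e.g. via Istio VirtualServices) while the monolith keeps serving everything else. A minimal routing sketch in Python (path prefixes and service names are invented for illustration):

```python
# Strangler-fig routing sketch: a growing prefix list diverts requests to the
# extracted modular service; unmatched paths still hit the monolith.
MIGRATED_PREFIXES = ("/audits", "/suppliers")  # hypothetical extracted modules

def route(path):
    """Return the backend that should serve this request path."""
    if path.startswith(MIGRATED_PREFIXES):  # str.startswith accepts a tuple
        return "modular-service"
    return "monolith"
```

In production this decision lives in the mesh or gateway config rather than application code; the sketch only shows the incremental cut-over idea.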

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

Agra, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1,600,000-2,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - a seed-funded B2B SaaS company in procurement analytics)

Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the attention of IT and analytics teams, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs)
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security

Must-Have Technical Skills
- 3-5 years of software engineering or ML experience in production environments
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus
- Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.)
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models
- Experience building and securing REST/GraphQL APIs and microservices
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization)
- Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar)
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines

Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others
- Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake)
- Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas)
- Familiarity with Palantir/Foundry
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy)
- Prior work on conversational UX, prompt marketplaces, or agent simulators
- Contributions to open-source AI projects or published research

Why Join Us?
- Direct impact on products used by Fortune 500 teams
- Work with cutting-edge models and shape best practices for enterprise AI agents
- Collaborative culture that values experimentation, continuous learning, and work-life balance
- Competitive salary, equity, remote-first flexibility, and a professional development budget

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
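The RAG retrieval step named in the role above boils down to ranking document embeddings by similarity to a query embedding. A toy sketch in pure Python (hand-made 3-dimensional vectors stand in for a real embedding model and a vector store such as Pinecone or FAISS):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, vector) pairs. Return the k nearest doc ids."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

A vector database performs the same ranking with approximate-nearest-neighbor indexes so it scales past brute force; the retrieved documents are then stuffed into the LLM prompt as context.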

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1,600,000-2,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - a seed-funded B2B SaaS company in procurement analytics)

Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the attention of IT and analytics teams, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs)
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security

Must-Have Technical Skills
- 3-5 years of software engineering or ML experience in production environments
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus
- Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.)
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models
- Experience building and securing REST/GraphQL APIs and microservices
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization)
- Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar)
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines

Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others
- Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake)
- Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas)
- Familiarity with Palantir/Foundry
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy)
- Prior work on conversational UX, prompt marketplaces, or agent simulators
- Contributions to open-source AI projects or published research

Why Join Us?
- Direct impact on products used by Fortune 500 teams
- Work with cutting-edge models and shape best practices for enterprise AI agents
- Collaborative culture that values experimentation, continuous learning, and work-life balance
- Competitive salary, equity, remote-first flexibility, and a professional development budget

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

Noida, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1,600,000-2,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - a seed-funded B2B SaaS company in procurement analytics)

Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the attention of IT and analytics teams, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs)
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security

Must-Have Technical Skills
- 3-5 years of software engineering or ML experience in production environments
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus
- Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.)
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models
- Experience building and securing REST/GraphQL APIs and microservices
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization)
- Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar)
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines

Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others
- Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake)
- Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas)
- Familiarity with Palantir/Foundry
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy)
- Prior work on conversational UX, prompt marketplaces, or agent simulators
- Contributions to open-source AI projects or published research

Why Join Us?
- Direct impact on products used by Fortune 500 teams
- Work with cutting-edge models and shape best practices for enterprise AI agents
- Collaborative culture that values experimentation, continuous learning, and work-life balance
- Competitive salary, equity, remote-first flexibility, and a professional development budget

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply


8.0 years

0 Lacs

chennai, tamil nadu, india

On-site

About Kinaxis Elevate your career journey by embracing a new challenge with Kinaxis. We are experts in tech, but it’s really our people who give us passion to always seek ways to do things better. As such, we’re serious about your career growth and professional development, because People matter at Kinaxis. In 1984, we started out as a team of three engineers based in Ottawa, Canada. Today, we have grown to become a global organization with over 2000 employees around the world, and support 40,000+ users in over 100 countries. As a global leader in end-to-end supply chain management, we enable supply chain excellence for all industries. We are expanding our team in Chennai and around the world as we continue to innovate and revolutionize how we support our customers. Our journey in India began in 2020 and we have been growing steadily since then! Building a high-trust and high-performance culture is important to us and we are proud to be Great Place to Work® Certified™. Our state-of-the-art office, located in the World Trade Centre in Chennai, offers our growing team space for expansion and collaboration. Location Chennai, India About The Team The Senior Technology Consultant team will be responsible for understanding Kinaxis customers’ most pressing business performance challenges and will be committed to helping our customers solve complex issues in their supply chain management practice. The incumbent will work with new and existing customers and provide expert guidance in integrating Kinaxis’ Maestro solution with existing client enterprise systems so that our customers can start to experience immediate value from the product. What you will do Perform integration configuration – mapping, loading, transforming and validating data required to support our customer’s unique system landscape on moderate to complex projects. 
Design customized technology solutions to address specific business challenges or opportunities, considering the customer’s technological ecosystem and based on the integration approach (Kinaxis-led vs. customer-led). Assist with the implementation and deployment of technology solutions, including project management, system integration, configuration, testing, and training. Demonstrate knowledge and deep proficiency in both the Kinaxis Integration Platform Suite, Maestro data model, REST based API Integration capabilities, and support the client in identifying and implementing solutions best suited to individual data flows. Collaborate with Kinaxis Support and/or Cloud Services teams to address client queries around security risks or security incidents. Participate in deep-dive customer business requirements discovery sessions and develop integration requirements specifications. Drive data management and integration related activities including validation and testing of the solutions. Support deployment workshops to help customers achieve immediate value from their investment. Act as the point person for Kinaxis-led integrations and coach and guide more junior and/or offshore consultants through the tactical deliverables for data integration requirements, ensuring a smooth delivery of the end solution. Liaise directly with customers and internal SMEs such as the Technology Architect through the project lifecycle. Technologies we use Strong integration knowledge especially in extracting and transforming data from enterprise class ERP systems like SAP, Oracle, etc. Experience with ERP solutions such as SAP, Oracle, Infor, MS Dynamics etc. Hands on experience and expertise with ETL tools such as Talend, Informatica, SAP CPI / SAP BTP, OIC, MuleSoft, Apache Hop etc. Technical skills such as SQL, JAVA, JavaScript, Python, etc. Strong understanding of data modelling. Knowledge of Cloud Service Providers like GCP, Azure, AWS and their offerings is an advantage. 
Experience with configuration of data integration from / to SAP through BAPI / RFC, ABAP Programs, CDS Views, or ODATA is an advantage. What we are looking for Bachelor’s degree in Computer Science, Information Technology, AI/ML or a related field. 8-12 years of relevant experience in business software consulting, ideally in supply chain. Minimum 6 years of experience in data integration across complex enterprise systems. Passion for working in customer-facing roles and able to demonstrate strong interpersonal, communication, and presentation skills. Understanding of the software deployment life cycle, including business requirements definition, review of functional specifications, development of test plans, testing, user training, and deployment. Excellent communication, presentation, facilitation, time management, and customer relationship skills. Excellent problem-solving and critical thinking skills. Ability to work virtually and plan for up to 50% travel. Work With Impact: Our platform directly helps companies power the world’s supply chains. We see the results of what we do out in the world every day—when we see store shelves stocked, when medications are available for our loved ones, and so much more. Work with Fortune 500 Brands: Companies across industries trust us to help them take control of their integrated business planning and digital supply chain. Some of our customers include Ford, Unilever, Yamaha, P&G, Lockheed Martin, and more. Social Responsibility at Kinaxis: Our Diversity, Equity, and Inclusion Committee weighs in on hiring practices, talent assessment training materials, and mandatory training on unconscious bias and inclusion fundamentals. Sustainability is key to what we do, and we’re committed to a net-zero operations strategy for the long term. We are involved in our communities and support causes where we can make the most impact. 
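The extract-transform-load work described above has the same basic shape regardless of tool (Talend, Informatica, SAP CPI, etc.): pull rows from a source system, validate and map them, and load them into a target. A minimal sketch using Python's built-in `sqlite3`, where the in-memory databases and the hypothetical `orders`/`demand` tables and plant-to-site mapping stand in for a real ERP extract:

```python
import sqlite3

# In-memory databases stand in for a source ERP and a target planning system.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER, qty INTEGER, plant TEXT)")
source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(1, 10, "DE01"), (2, 0, "DE01"), (3, 7, "US02")])

target.execute("CREATE TABLE demand (order_id INTEGER, qty INTEGER, site TEXT)")

# Extract: read the raw rows from the source system.
rows = source.execute("SELECT id, qty, plant FROM orders").fetchall()

# Transform: validate (drop zero-quantity rows) and map plant codes to site names.
site_map = {"DE01": "Dresden", "US02": "Austin"}
clean = [(i, q, site_map[p]) for (i, q, p) in rows if q > 0]

# Load: write the validated rows into the target model.
target.executemany("INSERT INTO demand VALUES (?, ?, ?)", clean)
target.commit()

print(target.execute("SELECT COUNT(*) FROM demand").fetchone()[0], "rows loaded")
```

The validation and mapping steps are where most real integration effort goes; the extract and load calls are usually generated by the platform.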
People matter at Kinaxis and these are some of the perks and benefits we created for our team: Flexible vacation and Kinaxis Days (company-wide day off on the last Friday of every month) Flexible work options Physical and mental well-being programs Regularly scheduled virtual fitness classes Mentorship programs and training and career development Recognition programs and referral rewards Hackathons For more information, visit the Kinaxis web site at www.kinaxis.com or the company’s blog at http://blog.kinaxis.com . Kinaxis strongly encourages diverse candidates to apply to our welcoming community. We strive to make our website and application process accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at recruitmentprograms@kinaxis.com . This contact information is for accessibility requests only and cannot be used to inquire about the status of applications.

Posted 3 days ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

bengaluru

Work from Office

Job Summary NetApp is a cloud-focused software company seeking a skilled coder to develop a Cloud Orchestrator, driving leadership in Hybrid Cloud globally. We expect you to be an excellent coder who will take the lead in design and implementation per the requirements of managed Cloud Services, and who can quickly learn the existing code and architecture. Strong technical skills and problem-solving abilities are key for success in this role. Job Requirements Develop end-to-end features with a focus on backend implementation. Collaborate with cross-functional teams to design and deliver high-quality solutions. Utilize problem-solving skills to troubleshoot and resolve technical issues. Ensure code quality and maintainability. 4+ years of relevant experience in designing and developing enterprise products. Proficiency in Go, C++, or C#. Hands-on experience working with hyperscalers: Azure, GCP, or AWS preferred. Expertise in container-based technologies, preferably Kubernetes & Docker. Knowledge of storage operating systems, with NetApp ONTAP knowledge as an added advantage. Full-stack product development experience is preferred. Experience working with message queues, REST APIs, streaming, and logging frameworks. In-depth knowledge of infrastructure such as hypervisors and Cloud Storage, and experience with cloud services including Databases, Caching, Scaling, Load Balancers, Networking, etc. Thorough understanding of Linux or other Unix-like Operating Systems. Excellent communication and leadership skills. Education A minimum of 4 - 8 years of experience is required. A Bachelor of Science Degree in Electronics/Electrical Engineering or Computer Science, a Master's degree, or a PhD; or equivalent experience is required.
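The message-queue experience the listing asks for boils down to the producer/consumer pattern, sketched here with Python's standard `queue` and `threading` modules. The in-process queue and the `volume_created` event name are illustrative stand-ins for a real broker such as Kafka or RabbitMQ and a real orchestrator event:

```python
import queue
import threading

# A bounded in-process queue stands in for a real message broker.
q = queue.Queue(maxsize=100)
results = []

def producer():
    """Publish a stream of events, then a sentinel marking the end."""
    for i in range(5):
        q.put({"event": "volume_created", "id": i})
    q.put(None)

def consumer():
    """Drain the queue until the sentinel; process each message in order."""
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg["id"])  # in a real service: call the orchestrator API
        q.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```

The bounded queue gives backpressure for free: `put` blocks when the consumer falls behind, which is the same role a broker's flow control plays at scale.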

Posted 3 days ago

Apply


3.0 years

0 Lacs

new delhi, delhi, india

On-site

🚀 About HexaJar HexaJar is a new-age AI technology company building intelligent, automation-driven software for the future of business. From OCR to NLP pipelines, we’re designing enterprise-ready solutions with modern AI, automation, and full-stack systems. As we scale rapidly, we’re looking for an experienced Senior Developer / Tech Lead to join us on the ground floor — someone who can take full ownership of system architecture, tech direction, and team mentoring. 🛠️ What You’ll Lead & Build Design, build, and scale microservices using Node.js or Python (FastAPI/Django REST) Architect AI-first workflows integrating with OpenAI, PaddleOCR, Google Vision API Lead backend infrastructure for data pipelines, reconciliation logic, reporting systems Mentor junior developers & interns and conduct code reviews Collaborate directly with the CTO, CEO, and design teams Own DevOps setup using Docker, EC2, AWS/GCP (Kubernetes later) 🔐 Tech Stack You’ll Work With Backend & Infra: Node.js / FastAPI / Django REST PostgreSQL, MongoDB AWS S3 / GCP Cloud Docker, EC2 (K8s optional) JWT auth, AES-256 encryption, HTTPS/TLS AI Layer: PaddleOCR, Google Vision API, AWS Textract HuggingFace transformers OpenAI GPT-4 / Claude Fuzzy matching (Levenshtein), regex pipelines RAG architecture (LangChain or custom) Frontend: React.js / Next.js TailwindCSS + ShadCN UI Recharts, Tabulator, PDFKit ✅ You’re a Good Fit If You... Have 3+ years of experience in full-stack or backend roles Can lead a small team & take full ownership of tech delivery Have experience with AI APIs, OCR, or NLP (bonus) Write clean, testable, scalable code Are excited about building something from scratch with a fast team Have worked in early-stage startups or product teams 🎯 Bonus Points For: Experience integrating with financial tools (GST, bank CSVs, Tally, Zoho) Worked with vector DBs (e.g., Pinecone, FAISS) Familiarity with RAG, embeddings, and document intelligence 📧 Ready to Join Us? 
To apply, send your resume and a brief intro to: 📨 info@hexajar.com
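The fuzzy matching (Levenshtein) skill mentioned in the bonus points above is straightforward to demonstrate. A minimal sketch, assuming a supplier-name reconciliation use case; the `fuzzy_match` helper and its distance threshold are hypothetical illustrations, not HexaJar's API:

```python
from typing import Optional

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming over two rolling rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(name: str, candidates: list[str], max_dist: int = 2) -> Optional[str]:
    """Return the closest candidate within max_dist edits, else None."""
    best = min(candidates, key=lambda c: levenshtein(name.lower(), c.lower()))
    return best if levenshtein(name.lower(), best.lower()) <= max_dist else None

print(fuzzy_match("Acme Limted", ["Acme Limited", "Apex Ltd"]))  # Acme Limited
```

In practice this kind of matching is combined with regex normalization (stripping suffixes like "Pvt Ltd") before computing distances, which is what the "regex pipelines" item in the stack refers to.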

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

new delhi, delhi, india

On-site

Role Description: As an AI Engineer at Knowdis.ai, you will be an integral part of our AI development team, working on challenging projects that leverage the latest advancements in Natural Language Processing (NLP) and Reinforcement Learning. You will be responsible for designing, implementing, and optimizing AI models that drive our core products, focusing on product recommendation systems, marketplaces, and translation systems. This role offers the opportunity to work with a team of highly skilled professionals in a dynamic and collaborative environment. Key Responsibilities: Develop and implement state-of-the-art AI models for product recommendation systems, marketplaces, and translation systems. Design and optimize algorithms for Natural Language Processing (NLP) and Reinforcement Learning. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Conduct research to stay up-to-date with the latest advancements in AI and integrate relevant findings into ongoing projects. Perform data preprocessing, feature engineering, and model evaluation to ensure high performance and accuracy of AI models. Deploy and maintain AI models in production environments, ensuring scalability and reliability. Participate in code reviews, provide constructive feedback, and ensure best practices in AI development are followed. Document technical designs, experiments, and results for internal and external stakeholders. Qualifications and Experience: Bachelor's degree in Computer Science or a related field from a Tier-1 Institute. 2-5 years of hands-on experience in AI/ML development, with a focus on NLP or Reinforcement Learning. Strong proficiency in programming languages such as Python, and experience with AI/ML frameworks and libraries (e.g., TensorFlow, PyTorch, Keras). 
Proven experience in developing and deploying AI models in real-world applications, particularly in product recommendation systems, marketplaces, or translation systems. Solid understanding of machine learning algorithms, data structures, and software engineering principles. Experience with data pre-processing, feature extraction, and model evaluation techniques. Ability to work collaboratively in a team environment and communicate effectively with technical and non-technical stakeholders. Strong problem-solving skills, attention to detail, and a passion for innovation in AI technology. Preferred Qualifications: Master's degree in Computer Science or a related field. Experience with cloud platforms (e.g., AWS, GCP, Azure) and scalable AI/ML infrastructure. Selection Process: Interested candidates are required to apply through the listing on Jigya; only applications received through this posting will be evaluated further. Shortlisted candidates may be required to appear in an online assessment and screening interview administered by Jigya. Candidates selected after the Jigya screening rounds will be interviewed by KnowDis.
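As one concrete example of the recommendation work described above, here is a user-based collaborative-filtering sketch in plain Python. The rating data and scoring heuristic are illustrative only; a production system would use learned embeddings and a framework such as TensorFlow or PyTorch:

```python
import math

# User -> item ratings; hypothetical sample data for a marketplace.
ratings = {
    "alice": {"laptop": 5, "mouse": 4, "desk": 1},
    "bob":   {"laptop": 4, "mouse": 5, "monitor": 4},
    "carol": {"desk": 5, "chair": 4},
}

def similarity(u: str, v: str) -> float:
    """Cosine similarity between two users' rating vectors."""
    common = set(ratings[u]) & set(ratings[v])
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    nu = math.sqrt(sum(r * r for r in ratings[u].values()))
    nv = math.sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user: str) -> list[str]:
    """Score unseen items by similar users' ratings, weighted by similarity."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # monitor ranks first: bob rates like alice does
```

Model evaluation for a system like this typically means holding out some ratings and measuring ranking metrics such as precision@k, which is the "model evaluation techniques" item in the qualifications.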

Posted 3 days ago

Apply

3.0 years

0 Lacs

noida, uttar pradesh, india

Remote

Location: Noida (WFO) Timings: 10:30 AM to 7:30 PM; Mon-Fri We are seeking a skilled Python Developer with at least 3 years of hands-on experience in building and scaling backend applications. The ideal candidate should have strong expertise in Django Rest Framework (DRF) or FastAPI, with proven experience in designing APIs, writing clean code, and working in a collaborative environment. Key Responsibilities Develop, test, and maintain scalable backend applications using Python with DRF or FastAPI. Design and implement RESTful APIs for web and mobile applications. Collaborate with frontend developers, product managers, and stakeholders to define and deliver business requirements. Write clean, reusable, and efficient code following best practices and coding standards. Optimize application performance, troubleshoot issues, and ensure high availability. Implement and manage database schemas, queries, and migrations. Work with version control systems (Git) and participate in code reviews. Ensure proper documentation and maintain technical specifications. Required Skills & Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience). Minimum 3 years of professional experience as a Python Developer. Strong hands-on experience with Django Rest Framework (DRF) or FastAPI. Proficiency in Python 3.x and understanding of object-oriented programming. Solid understanding of RESTful API design and best practices. Strong knowledge of databases (PostgreSQL, MySQL, or MongoDB) Familiarity with Git, Docker, and CI/CD pipelines. Good problem-solving skills and attention to detail. Ability to work collaboratively in an Agile/Scrum environment. Preferred Skills Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of asynchronous programming and event-driven architectures. Familiarity with testing frameworks (PyTest, UnitTest). Exposure to microservices architecture. 
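The asynchronous programming called out in the preferred skills is what lets a FastAPI-style service handle many slow I/O calls concurrently instead of serially. A minimal standard-library sketch, where the `fetch_record` coroutine is a hypothetical stand-in for a database or HTTP call:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    """Simulate a non-blocking I/O call (e.g. a database query)."""
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def handler(ids: list[int]) -> list[dict]:
    """Fan out concurrent fetches instead of awaiting them one by one;
    gather preserves the input order in its results."""
    return await asyncio.gather(*(fetch_record(i) for i in ids))

results = asyncio.run(handler([1, 2, 3]))
print([r["id"] for r in results])  # [1, 2, 3]
```

With three sequential awaits the handler would take roughly three times as long; `asyncio.gather` overlaps the waits, which is the core benefit of async endpoints in DRF (with ASGI) or FastAPI.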
Perks and benefits of working at Algoscale: Opportunity to collaborate with leading companies across the globe. Opportunity to work with the latest and trending technologies. Competitive salary and performance-based bonuses. Comprehensive group health insurance. Flexible working hours and remote work options. (For some positions only) Generous vacation and paid time off. Professional learning and development programs and certifications.

Posted 3 days ago

Apply

8.0 years

0 Lacs

mohali district, india

On-site

Job Title: Lead Node.js
Location: Mohali, Punjab
Job Type: Full-time
Experience Required: 8+ years
Key Responsibilities:
• Lead the design and development of scalable, high-performance backend services using Node.js.
• Own architecture decisions, code quality, and best practices within the backend team.
• Collaborate closely with cross-functional teams including front-end developers, product managers, and DevOps.
• Design and develop RESTful APIs and work with databases such as MongoDB, MySQL, or PostgreSQL.
• Drive adoption of engineering standards, processes, and automation.
• Review code, provide feedback, and mentor junior and mid-level developers.
• Monitor application performance and troubleshoot production issues effectively.
• Stay updated with emerging backend technologies and bring innovation into the stack.
Required Skills & Qualifications:
• 8+ years of hands-on backend development experience, with a strong focus on Node.js.
• Deep understanding of JavaScript (ES6+), asynchronous programming, and event-driven architecture.
• Experience with Express.js, RESTful APIs, and microservices-based architecture.
• Proficient in working with both SQL and NoSQL databases.
• Strong understanding of system design, architecture patterns, and scalability.
• Familiarity with cloud platforms like AWS, GCP, or Azure.
• Exposure to DevOps tools and practices (CI/CD, Docker, Kubernetes) is a strong plus.

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

ahmedabad, gujarat, india

Remote

Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers.
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
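The RAG retrieval step this posting refers to can be sketched with a toy example; bag-of-words vectors and cosine similarity stand in for a real embedding model and vector store such as Pinecone or FAISS, and the corpus and query are invented:

```python
import math
from collections import Counter

# Toy retrieval stage of a RAG pipeline. A production system would call an
# embedding model and query a vector database; the structure (embed, rank
# by similarity, stuff top-k into the prompt) is the same.

def embed(text: str) -> Counter:
    # Bag-of-words "embedding": token -> count.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoice totals by supplier for Q3",
    "employee onboarding checklist",
    "supplier spend analytics dashboard",
]
context = retrieve("supplier spend report", docs)
# The retrieved passages become grounding context for the LLM call.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Guardrails and hallucination monitoring, also listed above, typically sit on the other side of this step: checking that the model's answer is actually supported by the retrieved context.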
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

3.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Title: Security Architect - AI Products & Multi-Cloud Security Location: Offshore (Bangalore/Pune/Hyderabad) Job Summary We are seeking a skilled Security Architect to ensure the security of our AI-powered products across multi-cloud platforms. This role will focus on implementing end-to-end security practices during the entire software development lifecycle, ensuring data privacy, safeguarding AI models, and promoting Responsible AI practices. You will be instrumental in developing and enforcing security guardrails that protect our AI solutions from potential threats and vulnerabilities. Key Responsibilities Application Security: Develop security policies and practices for AI and ML models. Conduct security assessments, code reviews, and threat modeling for AI applications. Implement security measures following OWASP Top 10 guidelines to prevent common vulnerabilities. DevSecOps: Integrate security into CI/CD pipelines to enable automated security testing. Use tools like GitHub Actions, Jenkins, and Terraform to automate infrastructure security checks. Promote secure coding standards and practices across development teams. Data Security: Design and implement data protection mechanisms such as encryption (both at rest and in transit) and data anonymization techniques. Ensure compliance with data privacy regulations such as GDPR and CCPA. Utilize tools like Data Loss Prevention (DLP) and data masking technologies for sensitive data protection. Identity & Access Management (IAM): Develop and enforce IAM strategies across multi-cloud platforms (AWS, Azure, GCP). Implement Zero Trust Architecture and role-based access controls (RBAC) to safeguard user access. Utilize multi-factor authentication (MFA) and identity federation protocols. AI Security & AI Guardrails: Define AI guardrails to mitigate risks like model drift, bias, adversarial attacks, and unauthorized model access.
Implement AI model monitoring tools like LIME, SHAP, and IBM AIF360 for model interpretability and fairness. Promote Responsible AI practices, ensuring ethical AI deployment and compliance with industry standards. Cloud Security: Architect and implement secure cloud environments using AWS, Azure, and GCP services. Leverage cloud-native security tools such as AWS Shield, Azure Security Center, and Google Security Command Center. Conduct regular cloud security audits and vulnerability assessments. Compliance & Governance: Ensure alignment with security and compliance frameworks like NIST, ISO 27001, and SOC 2. Lead security audits and penetration testing to identify and mitigate vulnerabilities. Establish security policies and guidelines to ensure organizational compliance. Technical Skills Required 3+ years of experience in data privacy and cybersecurity, with a focus on AI and cloud security. Hands-on experience with one major cloud (AWS, Azure, or GCP) or preferably multi-cloud security (AWS, Azure, GCP) and AI model governance. Strong knowledge of DevSecOps practices and automated security testing. Proficiency with AI/ML security frameworks and tools for monitoring and securing AI models. Experience with security tools like Burp Suite, OWASP ZAP, and SonarQube. Familiarity with AI ethics, model explainability tools (e.g., LIME, SHAP), and AI risk management. Strong understanding of Privacy by Design principles, data privacy regulations (GDPR, CCPA), and data security best practices. Knowledge of identity management solutions and best practices in IAM. Strong knowledge of data lifecycle management in an AI context.
Preferred Qualifications Certified Information Systems Security Professional (CISSP) Certified Cloud Security Professional (CCSP) AWS Certified Security - Specialty Azure Security Engineer Associate Certified AI Ethics & Governance Professional Soft Skills Excellent communication skills to collaborate with cross-functional teams, including Data Science, DevOps, and Product Management. Strong analytical and problem-solving abilities. Proven ability to stay updated with the latest security trends, AI regulations, and cloud technologies. Ability to articulate security concepts and practices to both technical and non-technical stakeholders. Nice-to-Have Experience with Machine Learning Operations (MLOps) security. Hands-on knowledge of Container Security (Docker, Kubernetes). Familiarity with AI ethics frameworks and AI safety research. Exposure to Responsible AI tools and methodologies.
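The IAM responsibilities this posting lists (RBAC, Zero Trust, MFA) can be sketched as a small policy check; the roles, permissions, and Request shape below are hypothetical, and a real deployment would delegate this to the cloud provider's IAM service:

```python
from dataclasses import dataclass

# Illustrative role -> permission mapping for AI assets.
ROLE_PERMISSIONS = {
    "data_scientist": {"model:read", "dataset:read"},
    "ml_admin": {"model:read", "model:deploy", "dataset:read", "dataset:write"},
}

@dataclass
class Request:
    role: str
    permission: str
    mfa_verified: bool

def is_allowed(req: Request) -> bool:
    # Zero Trust posture: every request re-verifies identity (MFA here)
    # before authorization is even considered; there is no implicit trust.
    if not req.mfa_verified:
        return False
    return req.permission in ROLE_PERMISSIONS.get(req.role, set())
```

The useful property is deny-by-default: an unknown role, an unlisted permission, or a missing MFA check all fall through to False, which is the behavior RBAC guardrails are meant to guarantee.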

Posted 3 days ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

hyderabad

Work from Office

As a Data Engineer, you will design and implement data pipelines and integrations across cloud platforms to support AI and GenAI workloads. You’ll work closely with Data Scientists and ML Engineers to enable GenAI-powered applications through clean, secure, and optimized data flows. Key Responsibilities - Data Engineering Build and maintain ETL/ELT pipelines for structured, semi-structured, and unstructured data. Work with data warehouses/lakehouses (Snowflake, BigQuery, Databricks, Redshift). Develop real-time streaming pipelines using Kafka, Spark, or Flink. Ensure data quality, validation, and error handling. Prepare and manage data pipelines to feed LLMs and GenAI models. Work with vector databases (FAISS, Pinecone, Weaviate, Milvus) for RAG-based solutions. Support embedding generation and prompt engineering workflows. Collaborate with AI teams to integrate GenAI APIs and frameworks (LangChain, Hugging Face, OpenAI). Deploy pipelines on cloud services (AWS Glue, Azure Data Factory, GCP Dataflow/Dataproc). Use Airflow, Dagster, or Prefect for orchestration. Required Skills Proficiency in Python, SQL, PySpark. Hands-on experience with ETL pipelines and data modeling. Knowledge of cloud data platforms (AWS, Azure, or GCP). Experience with vector databases and GenAI frameworks. Familiarity with Docker, Kubernetes, CI/CD pipelines. Preferred Skills Exposure to MLOps / LLMOps practices. Experience with semantic search, embeddings, or RAG systems. Familiarity with data security, compliance, and governance.
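The data quality, validation, and error-handling responsibilities above can be sketched as a toy ETL step; the field names are invented, and a production pipeline would load into a warehouse rather than a Python list:

```python
# Minimal extract-transform-load sketch with a validation gate.
# Rows that fail validation are routed to a reject queue instead of
# silently dropped, so the failure is observable downstream.
raw = [
    {"order_id": "1", "amount": "19.99"},
    {"order_id": "2", "amount": "not-a-number"},  # bad amount
    {"order_id": "",  "amount": "5.00"},          # missing key
]

def transform(row):
    """Return a typed, validated row, or None if the row is unusable."""
    try:
        if not row["order_id"]:
            raise ValueError("missing order_id")
        return {"order_id": int(row["order_id"]), "amount": float(row["amount"])}
    except ValueError:
        return None

loaded, rejected = [], []
for row in raw:
    clean = transform(row)
    if clean is not None:
        loaded.append(clean)   # "load" step: insert into warehouse here
    else:
        rejected.append(row)   # dead-letter for inspection/replay
```

Orchestrators like Airflow or Dagster wrap exactly this shape: a task per stage, with the reject path feeding alerting rather than a local list.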

Posted 3 days ago

Apply

3.0 years

0 Lacs

hyderabad, telangana, india

On-site

At Revalgo, we're a fast-growing, product-driven startup revolutionizing the AI landscape with innovative, scalable solutions. Our mission is to empower businesses and individuals with cutting-edge AI technology that solves real-world problems. We’re looking for an AI Service Delivery Specialist to join our global support team and ensure our customers get the best from our AI-powered products. This is not a typical “support” job — you will be at the intersection of AI engineering, product reliability, and customer success, solving advanced issues that require both technical depth and strong problem-solving skills. Role Overview As an AI Service Delivery Specialist at Revalgo, you’ll work during US business hours (night shift IST) and serve as the technical backbone for our customers. You’ll handle escalated cases, dive deep into AI-driven systems, and collaborate closely with product and engineering teams to ensure seamless customer experiences. This is the ideal role for AI Engineers who enjoy hands-on problem solving, system debugging, and bridging business needs with technical excellence. Key Responsibilities • Advanced Troubleshooting: Resolve escalated customer issues related to AI-driven products, APIs, and data pipelines. • AI/ML Support: Diagnose and resolve problems involving LLMs, embeddings, RAG stacks, and vector databases. • System Reliability: Monitor production systems, ensure uptime, and proactively prevent incidents. • Customer Collaboration: Conduct discovery and resolution sessions with global customers, ensuring technical clarity. • Cross-Functional Alignment: Escalate bugs and feature gaps to engineering, ensuring quick resolution. • Knowledge Development: Document solutions, create internal playbooks, and enhance the support knowledge base. • Continuous Learning: Stay updated on AI/ML trends, tools, and support technologies. Requirements • 3+ years in AI Engineering, Backend Development, or L2 Technical Support with a focus on scalable systems.
• Strong skills in Python, APIs, and cloud environments (AWS, GCP, Azure). • Good knowledge of PostgreSQL with hands-on experience in debugging and performance tuning. • Familiarity with LLMs, embeddings, LangChain, and retrieval-augmented generation (RAG) stacks. • Experience using ticketing tools (Zendesk, Jira, Freshdesk, etc.). • Excellent communication skills in English, with the ability to simplify complex technical issues for customers. • Willingness to work night shifts aligned with US business hours. Bonus: Exposure to vector databases, GitHub, or observability/monitoring tools. Why Join Revalgo? • Impact: Work on cutting-edge AI systems solving real-world problems. • Premium Night Shift Benefits: o Competitive night shift allowance. o Compensatory offs for festival/weekend shifts. • Growth Opportunities: Clear career path into AI Engineering, DevOps, or Product roles.
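One reliability pattern relevant to the troubleshooting and incident-prevention work described above is retry with exponential backoff; this is a generic sketch, with flaky_call standing in for a real API or database call:

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky_call)
```

In production the retry count, delays, and the set of retryable exceptions are tuned per dependency, and each retry is logged so monitoring can distinguish transient blips from a sustained outage.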

Posted 3 days ago

Apply

4.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About The Job
Collaborate with the product team, data scientists, and other stakeholders across the company to understand and define business requirements
Effectively communicate complex technical concepts to both technical and non-technical teams
Lead the end-to-end lifecycle of ML microservices — from planning and design to implementation, deployment, and monitoring — in partnership with fellow ML engineers
Write clean, efficient, reusable, and maintainable code
Enhance and take ownership of our current codebases, systems, and workflows
Drive team productivity and deliver excellence while fostering sustainable development practices
Guide and mentor other engineers, and promote a culture of continuous learning and growth
Drive architectural decisions and technical direction while fostering a culture of engineering excellence
About You
Bachelor's degree in Computer Science or equivalent
4-7+ years of experience in backend engineering
3+ years of experience in Python
Expertise in Flask/FastAPI, APIs, SQL, AWS/GCP, RabbitMQ/Kafka/SQS, Docker and CI/CD practices
Experience implementing monitoring tools (e.g., Datadog, Grafana) to track latency, errors, and alerts for scalable backend systems
Proven leadership in planning and implementing medium to large-scale software projects
Excellent communication and collaboration skills
Preferred experience in deploying ML models in production
Startup experience is a plus
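The latency tracking this role mentions (the kind of signal Datadog or Grafana would ingest) can be sketched with a timing decorator; here the metrics simply accumulate in memory, and the predict function is a stand-in for a real model endpoint:

```python
import time
from functools import wraps
from collections import defaultdict

# In production these samples would be shipped to a metrics backend;
# here they accumulate in a dict keyed by metric name.
metrics = defaultdict(list)

def timed(name):
    """Decorator that records wall-clock latency for each call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record even when the call raises, so error latency is visible.
                metrics[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@timed("predict")
def predict(x):
    return x * 2  # stand-in for model inference

predict(21)
samples = sorted(metrics["predict"])
p50 = samples[len(samples) // 2]  # rough median latency
```

Recording in a `finally` block is the important detail: failed requests often have the most interesting latency profile, and dropping them would bias the percentiles alerts are built on.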

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

jaipur, rajasthan, india

Remote

Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers.
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

greater lucknow area

Remote

Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers.
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

thane, maharashtra, india

Remote

Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers.
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- A collaborative culture that values experimentation, continuous learning, and work–life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one.
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
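Of the privacy-enhancing techniques listed in the nice-to-haves above, differential privacy is the most self-contained to illustrate. The sketch below shows the classic Laplace mechanism; it is an illustrative example chosen by the editor, not something the posting specifies. Noise with scale sensitivity/epsilon is added to a numeric query result before release, so smaller epsilon means more noise and a stronger privacy guarantee.

```python
import random

def laplace_noise(scale: float, rnd: random.Random) -> float:
    # The difference of two iid Exp(1) draws is Laplace(0, 1)-distributed,
    # which avoids edge cases in inverse-CDF sampling.
    return scale * (rnd.expovariate(1.0) - rnd.expovariate(1.0))

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rnd: random.Random) -> float:
    # Release true_value with Laplace noise of scale sensitivity / epsilon.
    return true_value + laplace_noise(sensitivity / epsilon, rnd)

# Example: release a count query (sensitivity 1) under a privacy budget
# of epsilon = 0.5. The seed is fixed only to make the demo reproducible.
rnd = random.Random(42)
noisy_count = laplace_mechanism(100.0, 1.0, 0.5, rnd)
```

Averaged over many releases the noise cancels out (mean near the true value), while any single release reveals only a perturbed answer.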

Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

nagpur, maharashtra, india

Remote


Posted 3 days ago

Apply

3.0 years

16 - 20 Lacs

nashik, maharashtra, india

Remote


Posted 3 days ago

Apply