3.0 years
16 - 20 Lacs
Patna, Bihar, India
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics.)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:

Join the team revolutionizing procurement analytics at SenseCloud.
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it.
At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the IT and analytics teams' attention, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions.
If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About the Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL databases, vector stores like Pinecone and Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
- 3-5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- Collaborative culture that values experimentation, continuous learning, and work-life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
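For readers unfamiliar with the retrieval-augmented generation work this role centres on, the sketch below shows the basic pattern in plain Python: embed a few documents, retrieve the ones closest to a question by cosine similarity, and pass them to an LLM as context. It is an illustration only, not part of the posting; the OpenAI client, model names, and sample procurement snippets are assumptions, and the same pattern applies with LangChain, Pinecone, or the other tools listed above.

```python
# Minimal RAG sketch (illustrative only): embed documents, retrieve by cosine
# similarity, and answer with the retrieved chunks as context.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# model names and sample documents are placeholders, not requirements of the role.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "PO-1042 covers 500 laptops from Vendor A at a unit price of $900.",
    "Vendor B's contract includes a 2% early-payment discount.",
    "Freight costs for Q3 rose 12% quarter over quarter.",
]

def embed(texts):
    # One embedding vector per input text.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def retrieve(question, k=2):
    # Rank documents by cosine similarity to the question embedding.
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(question):
    # Keep the model grounded in the retrieved context.
    context = "\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What discount does Vendor B offer?"))
```

In production, the in-memory document list would typically be replaced by a vector store such as Pinecone, Weaviate, or FAISS, with guardrails, observability, and evaluation layered on top.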
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 5+ years

Mandatory Skills
DevOps Engineer with the below skill set:
- Python (basic Python and OOP knowledge)
- AWS (Elasticsearch, RDS, Kafka, ECS)
- Jenkins, Terraform, CloudFormation, Docker

Auxiliary Skills
- Java, Glue, Redshift, Athena, Snowflake, S3, Jira

Nice to Have
- Works proactively
- Quick learner
- Good communication skills
- Good team player
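As a rough illustration of the Python-plus-AWS side of this posting, the sketch below uses boto3 to list ECS clusters and RDS instances. It is not taken from the role itself; credentials and region are assumed to come from the environment, no specific resources are implied, and in practice such checks would normally sit behind Jenkins pipelines or Terraform/CloudFormation-managed infrastructure.

```python
# Illustrative sketch of lightweight AWS inspection with boto3 (not from the
# posting itself). Assumes AWS credentials and a default region are configured
# in the environment.
import boto3

def list_ecs_clusters():
    # Return the ARNs of ECS clusters visible to the configured credentials.
    ecs = boto3.client("ecs")
    return ecs.list_clusters().get("clusterArns", [])

def list_rds_instances():
    # Return (identifier, engine, status) tuples for each RDS instance.
    rds = boto3.client("rds")
    out = []
    for db in rds.describe_db_instances().get("DBInstances", []):
        out.append((db["DBInstanceIdentifier"], db["Engine"], db["DBInstanceStatus"]))
    return out

if __name__ == "__main__":
    print("ECS clusters:", list_ecs_clusters())
    print("RDS instances:", list_rds_instances())
```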
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Surat, Gujarat, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Now let's help you understand the role!

About your new role
● You'll work with Android/web developers to develop backend services that meet their needs.
● Identify libraries and technologies that solve our problems and/or are worth experimenting with.
● Use user feedback to make the system more stable and easier to use.
● Learn and use core AWS technologies to design and then build highly available and scalable backend web services and customer-facing APIs.
● Experience in agile methodologies like Scrum.
● Good understanding of branching, build, deployment, and continuous-integration methodologies.

What Makes You A Great Fit:
● Experience working on scalable, highly available applications/services.
● Good understanding of data structures, algorithms, and design patterns.
● Excellent analytical and problem-solving skills.
● Hands-on experience in Python and familiarity with at least one framework (Django, Flask, etc.).
● Good exposure to writing and optimising SQL (such as PostgreSQL) for high-performance systems with large databases.
● Understanding of message queues, pub-sub, and in-memory data stores like Memcached/Redis.
● Experience with NoSQL and distributed databases like MongoDB, Cassandra, etc.
● Comfortable with search engines like Elasticsearch or Solr.
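To illustrate the framework-plus-cache combination this role calls for (Flask or Django alongside an in-memory store like Redis), here is a minimal read-through cache sketch. The route, key format, and TTL are illustrative assumptions, and fetch_user_from_db is a hypothetical stand-in for a real PostgreSQL query.

```python
# Minimal Flask + Redis read-through cache sketch (illustrative only).
# Assumes a local Redis server; fetch_user_from_db stands in for a real SQL query.
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, db=0)

def fetch_user_from_db(user_id: int) -> dict:
    # Placeholder for a PostgreSQL lookup.
    return {"id": user_id, "name": f"user-{user_id}"}

@app.route("/users/<int:user_id>")
def get_user(user_id: int):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        # Serve from cache when the key is present.
        return jsonify(json.loads(cached))
    user = fetch_user_from_db(user_id)
    cache.setex(key, 300, json.dumps(user))  # expire after 5 minutes
    return jsonify(user)

if __name__ == "__main__":
    app.run(debug=True)
```

The same pattern extends to message queues and pub-sub: expensive reads are cached, and writes invalidate or refresh the relevant keys.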
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Thane, Maharashtra, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Greater Lucknow Area
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Ahmedabad, Gujarat, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Nagpur, Maharashtra, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Jaipur, Rajasthan, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Nashik, Maharashtra, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Kanpur, Uttar Pradesh, India
Remote
AI Engineer - SenseCloud (via Uplers); same role and description as the Patna posting above.
Posted 3 weeks ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: Software Engineer (Backend)
About PhonePe
PhonePe is India’s leading digital payments platform with 500 Million+ registered users. Using PhonePe, users can send and receive money, recharge mobile, DTH, data cards, pay at stores, make utility payments, buy gold, and make investments. PhonePe went live for customers in August 2016 and was the first non-banking UPI app, offering money transfer to individuals and merchants, recharges and bill payments to begin with. In 2017, PhonePe forayed into financial services with the launch of digital gold, providing users with a safe and convenient option to buy 24-karat gold securely on its platform. PhonePe has since launched Mutual Funds and Insurance products like tax-saving funds, liquid funds, international travel insurance, and Corona Care, a dedicated insurance product for the COVID-19 pandemic, among others. PhonePe launched its Switch platform in 2018, and today its customers can place orders on over 300 apps including Ola, Myntra, IRCTC, Goibibo, RedBus, Oyo etc. directly from within the PhonePe mobile app. PhonePe is accepted at over 18 million merchant outlets across 500 cities nationally.
Culture
At PhonePe, we take extra care to make sure you give your best at work, every day! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us!
Challenges
Building for Scale, Rapid Iterative Development, and Customer-centric Product Thinking at each step defines every day for a developer at PhonePe. Though we engineer for a 50 million+ strong user base, we code with every individual user in mind. While we are quick to adopt the latest in engineering, we care utmost for security, stability, and automation. Apply if you want to experience the best combination of passionate application development and product-driven thinking.
Role & Responsibilities
● Build robust and scalable web-based applications. You will need to think of platforms & reuse.
● Build abstractions and contracts with separation of concerns for a larger scope.
● Drive problem-solving skills for high-level business and technical problems.
● Do high-level design with guidance; functional modeling and break-down of a module.
● Do incremental changes to architecture, with impact analysis of the same.
● Do performance tuning and improvements in large scale distributed systems.
● Mentor young minds and foster team spirit; break down execution into phases to bring predictability to overall execution.
● Work closely with the Product Manager to derive capability views from features/solutions; lead execution of medium-sized projects.
● Work with broader stakeholders to track the impact of projects/features and proactively iterate to improve them.
Requirements
● Strong experience in the art of writing code and solving problems at large scale (FinTech experience preferred).
● B.Tech, M.Tech, or Ph.D. in Computer Science or related technical discipline (or equivalent).
● Excellent coding skills – should be able to convert the design into code fluently. Experience in at least one general programming language (e.g. Java, C, C++) & tech stack to write maintainable, scalable, unit-tested code.
● Experience with multi-threading and concurrency programming, strong object-oriented design skills, knowledge of design patterns, a passion for and ability to design intuitive modules and class-level interfaces, and knowledge of test-driven development.
● Good understanding of databases (e.g. MySQL) and NoSQL stores (e.g. HBase, Elasticsearch, Aerospike, etc.).
● Experience in full life cycle development in any programming language on a Linux platform, building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
● Strong desire to solve complex and interesting real-world problems.
● Go-getter attitude that reflects in the energy and intent behind assigned tasks.
● An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
● Ability to drive the design and architecture of multiple subsystems.
● Ability to break down larger/fuzzier problems into smaller ones in the scope of the product.
● Understanding of the industry’s coding standards and an ability to create appropriate technical documentation.
● At least 1 to 3 years of experience as a software engineer.
Posted 3 weeks ago
4.0 - 9.0 years
4 - 8 Lacs
Pune, Chennai, Bengaluru
Work from Office
Python AI development (Flask/FastAPI) (a minimal sketch follows this posting).
Development experience using a microservices-based architecture.
Knowledge of Google Cloud Platform (GCP) or other cloud environments.
Familiarity with containerization and orchestration technologies such as Docker.
Experience with databases such as MySQL and search technologies like Elasticsearch.
Experience working with queue-based systems (e.g., RabbitMQ, Kafka) is a plus.
Location: Bengaluru, Chennai, Pune, Noida, Mumbai, Hyderabad, Kochi
Mandatory Key Skills: RabbitMQ, Docker, Elasticsearch, GCP, MySQL, FastAPI, Kafka, Microservices, Flask, Python
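As a rough illustration of the FastAPI-based microservice work this posting describes, here is a minimal sketch. The service name, endpoints, and stub data are hypothetical; a real service would delegate the search to Elasticsearch and publish events to RabbitMQ or Kafka rather than filter an in-memory list.

```python
# Minimal FastAPI microservice sketch (illustrative only): a health endpoint
# and a search endpoint backed by stub data instead of Elasticsearch.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="catalog-search")  # hypothetical service name

class SearchResult(BaseModel):
    id: int
    title: str

@app.get("/health")
def health() -> dict:
    # Liveness probe used by the orchestrator (Docker/Kubernetes).
    return {"status": "ok"}

@app.get("/search", response_model=list[SearchResult])
def search(q: str) -> list[SearchResult]:
    # In a real service this would query Elasticsearch; here we filter a stub list.
    items = [SearchResult(id=1, title="red shoes"), SearchResult(id=2, title="blue shirt")]
    return [i for i in items if q.lower() in i.title]

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```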
Posted 3 weeks ago
13.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NICE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.
Tech Manager, India - Pune
So, what’s the role all about?
We are seeking a highly skilled and motivated Engineering Manager to lead our Communication Surveillance team, focused on building scalable compliance solutions for financial markets. You’ll drive R&D delivery, technical excellence, and quality; manage a high-performing team; and ensure delivery of robust surveillance systems aligned with regulatory requirements.
How will you make an impact?
Lead and mentor a team of software engineers in building scalable surveillance systems.
Drive the design, development, and maintenance of applications using .NET Core and C#.
Collaborate with cross-functional teams including App Ops, DevOps, Professional Services, and Product.
Own project delivery timelines, code quality, and system architecture.
Ensure best practices in software engineering, including CI/CD, code reviews, and testing.
Have you got what it takes?
Key Technical Skills:
Strong expertise in .NET Core and C# – architecture, development, and optimization.
Familiarity with AWS services and cloud-native development.
Good knowledge of RDBMS – MS SQL, PostgreSQL.
Technical experience with indexing/search technologies (preferably Elasticsearch).
Experience in containerization.
Good to Have:
Experience in Python-based development.
Experience in a financial markets compliance domain.
Qualifications:
Bachelor’s or Master’s degree in Computer Science or a related field.
13–15 years of total experience with at least 2–3 years in a leadership or managerial role.
What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!
Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7076
Reporting into: Director
Role Type: Tech Manager
About NICE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NICE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions.
Known as an innovation powerhouse that excels in AI, cloud and digital, NICE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NICE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Posted 3 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role:
We are seeking a data-driven and customer-obsessed Manager – Onsite Analytics to own the end-to-end analysis and optimization of user journeys across our eCommerce platform. This role will be responsible for shaping how we understand, measure, and improve onsite user engagement, with a laser focus on conversion-driving features — especially product sorting and boosting. You will lead the development of our product ranking algorithms from the ground up, but also extend your impact to broader onsite behavioral analytics — covering search, navigation, merchandising, content placement, and performance metrics across key funnel stages.
Key Responsibilities:
Onsite Analytics Ownership
Build and own the onsite analytics strategy across homepage, PLPs (product listing pages), PDPs, and other core surfaces.
Define and monitor KPIs (CTR, CVR, scroll depth, bounce rate, engagement score, exit paths, etc.) that quantify user experience and commercial success.
Create dashboards, deep-dive reports, and insights to drive decision-making across product, growth, and merchandising teams.
Identify bottlenecks in user journeys and propose actionable hypotheses for conversion uplift.
⚙️ Product Sorting & Boosting Algorithm Development
Design and implement a flexible, scalable product sorting and boosting engine tailored for both global and category-specific needs.
Model relevance, popularity, availability, freshness, and business priorities into sorting logic using data-driven approaches (an illustrative sketch follows this posting).
Run controlled experiments (A/B tests) to evaluate algorithmic changes and continuously improve engagement and conversion rates.
🤝 Cross-functional Partnership
Collaborate with data science, engineering, product, UX, and category teams to improve user experience and content discoverability.
Act as the go-to person for onsite behavioral understanding and help teams prioritize experiments based on high-impact opportunities.
🔍 Experimentation & Personalization
Lead A/B and multivariate testing across sorting, layout, and navigation modules.
Partner with personalization teams to segment and target user experiences intelligently.
Requirements
4–8 years of experience in product analytics, growth strategy, algorithmic merchandising, or eCommerce performance optimization.
Proven track record in using data to influence onsite product decisions and user-facing algorithms.
Strong analytical skills with expertise in SQL, Python/R, and BI tools (e.g., Looker, Tableau).
Experience designing ranking or personalization systems is highly preferred.
Deep understanding of user funnel metrics and eCommerce KPIs.
Strong communication and project management skills with cross-functional teams.
Preferred Qualifications
Experience building or managing algorithmic systems in a large-scale consumer product.
Familiarity with platforms like Algolia, Elasticsearch, or proprietary ranking frameworks.
Understanding of UI/UX implications of sorting logic on user behavior.
Ability to balance precision with business objectives (e.g., revenue, visibility, seller fairness).
Why Join Us?
Own one of the highest-leverage components of an eCommerce growth engine.
Build from scratch — but with the support of a mature data ecosystem.
See the direct impact of your work on user satisfaction, revenue, and business outcomes.
Work with a collaborative, forward-thinking team that values innovation and user empathy.
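To make the sorting-and-boosting idea above concrete, here is a minimal, illustrative scoring function in Python. The field names, weights, and catalog entries are hypothetical; in practice the weights would be tuned or learned, and any change would be validated through the A/B tests the role describes.

```python
# Illustrative weighted scoring function for product sorting/boosting.
# Weights and fields are invented for demonstration, not a production formula.
from dataclasses import dataclass

@dataclass
class Product:
    title: str
    ctr: float          # historical click-through rate, 0..1
    cvr: float          # conversion rate, 0..1
    in_stock: bool
    days_since_listed: int
    boost: float = 0.0  # manual merchandising boost

def score(p: Product) -> float:
    # Blend relevance-style signals with availability, freshness, and boosts.
    freshness = 1.0 / (1.0 + p.days_since_listed / 30.0)
    availability = 1.0 if p.in_stock else 0.0
    return 0.4 * p.ctr + 0.3 * p.cvr + 0.2 * freshness + 0.1 * availability + p.boost

catalog = [
    Product("basic tee", ctr=0.08, cvr=0.02, in_stock=True, days_since_listed=5),
    Product("hoodie", ctr=0.05, cvr=0.04, in_stock=False, days_since_listed=40),
    Product("summer dress", ctr=0.06, cvr=0.03, in_stock=True, days_since_listed=2, boost=0.05),
]

# Sort the listing page by descending score.
for p in sorted(catalog, key=score, reverse=True):
    print(f"{p.title}: {score(p):.3f}")
```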
Posted 3 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Engineering Manager
As an Engineering Manager, you will lead and mentor an engineering team to build products and services aligned with Airtel’s growth. This entails solving problems and building, refactoring, and evolving the overall architecture, working on both the existing codebase and new projects. As an Engineering Lead, you will deliver on:
Technical Expertise: Provide guidance on technical and functional aspects of projects and take decisions. You will set up and execute best practices in application architecture, development, performance, deployment, and execution by owning end-to-end business deliveries.
People Management: Conduct performance reviews, set up a constructive feedback loop for team members, and build their succession plans. You will be responsible for allocating existing resources and hiring new ones to effectively meet organizational goals and deadlines.
Project Management: You will be part of the overall planning and execution of engineering projects and ensure their timely delivery within the allocated budget.
Communication and Collaboration: You will facilitate communication within the engineering teams and with stakeholders including quality, operations, product, and program to ensure alignment on business goals. You will address conflicts in teams and promote a positive and productive working environment.
Innovation and Continuous Learning: You will build a culture of innovation and continuous improvement in your team. You will encourage and adopt new technologies and emerging industry trends and methods to improve your team’s efficiency.
At Airtel, we build products at scale with many technologies, including:
Java and related technologies including Tomcat, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, and web services
Caching technologies like Redis, Aerospike, or Hazelcast
Data storage technologies like Oracle, S3, Postgres, MySQL, or MongoDB
Tooling including Git, command line, Jenkins, JMeter, Postman, Gatling, Nginx/HAProxy, Jira/Confluence, Grafana, and Kibana
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Ahmedabad)
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Attri)
What do you need for this opportunity?
Must-have skills required: Python, Python Programming
Attri is Looking for:
About The Role:
We are a global team with our people spread out across different countries. We strive to build a diverse team of passionate people who believe in bringing change through their work. At Attri, we are seeking a talented Frontend Engineer to join our dynamic team. We are a cutting-edge company, and we're looking for an individual who is passionate, inquisitive, and a self-learner to contribute to the success of our projects.
Responsibilities:
Modern Web Development: Proficiency in HTML5, CSS3, ES6+, TypeScript, and Node.js, with a strong emphasis on staying up-to-date with the latest technologies.
TypeScript: Hands-on with Generics, Template Literals, Mapped Types, and Conditional Types.
Flexible Approach: Based on the problem at hand, apply the appropriate solution while considering all the risks.
Frontend
React.js and Flux Architecture: Extensive experience in React.js and Flux Architecture, along with external state management, to build robust and performant web applications.
JS Event Loop: Understanding of the event loop, the criticality of not blocking the main thread, and cooperative scheduling in React.
State Management: Hands-on with more than one state management library.
Ecosystem: Ability to leverage the vast JS ecosystem; hands-on with non-typical libraries.
Backend
SQL: Extensive hands-on experience with Postgres; comfortable with json_agg, json_build_object, WITH clause, CTEs, views/materialized views, and transactions (an illustrative sketch follows this posting).
Redis: Hands-on with different data structures and their usage.
Architectural Patterns: Backend for Frontend, Background Workers, CQRS, Event Sourcing, Orchestration/Choreography, etc.
Transport Protocols: HTTP(S), SSE, and WS(S), to optimize data transfer and enhance application performance.
Serialization Protocols: JSON and at least one more protocol.
Authentication/Authorization: Comfortable with OAuth, JWT, and other mechanisms for different use cases.
Comfortable reading the open-source code of libraries in use and understanding their internals; able to fork a library to improve it, fix a bug, or redesign it.
Tooling: Knowledge of essential frontend tools like Prettier, ESLint, and Conventional Commits to maintain code quality and consistency. Dependency management and versioning. Familiarity with CI/CD.
Testing: Utilize Jest/Vitest and React Testing Library for comprehensive testing of your code, ensuring high code quality and reliability.
Collaboration: Collaborate closely with our design team to craft responsive and themable components for data-intensive applications, ensuring a seamless user experience.
Programming Paradigms: Solid grasp of both Object-Oriented Programming and Functional Programming concepts to create clean and maintainable code.
Design/Architectural Patterns: Identifying a suitable design or architectural pattern to solve the problem at hand; comfortable with tailoring the pattern to fit the problem optimally.
Modular and Reusable Code: Write modular, reusable, and testable code that enhances codebase maintainability.
DSA: Basic understanding of data structures and algorithms when required to optimize hot paths.
Good To Have:
Python: Django Rest Framework, Celery, Pandas/NumPy, LangChain, Ollama.
Storybook: Storybook to develop components in isolation, streamlining the UI design and development process.
Charting and Visualization: Experience with charting and visualization libraries, especially Apache ECharts, to create compelling data representations.
Tailwind CSS: Understanding of Tailwind CSS for efficient and responsive UI development.
NoSQL Stores: Elasticsearch, Neo4j, Cassandra, Qdrant, etc.
Functional Reactive Programming.
RabbitMQ/Kafka.
Great To Have:
Open Source Contribution: Experience in contributing to open-source projects (not limited to personal projects or forks) that showcases your commitment to the development community.
Renderless/Headless React Components: Developing renderless or headless React components to provide flexible and reusable UI solutions.
End-to-End Testing: Experience with Cypress or any other end-to-end (E2E) testing framework, ensuring the robustness and quality of the entire application.
Deployment: Being target-agnostic and understanding the nuances of an application in operation.
What You Bring:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of relevant experience in frontend web development, including proficiency in HTML5, CSS3, ES6+, TypeScript, React.js, and related technologies.
Solid understanding of Object-Oriented Programming, Functional Programming, SOLID principles, and Design Patterns.
Proven experience in developing modular, reusable, and testable code.
Prior work on data-intensive applications and collaboration with design teams to create responsive and themable components.
Experience with testing frameworks like Jest/Vitest and React Testing Library.
How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
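The Postgres JSON-aggregation skills called out in this posting (WITH clause, json_agg, json_build_object) can be illustrated with a short sketch. The connection string and the orders table/columns are hypothetical; the query groups each customer's recent orders into a single JSON array per row.

```python
# Sketch of the Postgres JSON-aggregation pattern (CTE + json_agg +
# json_build_object), run through psycopg2. Table and columns are invented.
import psycopg2

SQL = """
WITH recent_orders AS (
    SELECT customer_id, id, total
    FROM orders
    WHERE created_at > now() - interval '30 days'
)
SELECT customer_id,
       json_agg(json_build_object('order_id', id, 'total', total)) AS orders
FROM recent_orders
GROUP BY customer_id;
"""

def recent_orders_by_customer(dsn: str) -> list[tuple]:
    # One row per customer; the second column is a JSON array of their orders.
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(SQL)
            return cur.fetchall()

if __name__ == "__main__":
    # Hypothetical local connection string.
    print(recent_orders_by_customer("dbname=shop user=app password=secret host=localhost"))
```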
Posted 3 weeks ago
4.0 - 6.0 years
3 - 5 Lacs
Mumbai, Kurla
Work from Office
Required:
Expertise in AWS, including basic services for networking, data, and workload management:
AWS Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch.
Workload: EC2, EKS, Lambda, etc.
Required Skills:
Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
Kubernetes (EKS/AKS/GKE) or Ansible experience, covering basics like pods, deployments, networking, and service mesh.
Experience with a package manager like Helm.
Scripting experience (Python), automation in pipelines when required, and system services (a small example follows this posting).
Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
Optional:
Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN, or any other code management tool is required.
DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artefacts, infrastructure, and code.
Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
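As a small example of the Python scripting and AWS automation this posting asks for, here is an illustrative boto3 snippet that lists running EC2 instances grouped by VPC. The region is an assumption, and credentials are expected to come from the standard AWS environment/configuration chain.

```python
# Small automation sketch: list running EC2 instances per VPC with boto3.
import boto3
from collections import defaultdict

def running_instances_by_vpc(region: str = "ap-south-1") -> dict[str, list[str]]:
    # Region is a hypothetical default; credentials come from the AWS SDK chain.
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    result: dict[str, list[str]] = defaultdict(list)
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                # Group instance IDs by the VPC they run in.
                result[inst.get("VpcId", "no-vpc")].append(inst["InstanceId"])
    return result

if __name__ == "__main__":
    for vpc, instances in running_instances_by_vpc().items():
        print(vpc, instances)
```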
Posted 3 weeks ago
4.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role: Drupal Developer
Location: Juhi Nagar, Navi Mumbai (Work from Office – Alternate Saturdays will be working)
Experience: 4+ years
Joining: Immediate Joiners Only
Work Mode: This is a Work from Office role.
Work Schedule: Alternate Saturdays will be working.
About the company: It is an innovative technology company focused on delivering robust web solutions. We are looking for talented individuals to join our team and contribute to cutting-edge projects.
The Opportunity: Drupal Developer
We are seeking an experienced and highly skilled Drupal Developer to join our team. The ideal candidate will have a strong understanding of Drupal's architecture and a proven track record in developing custom modules, implementing sophisticated theming, and integrating with various APIs. This is a hands-on role for an immediate joiner who is passionate about building secure, scalable, and high-performance Drupal applications.
Key Responsibilities
Develop and maintain custom Drupal modules using Hooks, the Plugin system, Form API, and Entity API.
Implement and work with REST, JSON:API, and GraphQL within Drupal for seamless data exchange.
Design and implement Drupal themes using the Twig templating engine and preprocess functions to ensure a consistent and engaging user experience.
Configure and manage user roles and access control to maintain application security and data integrity.
Apply best practices in securing Drupal applications, identifying and mitigating potential vulnerabilities.
Integrate Drupal with various third-party APIs and external systems.
Collaborate with cross-functional teams to define, design, and ship new features.
Contribute to all phases of the development lifecycle, from concept to deployment and maintenance.
Requirements
Experience: 4+ years of professional experience in Drupal development.
Custom Module Development: Strong understanding and hands-on experience with custom module development (Hooks, Plugin system, Form API, Entity API).
API Integration (Drupal): Proficiency with REST / JSON:API / GraphQL in Drupal.
Drupal Theming: Experience with Drupal theming using Twig and preprocess functions.
Security & Access Control: Experience with user roles and access control, and a strong understanding of best practices in securing Drupal applications.
Third-Party Integration: Familiarity with APIs and third-party integration.
Joining: Immediate Joiners Only.
Preferred Experience
Experience with Rocket.Chat integration or other messaging tools.
Exposure to Solr/Elasticsearch using the Drupal Search API.
Skills: Rocket.Chat integration, API integration, security, Drupal development, Hooks, API integration (Drupal), custom module development, JSON:API, Form API, Drupal theming, Plugin system, third-party integration, GraphQL, Drupal, REST, preprocess functions, Entity API, Twig, access control
Posted 3 weeks ago
5.0 years
0 Lacs
Raipur, Chhattisgarh, India
Remote
About Gravity:
Gravity Engineering Services Pvt. Ltd is a full-stack product company at the forefront of transformative enterprise products and technology consulting. Our diverse portfolio includes Commerce Cloud, Ecommerce Marketplaces, a B2B Multi-Channel PIM platform, a Product Lifecycle Management System, and other cutting-edge solutions. With a commitment to delivering meaningful digital experiences, Gravity empowers clients to achieve their business objectives through technology and design.
Job Description:
● Understand project requirements, write bug-free clean code, and ensure that the solution works per the agreed architecture, SLAs, KPIs, and business model.
● Integrate the backend with third-party APIs (a small client sketch follows this posting).
● 100% hands-on role: make design decisions that contribute to maintainable systems.
● Adapt to rapidly evolving requirements and changing priorities and drive the team accordingly.
● Reverse engineer for debugging errors in code and ensure quality control in the process.
● Continually drive products towards a meaningful balance between user needs, business objectives, and technical feasibility.
Qualifications:
● Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university.
● 5+ years of experience with Django, Django Rest Framework, Python 3, MySQL, Elasticsearch, WebSockets, JavaScript, JIRA, GitLab, REST APIs, and GCP or AWS.
● Experience in writing unit tests and test case automation.
● Ability to operate in an Agile environment with a start-up mentality in an unstructured setting; energy, drive, and passion to work and operate in a digital world.
● Excellent communication skills and ability to work with remote teams.
● Deep understanding of photo editing industry trends, technology, and customer needs.
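One of the responsibilities above is integrating the backend with third-party APIs; the following is a minimal, illustrative client sketch using the requests library. The partner base URL, endpoint, and retry policy are hypothetical, not part of the posting.

```python
# Thin third-party API client sketch with timeouts and simple retry/backoff.
import time
import requests

BASE_URL = "https://api.example-partner.com"  # hypothetical third-party API

def fetch_order_status(order_id: str, retries: int = 3, timeout: float = 5.0) -> dict:
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=timeout)
            resp.raise_for_status()  # surface 4xx/5xx responses as exceptions
            return resp.json()
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(2 ** attempt)  # exponential backoff between attempts
    raise RuntimeError(f"order status lookup failed after {retries} attempts") from last_error

if __name__ == "__main__":
    print(fetch_order_status("ORD-123"))  # hypothetical order id
```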
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Responsible for ensuring the reliability, scalability, and performance of cloud-native systems across AWS, Azure, or GCP environments. Leverages advanced skills in Kubernetes, Infrastructure as Code (Terraform, CloudFormation), and configuration management tools (Ansible, Puppet, Chef) to manage and automate cloud infrastructure. Leads the implementation of containerized solutions, CI/CD pipelines, and proactive monitoring using tools like Prometheus, Grafana, Splunk, and the ELK Stack. Develops and executes robust testing strategies, streamlines incident response, and enhances service performance through real-time observability and automated dashboards.
Cloud Platforms: Advanced proficiency in one or more cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), including expertise in services such as EC2, S3, RDS, and VPC networking.
Container Orchestration: Strong experience with container orchestration platforms such as Kubernetes, including deployment, scaling, and management of containerized applications.
Configuration Management and Automation: Proficiency in configuration management tools such as Ansible, Puppet, or Chef, with a strong emphasis on automation and infrastructure as code (IaC) practices.
Monitoring and Observability: Hands-on experience with monitoring and observability tools such as Splunk, Prometheus, Grafana, the ELK stack (Elasticsearch, Logstash, Kibana), or similar solutions for real-time system monitoring, logging, tracing, and alerting (an instrumentation sketch follows this posting).
Continuous Integration/Continuous Deployment (CI/CD): Experience with CI/CD pipelines and tools such as Jenkins, GitLab CI/CD, CircleCI, or Travis CI, including automated testing, deployment, and rollback strategies.
Infrastructure as Code (IaC): Proficiency in IaC tools such as Terraform or CloudFormation for provisioning and managing infrastructure resources declaratively.
Scripting and Automation: Strong scripting skills in languages such as Python, Shell, or Go for automating repetitive tasks, managing configurations, and orchestrating deployments.
Databases and Datastores: Experience with relational databases (e.g., PostgreSQL, MySQL), NoSQL databases (e.g., MongoDB, Cassandra), and time series databases, including performance tuning, replication, and high availability configurations.
Security Best Practices: Familiarity with security best practices for cloud environments, including identity and access management (IAM), encryption, network security, and compliance standards such as PCI-DSS and GDPR.
Version Control Systems: Proficiency in version control systems such as Git, including branching strategies, code reviews, and collaboration workflows.
Synthetic Monitoring: Experience with synthetic monitoring tools such as New Relic Synthetics, Datadog Synthetics, or Selenium for simulating user interactions and monitoring application performance from external locations.
Network Understanding: Strong understanding of networking, distributed systems, microservices architecture, and other relevant architectural concepts.
Analytical Skills: Excellent problem-solving skills and the ability to troubleshoot complex issues in production environments.
Responsibilities
Efficient Lifecycle Management: You will be enhancing application and cloud service lifecycles.
Reliable Software Improvement: Boost software dependability for organizational efficiency.
Expert Guidance in Reliability: Provide expert direction on reliability practices.
Robust Testing Development: Develop effective testing strategies and tools.
Adaptable SRE Solutions Implementation: Implement flexible solutions to enhance system stability.
Dashboard Development Leadership: Lead comprehensive SRE dashboard creation.
Optimized Performance Testing Deployment: Deploy specialized tests for peak system performance.
Swift Incident Resolution: Resolve production incidents promptly to minimize disruptions.
Continuous Service Enhancement: Enhance service reliability through proactive measures.
Proactive Anomaly Management: Identify and address anomalies before they impact operations.
Automated Dashboard Setup: Streamline dashboard provisioning for efficient operations.
Precise Code Debugging: Investigate and resolve issues at the code level efficiently.
Seamless Release Integration: Integrate SRE practices seamlessly into the release cycle.
Efficient Process Automation: Automate repetitive tasks to save time and resources.
Dynamic SRE Solutions Enhancement: Assess and enhance SRE solutions for optimal performance.
Collaborative SRE Implementation: Work with teams to implement and refine SRE practices.
Proactive System Enhancement: Improve system resilience through proactive initiatives.
Effective SRE Training Delivery: Deliver training sessions for widespread SRE knowledge.
Scalability Strategy Planning: Design strategies for scalable infrastructure growth.
Proactive Improvements: Spend at least 50% of your time on proactive improvements to system reliability and resilience.
Training: Conduct SRE training sessions.
Nice To Have
Previous FedEx experience.
Master’s degree.
Domain knowledge in logistics, finance, or supply chain.
Education: Bachelor's degree or equivalent in Computer Science, Electrical/Electronics Engineering, MIS, or a related discipline. TOGAF certification and SAFe Agile certification strongly preferred.
Experience: Six to seven (6-7) years of equivalent work experience in an information technology or engineering environment, with direct responsibility for strategy formulation and solution/technical architecture, as well as designing, architecting, developing, implementing, and monitoring efficient and effective solutions to diverse and complex business problems.
Knowledge, Skills and Abilities
Fluency in English
Accuracy & Attention to Detail
Influencing & Persuasion
Planning & Organizing
Problem Solving
Project Management
Additional Details: FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.
Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company.
Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
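To ground the monitoring and observability requirements listed earlier in this posting, here is a minimal instrumentation sketch using the prometheus_client Python library. The metric names and the simulated workload are hypothetical; in a real service the counter and histogram would wrap actual request handling and be scraped by a Prometheus server.

```python
# Minimal Prometheus instrumentation sketch: a request counter and a latency
# histogram exposed on a local metrics endpoint.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                       # observe how long the work takes
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```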
Posted 3 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Company Description
About Eurofins: Eurofins Scientific is an international life sciences company, providing a unique range of analytical testing services to clients across multiple industries, to make life and the environment safer, healthier and more sustainable. From the food you eat to the medicines you rely on, Eurofins works with the biggest companies in the world to ensure the products they supply are safe, their ingredients are authentic and labelling is accurate. Eurofins is a global leader in food, environmental, pharmaceutical and cosmetic product testing and in agroscience CRO services. It is also one of the global independent market leaders in certain testing and laboratory services for genomics, discovery pharmacology, forensics, CDMO, advanced material sciences and in the support of clinical studies. In just over 30 years, Eurofins has grown from one laboratory in Nantes, France to 58,000 staff across a network of over 1,000 independent companies in 54 countries, operating 900 laboratories. Performing over 450 million tests every year, Eurofins offers a portfolio of over 200,000 analytical methods to evaluate the safety, identity, composition, authenticity, origin, traceability and purity of biological substances and products, as well as providing innovative clinical diagnostic testing services, as one of the leading global emerging players in specialised clinical diagnostics testing. Eurofins is one of the fastest growing listed European companies with a listing on the French stock exchange since 1997. In FY 2021, Eurofins achieved a record revenue of over EUR 6.7 billion. Eurofins IT Solutions India Pvt Ltd (EITSI) is a fully owned subsidiary of Eurofins and functions as a Global Software Delivery Center exclusively catering to Eurofins Global IT business needs. The code shipped out of EITSI impacts the global network of Eurofins labs and services. The primary focus at EITSI is to develop the next generation LIMS (Lab Information Management system), customer portals, e-commerce solutions, ERP/CRM systems, mobile apps & other B2B platforms for various Eurofins laboratories and businesses. Young and dynamic, we have a rich culture and we offer fulfilling careers.
Job Description
Associate Software Engineer: Eurofins IT Solutions, Bengaluru, Karnataka, India
With 54 facilities worldwide, Eurofins BioPharma Product Testing (BPT) is the largest network of bio/pharmaceutical GMP product testing laboratories providing comprehensive laboratory services for the world's largest pharmaceutical, biopharmaceutical, and medical device companies. Behind the scenes, BPT is enabled by global engineering teams working on next-generation applications like the Eurofins Quality Management System (eQMS). eQMS is a sophisticated web application that will be used by our scientists, engineers, and technicians to manage several quality and compliance management processes. This role reports to an Engineering Manager.
Required Experience and Skills
Eligibility Criteria:
2024/2025 pass-outs: B.E./B.Tech (CS, IS, EC) / BSc (CS, IT)
0-12 months of IT industry experience
60% aggregate in highest qualification (no backlogs)
Technical Skills:
0 to 1 year of experience or strong foundational knowledge in .NET Core, C#, Angular, and Web API development.
Familiar with database systems such as SQL Server, Cosmos DB, and MongoDB.
Able to write code in at least one language, such as C#.
Basic understanding of Azure fundamentals, including services like App Services, Azure Functions, Azure SQL, and Cosmos DB.
Exposure to scripting with C# and PowerShell is a plus.
Familiar with version control using Git and repositories hosted on Azure Repos.
Experience with development tools like Visual Studio, Postman, and Swagger for API testing and documentation is an advantage.
Knowledge of Elasticsearch and Azure Service Bus is beneficial.
Soft Skills:
Good communication and interpersonal skills in an international environment.
Good attitude towards learning new things.
Good logical reasoning and problem-solving skills.
Ability to coordinate work with different individuals/teams.
Continuous Improvement: Staying updated with technologies, improving processes.
Preferred Qualifications:
Certifications in Azure (e.g., AZ-900, AZ-204).
Elasticsearch Engineer Certification.
Understanding of cloud database management with Azure Cosmos DB.
Familiarity with modern web development practices and RESTful APIs.
Technology Stack:
Backend Development: .NET Core, C#, Web API
Frontend Development: Angular, TypeScript
Databases: SQL Server, Cosmos DB, MongoDB
Cloud & Azure: Azure App Services, Azure Functions, Azure SQL, Azure Repos
Scripting & Tools: PowerShell (basic), C# scripting, Visual Studio, Git, Postman, Swagger
Additional Technologies: Elasticsearch, Azure Service Bus, Redis Cache
Qualifications
Bachelor's in Engineering, Computer Science, or equivalent.
Posted 3 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Company Description
About Eurofins: Eurofins Scientific is an international life sciences company, providing a unique range of analytical testing services to clients across multiple industries, to make life and the environment safer, healthier and more sustainable. From the food you eat to the medicines you rely on, Eurofins works with the biggest companies in the world to ensure the products they supply are safe, their ingredients are authentic and labelling is accurate. Eurofins is a global leader in food, environmental, pharmaceutical and cosmetic product testing and in agroscience CRO services. It is also one of the global independent market leaders in certain testing and laboratory services for genomics, discovery pharmacology, forensics, CDMO, advanced material sciences and in the support of clinical studies. In just over 30 years, Eurofins has grown from one laboratory in Nantes, France to 58,000 staff across a network of over 1,000 independent companies in 54 countries, operating 900 laboratories. Performing over 450 million tests every year, Eurofins offers a portfolio of over 200,000 analytical methods to evaluate the safety, identity, composition, authenticity, origin, traceability and purity of biological substances and products, as well as providing innovative clinical diagnostic testing services, as one of the leading global emerging players in specialised clinical diagnostics testing. Eurofins is one of the fastest growing listed European companies with a listing on the French stock exchange since 1997. In FY 2021, Eurofins achieved a record revenue of over EUR 6.7 billion. Eurofins IT Solutions India Pvt Ltd (EITSI) is a fully owned subsidiary of Eurofins and functions as a Global Software Delivery Center exclusively catering to Eurofins Global IT business needs. The code shipped out of EITSI impacts the global network of Eurofins labs and services. The primary focus at EITSI is to develop the next generation LIMS (Lab Information Management system), customer portals, e-commerce solutions, ERP/CRM systems, mobile apps & other B2B platforms for various Eurofins laboratories and businesses. Young and dynamic, we have a rich culture and we offer fulfilling careers.
Job Description
Associate Software Engineer: Eurofins IT Solutions, Bengaluru, Karnataka, India
With 54 facilities worldwide, Eurofins BioPharma Product Testing (BPT) is the largest network of bio/pharmaceutical GMP product testing laboratories providing comprehensive laboratory services for the world's largest pharmaceutical, biopharmaceutical, and medical device companies. BPT is enabled by global engineering teams working on a suite of next-generation applications including Laboratory Information Management Systems (LIMS), Electronic Notebook (ELN), LabAccess, etc. As Associate Software Engineer, you will be a crucial part of our delivery team, ensuring the Eurofins Electronic Notebook product’s continuous integration and continuous deployment (environment maintenance, build, deployment, etc., with quick turnaround time, thereby reducing the impact on business) for a suite of applications, in collaboration with other stakeholders. These are sophisticated computer programs that will be used by our scientists, engineers, and technicians to document research, experiments, and procedures performed in our international network of laboratories.
As a technology leader, BPT wants to give you the opportunity not just to accept new challenges and opportunities but to impress with your ingenuity, focus, attention to detail, and collaboration with a global team of professionals. This role reports to a Deputy Manager.
Required Experience and Skills
Eligibility Criteria:
2024/2025 pass-outs: B.E./B.Tech (CS, IS, EC) / BSc (CS, IT)
0-12 months of IT industry experience
60% aggregate in highest qualification (no backlogs)
Technical Skills:
0 to 1 year of experience or knowledge of .NET, Angular, and databases (SQL Server, MongoDB, Cosmos DB) is good to have.
Must be able to write programs in one of the following programming languages: TypeScript, C++, C#, or Java.
Knowledge of Azure technologies (Azure Pipelines), PowerShell and C# scripting, Azure Artifacts, and Azure Git Repos is an added advantage.
Knowledge of tools like Azure DevOps is an added advantage.
Knowledge of any programming language is an added advantage.
Note: The candidate must be willing to work in the DevOps area and be prepared to be challenged on any required capabilities in this regard during interviews.
Soft Skills:
Good communication and interpersonal skills in an international environment.
Good attitude towards learning new things.
Good logical reasoning and problem-solving skills.
Ability to coordinate work with different individuals/teams.
Continuous Improvement: Staying updated with technologies, improving processes.
Preferred Qualifications:
Certifications in Azure (e.g., AZ-900, AZ-400, AZ-104).
Elasticsearch Engineer Certification.
Experience with networking and security protocols in Azure.
Knowledge of database management in Azure Cosmos DB.
Technology Stack:
Azure technologies (Azure Pipelines), PowerShell and C# scripting, Azure Artifacts, and Azure Git Repos
.NET, App Service Plans – App Services, Azure Functions, AKS, C#, SQL scripts
Cosmos DB, SQL DB
Elasticsearch, Azure Service Bus, Redis Cache
Qualifications
Bachelor's in Engineering, Computer Science, or equivalent.
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Java Backend Developer (SDE & SDE2) - Pune
Company: Global Tech Innovator in Digital Commerce
Location: Pune (Work from Office, 5 days/week)
Experience: SDE: 2+ yrs; SDE2: 3-5 yrs
Salary: SDE: ₹15-16 LPA (incl. 10% variable); SDE2: ₹21-22 LPA (incl. 10% variable)
Joining: Immediate or within 15 days
Posted by: Techconnexions (for our client)
Why Join Us?
Ready to code the future? Join a global tech leader revolutionizing digital commerce! Build scalable Java applications, solve tough challenges, and work with cutting-edge tech in a high-energy Pune team. If you live for innovation and thrive on "Every Day is Game Day," this is your shot!
What You’ll Do
Build and optimize high-performance Java applications from storyboards to deployment.
Write clean, reusable code using Spring, Hibernate, and RESTful APIs.
Squash bugs, boost performance, and ensure top-notch quality.
Collaborate with teams to deliver robust backend solutions.
Keep code organized with automation and best practices.
What You Bring
Must-Have:
SDE: 2+ yrs Java experience; SDE2: 3-5 yrs hands-on Java expertise.
Strong Core Java, OOP, Spring Boot, Hibernate, REST APIs, and JDBC.
Experience with MySQL/MSSQL; bonus for Elasticsearch, MongoDB, or Redis.
Know-how in concurrency patterns, design patterns, and reusable libraries.
Solid grasp of data structures, algorithms, and problem-solving.
Skilled with Git, Maven/Gradle, and CI pipelines.
Nice-to-Have:
Familiarity with Kafka, RabbitMQ, or messaging frameworks.
Exposure to AWS, Azure, GCP, Docker, or Kubernetes.
Experience with unit testing and scalable app design.
Understanding of JVM quirks and workarounds.
Qualifications:
B.Tech/M.Tech in Computer Science, IT, or related field.
Proven track record of delivering scalable solutions.
Who You Are:
Own It: Take charge and make things happen.
Communicate: Clear, confident, and collaborative.
Adapt: Thrive in a fast-paced, dynamic setup.
Innovate: Bring fresh ideas and a hunger to learn.
What’s in It for You?
Work with a global tech giant pushing digital innovation.
High-energy team where your ideas matter.
Cutting-edge tools and learning opportunities.
Competitive pay with 10% variable bonus.
Ready for This?
Work from Office: Pune, 5 days/week.
Push Limits: Bring big ideas and hustle hard.
Own Your Craft: Deliver work you’re proud of.
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Role:
We are looking for a Software Developer (Frontend) with 2 to 4 years of experience, who has hands-on expertise in React.js and Node.js. You will work closely with the product and design teams to build and improve features for our high-traffic e-commerce platform, enabling a seamless and scalable shopping experience for millions of customers.
You should apply if you have:
2 to 4 years of professional experience in full-stack or frontend/backend development.
Strong proficiency in React.js and Node.js, with the ability to write clean, maintainable, and scalable code.
Experience working on e-commerce platforms or other scalable web applications.
Good understanding of data structures, algorithms, and system design.
Familiarity with RESTful APIs, MongoDB, MySQL, Redis, or similar databases.
Experience integrating third-party services such as payment gateways, CRMs, and analytics tools.
Ownership mindset, a passion for problem-solving, and attention to performance and security.
Ability to collaborate in cross-functional teams and contribute in a fast-paced startup environment.
You should not apply if you:
Require constant supervision or lack a proactive approach.
Are not comfortable working from our Gurugram office.
Don’t enjoy building scalable systems or solving customer-facing product problems.
Lack experience with modern JavaScript frameworks or backend development.
Skills Required:
React.js, Redux / Context API, Hooks
Node.js, Express.js
MongoDB / MySQL
HTML5, CSS3, JavaScript (ES6+)
Git, Postman, REST APIs
Webpack / Babel / Vite
Strong debugging and optimization skills
Optional but good to have: Next.js, TypeScript, Redis, AWS, Elasticsearch
What will you do?
Design, develop, and maintain features for our core e-commerce platform using React.js and Node.js.
Collaborate with product managers, designers, and other engineers to deliver high-quality products.
Optimize frontend performance and backend API responses for faster load times and better UX.
Maintain code quality through writing unit tests and participating in code reviews.
Build reusable components and libraries for future use.
Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
Solve technical problems, identify bottlenecks, and improve system performance.
Work Experience: 2–4 years (full-time professional experience)
Working Days: Monday – Friday
Location: Golf Course Road, Gurugram, Haryana (Work from Office)
Why Nutrabay:
We believe in an open, intellectually honest culture where everyone is given the autonomy to contribute and do their life’s best work. As a part of the dynamic team at Nutrabay, you will have a chance to learn new things, solve new problems, build your competence and be a part of an innovative marketing-and-tech startup that’s revolutionising the health industry. Working with Nutrabay can be fun and a unique growth opportunity. Here you will learn how to maximise the potential of your available resources. You will get the opportunity to do work that helps you master a variety of transferable skills, or skills that are relevant across roles and departments. You will feel appreciated and valued for the work you deliver. We are creating a unique company culture that embodies respect and honesty and will create more loyal employees than a company that simply shells out cash. We trust our employees and their voice and ask for their opinions on important business issues.
About Nutrabay:
Nutrabay is the largest health & nutrition store in India. Our vision is to keep growing, maintain a sustainable business model, and continue to be the market leader in this segment by launching many innovative products. We are proud to have served over 1 million customers until now, and our family is constantly growing. We have built a complex and high-converting eCommerce system, and our monthly traffic has grown to a million. We are looking to build a visionary and agile team to help fuel our growth and contribute towards further advancing the continuously evolving product.
Funding: We raised $5 Million in Series A funding.
Posted 3 weeks ago
Elasticsearch is a powerful search and analytics engine used by businesses worldwide to manage and analyze their data efficiently. In India, the demand for Elasticsearch professionals is on the rise, with many companies seeking skilled individuals to work on various projects involving data management, search capabilities, and more.
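For illustration, indexing and querying a document looks roughly like the sketch below, assuming the official Elasticsearch Python client with 8.x-style keyword arguments and a local cluster at localhost:9200; the index name and document fields are hypothetical.

```python
# Minimal sketch: index one document and run a full-text query against it.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Index a job posting document into a hypothetical "jobs" index.
es.index(index="jobs", id="1", document={
    "title": "Elasticsearch Engineer",
    "location": "Bengaluru",
    "skills": ["elasticsearch", "java", "rest apis"],
})

# Full-text search over the title field.
resp = es.search(index="jobs", query={"match": {"title": "elasticsearch"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["title"], hit["_score"])
```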
These cities are known for their thriving tech industries and have a high demand for Elasticsearch professionals.
The salary range for Elasticsearch professionals in India varies based on experience and skill level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Elasticsearch may involve starting as a Junior Developer, moving on to become a Senior Developer, and eventually progressing to a Tech Lead position. With experience and expertise, one can also explore roles such as Solution Architect or Data Engineer.
Apart from Elasticsearch, professionals in this field are often expected to have knowledge of the following skills:
- Apache Lucene
- Java programming
- Data modeling
- RESTful APIs
- Database management systems
As you explore job opportunities in Elasticsearch in India, remember to continuously enhance your skills and knowledge in this field. Prepare thoroughly for interviews and showcase your expertise confidently. With the right mindset and preparation, you can excel in your Elasticsearch career and contribute significantly to the tech industry in India. Good luck!