3.0 years
16 - 20 Lacs
Mysore, Karnataka, India
Remote
Experience: 3.00+ years
Salary: INR 1,600,000-2,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(Note: This is a requirement for one of Uplers' clients - a seed-funded B2B SaaS company in procurement analytics.)

Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of both worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the IT and analytics teams' attention, no more clunky dashboards; just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About the Role

We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers.
You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL databases, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
- 3-5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
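As a rough illustration of the retrieval step in the RAG work described above, here is a minimal, dependency-free sketch that ranks documents by cosine similarity over toy embedding vectors. The documents, vectors, and field names are invented for illustration; a production pipeline would use a real embedding model and a vector store such as Pinecone or FAISS.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, top_k=2):
    # Rank documents by similarity to the query vector, highest first.
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]

# Toy corpus: in practice these vectors come from an embedding model.
corpus = [
    {"text": "PO cycle time report", "vec": [0.9, 0.1, 0.0]},
    {"text": "Supplier risk summary", "vec": [0.1, 0.8, 0.2]},
    {"text": "Spend by category", "vec": [0.2, 0.2, 0.9]},
]

context = retrieve([0.85, 0.15, 0.05], corpus, top_k=1)
# The retrieved context is then stuffed into the LLM prompt.
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: ..."
```

The same shape scales up directly: swap the list for a vector-database query and the toy vectors for model embeddings, and the prompt assembly step stays the same.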
Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- Collaborative culture that values experimentation, continuous learning, and work-life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one.
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
8.0 years
0 Lacs
Uttarakhand, India
On-site
As cyber threats grow in scale and complexity, cloud security isn't just important; it's essential. At Microsoft, we're building secure, resilient platforms to protect our cloud environment and meet the highest standards of trust and assurance. We're looking for a Principal Security Engineer – Cloud Security to help us lead that future.

Join our dynamic Regulated Industries team within the CISO organization, where you will drive initiatives that embed security into the fabric of our cloud platforms while enabling rapid, automated detection and response capabilities. In this hands-on engineering role, you will lead efforts to eliminate manual toil, build resilient security controls, and ensure our defenses can scale alongside the business. You'll be joining a team that operates at the bleeding edge of cloud and security, working across Azure and hybrid environments to protect Microsoft and its customers through innovation, collaboration, and engineering excellence.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
- Design and deploy advanced security controls and architectures across Azure and hybrid cloud environments.
- Lead the design of secure, scalable, and resilient systems, influencing decisions across networking, identity, compute, and data boundaries.
- Develop automation pipelines for detection, response, and remediation using tools like Azure Sentinel, Logic Apps, Defender for Cloud, Microsoft Graph, and custom scripting.
- Investigate security incidents, help contain threats, and provide technical support for high-impact response efforts.
- Build scalable integrations with the Microsoft security stack to improve visibility, containment, and incident response.
- Collaborate with threat detection teams to operationalize detection-as-code, security playbooks, and custom analytic rules aligned to MITRE ATT&CK.
- Integrate AI/ML solutions into security operations for intelligent incident triage, control validation, and telemetry analysis.
- Partner with engineering, platform, and DevOps teams to embed security guardrails into CI/CD and cloud workflows.
- Serve as a technical advisor and mentor to security engineers, sharing best practices for automation and secure-by-design patterns.
- Contribute to internal frameworks, reusable modules, and open-source tooling that improve cloud security maturity across the org.
- Develop and integrate machine learning models and AI agents for anomaly detection, behavioral analytics, policy drift detection, alert triage, and security decision support.
- Track emerging threats, evolving compliance landscapes, and Microsoft's latest security innovations, and turn that insight into action.

Qualifications

Required Qualifications:
- 8+ years of experience in security engineering or platform architecture, with 4+ years focused on cloud security in Azure, AWS, or GCP.
- Deep, hands-on expertise with Microsoft Azure, including AKS, App Services, Key Vault, Managed Identities, API Management, and Azure Policy.
- Advanced proficiency in Python, PowerShell, and Kusto/KQL, and the ability to design and build tooling that scales across environments and teams.
- Experience with AI/ML in security contexts, such as anomaly detection, predictive modeling, or triaging security signals using large datasets.
- Strong communication skills, so you can speak both engineer and executive fluently.

Preferred Qualifications:
- Hands-on experience with Microsoft Defender for Cloud, Azure Monitor, Sentinel, or Purview.
- Strong experience building automated solutions for vulnerability management, threat detection, and security configuration drift.
- Fluency in cloud architecture patterns for multi-region, multi-tenant, and compliance-bound workloads (PCI, HIPAA, HITRUST).
- Security certifications such as CCSP, GCSA, AZ-305, DP-100, or equivalent.

#BRAVOFY26

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
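The anomaly-detection responsibilities described above (flagging unusual telemetry for triage) can be sketched with a simple z-score baseline. This is illustrative only, with invented sample counts; a real pipeline would score results of KQL queries against Sentinel or Azure Monitor and feed flagged signals into richer models.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    # Flag indices whose value deviates from the mean by more than
    # `threshold` population standard deviations: a classic, simple baseline.
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # No variation means nothing to flag.
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts for one account.
logins = [3, 4, 2, 5, 3, 4, 250, 3, 2, 4]
print(zscore_anomalies(logins, threshold=2.5))  # prints [6]
```

In practice this kind of detector serves as a first-pass filter: cheap to run across large telemetry volumes, with flagged windows handed off to behavioral models or an analyst for triage.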
Posted 2 days ago
3.0 years
16 - 20 Lacs
dehradun, uttarakhand, india
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' client - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining them. At Sensecloud, we envision a future where Procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards —just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. If you’re ready to help us build the future of procurement, analytics come join the ride. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing) plus familiarity with TypeScript/Node or Go a bonus. Hands-on with at least one LLM/agent frameworks and platforms (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
3.0 years
16 - 20 Lacs
vijayawada, andhra pradesh, india
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' client - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining them. At Sensecloud, we envision a future where Procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards —just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. If you’re ready to help us build the future of procurement, analytics come join the ride. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing) plus familiarity with TypeScript/Node or Go a bonus. Hands-on with at least one LLM/agent frameworks and platforms (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
3.0 years
16 - 20 Lacs
thiruvananthapuram, kerala, india
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' client - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining them. At Sensecloud, we envision a future where Procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards —just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. If you’re ready to help us build the future of procurement, analytics come join the ride. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing) plus familiarity with TypeScript/Node or Go a bonus. Hands-on with at least one LLM/agent frameworks and platforms (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
3.0 years
16 - 20 Lacs
patna, bihar, india
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' client - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining them. At Sensecloud, we envision a future where Procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards —just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. If you’re ready to help us build the future of procurement, analytics come join the ride. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing) plus familiarity with TypeScript/Node or Go a bonus. Hands-on with at least one LLM/agent frameworks and platforms (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
Core Soft Skills
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.
Nice-to-Haves
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.
Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- Collaborative culture that values experimentation, continuous learning, and work–life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal.
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
8.0 years
0 Lacs
tripura, india
On-site
As cyber threats grow in scale and complexity, cloud security isn’t just important, it’s essential. At Microsoft, we’re building secure, resilient platforms to protect our cloud environment and meet the highest standards of trust and assurance. We’re looking for a Principal Security Engineer – Cloud Security to help us lead that future. Join our dynamic Regulated Industries team within the CISO organization, where you will drive initiatives that embed security into the fabric of our cloud platforms while enabling rapid, automated detection and response capabilities. You will lead efforts in this hands-on engineering role to eliminate manual toil, build resilient security controls, and ensure our defenses can scale alongside the business. You'll be joining a team that operates at the bleeding edge of cloud and security, working across Azure and hybrid environments to protect Microsoft and its customers through innovation, collaboration, and engineering excellence.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.
Responsibilities
- Design and deploy advanced security controls and architectures across Azure and hybrid cloud environments.
- Lead the design of secure, scalable, and resilient systems, influencing decisions across networking, identity, compute, and data boundaries.
- Develop automation pipelines for detection, response, and remediation using tools like Azure Sentinel, Logic Apps, Defender for Cloud, Microsoft Graph, and custom scripting.
- Investigate security incidents, help contain threats, and provide technical support for high-impact response efforts.
- Build scalable integrations with the Microsoft security stack to improve visibility, containment, and incident response.
- Collaborate with threat detection teams to operationalize detection-as-code, security playbooks, and custom analytic rules aligned to MITRE ATT&CK.
- Integrate AI/ML solutions into security operations for intelligent incident triage, control validation, and telemetry analysis.
- Partner with engineering, platform, and DevOps teams to embed security guardrails into CI/CD and cloud workflows.
- Serve as a technical advisor and mentor to security engineers, sharing best practices for automation and secure-by-design patterns.
- Contribute to internal frameworks, reusable modules, and open-source tooling that improve cloud security maturity across the org.
- Develop and integrate machine learning models and AI agents for anomaly detection, behavioral analytics, policy drift detection, alert triage, and security decision support.
- Track emerging threats, evolving compliance landscapes, and Microsoft’s latest security innovations, and turn that insight into action.
Qualifications
Required Qualifications:
- 8+ years of experience in security engineering or platform architecture, with 4+ years focused on cloud security in Azure, AWS, or GCP.
- Deep, hands-on expertise with Microsoft Azure, including AKS, App Services, Key Vault, Managed Identities, API Management, and Azure Policy.
- Advanced proficiency in Python, PowerShell, and Kusto/KQL, and the ability to design and build tooling that scales across environments and teams.
- Experience with AI/ML in security contexts, such as anomaly detection, predictive modeling, or triaging security signals using large datasets.
- Strong communication skills, so you can speak both engineer and executive fluently.
Preferred Qualifications
- Hands-on experience with Microsoft Defender for Cloud, Azure Monitor, Sentinel, or Purview.
- Strong experience building automated solutions for vulnerability management, threat detection, and security configuration drift.
- Fluency in cloud architecture patterns for multi-region, multi-tenant, and compliance-bound workloads (PCI, HIPAA, HITRUST).
- Security certifications such as CCSP, GCSA, AZ-305, DP-100, or equivalent.
#BRAVOFY26
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
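The security configuration drift detection this posting describes reduces to comparing observed resource settings against a declared baseline and reporting deviations. A minimal sketch, where the setting names are illustrative rather than actual Azure Policy keys:

```python
def detect_drift(baseline: dict, observed: dict) -> dict:
    """Return the settings that deviate from the security baseline."""
    drift = {}
    for key, expected in baseline.items():
        actual = observed.get(key)  # missing keys surface as actual=None
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

# Hypothetical baseline for a storage resource and one observed snapshot.
baseline = {"https_only": True, "min_tls_version": "1.2", "public_network_access": False}
observed = {"https_only": True, "min_tls_version": "1.0", "public_network_access": True}
print(detect_drift(baseline, observed))
```

In practice the observed snapshot would come from a resource-graph query and the drift report would feed an alert or auto-remediation pipeline.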
Posted 2 days ago
7.0 years
0 Lacs
pune, maharashtra, india
On-site
Site Reliability Engineering (SRE) at Equifax is a discipline that combines software and systems engineering for building and running large-scale, distributed, fault-tolerant systems. SRE ensures that internal and external services meet or exceed reliability and performance expectations while adhering to Equifax engineering principles. SRE is also an engineering approach to building and running production systems – we engineer solutions to operational problems. Our SREs are responsible for overall system operation and we use a breadth of tools and approaches to solve a broad set of problems. We follow practices such as limiting time spent on operational work, holding blameless postmortems, and proactively identifying and preventing potential outages. Our SRE culture of diversity, intellectual curiosity, problem solving and openness is key to our success.
Equifax brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn, grow and take pride in our work.
What You’ll Do
- Manage system uptime across cloud-native (AWS, GCP) and hybrid architectures.
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, and programming with cloud SDKs).
- Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
- Build automated tooling to deploy service requests and push changes into production.
- Build comprehensive, detailed runbooks to detect issues, remediate them, and restore services.
- Solve problems and triage complex distributed architecture service maps.
- On call for high-severity application incidents, improving runbooks to reduce MTTR.
- Lead blameless availability postmortems and own the call to action to remediate recurrences.
What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 7-10 years of experience in software engineering, systems administration, database administration, and networking.
- 4+ years of experience developing and/or administering software in public cloud.
- Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
- Experience in languages such as Python, Bash, Java, Go, JavaScript, and/or Node.js.
- Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
- System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- Proficiency with continuous integration and continuous delivery tooling and practices.
- Cloud certification strongly preferred.
What Could Set You Apart
- You take a systems problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Experience managing infrastructure as code via tools such as Terraform or CloudFormation.
- Passion for automation with a desire to eliminate toil whenever possible.
- You’ve built software or maintained systems in a highly secure, regulated, or compliant industry.
- Experience and passion for working within a DevOps culture and as part of a team.
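Behind the uptime and MTTR responsibilities above sits a standard SRE calculation: the error budget an SLO allows over a rolling window. A minimal sketch:

```python
def allowed_downtime_minutes(slo: float, window_days: int = 30) -> float:
    # An SLO of 0.999 over 30 days permits 43.2 minutes of downtime.
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def error_budget_remaining(slo: float, downtime_minutes: float,
                           window_days: int = 30) -> float:
    """Fraction of the error budget left after the observed downtime."""
    budget = allowed_downtime_minutes(slo, window_days)
    return max(0.0, budget - downtime_minutes) / budget

# "Three nines" with 21.6 minutes of downtime so far: half the budget spent.
print(round(error_budget_remaining(0.999, 21.6), 3))
```

Teams typically gate risky releases on the remaining budget: plenty left means ship, budget exhausted means focus on reliability work.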
Posted 2 days ago
5.0 years
0 Lacs
pune, maharashtra, india
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.
What You’ll Do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Manage your own project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve on Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.
What Experience You Need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
- 5+ years of experience with cloud technology: GCP, AWS, or Azure
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs
What could set you apart
- Self-starter who identifies/responds to priority shifts with minimal supervision
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
- UI development (e.g. HTML, JavaScript, Angular, and Bootstrap)
- Experience with backend technologies such as Java/J2EE, SpringBoot, SOA, and microservices
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle
- Agile environments (e.g. Scrum, XP)
- Relational databases (e.g. SQL Server, MySQL)
- Atlassian tooling (e.g. JIRA, Confluence) and GitHub
- Developing with a modern JDK (v1.7+)
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI
Posted 2 days ago
7.0 years
0 Lacs
pune, maharashtra, india
On-site
Equifax is seeking a Technical Architect to significantly contribute to identifying best-fit architectural solutions for one or more projects. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions focused on our customers’ business needs. You are a problem solver who is highly motivated by designing solutions that amaze customers. As a Technical Architect at Equifax, you will design and plan technology solutions, alongside providing leadership and support to development teams. You excel at driving business value through technology and addressing technical issues and concerns.
What You Will Do
- Lead moderately large or complex projects and ensure the success of application rollouts.
- Determine and develop recommended architectural approaches, and document current systems.
- Serve as the technical architecture expert for software development and infrastructure teams.
- Assist project teams in estimating the architecture design effort and in developing, reviewing, and approving architecture designs.
- Advise project teams on the application of standards/methodology practices to the project.
- Communicate and validate program architecture with Technology and Business executive-level stakeholders.
- Use a broad and deep understanding of technical concepts in multiple specialized fields to develop solutions to problems and critical design issues.
What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 7+ years of experience developing and/or administering software in the public cloud, with 10+ years of overall relevant experience.
- Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
- Experience in languages such as Python, Ruby, Bash, Java, Go, Perl, JavaScript, and/or Node.js.
- Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
- Experience with multiple OSS frameworks and microservice implementation styles.
- Experience with message-driven application development (pub-sub, Kafka) and event-driven microservice implementation.
- Proficiency with continuous integration and continuous delivery tooling and practices.
- Strong analytical and troubleshooting skills.
- Cloud certification strongly preferred.
What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
- Application Development/Programming - Experience working with software design, programming languages (preferably Python, Java, JavaScript), databases, and networking. Knowledge of event-driven architecture and functional/object-oriented programming.
- Cloud Services (AWS/GCP) - Leveraging cloud-native GCP/AWS services while architecting a particular product/platform. Patterns and anti-patterns for cloud services.
- DevSecOps - A strong understanding of both security and DevOps principles, with the ability to design and implement solutions that meet the organization's security and compliance requirements, and to work with developers and operations teams to ensure solutions are implemented in a way that does not impact the development or deployment process.
- Technical Communication/Presentation - Demonstrates strong written and verbal communication skills, and the ability to tailor to specific audiences. Works with others to achieve results and proactively addresses sources of conflict and emotion with focus on the best solution for Equifax.
- Technology Advising/Consulting - Brings new ideas and perspectives to the table to help improve the design and implementation of technology architecture.
Must have a strong understanding of the technical aspects of the projects they are working on, as well as the ability to work effectively with others. Must also be able to think strategically and solve problems creatively. Identifies and mitigates potential risks before they impact projects. Knowledge of enterprise architecture frameworks, patterns, and best practices. Experience in developing designs and architecture documents that the rest of the SDLC teams can follow.
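The event-driven, pub-sub microservice style named in the qualifications can be sketched with a minimal in-process event bus; in a real deployment, Kafka topics and consumer groups play the role this toy class plays here:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub-sub: topics fan events out to subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every handler registered on the topic sees every event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
# Two independent "services" react to the same event without knowing each other.
bus.subscribe("order.created", lambda e: audit_log.append(f"audit:{e['id']}"))
bus.subscribe("order.created", lambda e: audit_log.append(f"email:{e['id']}"))
bus.publish("order.created", {"id": 42})
print(audit_log)
```

The decoupling shown here is the point of the pattern: producers emit facts, and new consumers can be added without touching the producer.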
Posted 2 days ago
7.0 years
0 Lacs
pune, maharashtra, india
On-site
Site Reliability Engineering (SRE) at Equifax is a discipline that combines software and systems engineering for building and running large-scale, distributed, fault-tolerant systems. SRE ensures that internal and external services meet or exceed reliability and performance expectations while adhering to Equifax engineering principles. SRE is also an engineering approach to building and running production systems – we engineer solutions to operational problems. Our SREs are responsible for overall system operation and we use a breadth of tools and approaches to solve a broad set of problems. We follow practices such as limiting time spent on operational work, holding blameless postmortems, and proactively identifying and preventing potential outages. Our SRE culture of diversity, intellectual curiosity, problem solving and openness is key to our success.
Equifax brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn, grow and take pride in our work.
What You’ll Do
- Manage system uptime across cloud-native (AWS, GCP) and hybrid architectures.
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, and programming with cloud SDKs).
- Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
- Build automated tooling to deploy service requests and push changes into production.
- Build comprehensive, detailed runbooks to detect issues, remediate them, and restore services.
- Solve problems and triage complex distributed architecture service maps.
- On call for high-severity application incidents, improving runbooks to reduce MTTR.
- Lead blameless availability postmortems and own the call to action to remediate recurrences.
What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 7-10 years of experience in software engineering, systems administration, database administration, and networking.
- 4+ years of experience developing and/or administering software in public cloud.
- 3+ years of project and/or people leadership experience in cloud/data engineering.
- Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
- Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
- System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- Proficiency with continuous integration and continuous delivery tooling and practices.
- Cloud certification strongly preferred.
What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
- DevSecOps - Leads DevSecOps operational practices and designs solutions that improve the resilience of products/services. Designs, codes, verifies, tests, documents, and modifies complex programs/scripts and integrated software services. Leads exploration of new software development methods, tools, and techniques. Continuously looks for opportunities to improve standard processes and tools to achieve a well-engineered result. Conducts reviews of overall team performance and works directly with colleagues to improve team performance.
- Operational Excellence - Drives work plans for short-term assignments of moderate complexity, typically contained within their own function.
Establishes the processes to monitor and measure systems against key metrics to ensure availability of systems. Reviews and recommends new ways of working to make processes run smoother and faster.
- Systems Thinking - Maintains knowledge of best practices and how systems integrate with others to improve their own work and the work of less experienced colleagues. Assesses technology trends and makes recommendations for improving on the defined expectations of systems availability.
- Technical Communication/Presentation - Articulates complex messages and their impacts to stakeholders to build support and agreement. Demonstrates strong written and verbal communication skills, and the ability to tailor to specific audiences. Works with others to achieve results and proactively addresses sources of conflict and emotion with focus on the best solution for Equifax.
- Troubleshooting - Applies a methodical approach to routine and moderately complex issue definition and resolution. Initiates and coordinates actions to investigate and resolve problems in systems, processes, and services. Reviews and approves problem fixes/remedies. Plans and coordinates the implementation of agreed remedies. Ensures that patterns and trends are assessed and makes recommendations for improved system reliability.
Posted 2 days ago
5.0 years
0 Lacs
pune, maharashtra, india
On-site
Title: Generative AI Engineer
Location: Pune (Onsite)
Job Type: Full-time
Job Description
You will be a key member of the Turing GenAI delivery organization and part of a GenAI project. You will be required to work with a team of other Turing engineers across different skill sets. In the past, the Turing GenAI delivery organization has implemented industry-leading multi-agent LLM systems, RAG systems, and open-source LLM deployments for major enterprises.
Required skills
• 5+ years of professional experience in building machine learning models and systems
• 1+ years of hands-on experience with how LLMs work and with generative AI techniques, particularly prompt engineering, RAG, and agents
• Experience in driving an engineering team toward a technical roadmap
• Expert programming proficiency in Python, LangChain/LangGraph, and SQL is a must
• Understanding of cloud services, including Azure, GCP, or AWS
• Excellent communication skills to effectively collaborate with business SMEs
Roles & Responsibilities
• Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.
• Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.
• Cloud integration: Aid in the deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.
• Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.
• Mentoring and guidance: Provide technical leadership and knowledge-sharing to the engineering team, fostering best practices in machine learning and large language model development.
• Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.
Thanks,
Aatmesh (aatmesh.singh@ampstek.com)
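The agent-based architectures this role centers on reduce, at their core, to a loop that dispatches tool calls chosen by the model. A toy, stdlib-only sketch in which a hard-coded plan stands in for LLM output, and the calculator tool is a made-up example:

```python
def calculator(expr: str) -> str:
    # Restrict the character set so this toy eval stays limited to arithmetic.
    allowed = set("0123456789+-*/. ()")
    if not set(expr) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expr))

TOOLS = {"calculator": calculator}

def run_agent(plan: list[dict]) -> list[str]:
    """Execute a fixed plan of tool calls; a real agent's LLM emits these steps
    one at a time, reading each observation before choosing the next tool."""
    observations = []
    for step in plan:
        tool = TOOLS[step["tool"]]
        observations.append(tool(step["input"]))
    return observations

print(run_agent([{"tool": "calculator", "input": "(17 + 3) * 2"}]))
```

Frameworks like LangGraph wrap this dispatch loop with state management, retries, and model-driven tool selection.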
Posted 2 days ago
5.0 years
0 Lacs
pune, maharashtra, india
Remote
Job Description
Senior Data Engineer
Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Engineer to join our growing data engineering team. You’ll work in a collaborative Agile environment using the latest engineering best practices, with involvement in all aspects of the software development lifecycle. You will craft and develop curated data products, applying standard architectural and data modeling practices to maintain the foundational data layer serving as a single source of truth across Zendesk. You will primarily develop Data Warehouse solutions in BigQuery/Snowflake using technologies such as dbt, Airflow, and Terraform.
What You Get To Do Every Single Day
- Collaborate with team members and business partners to collect business requirements, define successful analytics outcomes, and design data models.
- Serve as the Data Model subject matter expert and data model spokesperson, demonstrated by the ability to address questions quickly and accurately.
- Implement the Enterprise Data Warehouse by transforming raw data into schemas and data models for various business domains using SQL and dbt.
- Design, build, and maintain ELT pipelines in the Enterprise Data Warehouse to ensure reliable business reporting, using Airflow, Fivetran, and dbt.
- Optimize data warehousing processes by refining naming conventions, enhancing data modeling, and implementing best practices for data quality testing.
- Build analytics solutions that provide practical insights into customer 360, finance, product, sales, and other key business domains.
- Build and promote best engineering practices in areas such as version control, CI/CD, code review, and pair programming.
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Basic Qualifications
What you bring to the role:
- 5+ years of data engineering experience building, working with, and maintaining data
pipelines and ETL processes in big data environments.
- 5+ years of experience in data modeling and data architecture in a production environment.
- 5+ years of experience writing complex SQL queries.
- 5+ years of experience with cloud columnar databases (we use Snowflake).
- 2+ years of production experience working with dbt and designing and implementing Data Warehouse solutions.
- Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions.
- Strong documentation skills for pipeline design and data flow diagrams.
- Intermediate experience with any of these programming languages: Python, Go, Java, Scala; we primarily use Python.
- Integration with 3rd-party API SaaS applications like Salesforce, Zuora, etc.
- Ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Preferred Qualifications
- Hands-on experience with the Snowflake data platform, including administration, SQL scripting, and query performance tuning.
- Good knowledge of modern as well as classic data modeling approaches - Kimball, Inmon, etc.
- Demonstrated experience in one or many business domains (Finance, Sales, Marketing).
- 3+ completed production-grade projects with dbt.
- Expert knowledge of Python.
What Does Our Data Stack Look Like?
- ELT (Snowflake, Fivetran, dbt, Airflow, Kafka, Hightouch)
- BI (Tableau, Looker)
- Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)
Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.
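The regular data audits mentioned in the qualifications can be sketched as a small, framework-free check for duplicate keys and missing required fields; in a dbt project, generic tests such as unique and not_null encode the same checks declaratively (the field names below are illustrative):

```python
def audit(rows: list[dict], key: str, required: list[str]) -> dict:
    """Report duplicate key values and missing required fields."""
    seen, duplicates, missing = set(), [], []
    for row in rows:
        k = row.get(key)
        if k in seen:
            duplicates.append(k)  # uniqueness violation on the key column
        seen.add(k)
        for field in required:
            if row.get(field) in (None, ""):
                missing.append((k, field))  # not-null violation
    return {"duplicate_keys": duplicates, "missing_values": missing}

rows = [
    {"account_id": 1, "region": "EMEA"},
    {"account_id": 1, "region": "EMEA"},
    {"account_id": 2, "region": ""},
]
print(audit(rows, key="account_id", required=["region"]))
```

A pipeline would run a check like this after each load and fail the Airflow task (or raise an alert) when the report is non-empty.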
Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager. The Intelligent Heart Of Customer Experience Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working, enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. 
If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
Posted 2 days ago
2.0 years
0 Lacs
pune, maharashtra, india
On-site
We are looking for a Site Reliability Engineer (SRE) with a strong background in Google Cloud Platform (GCP) and Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.). The ideal candidate will be responsible for ensuring the reliability, performance, and scalability of our on-premises and cloud-based systems, along with a focus on reducing costs for Google Cloud.
What You’ll Do
- Work in a DevSecOps environment responsible for building and running large-scale, massively distributed, fault-tolerant systems.
- Work closely with development and operations teams to build highly available, cost-effective systems with extremely high uptime metrics.
- Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot.
- Create new tools and scripts designed for auto-remediation of incidents and for establishing end-to-end monitoring and alerting on all critical aspects.
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, and programming with cloud SDKs).
- Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management.
What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 2-5 years of experience in software engineering, systems administration, database administration, and networking.
- 1+ years of experience developing and/or administering software in public cloud.
- Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
Experience in languages such as Python, Bash, Java, Go JavaScript and/or node.js Demonstrable cross-functional knowledge with systems, storage, networking, security and databases System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible and/or containers (Docker, Kubernetes, etc.) Proficiency with continuous integration and continuous delivery tooling and practices Cloud Certification Strongly Preferred What Could Set You Apart You have experience designing, analyzing and troubleshooting large-scale distributed systems. You take a system problem-solving approach, coupled with strong communication skills and a sense of ownership and drive You have experience managing Infrastructure as code via tools such as Terraform or CloudFormation You are passionate for automation with a desire to eliminate toil whenever possible You’ve built software or maintained systems in a highly secure, regulated or compliant industry You thrive in and have experience and passion for working within a DevOps culture and as part of a team
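Auto-remediation scripting of the kind this role describes can be sketched in plain Python. This is a minimal illustration, not any company's actual tooling: `check_health` and `restart_service` are hypothetical hooks a real script would wire to monitoring and orchestration APIs.

```python
# Minimal auto-remediation sketch: poll a health check and trigger a
# remediation action after N consecutive failures. The hooks are
# hypothetical placeholders, not tools named in the posting.

def auto_remediate(check_health, restart_service, max_failures=3):
    """Run one remediation pass over a stream of health-check results."""
    failures = 0
    actions = []
    while True:
        healthy = check_health()
        if healthy is None:        # sentinel: no more samples to poll
            break
        if healthy:
            failures = 0           # reset the streak on any success
        else:
            failures += 1
            if failures >= max_failures:
                restart_service()  # the remediation action
                actions.append("restart")
                failures = 0
    return actions

# Simulated probe: two healthy samples, then three failures, then done.
samples = iter([True, True, False, False, False, None])
log = []
result = auto_remediate(lambda: next(samples), lambda: log.append("restarted"))
print(result)  # ['restart']
```

A production version would add backoff between restarts and alert when remediation itself keeps failing, so the script never masks a persistent outage.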
Posted 2 days ago
0 years
0 Lacs
pune, maharashtra, india
On-site
Key Responsibilities
Design and implement data pipelines on GCP using BigQuery and Airflow.
Implement ETL/ELT frameworks for large-scale campaign and clickstream data.
Optimize pipeline costs and query performance in BigQuery.
Automate workflows, monitoring, and alerting for critical jobs.

Required Skills
Strong proficiency in SQL (BigQuery preferred) and Python/PySpark.
Hands-on experience with GCP services: BigQuery, Cloud Storage.
Familiarity with Airflow/Composer, CI/CD tools such as Jenkins, and Git workflows.

About The Team
eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
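An ELT-style aggregation over clickstream data, as described in the responsibilities above, might look like this in plain Python. The event schema here is invented for illustration; in practice this aggregation would run as a BigQuery `GROUP BY` scheduled by Airflow.

```python
from collections import defaultdict

# Toy clickstream events; the schema (user, campaign, clicks) is invented.
events = [
    {"user": "u1", "campaign": "spring", "clicks": 3},
    {"user": "u2", "campaign": "spring", "clicks": 1},
    {"user": "u1", "campaign": "summer", "clicks": 2},
]

def clicks_per_campaign(rows):
    """Aggregate clicks by campaign, as a warehouse GROUP BY would."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["campaign"]] += row["clicks"]
    return dict(totals)

print(clicks_per_campaign(events))  # {'spring': 4, 'summer': 2}
```

In a real pipeline the loading step would land raw events in Cloud Storage first, letting the warehouse do the transform where it is cheapest.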
Posted 2 days ago
7.0 years
0 Lacs
pune, maharashtra, india
On-site
What You Will Do
Responsible for the design, development, and implementation of short- and long-term solutions to information technology needs through new and existing applications, systems architecture, network systems, and applications infrastructure. Often involved in modifying or adapting existing designs.
Reviews technology requirements and business processes; codes, tests, debugs, and implements software solutions.
Roles are often project based, delivering technology change within the business.
Develops and maintains technical capabilities and products to meet the business needs.
Provides engineering support in the conceptualization, development, implementation, and automation of technical capabilities and products.
Provides technical advice and consultation on complex, critical programming applications.
Achieves goals through the work of others. (Note that in rare circumstances, high-level functional leaders may function as individual contributors who coordinate work across a function and are accountable for the results of a function within a BU.)
Management responsibilities include performance appraisals, pay reviews, and training and development.

What Experience You Need
BS degree in a STEM major or equivalent discipline.
7+ years of management/development experience in software development.
3+ years of hands-on GCP/AWS and Java experience.
Mentor, lead, and be responsible for all offshore deliverables.
Review technology requirements and business processes; code, test, debug, and implement software solutions.
Design and implement new and existing applications.

What Could Set You Apart
UI (Angular) experience is preferred.
Active cloud certification.
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
pune, maharashtra, india
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, including installation, performance tuning, and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement scalable solutions while managing multiple customer groups.

What You’ll Do
Develop and operationalize large-scale enterprise data solutions with a focus on high availability, low latency, and scalability.
Work closely with development and operations teams to build highly available, cost-effective database systems with extremely high uptime metrics.
Execute the database architecture roadmaps and specific implementation plans, and translate them into operational excellence.
Performance diagnosis, performance tuning, scalability, and DR goals.
Database version upgrades; patch testing and deployment.
Proactive database capacity planning and usage of new database features.
Data security and protection.
Functionality, stress, and load testing.
Execute major database migrations with minimal system downtime.
Ensure database standards are followed and implemented.
Database health monitoring and technical documentation.
Proactively monitor production database systems, analyze findings, and formulate recommendations.
Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot.
Create new tools and scripts designed for auto-remediation of incidents and for establishing end-to-end monitoring and alerting on all critical aspects.
Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with the cloud CLI, and programming with the cloud SDK).
Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management.

What Experience You Need
Strong knowledge of database concepts and fundamentals.
At least 2-5 years' relevant experience as a Production Support DBA (PostgreSQL preferred).
Experience working with cross-functional teams such as Development, Operating Systems, Networking, and Security.
Knowledge of DB tools and utilities (e.g., Toad, pgAdmin, DBeaver, Export/Import).
Experience monitoring database infrastructure metrics (e.g., uptime, availability, system resources) to ensure functional and performance objectives.
Experience in scripting languages such as Linux shell, Python, or Go.
Ability to plan work to meet project deadlines, accommodate user demands, set priorities, organize information, and escalate issues appropriately.
Customer-service orientation, strong problem-solving skills, and the ability to understand new technologies quickly are essential.
Sound communication skills.
Bachelor's degree in Computer Science or a related technical field involving coding (e.g., Electronics, Physics, or Mathematics), or equivalent job experience required.

What Could Set You Apart
Strong skill set in PostgreSQL.
Experience with or exposure to public cloud (GCP or AWS preferred).
Knowledge of or hands-on experience with Google BigQuery.
Experience managing infrastructure as code via tools such as Terraform or CloudFormation.
A passion for automation, with a desire to eliminate toil whenever possible.
Experience designing, analyzing, and troubleshooting large-scale database systems.
A system problem-solving approach, coupled with strong professional communication skills and a sense of ownership and drive.
You thrive in and have experience and passion for working within a DevOps culture and as part of a team.
You’ve built software or maintained systems in a highly secure, regulated, or compliant industry.
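The index-driven query tuning this role calls for can be illustrated with Python's stdlib sqlite3. SQLite stands in for PostgreSQL purely for illustration (Postgres would use `EXPLAIN` instead), and the table and index names are invented.

```python
import sqlite3

# SQLite stands in for PostgreSQL purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.executemany("INSERT INTO orders (customer) VALUES (?)",
                 [("acme",), ("globex",), ("acme",)])

# Without an index, the planner must scan the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("acme",)
).fetchone()
print(plan[-1])  # a full SCAN of orders (exact wording varies by version)

# After adding an index, the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("acme",)
).fetchone()
print(plan[-1])  # a SEARCH using idx_orders_customer
```

The same before/after plan comparison is the day-to-day loop of production tuning: find the scan, add or fix the index, confirm the plan changed.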
Posted 2 days ago
1.0 - 2.0 years
0 Lacs
pune, maharashtra, india
On-site
Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team? Then come join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak!

What will you be doing?
Discover trends in data sets and develop algorithms to transform raw data for further analytics.
Create data pipelines to bring in data from various sources with different formats, transform it, and load it into the target database.
Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow.
Design and implement Data Lakes, Data Warehouses, and Data Marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc.
Create efficient SQL queries and understand query execution plans for tuning queries on engines like PostgreSQL.
Performance-tune OLAP/OLTP databases by creating indices, tables, and views.
Write Python scripts for orchestration of data pipelines.
Have thoughtful discussions with customers to understand their data engineering requirements, and break complex requirements into smaller tasks for execution.

What do we need from you?
Strong Python coding skills, with basic knowledge of algorithms/data structures and their application.
Strong understanding of data engineering concepts, including ETL, ELT, Data Lakes, Data Warehousing, and data pipelines.
Experience designing and implementing Data Lakes, Data Warehouses, and Data Marts that support terabyte-scale data.
A track record of implementing data pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable.
A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas.
Hands-on SQL programming experience, with knowledge of windowing functions, subqueries, and various types of joins.
Experience working with big data technologies like PySpark/Hadoop.
A good team player with an ability to communicate with clarity.
Show us your Git repo/blog!

Qualifications
1-2 years of experience working on data engineering projects for Data Engineer I.
2-5 years of experience working on data engineering projects for Data Engineer II.
1-5 years of hands-on Python programming experience.
A Bachelor's/Master's degree in Computer Science is good to have.
Courses or certifications in the area of data engineering will be given higher preference, as will candidates who have demonstrated a drive for learning and keeping up to date with technology through continued courses and self-learning.
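The windowing functions mentioned above can be tried out with stdlib sqlite3, which supports them from SQLite 3.25 onward. The `sales` table here is invented for illustration; the same query works on PostgreSQL unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20), ("west", 5)])

# Rank rows within each region by amount: a classic window-function query.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)
# [('east', 30, 1), ('east', 10, 2), ('west', 20, 1), ('west', 5, 2)]
```

Unlike a `GROUP BY`, the window function keeps every input row while adding the per-partition rank, which is what makes it useful for top-N-per-group problems.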
Posted 2 days ago
0 years
0 Lacs
pune, maharashtra, india
Remote
Hi, Greetings!!
We are hiring a Java Integration Developer. Below is the detailed JD; if interested, please share the requested details ASAP.

Java Integration Developer
7+ years' experience
Location: Hyderabad or Pune (3 days mandatory work from office; no work from home or remote available)

Primary Skills:
Java 6/7/8/11/17/21 basics
JUnit
System-to-system integration
Spring Boot / Spring Data
Spring Cloud / Spring Security
Microservice design patterns
Kafka/MQ (RabbitMQ, ActiveMQ, etc.)
Cloud (AWS, Azure, GCP, etc.)

If interested, please email a.vagga@zensar.com and share the below details along with your CV:
Total experience:
Relevant experience in Java:
Experience in Spring Boot:
Experience in Microservices:
Experience in API Integration:
Current CTC:
Expected CTC:
Any offer holding in hand (Yes/No):
Current location:
Preferred location:
Official notice period (if serving, mention LWD):
Ready to work from office 3 days (Yes/No):
Current company (if working as contract, mention payroll company):
Available for interviews on weekdays and weekends:
Posted 2 days ago
6.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Full Stack Python Developer | Software Developer – GenAI Productization
Experience: 4-6 years

Role Summary
We are seeking an experienced Python Developer to join our GenAI team, focused on transforming proof-of-concepts (POCs) into production-grade systems. You will build and scale backend services capable of handling high concurrency, asynchronous processing, queueing, and real-time streaming. The ideal candidate has a strong foundation in backend engineering, infrastructure, API design, security, and performance optimization, especially in GCP cloud environments.

Key Responsibilities
Convert GenAI POCs into robust, production-ready services.
Develop scalable, asynchronous microservices optimized for high throughput and low latency.
Handle concurrency, rate limiting, throttling, and queueing strategies for high-load systems.
Collaborate with AI/ML teams on agent orchestration and model-serving pipelines.
Implement telemetry (logs, metrics, tracing) to ensure debuggability and performance insights.
Manage the full API lifecycle, including security (OAuth, API keys), testing, and documentation.
Publish and maintain client SDKs, Postman collections, and internal developer portals.
Define and enforce engineering standards: CI/CD automation, testing strategies, environment promotion, and release workflows.
Integrate with message brokers like Kafka and Google Pub/Sub for event-driven architectures.
Prepare HLDs/LLDs and UML/sequence diagrams, and apply design patterns for resilient system design.
Design and implement reliable, versioned APIs with backward compatibility.

Required Skills
Expert-level proficiency in Python, especially using FastAPI, and a strong understanding of asynchronous programming and multiprocessing.
Deep understanding of microservices, event-driven, and async system design.
Proficient in WebSockets, gRPC, REST, and OpenAPI/Swagger-based API contract design.
Proficient in OOP, dependency injection, and Pydantic-based validation in FastAPI for building modular, maintainable APIs.
Proficient in working with databases using ORMs like SQLAlchemy, along with strong command over relational database design, queries, and performance optimization.
Hands-on experience with cloud-native development on GCP, AWS, or Azure, including API gateways, autoscaling, and serverless architecture.
Strong grasp of Docker, Git-based version control, and container orchestration workflows.
Deep understanding of network, authentication, and infosec aspects of API and app deployments.
Familiarity with CI/CD pipelines, infrastructure as code, and secure deployment practices.
Experienced in DevOps practices, including configuring an NGINX reverse proxy to enable secure and efficient communication between frontend and backend services deployed on GKE.

Preferred
Experience with Kafka, Google Pub/Sub, or equivalent message brokers.
Working knowledge of React.js, HTML, and CSS for integration and debugging (not a core responsibility).
Prior experience with GenAI-based systems, especially real-time chatbots or voicebots.
Exposure to model orchestration frameworks, LLM serving, or Vertex AI.
Knowledge of zero-downtime deployment and rollback strategies.
Exposure to LLM orchestration (LangChain, LangGraph).
Experience with RAG architectures, vector DBs, and MLOps frameworks (GCP Vertex Pipelines).
Understanding of Model Context Protocol (MCP) and agent-to-agent toolkits for advanced agent workflows.
Strong UX awareness to influence AI-driven product design and user journeys.
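The concurrency and throttling strategies this role describes often reduce to capping in-flight work. A minimal stdlib asyncio sketch, with a sleep standing in for a real model or API call (a FastAPI service would apply the same semaphore pattern inside its handlers):

```python
import asyncio

async def handle(request_id, limiter):
    """Process one request; the semaphore caps concurrent in-flight work."""
    async with limiter:
        await asyncio.sleep(0)  # stand-in for a real model/API call
        return f"done-{request_id}"

async def main(n_requests=5, max_concurrency=2):
    limiter = asyncio.Semaphore(max_concurrency)
    # gather preserves submission order regardless of completion order
    return await asyncio.gather(*(handle(i, limiter) for i in range(n_requests)))

results = asyncio.run(main())
print(results)  # ['done-0', 'done-1', 'done-2', 'done-3', 'done-4']
```

Swapping the semaphore for an `asyncio.Queue` with a fixed worker pool gives the queueing variant; both bound memory and protect downstream services under load.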
Posted 2 days ago
3.0 years
16 - 20 Lacs
gurugram, haryana, india
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers.
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.

What you'll do:
Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
3-5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
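The RAG retrieval step referenced above is, at its core, nearest-neighbor search over embeddings. A toy sketch with hand-made three-dimensional vectors; a real system would use a learned embedding model and a vector store such as Pinecone or FAISS, and the document names here are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embedded" documents; vectors are hand-made, not from a real model.
docs = {
    "invoice policy": [0.9, 0.1, 0.0],
    "vendor onboarding": [0.1, 0.9, 0.2],
    "travel expenses": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['invoice policy']
```

In a full RAG pipeline the retrieved texts are then stuffed into the LLM prompt as grounding context; vector databases exist to make this top-k search fast at millions of documents.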
Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate — without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.

Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
A collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
5.0 - 8.0 years
5 - 9 Lacs
hyderabad
Work from Office
Test Automation - Kotlin
Automation framework: Selenium with Kotlin, Polymorphic Domain-Specific Language (PDSL) framework, Allure reports, Docker
Technologies used:
Language: Kotlin
Build tool: Gradle
CI tools: GitHub (code repository), Harness (CI/CD)
GCP services: Artifact Registry (for storing Docker images), Cloud Run (for test execution), Cloud Storage (for storing test results)
Mandatory Skills: Test Automation.
Experience: 5-8 Years.
Posted 2 days ago
175.0 years
0 Lacs
gurgaon, haryana, india
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

We’re looking for a Site Reliability/Application Support Engineer (SRE/AS) responsible for web/servicing application performance, availability, and reliability. The candidate will provide consultation and strategic recommendations by quickly assessing and remediating complex platform availability issues. Site Reliability Engineering (SRE) is a continuous engineering discipline that effectively combines software development and systems engineering to build and run scalable, distributed, fault-tolerant systems. This role will ensure that American Express internal and external services have reliability and uptime appropriate to users' needs, while driving continuous improvement and keeping an ever-watchful, automated eye on capacity and performance. This role will drive the SRE/AS mindset, which strives to use software engineering to build and run better production systems. You will write software to optimize day-to-day work through better automation, monitoring, alerting, testing, and deployment. You’ll be expected to work with several technology partners to identify areas of opportunity within the availability platform, build solutions to automate monitoring for the modernization platform, and drive efficiencies through constant innovation. You will be responsible for implementing tracing, monitoring, and tooling solutions to maximize the performance and availability of our web/servicing applications.

Qualifications
BS or MS degree in computer science, computer engineering, or another technical discipline, or equivalent.
6-10 years of work experience in DevOps/SRE.
Experience in Genesys Engage (PureEngage), Genesys Cloud (PureCloud), or Genesys PureConnect.
Good understanding of VoIP, SIP protocols, and telephony infrastructure.
Ability to analyze Genesys logs (SIP, T-Server logs, Interaction, WDE) to identify issues.
Good understanding of call flows, IVR scripting (Genesys Composer, SCXML), and routing logic.
Configuring and troubleshooting Genesys reports (GCXI, Infomart, Pulse).
Analytical knowledge of and exposure to root cause identification using analyzer tools like Kazimir, MyZamir, and SpeechMiner.
Experience with Oracle, SQL Server, or PostgreSQL for configuration and troubleshooting.
Strong understanding of TCP/IP, SIP, and RTP.
Knowledge of public cloud technologies (GCP, AWS, Azure, etc.) would be an advantage.
Hands-on experience with enterprise toolsets such as Grafana, Dynatrace, AppDynamics, BMC, Prometheus, etc.
Knowledge of Unix shell scripting, Perl, or Python programming is preferred.
Working experience with network load balancers, Global Traffic Managers (GTMs), and Local Traffic Managers (LTMs).
Hands-on experience configuring Splunk, Grafana dashboards, etc.
Good understanding of Linux OS internals, performance tools, core commands, security, etc.
Exposure to enterprise platform migration from dedicated to cloud environments is preferred.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 days ago
7.0 - 12.0 years
15 - 20 Lacs
bengaluru
Hybrid
Hello, Greetings for the Day!!!
Mandatory Skills: Java, any cloud.
Posted 2 days ago
5.0 - 8.0 years
6 - 10 Lacs
pune, chennai, bengaluru
Work from Office
Job Title: AS/400 (iSeries/IBM i) System Administrator
Proposed Grade Band: B2
Demand Role: System Administrator L2
Location: Pune, Bangalore, Chennai, Coimbatore - India
Rate: up to 150K
Educational Qualifications: Bachelor's or equivalent technical education
Language: English
Certifications: IBM Certified System Administrator / Technical Expert, Solution Expert, or Specialist
Experience Level: 4-6 years of experience in AS/400 system administration and operations support.

AS/400 (iSeries/IBM i) Sr. System Administrator (L2)
The AS/400 System Administrator is responsible for the delivery of AS/400 (IBM i) IT infrastructure elements in both on-premises and cloud-hosted environments. Migration and implementation expertise for on-premises environments would be a key advantage and differentiator.

Responsibilities include:
Understand the client's overall IT architecture and AS/400 environment.
Working in Account Delivery, be part of the technical transition of IBM i specifics from the incumbent SME.
Solve complex technical problems, communicate technical concepts to clients and peers with mixed levels of technical ability, and make sound technical and architectural decisions independently.
Where required, work with the Team Lead on the design and implementation of migrations, whether to newer on-premises hardware or to the public cloud.
Besides supporting BAU/day-to-day operations, be responsible and accountable for handling and resolving P1/P2 incidents through closure and RCA thereafter.
Provide technical support for business system architecture planning and design.
Own and manage system security at the IBM i level.
Debug minor OS, DB2, and other ISV product problems and apply fixes if necessary; work with IBM/ISVs for technical support if necessary.
IBM and other ISV software installation and maintenance.
Manage and secure network connectivity and communications related to the IBM i platform.
Establish and adhere to IT industry best practices and standards for clients' Power Systems for IBM i (AS/400).
Consult with end users, leadership, vendors, and technicians to assess IBM i system requirements.
Work with a team in carrying out core activities such as system builds and implementations, migrations, OS upgrades, DR exercises, and handling P1 incidents.
The role requires out-of-hours support for the implementation of hardware/software changes and for other projects requiring IBM i infrastructure expertise, as well as being part of an on-call rota. Should be ready and available to work in various time zones depending on client needs.

Skills and Experience (Essential)
Engineering degree with 4-6 years of experience in AS/400 system administration support in a high-demand, service-oriented environment.
Sound hands-on experience working in AS/400 environments as a System Administrator, with experience in LPAR administration, system capacity and performance tuning, and OS and other third-party software installation, upgrade, and maintenance.
Experience working in international business environments with complex AS/400 environments, either on-site or in the cloud.
Experience in architecting hardware system and OS upgrades is key.
Hands-on experience with the Icinga monitoring tool, PowerHA, and BRMS software; familiarity with third-party monitoring software is an added advantage.
Knowledge of performance tuning of the AS/400 environment.
Experience in installation, configuration, and maintenance of AS/400 hardware, software, network-related devices, AS/400 client-access software, journaling, HMC management, etc.
Knowledge and ability to perform fail-over testing from production to DR systems in an HA software-replication environment.
Advanced knowledge of setting up, managing, and administering HA replication software such as MIMIX, iTera, etc.
Sound experience in backup/recovery using BRMS.
Average CL scripting skills are a requirement.
Expert, hands-on knowledge of and experience with various IBM Power System models.
Ability to independently plan and lead an AS/400 migration and implementation project team is critical.
Public cloud (especially Azure/AWS/GCP/IBM) knowledge would be an added advantage.
Good and effective written and oral communication.
Good understanding of ITIL modules such as Incident, Asset, Problem, and Change Management.
Mandatory Skills: AS400 Admin.
Experience: 5-8 Years.
Posted 2 days ago