5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Software Engineer in the Developer Experience and Productivity Engineering team at Coupa, you will play a crucial role in designing, implementing, and enhancing our sophisticated AI orchestration platform.

Key Responsibilities:
- Architect AI and MCP tooling with a focus on scalability and maintainability
- Develop integration mechanisms for seamless connectivity between AI platforms and MCP systems
- Build secure connectors to internal systems and data sources
- Collaborate with product managers to prioritize and implement features that deliver significant business value
- Mentor junior engineers and contribute to engineering best practices
- Build a scalable, domain-based hierarchical structure for our AI platforms
- Create specialized tools tailored to Coupa's unique operational practices
- Implement secure knowledge integration with AWS RAG and Knowledge Bases
- Design systems that expand capabilities while maintaining manageability

What You Will Get:
- Work at the forefront of AI integration and orchestration, tackling complex technical challenges with direct business impact
- Collaborate with a talented team passionate about AI innovation, helping transform how businesses leverage AI for operational efficiency
- Contribute to an architecture that scales intelligently as capabilities grow
- Work with the latest LLM technologies and shape their application in enterprise environments

Required Qualifications:
- At least 5 years of professional software engineering experience
- Proficiency in Python and RESTful API development
- Experience building and deploying cloud-native applications, preferably on AWS
- Solid understanding of AI/ML concepts, software design patterns, system architecture, and performance optimization
- Experience integrating multiple complex systems and APIs
- Strong problem-solving skills, attention to detail, and excellent communication abilities to explain complex technical concepts clearly

Preferred Qualifications:
- Experience with AI orchestration platforms or building tools for LLMs
- Knowledge of vector databases, embeddings, and RAG systems
- Familiarity with monitoring tools like New Relic, observability patterns, and SRE practices
- Experience with DevOps tools such as Jira, Confluence, GitHub, or similar, and their APIs
- Understanding of security best practices for AI systems and data access
- Previous work with domain-driven design and microservices architecture
- Contributions to open-source projects or developer tools

Coupa is committed to providing equal employment opportunities to all qualified candidates and employees, fostering a welcoming and inclusive work environment. Decisions related to hiring, compensation, training, or performance evaluation are made fairly, in compliance with relevant laws and regulations. Please note that inquiries or resumes from recruiters will not be accepted.

By applying to this position, you acknowledge that Coupa collects your application, including personal data, for managing ongoing recruitment and placement activities, as well as for employment purposes if your application is successful. You can find more information about how your application is processed, the purposes of processing, and data retention in Coupa's Privacy Policy.
Posted 2 days ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Observability & SRE Engineer - Azure & Splunk (3 Months)

Role Overview:
We are looking for a highly skilled Observability and Site Reliability Engineer (SRE) with strong experience in Splunk integration with Azure, cloud-native monitoring, and chaos engineering practices. The ideal candidate will play a key role in improving system reliability, monitoring capabilities, and resilience across our Azure cloud infrastructure.

Key Responsibilities:
- Design, implement, and manage observability solutions using Splunk integrated with Azure Monitor, Log Analytics, and Application Insights
- Develop and maintain monitoring, alerting, and dashboarding solutions to ensure system health and performance
- Implement Azure Chaos Engineering tools and scenarios to proactively test the resilience of cloud applications
- Collaborate with application and infrastructure teams to identify SLOs/SLIs and define reliability objectives
- Automate incident detection and response processes using Splunk alerts, Azure Automation, and scripting
- Conduct root cause analysis (RCA) and post-incident reviews to drive continuous improvement
- Drive the adoption of SRE principles and practices across engineering teams

Location: Delhi / NCR, Bangalore, Mumbai, Pune
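The SLO/SLI work described in this role boils down to simple error-budget arithmetic. As a minimal sketch (the metric values and the 99.9% target are illustrative assumptions, not from the posting):

```python
def availability_sli(good_events: int, total_events: int) -> float:
    """Fraction of requests that met the success criterion (the SLI)."""
    return good_events / total_events if total_events else 1.0

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Share of the error budget left: 1.0 means untouched, below 0 means blown."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return 1.0 - actual_failure / allowed_failure if allowed_failure else 0.0

# Hypothetical numbers: 999,100 good requests out of 1,000,000 against a 99.9% SLO.
sli = availability_sli(999_100, 1_000_000)
print(round(sli, 4))                                  # 0.9991
print(round(error_budget_remaining(sli, 0.999), 2))   # 0.1
```

A 0.09% failure rate against a 0.1% allowance leaves only 10% of the monthly budget, which is the kind of signal that would drive the reliability-objective conversations mentioned above.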
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Tech Lead specializing in Observability Tools at TCS, you will bring over 10 years of experience in designing and implementing observability solutions.

Core Responsibilities:
- Select, configure, and deploy tools and platforms for collecting, processing, and analyzing telemetry data such as logs, metrics, and traces
- Create monitoring and alerting systems
- Instrument applications and infrastructure
- Analyze system performance and define service level objectives
- Improve incident response processes
- Work closely with development, operations, and SRE teams to integrate observability practices throughout the software development lifecycle
- Educate and mentor teams on observability best practices, and stay up to date with the latest observability trends and technologies

Required Skills:
- Strong understanding of observability principles
- Proficiency with observability tools and platforms
- Programming and scripting skills
- Experience with cloud platforms
- Understanding of distributed systems
- Troubleshooting and problem-solving skills
- Communication and collaboration skills
- Knowledge of DevOps and SRE practices
- Data analysis and visualization skills
- Experience with containerization and orchestration technologies

Overall, as a Tech Lead for Observability Tools at TCS, you will play a crucial role in enhancing system performance, reliability, and incident response by leveraging observability practices and tools effectively.
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Java Backend Engineer at VP level in Pune, India, you will lead a team of engineers in designing and implementing high-performance applications. Your role will involve working within an agile delivery team, utilizing innovative approaches to software development and staying updated with the latest technologies. You will foster a collaborative environment, contributing to all stages of software delivery from analysis to production support.

Benefits:
- Best-in-class leave policy
- Gender-neutral parental leave
- Reimbursement under the childcare assistance benefit
- Flexible working arrangements
- Sponsorship for industry-relevant certifications
- Employee Assistance Program
- Comprehensive hospitalization insurance
- Accident and term life insurance
- Health screening for ages 35 and above

Key Responsibilities:
- Develop technical designs and implement software solutions
- Provide technical vision and direction to ensure team alignment with bank strategy
- Mentor junior developers
- Conduct code reviews
- Troubleshoot and resolve technical issues
- Collaborate with stakeholders to develop solutions

Skills and Experience:
- Bachelor's degree in Computer Science Engineering or related fields
- 15+ years of Java/J2EE development experience
- Design and development of large-scale monolith banking applications
- Proficiency in Java/J2EE technologies, Spring, Spring Boot microservices, web services, and database technologies
- Experience with open-source technologies, application servers, and SRE practices
- Agile software development experience
- Performance testing and CI/CD pipeline implementation skills
- Knowledge of BDD/TDD methodologies and tools
- Experience with development and monitoring tools
- Familiarity with Google Cloud Platform
- Excellent communication and problem-solving skills

Qualification:
- Bachelor's degree in science, computers, or information technology

Support:
- Training and development
- Coaching and support from experts
- Culture of continuous learning
- Flexible benefits

Join us at Deutsche Bank Group, where we empower each other to excel together every day. We promote a positive, fair, and inclusive work environment, celebrating the successes of our people. Visit our website for more information: https://www.db.com/company/company.htm
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Machine Learning and Python Senior Engineer at JPMorgan Chase within the Asset and Wealth Management Team, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives, and you will be involved in the development and deployment of AI/ML solutions to drive business value.

Your role includes designing, developing, and deploying state-of-the-art AI/ML/LLM/GenAI solutions that align with business objectives. You will conduct thorough evaluations of GenAI models, iterate on model architectures, and implement improvements to enhance overall performance across applications. Additionally, you will develop appropriate testing frameworks for models and guide India team developers through the Machine Learning Development Lifecycle (MDLC). Collaboration is key: you will work directly with the US Core Client Service AI team to drive AI implementations from India, and with cross-functional teams to understand requirements and translate them into technical solutions. It is essential to stay updated with the latest trends and advancements in data science, machine learning, and related fields to continuously enhance your skills and knowledge.

Required qualifications, capabilities, and skills:
- Formal training or certification in Computer Science, Engineering, or a related field, with at least 3 years of applied experience
- Strong programming skills, particularly in Python, including model development, experimentation, and integration with the Azure OpenAI API
- Hands-on experience in applied AI/ML engineering, leading developers, and working in a fast-paced environment
- Strong collaboration and communication skills to work effectively with geographically distributed cross-functional teams, communicate complex concepts, and contribute to interdisciplinary projects
- Problem-solving and analytical skills with keen attention to detail
- Experience with cloud platforms for deploying and scaling AI/ML models is advantageous

Preferred qualifications:
- Experience in backend development, knowledge of SRE practices, and exposure to cloud automation technologies
- Familiarity with large language models (LLMs), databases, web frameworks, APIs, and microservices
- Ability to assess and choose suitable LLM tools and models for various tasks, curate custom datasets, fine-tune LLM models, and design advanced LLM prompts
- Ability to execute experiments that push the capability limits of LLM models and enhance their dependability
Posted 1 week ago
14.0 - 24.0 years
50 - 60 Lacs
Noida, Hyderabad, Pune
Work from Office
Expectations:
- Prior experience serving as an architect in Practice, COE, and HBU settings, creating service offerings, solution accelerators, and unique selling propositions
- Play a critical role in driving automation, continuous integration/continuous delivery (CI/CD), and monitoring capabilities to enhance development and operations processes
- Lead and execute the design, definition, and prototyping of an end-to-end unified observability system leveraging New Relic, Splunk, and the Grafana stack
- Define build, implementation, and deployment strategies for DevOps, Observability, and Site Reliability Engineering
- Market technology and domain solutions / service offerings to internal and external stakeholders
- Manage business relationships with technology partners and the start-up ecosystem, and demonstrate an edge over the competition
- Passion for technology and customer success, with excellent communication and articulation skills
- Prior experience presenting capabilities and solutions to end customers
- Build initial prototypes of the observability solution and lead demo sessions with customer teams

Behavioral Competencies:
- Excellent communication, interpersonal, and presentation skills
- People management
- Conflict resolution
- Solutioning
- Customer service
- Accountability
- Judgement and decision making
- Ability to build and maintain relationships with stakeholders

Technical Skills:
- At least 4 years of pre-sales experience working with RFIs/RFPs, developing and presenting technical designs and solutions to internal and external stakeholders
- Extensive experience in assessing SRE, DevOps, and Observability maturity, with the ability to define a maturity improvement roadmap
- Extensive experience in defining and implementing SRE, DevOps, and Observability strategies for 3 or more large-scale projects
- Experience with cloud platforms such as AWS, Azure, or GCP
- Deep expertise in time-series database configuration and implementation on AWS cloud
- Experience as an architect on large-scale observability projects: designing, implementing, and deploying observability for containerized (Azure AKS or AWS EKS) applications using New Relic, Splunk, and the Grafana stack, or open-source Grafana and Prometheus products/tools
- Deep expertise in designing and implementing end-to-end distributed tracing using DaemonSets/agents and telemetry-gathering patterns
- 3+ years in monitoring and observability automation using New Relic, Splunk, and the Grafana stack, including Prometheus-based alerting
- Deep expertise in observability tools such as Splunk, New Relic, AWS CloudWatch, AWS OpenSearch, ELK, etc.
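Prometheus-based alerting of the kind this role calls for commonly uses a multi-window burn-rate check: page only when both a short and a long window are burning the error budget fast. A minimal Python sketch of that evaluation logic (the 14.4x threshold, window error ratios, and 99.9% SLO are illustrative assumptions, not from the posting):

```python
def burn_rate(error_ratio: float, slo_target: float) -> float:
    """How fast the error budget is consumed relative to the SLO allowance."""
    allowed = 1.0 - slo_target
    return error_ratio / allowed if allowed else float("inf")

def should_page(short_window_errors: float, long_window_errors: float,
                slo_target: float = 0.999, threshold: float = 14.4) -> bool:
    """Fire only when BOTH windows exceed the burn-rate threshold,
    which filters out brief spikes (the multi-window, multi-burn-rate pattern)."""
    return (burn_rate(short_window_errors, slo_target) > threshold and
            burn_rate(long_window_errors, slo_target) > threshold)

# Hypothetical readings: 2% errors over 5m, 1.6% over 1h, against a 99.9% SLO.
print(should_page(0.02, 0.016))  # True: both windows burn far above 14.4x
print(should_page(0.02, 0.001))  # False: the long window is healthy
```

In practice this condition would be expressed as a Prometheus alerting rule over recorded error-ratio series rather than in application code; the sketch only shows the decision the rule encodes.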
Posted 1 month ago