5.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Work from Office
EMS and Observability Consultant. Overview: We are seeking a skilled IT Operations Consultant specializing in Monitoring and Observability to design, implement, and optimize monitoring solutions for our customers. The ideal candidate will have a minimum of 7 years of relevant experience, with a strong background in monitoring, observability, and IT service management, and will be responsible for ensuring system reliability, performance, and availability by creating robust observability architectures and leveraging modern monitoring tools. Primary Responsibilities:
- Design end-to-end monitoring and observability solutions that provide comprehensive visibility into infrastructure, applications, and networks.
- Implement monitoring tools and frameworks (e.g., Prometheus, Grafana, OpsRamp, Dynatrace, New Relic) to track key performance indicators and system health metrics.
- Integrate monitoring and observability solutions with IT service management tools.
- Develop and deploy dashboards, alerts, and reports to proactively identify and address system performance issues.
- Architect scalable observability solutions to support hybrid and multi-cloud environments.
- Collaborate with infrastructure, development, and DevOps teams to ensure seamless integration of monitoring systems into CI/CD pipelines.
- Continuously optimize monitoring configurations and thresholds to minimize noise and improve incident detection accuracy.
- Automate alerting, remediation, and reporting processes to enhance operational efficiency.
- Utilize AIOps and machine learning capabilities for intelligent incident management and predictive analytics.
- Work closely with business stakeholders to define monitoring requirements and success metrics.
- Document monitoring architectures, configurations, and operational procedures.
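The threshold-tuning responsibility above (minimizing alert noise while preserving detection accuracy) can be sketched with a simple adaptive baseline instead of a static threshold; the class name, window size, and sample values below are illustrative and not tied to any of the tools listed:

```python
from collections import deque
from statistics import mean, stdev

class AdaptiveAlert:
    """Fire only when a metric deviates sharply from its recent
    baseline, rather than crossing a fixed static threshold."""

    def __init__(self, window=30, z_threshold=3.0):
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        # Require a little history before judging deviation.
        if len(self.samples) >= 5:
            mu, sigma = mean(self.samples), stdev(self.samples)
            z = (value - mu) / sigma if sigma > 0 else 0.0
            fired = abs(z) > self.z_threshold
        else:
            fired = False
        self.samples.append(value)
        return fired

detector = AdaptiveAlert()
latencies = [100, 102, 98, 101, 99, 100, 103, 97, 500]  # ms, hypothetical
alerts = [detector.observe(v) for v in latencies]
# Only the final 500 ms spike fires; normal jitter stays quiet.
```

Production systems express the same idea through recording rules and alert expressions in the monitoring tool itself; the point here is only the noise-vs-sensitivity trade-off.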
Posted 2 weeks ago
6.0 - 11.0 years
14 - 18 Lacs
Noida
Work from Office
Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology. About the role: As Lead Cassandra database administrator (DBA), you will be responsible for the performance, integrity, and security of a database. You'll be involved in the planning and development of the database, as well as in troubleshooting any issues on behalf of the users. Requirements:
- 6+ years of experience configuring, installing, and managing Cassandra clusters.
- Manage node addition and deletion in Cassandra clusters.
- Monitor Cassandra clusters and implement performance monitoring.
- Configure multi-DC Cassandra clusters.
- Optimize Cassandra performance, including query optimization and other related optimization tools and techniques.
- Implement and maintain Cassandra security and integrity controls, including backup and disaster recovery strategies.
- Upgrade Cassandra clusters.
- Utilize cqlsh, Grafana, and Prometheus for monitoring and administration.
- Design and create database objects such as data table structures (Cassandra column families/tables).
- Perform data migration, backup, restore, and recovery for Cassandra.
- Resolve performance issues, including blocking and deadlocks (as applicable in Cassandra's distributed context).
- Implement and maintain security and integrity controls, including backup and disaster recovery strategies, for document management systems and MySQL databases where these fall within the administrator's broader scope.
- Translate business requirements into technical design specifications and prepare high-level documentation for database design and database objects.
Extensive work experience in query optimization, script optimization, and other related optimization tools and techniques. Strong understanding of Cassandra database architecture. Experience with backup and recovery procedures specific to Cassandra. Knowledge of database security best practices. Experience with data migration and transformation. Ability to work under pressure and meet deadlines. Why join us: A collaborative, output-driven program that brings cohesiveness across businesses through technology. Improve the average revenue per user by increasing cross-sell opportunities. Solid 360-degree feedback from your peer teams on your support of their goals. Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
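The multi-DC and consistency work described above rests on Cassandra's tunable-consistency arithmetic: with replication factor RF, a read is guaranteed to see the latest write whenever the read and write replica counts satisfy R + W > RF, and QUORUM means a majority of replicas. A minimal sketch of that math (function names are illustrative):

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must acknowledge for a QUORUM operation:
    a strict majority of the replica set."""
    return replication_factor // 2 + 1

def strongly_consistent(rf: int, write_replicas: int, read_replicas: int) -> bool:
    """Reads observe the latest write when the read set and the
    write set are guaranteed to overlap: R + W > RF."""
    return read_replicas + write_replicas > rf

rf = 3
w = quorum(rf)  # 2 of 3 replicas acknowledge each write
r = quorum(rf)  # 2 of 3 replicas answer each read
# QUORUM writes + QUORUM reads overlap on at least one replica,
# so this combination is strongly consistent; ONE + ONE is not.
assert strongly_consistent(rf, w, r)
assert not strongly_consistent(rf, 1, 1)
```

This is why QUORUM/QUORUM is the usual starting point when DBAs tune consistency levels per keyspace and data center.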
Posted 2 weeks ago
8.0 - 12.0 years
27 - 42 Lacs
Chennai
Work from Office
Job summary: The Sr. Business Analyst will play a pivotal role in analyzing and optimizing business processes through the application of technical skills in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk. This hybrid role requires a seasoned professional with 8 to 12 years of experience to drive impactful solutions in a day-shift setting without the need for travel. Responsibilities: Analyze business processes and identify areas for improvement using advanced technical skills. Collaborate with cross-functional teams to gather and document business requirements. Develop and implement monitoring solutions using Grafana and ELK to ensure system reliability. Utilize Dynatrace AppMon and Splunk to troubleshoot and resolve performance issues. Provide insights and recommendations based on data analysis to enhance business operations. Lead the design and execution of test plans to validate system changes. Ensure seamless integration of new solutions with existing systems and processes. Oversee the deployment of updates and enhancements in a hybrid work environment. Maintain comprehensive documentation of processes, configurations, and changes. Conduct training sessions to educate stakeholders on new tools and processes. Monitor system performance and proactively address potential issues. Collaborate with IT teams to ensure alignment with business objectives. Drive continuous improvement initiatives to optimize system performance and user experience. Qualifications: Possess a strong background in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk. Demonstrate excellent analytical and problem-solving skills. Exhibit proficiency in documenting business processes and technical specifications. Have experience in leading cross-functional teams and projects. Show capability in developing and executing test plans. Display strong communication skills to interact with stakeholders. Be adept at working in a hybrid work model and managing day-shift responsibilities.
Certifications required: Certified Business Analysis Professional (CBAP); Dynatrace Associate Certification.
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Hyderabad
Work from Office
The Platform Engineer is responsible for designing, implementing, and maintaining scalable, secure, and highly available Linux-based systems and DevOps pipelines. This role requires close collaboration with cross-functional teams to align infrastructure capabilities with business goals. What you'll do: Engineer and manage scalable Linux-based infrastructure within a DevOps framework, including core services such as web servers (Nginx/Apache), FTP, DNS, and SSH. Automate infrastructure provisioning and configuration using tools like Ansible, Terraform, or Puppet. Build and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions. Monitor system performance and availability metrics using tools like Nagios, SolarWinds, Prometheus, and Grafana. Actively participate in incident management processes, quickly identifying and resolving issues. Conduct root cause analysis to prevent future incidents. Collaborate with development teams to ensure seamless integration and deployment of applications. Apply security best practices to harden systems and manage vulnerabilities. Create and maintain documentation for systems, processes, and procedures to ensure knowledge sharing across teams. Participate in on-call rotations. Stay updated on industry trends and emerging technologies. What you'll bring: 6+ years of experience leveraging automation to manage Linux systems and infrastructure, specifically RedHat. In-depth knowledge of cloud platforms such as AWS and Azure. Proficiency with infrastructure as code (IaC) tools such as Terraform and CloudFormation. Strong technical experience implementing, managing, and supporting Linux systems infrastructure. Proficiency in one or more programming languages (Python, PowerShell, etc.). Ability to deliver software which meets consistent standards of quality, security, and operability.
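The Nagios-style monitoring mentioned above follows a widely used plugin convention: the check process exits with status 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN, and prints a one-line summary. A minimal disk-usage check in that style (the path and thresholds are illustrative):

```python
import shutil

# Nagios plugin exit-code convention.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_disk(path="/", warn_pct=80.0, crit_pct=90.0):
    """Return (status, message) for disk usage at `path`,
    following the Nagios plugin output convention."""
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * (usage.total - usage.free) / usage.total
    if used_pct >= crit_pct:
        return CRITICAL, f"DISK CRITICAL - {used_pct:.1f}% used"
    if used_pct >= warn_pct:
        return WARNING, f"DISK WARNING - {used_pct:.1f}% used"
    return OK, f"DISK OK - {used_pct:.1f}% used"

status, message = check_disk("/")
print(message)
```

Wrapped in a small `main()` that calls `sys.exit(status)`, the same function would plug into Nagios, Icinga, or any scheduler that understands the convention.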
Able to work flexible hours as required by business priorities; available on a 24x7x365 basis when needed for production-impacting incidents or key customer events. Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 2 weeks ago
2.0 - 7.0 years
6 - 15 Lacs
Bengaluru
Work from Office
About Zeta: Zeta is a next-gen banking tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform, Zeta Tachyon, is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 20M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1700 employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, Linkedin, Twitter. About the Role: As a Technical Support Engineer I/II for Banking Technology, you are expected to have at least 2.6 years of overall relevant experience, with at least 1 year of relevant work experience in enterprise products in a B2B banking technology company. Zeta Tachyon is an enterprise SaaS platform comprising 100+ externally consumable APIs, 10+ customer-facing interfaces, and multiple data extracts, with more functionality getting added every month in a fast-paced environment. As a Technical Support Engineer for Banking Technology, you will be responsible for providing technical support and expertise to address issues and ensure the smooth operation of banking systems and technologies. You will play a crucial role in providing front-line technical support to customers and internal stakeholders. Your primary responsibility will be to handle and resolve basic technical issues related to banking systems, applications, and infrastructure.
Responsibilities:
Knowledge Sharing: Contributing to the knowledge base and sharing insights with the team is an indicator of success.
Customer Support: Provide first-level technical support to customers. Respond to inquiries, troubleshoot issues, and resolve problems in a timely and professional manner. Ensure a high level of customer satisfaction through effective communication and problem resolution.
Incident Management: Monitor and triage incoming support requests via various channels (phone, email, ticketing system) and prioritize them based on urgency and impact. Log and track all customer interactions, activities, and resolutions accurately in the ticketing system.
Troubleshooting: Diagnose and resolve basic technical issues related to banking systems, applications, and infrastructure. Utilize knowledge bases, troubleshooting guides, and documented procedures to identify solutions, or escalate to higher-level support teams when necessary.
Documentation and Knowledge Sharing: Contribute to the creation and maintenance of knowledge base articles, FAQs, and troubleshooting guides. Document common issues, their resolutions, and best practices to facilitate self-service for customers and improve overall efficiency.
Escalation Management: Escalate complex or unresolved issues to the appropriate L2 or L3 support teams, providing detailed information and following escalation procedures. Collaborate with higher-level support teams to ensure prompt and effective resolution of customer issues.
Collaboration and Communication: Collaborate with cross-functional teams, including developers, system administrators, and business analysts, to resolve customer issues and provide timely updates to customers.
Compliance and Security: Adhere to security protocols, data privacy regulations, and industry compliance standards when handling customer data and accessing sensitive systems or information. Skills: Adaptability and learning: embracing change, quickly acquiring new skills, and effectively applying them to support customer needs indicates success in a rapidly evolving technical support environment. Strong problem-solving skills and the ability to troubleshoot basic technical issues independently. Excellent communication and interpersonal skills, with the ability to explain technical concepts to non-technical individuals. Customer-oriented mindset with a focus on delivering high-quality customer service. Familiarity with ticketing systems and knowledge base tools is a plus. Ability to work under pressure in a fast-paced environment and manage multiple priorities effectively. Willingness to learn and adapt to new technologies and tools in the banking technology domain. Experience and Qualifications: Engineer (preferably IT / Comp Sci). An overall experience of 2.6+ years in banking technology. Experience of 1+ years in hands-on technical support for enterprise products. Prior experience with tools like JIRA, Postman, Kibana, Splunk, and Grafana is required. Experience in banking/payment technologies is a plus.
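The urgency-and-impact triage mentioned under Incident Management is commonly expressed as an ITIL-style priority matrix. A minimal sketch; the exact mapping is organisation- and tool-specific, and the one below is purely illustrative:

```python
# ITIL-style priority matrix: priority is derived from impact and
# urgency, each rated 1 (high) to 3 (low). The bucket labels and
# the mapping itself are illustrative, not a standard.
PRIORITY_MATRIX = {
    (1, 1): "P1", (1, 2): "P2", (1, 3): "P3",
    (2, 1): "P2", (2, 2): "P3", (2, 3): "P4",
    (3, 1): "P3", (3, 2): "P4", (3, 3): "P5",
}

def triage(impact: int, urgency: int) -> str:
    """Map an incident's impact and urgency ratings to a priority bucket."""
    return PRIORITY_MATRIX[(impact, urgency)]

# A payments outage affecting all merchants: high impact, high urgency.
assert triage(1, 1) == "P1"
# A cosmetic UI glitch for one user: low impact, low urgency.
assert triage(3, 3) == "P5"
```

Ticketing systems typically compute this automatically from the two fields; the value of writing it out is that the escalation path (P1 to L3, P4/P5 to the backlog) becomes explicit and auditable.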
Posted 2 weeks ago
3.0 - 7.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Proficiency in problem solving and troubleshooting technical issues. Willingness to take ownership and strive for the best solutions. Experience in using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace. Strong understanding of Android architecture, memory management, and threading. Strong understanding of Android HALs, Car Framework, the Android graphics pipeline, DRM, and codecs. Good knowledge of hardware abstraction layers in Android and/or Linux. Good understanding of Git and CI/CD workflows. Experience in agile-based projects. Experience with Linux as a development platform and target. Extensive experience with Jenkins and GitLab CI systems. Hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch. Experience with different testing frameworks and their implementation in a CI system. Programming using C/C++, Java/Kotlin, and Linux. Yocto and its use in CI environments. Familiarity with ASPICE. Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Grade Specific: Is highly respected, experienced, and trusted.
Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini's activity. Is able to manage difficult and complex situations calmly and professionally. Considers the bigger picture when making decisions and demonstrates a clear understanding of commercial and negotiating principles in challenging situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical, and people objectives. Plays a significant part in the recruitment and development of people. Skills (competencies): Verbal Communication
Posted 2 weeks ago
7.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Proficiency in problem solving and troubleshooting technical issues. Willingness to take ownership and strive for the best solutions. Experience in using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace. Strong understanding of Android architecture, memory management, and threading. Strong understanding of Android HALs, Car Framework, the Android graphics pipeline, DRM, and codecs. Good knowledge of hardware abstraction layers in Android and/or Linux. Good understanding of Git and CI/CD workflows. Experience in agile-based projects. Experience with Linux as a development platform and target. Extensive experience with Jenkins and GitLab CI systems. Hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch. Experience with different testing frameworks and their implementation in a CI system. Programming using C/C++, Java/Kotlin, and Linux. Yocto and its use in CI environments. Familiarity with ASPICE.
1. The Software Engineering Leader oversees and guides teams to deliver high-quality software solutions aligned with organizational goals and industry best practices.
2. Is a professional in technology, proficient in strategic planning, decision-making, and mentoring, with an extensive background in software development and leadership.
3. Is typically responsible for setting the strategic direction of software development efforts, managing project portfolios, and ensuring effective execution of software engineering initiatives to meet organizational objectives.
4. Builds skills and expertise in leadership, staying abreast of industry trends, and cultivating a collaborative and high-performance culture within the software engineering team.
5. Collaborates and acts as a team player with cross-functional teams, executives, and stakeholders, fostering a positive and productive environment for successful software development initiatives.
Posted 2 weeks ago
2.0 - 5.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Your Role: Design, implement, and maintain end-to-end ML pipelines for model training, evaluation, and deployment. Collaborate with data scientists and software engineers to operationalize ML models. Develop and maintain CI/CD pipelines for ML workflows. Implement monitoring and logging solutions for ML models; experience with ML model serving frameworks (TensorFlow Serving, TorchServe) and MLOps tools. Optimize ML infrastructure for performance, scalability, and cost efficiency. Your Profile: Strong programming skills in Python, with experience in ML frameworks; understanding of ML-specific testing and validation techniques. Expertise in containerization technologies (Docker) and orchestration platforms (Kubernetes); knowledge of data versioning and model versioning techniques. Proficiency in cloud platforms (AWS) and their ML-specific services. Strong understanding of DevOps practices and tools (GitLab, Artifactory, Gitflow, etc.). Experience with monitoring and observability tools (Prometheus, Grafana, ELK stack) and knowledge of distributed training techniques. What you'll love about working here: We recognise the significance of flexible work arrangements; with support for hybrid mode, you will get an environment to maintain a healthy work-life balance. Our focus will be your career growth and professional development, supporting you in exploring a world of opportunities. Equip yourself with valuable certifications and training programmes in the latest technologies, such as MLOps and Machine Learning.
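The model-versioning and CI/CD duties above often include a promotion gate: a challenger model is deployed only if it beats the current champion on every tracked metric by some margin. A minimal sketch (the function name, metric names, and gain margin are illustrative):

```python
def should_promote(challenger: dict, champion: dict,
                   min_gain: float = 0.01) -> bool:
    """Promote a candidate model only if it beats the currently
    serving model by at least `min_gain` on every tracked metric."""
    return all(
        challenger[metric] >= champion[metric] + min_gain
        for metric in champion
    )

# Hypothetical offline-evaluation results for two model versions.
champion   = {"auc": 0.91, "recall": 0.80}
challenger = {"auc": 0.93, "recall": 0.84}

assert should_promote(challenger, champion)          # wins on both metrics
assert not should_promote({"auc": 0.915, "recall": 0.84}, champion)  # AUC gain too small
```

In a real pipeline this check would run as a CI step between training and deployment, reading metrics from an experiment tracker rather than hard-coded dicts.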
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Ahmedabad
Work from Office
About the Role: Grade Level (for internal use): 10. The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA analysts and infrastructure teams. The Impact: The Enterprise Data Organization is seeking a Software Developer for the design, development, and maintenance of data processing applications. This person will be part of a development team that manages and supports the internal and external applications that support the business portfolio. This role expects a candidate to handle any data processing or big data application development. We have teams made up of people who learn how to work effectively together while working with the larger group of developers on our platform. What's in it for you: Opportunity to contribute to the development of a world-class platform engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.
Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Python, Scala, NiFi, SQL). Build data models, achieve performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups. Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development. Proficiency with object-oriented programming. Nice to have: knowledge of Grafana, Kibana, big data, GitHub, EMR, Terraform, AI/ML. Advanced SQL programming skills. Highly recommended skill set in Databricks and Scala technologies. Understanding of database performance tuning in large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies. Knowledge of fundamentals or the financial industry is highly preferred.
Experience in conducting application design and code reviews. Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Scala, Python); scripting (Bash, Scala, Perl, PowerShell). Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing. What's In It For You. Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
Posted 2 weeks ago
7.0 - 12.0 years
12 - 16 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10. The Team: The TechOps team is responsible for cloud infrastructure provisioning and maintenance, in addition to providing high-quality technical support across a wide suite of products within the PVR business segment. The TechOps team works closely with a highly competent Client Services team and the core project teams to resolve client issues and improve the platform. Our work helps ensure that all products are provided a high-quality service, maintaining client satisfaction. The team is responsible for owning and maintaining our cloud-hosted apps. The Impact: The role is extremely critical in helping create a positive client experience by establishing and maintaining high availability of business-critical services/applications. What's in it for you: The role gives the successful candidate the opportunity to interact and engage with senior technology and operations users; work on the latest in technology like AWS, Terraform, Datadog, Splunk, Grafana, etc.; and work in an environment which allows for complete ownership and scalability. What We're Looking For. Basic Required Qualifications: Total 7+ years of experience required, with at least 4+ years in infrastructure provisioning and maintenance using IaC in AWS. Building (and supporting) AWS infrastructure as code to support our hosted offering. Continuous improvement of infrastructure components, cloud security, and reliability of services. Operational support for cloud infrastructure, including incident response and maintenance. The candidate needs to be an experienced technical resource (Java, Python, Oracle, PL/SQL, Unix) with a strong understanding of ITIL standards such as incident and problem management. Ability to understand complex release dependencies and manage them automatically by writing relevant automations. Drive and take responsibility for support and monitoring tools. Should have exposure to hands-on fault diagnosis, resolution, knowledge sharing, and delivery in a high-pressure, client-focused environment. Extensive experience working on mission-critical systems. Involvement in driving RCA for repetitive incidents and providing solutions. Driving excellent levels of service to the business, effective management, and technology strategy development and ownership through a defined process. Good knowledge of SDLC, agile methodology, CI/CD, and deployment tools like GitLab, GitHub, and ADO. Knowledge of networks, databases, storage, management systems, services frameworks, and cloud technologies. Additional Preferred Qualifications: Keen problem solver with an analytical nature and an excellent problem-solving skill set. Able to work flexible hours, including some weekends and possibly public holidays, to meet service level agreements. Excellent communication skills, both written and verbal, with the ability to represent complex technical issues/concepts to non-technical stakeholders. About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries. Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ---- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ---- IFTECH103.1 - Middle Management Tier I (EEO Job Group)
Posted 2 weeks ago
12.0 - 17.0 years
14 - 18 Lacs
Hyderabad
Work from Office
Overview The Grafana and Elastic Architect will maintain and optimize the observability platform, ensure cost-effective operations, define guardrails, and promote best practices. This role will oversee the platform's BAU support, manage vendors and partners, and collaborate closely with application owners to onboard applications. The Architect will also lead the deployment of AI Ops and other advanced features within Grafana and Elastic while working with other observability, ITSM, and platform architects. This position includes people management responsibilities and involves leading a team to achieve operational excellence. Responsibilities Key Responsibilities 1. Platform Ownership & Cost Optimization: Maintain and enhance the Grafana and Elastic platforms to ensure high availability and performance. Implement cost control mechanisms to optimize resource utilization across observability platforms. Establish platform guardrails, best practices, and governance models. 2. BAU Support & Vendor/Partner Management: Manage day-to-day operations, troubleshooting, and platform improvements. Engage and manage third-party vendors and partners to ensure SLA adherence and platform reliability. Work closely with procurement and finance teams to manage vendor contracts and renewals. 3. Application Onboarding & Collaboration: Partner with application owners and engineering teams to onboard applications onto the observability platform. Define standardized onboarding frameworks and processes for application teams. Ensure seamless integration with existing observability solutions like AppDynamics, ServiceNow ITOM, and other monitoring tools. 4. AI Ops & Advanced Features Implementation: Deploy AI Ops capabilities within Grafana and Elastic to enhance proactive monitoring and anomaly detection. Implement automation and intelligent alerting to reduce MTTR and operational overhead. Stay updated with industry trends and recommend innovative AI-driven observability enhancements. 5. Cross-Functional Collaboration: Work closely with architects of AppDynamics, ServiceNow, and other observability platforms to ensure an integrated monitoring strategy. Align with ITSM, DevOps, and Cloud teams to create a holistic observability roadmap. Lead knowledge-sharing sessions and create technical documentation for the team. 6. People & Team Management: Lead and manage a team responsible for Grafana and Elastic observability operations. Provide mentorship, coaching, and career development opportunities for team members. Define team goals, monitor performance, and drive continuous improvement in observability practices. Foster a culture of collaboration, innovation, and accountability within the team. Qualifications Technical Expertise 12+ years of experience in IT Operations, Observability, or related fields. Strong expertise in Grafana and the Elastic Stack (Elasticsearch, Logstash, Kibana). Experience in implementing AI Ops, machine learning, or automation within observability platforms. Proficiency in scripting and automation (Python, Ansible, Terraform) for observability workloads. Hands-on experience with cloud-based observability solutions, particularly in Azure environments. Familiarity with additional monitoring tools like AppDynamics, ServiceNow ITOM, SevOne, and ThousandEyes. Leadership & Collaboration Experience in managing vendors, contracts, and external partnerships. Strong stakeholder management skills and ability to work cross-functionally. Excellent communication and presentation skills. Ability to lead and mentor junior engineers in observability best practices.
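The cost-optimization guardrails described above often come down to retention and replica policy for index storage. A minimal sketch of the kind of back-of-envelope check an architect might automate (all figures here — ingest rate, price per GB-month — are illustrative assumptions, not vendor pricing):

```python
# Hypothetical sketch: steady-state storage cost of an Elastic index pattern
# under a given retention guardrail. Figures are illustrative, not real pricing.

def storage_cost(daily_ingest_gb: float, retention_days: int,
                 replicas: int = 1, price_per_gb_month: float = 0.10) -> float:
    """Monthly storage cost once retention reaches steady state."""
    stored_gb = daily_ingest_gb * retention_days * (1 + replicas)  # primary + replicas
    return stored_gb * price_per_gb_month

# Comparing guardrails: trimming 90-day retention to 30 days for one pattern
cost_90 = storage_cost(daily_ingest_gb=50, retention_days=90)
cost_30 = storage_cost(daily_ingest_gb=50, retention_days=30)
print(f"90d retention: ${cost_90:.2f}/mo  vs  30d: ${cost_30:.2f}/mo")
```

The same arithmetic, run per index pattern, is what makes a retention guardrail defensible to application owners during onboarding.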
Posted 2 weeks ago
6.0 - 11.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Overview We are seeking a highly skilled and analytically strong Site Reliability Engineer (SRE) with Scrum experience and 6+ years of experience. The ideal candidate will have a proven track record in managing SRE responsibilities across multiple teams, with deep expertise in Active Directory (AD) groups, Databricks, architecture design, and enterprise tools like Clarity and ServiceNow. Strong Scrum delivery experience and cross-functional collaboration are essential. Responsibilities Key Responsibilities Lead SRE operations across distributed teams, ensuring system reliability, scalability, and performance. Design and implement robust monitoring, alerting, and observability frameworks. Lead Scrum ceremonies. Manage and optimize Active Directory (AD) group structures and access controls. Collaborate with data engineering teams to support Databricks environments. Contribute to architectural discussions and decisions for high-availability systems. Drive incident response, root cause analysis, and continuous improvement initiatives. Integrate and manage workflows using Clarity PPM and ServiceNow for change, incident, and problem management. Actively participate in Scrum ceremonies (daily stand-ups, sprint planning, reviews, retrospectives). Collaborate with Product Owners and Scrum Masters to ensure timely and quality delivery. Qualifications Education Bachelor's or Master's degree in Computer Science, Information Systems, Business Analytics, or a related field. Experience 6+ years of experience in SRE, DevOps, or Infrastructure Engineering roles. Strong analytical thinking and troubleshooting skills. Hands-on experience with: Active Directory (AD) group policy management and access provisioning; Databricks cluster management, job orchestration, and performance tuning; architecture design for scalable, fault-tolerant systems; Clarity PPM project tracking and resource planning; ServiceNow incident/change/problem management workflows.
Proficiency in monitoring tools (e.g., Prometheus, Grafana, Datadog). Experience with CI/CD pipelines and infrastructure as code (Terraform, Ansible). Familiarity with cloud platforms (Azure, AWS, or GCP). Strong scripting skills (Python, Bash, PowerShell). Solid understanding of Agile/Scrum methodologies and tools like Jira or Azure DevOps. Preferred Qualifications Certified Scrum Master or equivalent Agile certification. Experience working in a global delivery model. Exposure to digital product and reporting services is a plus.
Posted 2 weeks ago
7.0 - 11.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Overview We are looking for a self-driven SRE support engineer with a software engineering mindset, enabling SRE-driven orchestration of all components of the end-to-end ecosystem and preemptively diagnosing anomalies and remediating them through automation. The SRE support engineer is an integral part of the global team, whose main purpose is to provide a delightful customer experience for users of the global consumer, commercial, supply chain and enablement functions in the PepsiCo digital products application portfolio of 260+ applications, enabling a full SRE Practice incident prevention / proactive resolution model. The scope of this role is focused on the modern-architected application portfolio, B2B pepsiconnect, Direct to Customer, and other S&T roadmap applications. Ensures that PepsiCo DPA applications deliver the service performance, reliability and availability expected by our customers and internal groups. The role requires a blend of technical expertise in SRE tools and modern application architecture, IT operations experience, and analytics & influence skills. Responsibilities Reporting directly to the SRE & Modern Operations Associate Director, this role is responsible for enabling and executing the pre-emptive diagnosis of PepsiCo applications towards the service performance, reliability and availability expected by our customers and internal groups. Responsible as a proactive support engineer, diagnosing any anomalies before any user does and driving the necessary remediations across the teams involved. Develop / leverage aggregation and correlation solutions that integrate events across all ecosystem components of the modern architecture solution and produce insights to continuously improve the user journey and order flow experience, collaborating with software engineering teams. Drive incident response, root cause analysis (RCA), and post-mortem processes to ensure continuous improvement. Develop and maintain robust monitoring, alerting, and observability frameworks using tools like Grafana, ELK, etc.
Collaborate with product and engineering teams during the design and development phases to embed reliability and operability into new services. Participate in architecture reviews and provide SRE input on scalability, fault tolerance, and deployment strategies. Define and implement SLOs/SLIs for new services before they go live, ensuring alignment with business objectives. Work closely with customer-facing support teams to evolve and empower them with SRE insights. Participate in on-call support, orchestrate blameless post-mortems, and encourage the practice within the organization. Provide inputs to the definition, collection and analysis of data on relevant products, systems and their interactions towards business process resiliency, especially where customer satisfaction is impacted. Actively engage and drive AI Ops adoption across teams. Qualifications 7-11 years of work experience evolving into an SRE engineer, with 3-5 years of experience in continuously improving and transforming IT operations ways of working. Bachelor's degree in Computer Science, Information Technology or a related field. The ideal engineer will be highly quantitative, have great judgment, be able to connect dots across ecosystems, and work efficiently cross-functionally across teams to ensure SRE orchestration solutions are meeting customer/end-user expectations. The candidate will take a pragmatic approach to resolving incidents, including the ability to systematically triangulate root causes and work effectively with external and internal teams to meet objectives. A firm understanding of SRE (Site Reliability Engineering) and IT Service Management (ITSM) processes, with a track record of improving service offerings, proactively resolving incidents, providing a seamless customer/end-user experience, and proactively identifying and mitigating areas of risk. Proven experience as an SRE in designing event diagnostics, performance measures and alert solutions to meet SLAs/SLOs/SLIs.
Hands-on experience in Python, SQL, relational or non-relational DBs, AppDynamics, Grafana, Splunk, Dynatrace, or other SRE Ops toolsets. Deep hands-on technical expertise, and excellent verbal and written communication skills. Differentiating Competencies: Driving for Results: Demonstrates perseverance and resilience in the pursuit of goals. Confronts and works to resolve tough issues. Exhibits a can-do attitude and a willingness to take on significant challenges. Decision Making: Quickly analyses complex problems to find actionable, pragmatic solutions. Sees connections in data, events, trends, etc. Consistently works against the right priorities. Collaborating: Collaborates well with others to deliver results. Keeps others informed so there are no unnecessary surprises. Effectively listens to and understands what other people are saying. Communicating and Influencing: Ability to build convincing, persuasive, and logical storyboards. Strong executive presence. Able to communicate effectively and succinctly, both verbally and on paper. Motivating and Inspiring Others: Demonstrates a sense of passion, enjoyment, and pride in their work. Demonstrates a positive attitude in the workplace. Embraces and adapts well to change. Creates a work environment that makes work rewarding and enjoyable.
Posted 2 weeks ago
5.0 - 10.0 years
12 - 17 Lacs
Pune
Work from Office
Sarvaha would like to welcome a Lead/Senior Java Developer with a minimum of 5 years, ideally 10+ years, of experience in designing and developing scalable microservices and cloud-native applications using Java, Spring Boot, and reactive programming paradigms. Sarvaha is a niche software development company that works with some of the best funded startups and established companies across the globe. Please visit our website at https://www.sarvaha.com to know more about us. What You'll Do Design and develop scalable microservices using Java 17+ and Spring Boot. Build reactive applications with Spring WebFlux and Project Reactor. Implement event-driven architectures using Kafka and Azure Event Hub. Develop secure, high-throughput REST APIs. Work with AWS and Azure cloud environments to deploy and monitor services. Collaborate with DevOps to ensure reliability, tracing, and observability of systems. Participate in code reviews, mentor team members, and promote engineering best practices. Troubleshoot and resolve production issues in distributed systems. You Bring BE/BTech/MTech (CS/IT or MCA) with strong software engineering fundamentals. Hands-on experience with Java, Spring Boot, and the broader Spring ecosystem. Strong knowledge of Spring WebFlux, Project Reactor, and non-blocking I/O. Solid understanding of Kafka (Producers, Consumers, Streams) and message-driven design. Experience with AWS (EC2, S3, Lambda, SNS/SQS) or Azure SDKs and Event Hub. Expertise in designing and developing high-performance, resilient, and observable systems. Exposure to Docker, CI/CD pipelines, and Kubernetes (preferred). Familiarity with microservices testing strategies like contract testing, mocking, and test containers. Strong problem-solving abilities and system design thinking (caching, partitioning, load balancing). Clear communication, love for documentation, and mentoring of programmers on the team. What Sets You Apart Monitoring experience with Grafana, Prometheus, ELK, or Datadog. Excellent collaboration with cross-functional teams: developers, DevOps, QA. Knowledge of both AWS and Azure is a strong plus.
Posted 2 weeks ago
3.0 - 8.0 years
3 - 6 Lacs
Pimpri-Chinchwad
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best funded startups and established companies across the globe. Please visit our website at https://www.sarvaha.com to know more about us. What You'll Do Configure and manage observability agents across AWS, Azure & GCP. Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack. Experience with different language stacks such as Java, Ruby, Python and Go. Instrument services using OpenTelemetry and integrate telemetry pipelines. Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs. Create dashboards, set up alerts, and track SLIs/SLOs. Enable RCA and incident response using observability data. Secure the observability pipeline. You Bring BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering. Strong skills in reading and interpreting logs, metrics, and traces. Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, Jaeger, Datadog, Zipkin, InfluxDB etc. Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru etc. Knowledge of OpenTelemetry, IaC, and security best practices. Clear documentation of observability processes, logging standards & instrumentation guidelines. Ability to proactively identify, debug, and resolve issues using observability data. Focused on maintaining data quality and integrity across the observability pipeline.
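Tracking SLIs/SLOs from telemetry, as described above, ultimately reduces to percentile aggregation over raw measurements. A hedged, pure-Python sketch of the idea (real pipelines compute this inside the TSDB, e.g. Mimir; the durations below are invented):

```python
# Illustrative sketch: deriving a p95 latency SLI from raw span durations.

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile; assumes a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Span durations (ms) collected for one endpoint over a window
durations_ms = [12, 15, 11, 230, 14, 13, 18, 950, 16, 12]
p95 = percentile(durations_ms, 95)
print(f"p95 latency: {p95} ms")  # the slow outliers dominate the tail
```

This is why latency SLOs are framed around tail percentiles rather than averages: the mean of the sample above looks healthy while p95 does not.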
Posted 2 weeks ago
5.0 - 10.0 years
13 - 23 Lacs
Noida
Hybrid
Role: As a DevOps SME, design and implement a variety of requirements to support infrastructure/platform as a service on the Azure cloud platform. Your role will also be to ensure that the design adequately represents and supports the needs of product teams while following industry practices like fault tolerance, availability, and observability. WHAT ARE YOUR RESPONSIBILITIES: Uphold the "Automate Everything" philosophy within the team. Engage in the development and maintenance of our cloud infrastructure, responsible for hosting all cloud applications in both development and production environments. Collaborate with colleagues across products and platforms to ensure the reliability of our cloud services, supporting customers with uninterrupted access to critical applications 24/7. Effectively manage and troubleshoot server and platform issues. Perform analysis and monitoring of the performance of cloud-based infrastructure, installed applications, and shared resources. WHAT MAKES YOU A GOOD FIT FOR THIS ROLE: Proficient in utilising DevOps tools within the Azure Cloud environment. Experience in cloud automation, employing Terraform, Helm, and ArgoCD. Skilled in application containerization and Kubernetes cluster administration and management. Knowledge of cloud monitoring and alerting tools such as New Relic, Prometheus, Azure Monitor and Grafana. Strong experience in writing PowerShell and Bash scripts. Familiarity with EFK log management solutions. Proficient in working with the agile methodology. Demonstrates outstanding interpersonal and communication skills, coupled with a willingness to mentor new team members. Possesses strong analytical abilities and excels in problem-solving. GOOD TO HAVE: Experience in Disaster Recovery (DR) strategies for building highly available applications. Proficient in web server administration, possessing strong skills in Windows, Linux, and networking. Hands-on scripting proficiency in GoLang / Python. Hands-on experience in DNS management.
Note: Good experience in Azure cloud infrastructure (IaaS and PaaS) is required for this role.
Posted 2 weeks ago
7.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science Engineering, or a related field. 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuration and management of databases such as MySQL and Mongo. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.
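The deployment-automation responsibilities above typically end with a post-deploy smoke check before a pipeline stage is marked green. A minimal sketch of such a gate with exponential backoff (the health-check function here is a stand-in; a real CodePipeline or Jenkins stage would probe a service endpoint):

```python
# Illustrative post-deploy gate: retry a health check with exponential backoff.
import time

def wait_until_healthy(check, attempts: int = 5, base_delay: float = 1.0) -> bool:
    """Retry `check` with exponential backoff; True once it passes."""
    for attempt in range(attempts):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Simulated service that becomes healthy on the third probe
state = {"probes": 0}
def fake_check() -> bool:
    state["probes"] += 1
    return state["probes"] >= 3

assert wait_until_healthy(fake_check, base_delay=0.01)
print("deployment healthy after", state["probes"], "probes")
```

Backoff keeps the gate from hammering a service that is still warming up, while a bounded attempt count ensures a broken deploy fails the stage instead of hanging the pipeline.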
Posted 2 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
Pune
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 7 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best funded startups and established companies across the globe. What You'll Do: - Configure and manage observability agents across AWS, Azure & GCP. - Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack. - Experience with different language stacks such as Java, Ruby, Python and Go. - Instrument services using OpenTelemetry and integrate telemetry pipelines. - Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs. - Create dashboards, set up alerts, and track SLIs/SLOs. - Enable RCA and incident response using observability data. - Secure the observability pipeline. You Bring: - BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering. - Strong skills in reading and interpreting logs, metrics, and traces. - Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, Jaeger, Datadog, Zipkin, InfluxDB etc. - Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru etc. - Knowledge of OpenTelemetry, IaC, and security best practices. - Clear documentation of observability processes, logging standards & instrumentation guidelines. - Ability to proactively identify, debug, and resolve issues using observability data. - Focused on maintaining data quality and integrity across the observability pipeline.
Posted 2 weeks ago
5.0 - 10.0 years
11 - 21 Lacs
Pune
Work from Office
Experience: 5-7 years (P3). 21 LPA. NP: Immediate to 15 days. Must-Have Skills: DevOps + AWS. [Primary responsibility of candidate]: Involved in supporting infrastructure architecture, system performance and the overall infrastructure operating environment for on-premise infrastructure, cloud computing platforms, or both. Support new application system initiatives and drive development of infrastructure architecture, system integration, acceptance, performance management practice and performance testing. Run infrastructure services in on-premise, hybrid or public cloud environments to support application systems by working closely with Applications Service counterparts. Understand the systems operations environment and drive the Architecture Review and Governance Process to ensure smooth and sustained operations (including resiliency requirements). [Project requirements to qualify candidates]: Degree in Computer Science, Computer or Electronics Engineering or Information Technology, or equivalent. Minimum 5 years of relevant working experience, with a validated record of having utilized architecture design capabilities in infrastructure management for both on-premise and cloud workloads. Certification in a cloud technology platform (in the Architecture, DevOps or System Administration/SysOps track) preferred. The successful candidate needs to demonstrate either a deep or broad (or both) level of technical expertise in the area of Infrastructure Services, with appreciation of one or more areas of Infrastructure Management, Cloud Computing and DevOps Engineering concepts. Proactive and dedicated individual with good leadership and multi-tasking capabilities. Good interpersonal skills, oral and written skills, with the ability to present ideas and influence partners at different levels. [Candidate's Tech Stack]: AWS - EC2, S3, VPC, EKS, Lambda, CloudWatch, Transit Gateway, Network Firewall, IAM, Transfer Family. IaC - Terraform. Kubernetes. Helm. Grafana & Prometheus.
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. What You'll Do Configure and manage observability agents across AWS, Azure & GCP. Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack. Experience with different language stacks such as Java, Ruby, Python and Go. Instrument services using OpenTelemetry and integrate telemetry pipelines. Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs. Create dashboards, set up alerts, and track SLIs/SLOs. Enable RCA and incident response using observability data. Secure the observability pipeline. You Bring BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering. Strong skills in reading and interpreting logs, metrics, and traces. Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, Jaeger, Datadog, Zipkin, InfluxDB etc. Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru etc. Knowledge of OpenTelemetry, IaC, and security best practices. Clear documentation of observability processes, logging standards & instrumentation guidelines. Ability to proactively identify, debug, and resolve issues using observability data. Focused on maintaining data quality and integrity across the observability pipeline.
Posted 2 weeks ago
6.0 - 10.0 years
25 - 27 Lacs
Noida
Work from Office
Hiring an OpenShift L3 Support Engineer with 6+ years of experience for 24x7 onsite support in Noida. Role involves full lifecycle OpenShift cluster management, CI/CD, monitoring, patching, and RCA delivery. Required Candidate profile Experienced OpenShift L3 Engineer with 6+ years in container management, CI/CD, monitoring (Zabbix/Grafana), patching, and RCA reporting. OCP certified and available for 24x7 onsite support in Noida.
Posted 2 weeks ago
3.0 - 7.0 years
10 - 20 Lacs
Kolkata, Pune, Bengaluru
Hybrid
Cognite Data Fusion Engineer / Consultant Industry: Oil & Gas / Energy / Manufacturing / Industrial Digital Transformation Key Responsibilities: Design, implement, and optimize data pipelines in Cognite Data Fusion (CDF) using the Python SDK or CDF APIs. Build and maintain data models (Asset Hierarchies, Time Series, Events, Files, Relationships) in CDF. Ingest and contextualize data from OT systems (e.g., PI System, SCADA/DCS), IT systems (SAP PM, IBM Maximo), and engineering data. Develop and orchestrate transformations using CDF Transformations (SQL / PySpark). Collaborate with SMEs and data scientists to develop use cases such as predictive maintenance, asset performance monitoring, and digital twins. Implement access control, data lineage, and quality checks aligned with governance requirements. Create dashboards, apps, or integrations using CDF's APIs, Power BI, Grafana, or other front-end tools. Work with Cognite's capabilities such as Cognite Functions, Data Sets, CDF Charts, and Industrial Canvas. Must Have: 4+ years of experience in data engineering or industrial data platforms. Proven experience working with the Cognite Data Fusion SDK, APIs, or Fusion Workbench. Strong skills in Python, SQL, and cloud data tools (Azure preferred). Understanding of industrial asset structures, time series data, maintenance logs, and equipment metadata. Experience with data integration tools and protocols (OPC UA, Modbus, REST, MQTT, PI AF). Familiarity with industry verticals like Oil & Gas, Chemicals, Power Generation, or Manufacturing. Excellent problem-solving, communication, and client engagement skills. Please apply via the link below for the interview process: https://careers.ey.com/job-invite/1625081/
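The asset-hierarchy modeling mentioned above amounts to turning flat records with parent references into a tree. A hedged, generic sketch of that contextualization step (plain Python, not the Cognite SDK; the field names and assets are invented for illustration):

```python
# Illustrative sketch: build an asset hierarchy from flat parent references.
from collections import defaultdict

assets = [
    {"name": "plant",      "parent": None},
    {"name": "compressor", "parent": "plant"},
    {"name": "motor",      "parent": "compressor"},
    {"name": "pump",       "parent": "plant"},
]

# Index children by parent, and parents by name
children = defaultdict(list)
for a in assets:
    children[a["parent"]].append(a["name"])
parents = {a["name"]: a["parent"] for a in assets}

def depth(name: str, parents: dict) -> int:
    """Levels below the root; root assets are depth 0."""
    d = 0
    while parents[name] is not None:
        name = parents[name]
        d += 1
    return d

# Print the hierarchy with indentation proportional to depth
for a in assets:
    print("  " * depth(a["name"], parents) + a["name"])
```

In an industrial data platform, the same parent links are what let time series and maintenance events attached to `motor` roll up to `compressor` and `plant` for asset-performance views.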
Posted 2 weeks ago
4.0 - 6.0 years
12 - 13 Lacs
Noida
Work from Office
Looking for an OpenShift L2 Support Engineer with 4–6 years of experience for onsite support in Noida/Delhi. The role covers OpenShift cluster management, container lifecycle, CI/CD, monitoring, patching, and incident resolution in a 24x7 setup.
Required candidate profile: Experienced L2 Support Engineer with 4–6 years in the OpenShift container platform, CI/CD pipelines, monitoring, OS patching, IAM, and troubleshooting. Available for 24x7 onsite support in Noida/Delhi.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Lead Engineer, DevOps at Toyota Connected India, you will be part of a dynamic team dedicated to creating innovative infotainment solutions on embedded and cloud platforms. You will play a crucial role in shaping the future of mobility by leveraging your expertise in cloud platforms, containerization, infrastructure automation, scripting languages, monitoring solutions, networking, security best practices, and CI/CD tools.
Your responsibilities will include:
- Hands-on work with cloud platforms such as AWS or Google Cloud Platform.
- Applying strong expertise in containerization (e.g., Docker) and Kubernetes for container orchestration.
- Implementing infrastructure automation and configuration management with tools like Terraform, CloudFormation, or Ansible.
- Writing scripts in languages such as Python, Bash, or Go to streamline workflows.
- Operating monitoring and logging solutions such as Prometheus, Grafana, the ELK Stack, or Datadog to ensure system reliability.
- Applying knowledge of networking concepts, security best practices, and infrastructure monitoring to maintain a secure and stable environment.
- Using CI/CD tools such as Jenkins, GitLab CI, CircleCI, or Travis CI for continuous integration and delivery.
At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a casual dress code. You will work on products that enhance the safety and convenience of millions of customers, within a collaborative, innovative, and empathetic culture that values customer-centric decision-making, passion for excellence, creativity, and teamwork. Join Toyota Connected India and be part of a team that is redefining the automotive industry and making a positive global impact!
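A core idea behind the alerting side of monitoring stacks like Prometheus is noise reduction: an alert should fire only after a condition has held for several consecutive evaluations (what a Prometheus alerting rule expresses with a `for:` clause). A minimal sketch of that logic in plain Python; the rule name, threshold, and sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str
    threshold: float
    consecutive: int  # breaching samples required in a row before firing

def evaluate(rule: AlertRule, samples: list[float]) -> bool:
    """Fire only after `consecutive` breaching samples in a row,
    so a single transient spike does not page anyone."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > rule.threshold else 0
        if streak >= rule.consecutive:
            return True
    return False

cpu = AlertRule("cpu_usage", threshold=0.9, consecutive=3)
print(evaluate(cpu, [0.95, 0.92, 0.97, 0.5]))  # True: three breaches in a row
```

Tuning `consecutive` (or the `for:` duration in a real rule) is the usual trade-off between alert latency and false-positive noise.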
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Senior DevOps Engineer with a focus on Azure cloud infrastructure, you will play a crucial role in automation, scalability, and ensuring best practices in CI/CD and cloud deployments. Your responsibilities will include managing and modernizing medium-sized, multi-tier environments, establishing efficient CI/CD pipelines, and ensuring the reliability and security of applications and infrastructure. You will lead infrastructure development and operational support in Azure, emphasizing high availability and performance.
Utilizing containerization technologies like Docker and orchestration tools such as Kubernetes will be essential. Implementing Git best practices, Infrastructure as Code (IaC) with tools like Terraform, and following DevSecOps practices for security compliance throughout the development lifecycle will be part of your daily tasks. Monitoring application and infrastructure performance using tools like Azure Monitor, Managed Grafana, and other observability tools will help you maintain the health of the systems. You will collaborate with software development teams to align with DevOps best practices and troubleshoot issues across different environments. Additionally, your role will involve providing leadership in infrastructure planning, design reviews, and incident management while working with multiple cloud platforms, primarily Azure.
To excel in this position, you should have at least 5 years of hands-on DevOps experience, strong communication skills, and expertise in Azure cloud services. Proficiency in Kubernetes, experience with web servers like Nginx and Apache, and familiarity with the Azure Well-Architected Framework are essential. Knowledge of monitoring tools, DevSecOps principles, and troubleshooting infrastructure and applications will be crucial for your success. Experience with multi-tenant SaaS platforms, performance monitoring, and Azure dashboarding tools will also be beneficial.
Preferred qualifications for this role include Azure certifications (e.g., AZ-400 or AZ-104), experience with cloud migration and application modernization, familiarity with tools like Prometheus, Grafana, the ELK Stack, or similar, and leadership or mentoring experience. If you are passionate about DevOps, automation, and cloud infrastructure, this role offers a challenging yet rewarding opportunity to showcase your skills and contribute to the success of the team.
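The Infrastructure-as-Code workflow this posting describes centers on computing a plan, a diff between the recorded state and the desired configuration, before applying any change. A purely illustrative sketch of that idea in plain Python (resource names are hypothetical, and a real tool like Terraform additionally handles dependency ordering, provider calls, and state locking):

```python
def plan(current: dict, desired: dict) -> dict:
    """Compute a Terraform-style plan: which resources to create,
    update, or destroy to move `current` state to `desired`."""
    create = {k: v for k, v in desired.items() if k not in current}
    update = {k: v for k, v in desired.items()
              if k in current and current[k] != v}
    destroy = [k for k in current if k not in desired]
    return {"create": create, "update": update, "destroy": destroy}

current = {"vm1": {"size": "B2s"}, "old_db": {"tier": "basic"}}
desired = {"vm1": {"size": "B4ms"}, "storage": {"sku": "LRS"}}
print(plan(current, desired))
```

Reviewing the plan output before applying is what makes IaC changes auditable: the diff is visible in the pull request rather than discovered in production.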
Posted 2 weeks ago