
2943 Datadog Jobs - Page 15

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Security leader with a background in AWS and cloud security, you will define and enforce the organization's security policies and procedures. Reporting to the Director of Information Technology, you will architect and implement IT security policies in this full-time role. Excellent written and verbal communication skills, exceptional organizational abilities, and expert-level proficiency in IT and cloud security are expected.

Essential duties and responsibilities:
- Provide leadership and technology vision to the IT Security team.
- Perform internal and external security audits.
- Document, implement, and monitor adherence to IT security standards; assess and improve security metrics.
- Enhance security tools and operations; monitor and manage IDS, vulnerability scanning, and assessments.
- Serve as the Data Privacy Officer (DPO) for the company.
- Create awareness within the company regarding security, privacy, and compliance requirements, and ensure security and privacy training for staff involved in data processing.
- Conduct security and privacy audits, and serve as the point of contact between the company and clients for privacy controls.
- Manage log aggregation and analysis, anti-virus software, and security and data breach incidents.
- Ensure customer satisfaction and be accountable for individual product/project success and quality.

Qualifications:
- CISSP, Security+, or equivalent certification.
- 10+ years of cyber security experience, 5+ years of IT management experience, 5+ years of AWS experience, and 3+ years of experience with Identity & Access Management tools.
- Extensive experience with Linux and Windows security administration; cloud and container security; network and application penetration testing; vulnerability scanners; IDS/IPS deployment and monitoring; SIEM tools; security automation; incident response and management; vulnerability management; and patch management.
- Ability to drive organizational efficiency through continual improvement programs, represent the organization in inspections and audits, drive action plans to closure, conduct deep-dive RCAs, ensure CAPAs are closed, and maintain a metrics-driven approach.

Beneficial additional qualifications: experience with monitoring tools such as Datadog; change management; configuration management; Infrastructure as Code tools; hardening operating systems and applications; endpoint security management; and working in GxP environments.

This role has no travel expectations and requires a dedicated, experienced professional who can lead security operations and teams, prioritize security and privacy, and drive continuous improvement initiatives to strengthen the organization's security posture.
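Log aggregation and analysis duties of the kind listed above often start with small scripts. As an illustrative sketch only (the syslog-style line format and the threshold are assumptions, not from the posting), counting failed SSH logins per source IP from an auth log:

```python
import re
from collections import Counter

# Hypothetical syslog-style format; real auth logs vary by distro and service.
FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)

def failed_logins_by_ip(log_lines, threshold=3):
    """Return source IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            counts[m.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

sample = [
    "sshd[101]: Failed password for root from 203.0.113.7 port 22 ssh2",
    "sshd[102]: Failed password for invalid user admin from 203.0.113.7 port 22 ssh2",
    "sshd[103]: Failed password for root from 203.0.113.7 port 22 ssh2",
    "sshd[104]: Accepted password for deploy from 198.51.100.2 port 22 ssh2",
]
print(failed_logins_by_ip(sample))  # {'203.0.113.7': 3}
```

In practice this kind of detection lives in a SIEM or IDS rule rather than an ad-hoc script, but the counting logic is the same.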

Posted 1 week ago

Apply


3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a GCP CloudOps Engineer, you will be responsible for deploying, integrating, and testing solutions using Infrastructure as Code (IaC) and DevSecOps techniques. With over 8 years of experience in infrastructure design and delivery, including 5 years of hands-on experience in Google Cloud technologies, you will play a key role in ensuring continuous, repeatable, secure, and automated deployment processes. Your responsibilities will also include: - Utilizing monitoring tools such as Datadog, New Relic, or Splunk for effective performance analysis and troubleshooting. - Implementing container orchestration services like Docker or Kubernetes, with a preference for GKE. - Collaborating with diverse teams across different time zones and cultures. - Maintaining comprehensive documentation, including principles, standards, practices, and project plans. - Building data warehouses using Databricks and IaC patterns with tools like Terraform, Jenkins, Spinnaker, CircleCI, etc. - Enhancing platform observability and optimizing monitoring and alerting tools for better performance. - Developing CI/CD frameworks to streamline application deployment processes. - Contributing to Cloud strategy discussions and implementing best practices for Cloud solutions. Your role will involve proactive collaboration, automation of long-term solutions, and adherence to incident, problem, and change management best practices. You will also be responsible for debugging applications, enhancing deployment architectures, and measuring cost and performance metrics of cloud services to drive informed decision-making. Preferred qualifications for this role include experience with Databricks, Multicloud environments (GCP, AWS, Azure), GitHub, and GitHub Actions. Strong communication skills, a proactive approach to problem-solving, and a deep understanding of Cloud technologies and tools are essential for success in this position. 
Key Skills: Splunk, Terraform, Google Cloud Platform, GitHub Workflows, AWS, Datadog, Python, Azure DevOps, Infrastructure as Code (IaC), Data Warehousing (Databricks), New Relic, CircleCI, Container Orchestration (Docker, Kubernetes, GKE), Spinnaker, DevSecOps, Jenkins.
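Incident, problem, and change management automation of the kind this role describes commonly relies on primitives like retries with exponential backoff for transient cloud-API failures. A minimal sketch, with illustrative delays and a simulated flaky call (none of these names come from the posting):

```python
import time

def retry_with_backoff(operation, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call `operation` until it succeeds, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # budget exhausted: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulate a flaky cloud API that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda s: None))  # ok
```

Production variants usually add jitter and retry only on error types known to be transient.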

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: AWS Cloud Monitoring and Ansible Specialist

Key Responsibilities:

AWS Cloud Monitoring & Performance Management
- Design, implement, and manage monitoring solutions for AWS cloud infrastructure using tools like Amazon CloudWatch, AWS X-Ray, or third-party monitoring tools (e.g., Datadog, New Relic, Nagios).
- Define and set up metrics, alerts, and dashboards for system health, application performance, and infrastructure reliability.
- Troubleshoot and resolve AWS infrastructure issues to minimize downtime and optimize system performance.

Automation Using Ansible
- Write, manage, and maintain Ansible playbooks for automating configuration management, deployments, patching, and other operational processes.
- Develop and test automation workflows to ensure reliable execution across different environments.
- Collaborate with DevOps and development teams to streamline CI/CD pipelines using Ansible.

Cloud Infrastructure Management
- Experience migrating from Chef to Ansible is an added advantage.
- Deploy and manage AWS services, including EC2, S3, RDS, Lambda, VPC, CloudFormation, etc.
- Optimize AWS resources for cost efficiency and performance.
- Stay updated on the latest AWS offerings and recommend relevant services to enhance infrastructure.

Incident Management and Problem Resolution
- Monitor system incidents and resolve them efficiently, ensuring adherence to SLAs.
- Perform root cause analysis and implement preventive measures to mitigate recurring issues.
- Maintain and improve incident response processes and documentation.

Documentation and Reporting
- Maintain accurate documentation of infrastructure configurations, monitoring systems, and automation scripts.
- Create reports to demonstrate cloud environment health, resource utilization, and compliance.
- Share knowledge and best practices with team members through documentation and training sessions.

Security and Compliance
- Implement security best practices for monitoring and automation scripts.
- Ensure systems are compliant with organizational and regulatory requirements.
- Collaborate with security teams to perform vulnerability assessments and patch management.

Required Skills and Qualifications
- Extensive experience in AWS services, architecture, and tools (e.g., CloudWatch, CloudFormation, IAM, EC2, S3, Lambda).
- Proficiency in writing and managing Ansible playbooks for automation and orchestration.
- Experience with monitoring tools and setting up dashboards (e.g., Datadog, Prometheus, Grafana).
- Strong understanding of networking concepts within AWS, including VPCs, subnets, routing, and security groups.
- Experience with Linux/Unix environments and scripting languages like Python, Bash, or PowerShell.
- Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Knowledge of cloud cost optimization strategies and resource tagging.

Soft Skills
- Strong problem-solving and troubleshooting abilities.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to multitask and prioritize in a fast-paced environment.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
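Setting up metrics and alerts as described is typically scripted rather than clicked through the console. As an illustrative sketch (the alarm name, metric choice, and thresholds are assumptions, not from the posting), building the parameter set for an EC2 high-CPU CloudWatch alarm that a boto3 client could then create with `put_metric_alarm`:

```python
def cpu_alarm_params(instance_id, threshold_pct=80.0, periods=3):
    """Build CloudWatch alarm parameters for high EC2 CPU utilization.

    The resulting dict is shaped for boto3's
    cloudwatch_client.put_metric_alarm(**params); building it as a pure
    function keeps the alarm definition testable without AWS access.
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # 5-minute datapoints
        "EvaluationPeriods": periods,  # alarm after N consecutive breaches
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0123456789abcdef0")
print(params["AlarmName"])  # high-cpu-i-0123456789abcdef0
```

The same pattern (pure parameter builder, thin API-calling wrapper) applies equally well when the alarm is defined in Ansible or CloudFormation instead.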

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: DevOps Engineer
Location: Sector 138, Noida
Experience: 4+ Years (3+ Years Relevant)
Job Type: Full-Time

Job Role:
We're looking for a proactive and skilled DevOps Engineer with the experience to build and maintain scalable DevOps infrastructure. You will be responsible for implementing CI/CD pipelines, automating deployments, managing cloud infrastructure, and ensuring secure, high-performance application delivery.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (B.E/B.Tech, MCA).

Technical Skills:
- CI/CD & DevOps Tools: Jenkins, GitLab CI/CD, Azure DevOps
- Infrastructure as Code (IaC): Terraform, Ansible, AWS CloudFormation
- Containers & Orchestration: Docker, Kubernetes (Minikube, EKS, AKS, GKE)
- Cloud Platforms: AWS (preferred), Azure, GCP (basic knowledge)
- Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog, CloudWatch
- Scripting & Version Control: Bash, Shell, Python, Git (GitHub, Bitbucket)

Preferred Skills:
- Exposure to DevSecOps practices, including security scanning tools like Trivy and SonarQube
- Experience in secrets management and secure deployment pipelines
- Strong collaboration and troubleshooting abilities across dev/test/prod environments

Roles & Responsibilities:
- Design and maintain CI/CD pipelines for streamlined build and deployment
- Automate infrastructure using IaC tools (Terraform, Ansible)
- Manage containerized deployments with Docker and Kubernetes
- Monitor and troubleshoot system performance using modern logging and alerting tools
- Ensure cloud infrastructure is secure, scalable, and cost-optimized
- Collaborate with development, QA, and product teams to improve release cycles
- Implement and enforce DevSecOps best practices

If you are passionate about automation, cloud infrastructure, and building secure, scalable DevOps pipelines in fast-paced environments, this role is for you. Apply now or share your resume at: sonam.singh@cipl.org.in

Thanks!
TA Team - CIPL
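Secure deployment pipelines of the kind this posting asks for usually fail fast when required configuration is missing rather than deploying half-configured. A minimal sketch of a pre-deploy configuration check (the variable names are hypothetical, not from the posting):

```python
import os

# Hypothetical names; a real pipeline would list its own secrets and settings.
REQUIRED_VARS = ["DB_PASSWORD", "API_TOKEN", "DEPLOY_ENV"]

def missing_config(env=os.environ, required=REQUIRED_VARS):
    """Return the required variables that are unset or empty, in order."""
    return [name for name in required if not env.get(name)]

# A pipeline step would abort the deploy if anything comes back missing.
fake_env = {"DB_PASSWORD": "s3cret", "DEPLOY_ENV": "staging"}
print(missing_config(fake_env))  # ['API_TOKEN']
```

In a Jenkins or GitLab CI job this would run as an early stage so the failure message names the missing variable instead of surfacing as a cryptic runtime error later.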

Posted 1 week ago

Apply

10.0 - 17.0 years

15 - 25 Lacs

Bengaluru

Hybrid

Preferred candidate profile:
Notice period: 0 to 30 days

Role & responsibilities:
- AWS SysOps certification is a must.
- Expertise in AWS services is mandatory, including: Lambda functions, CloudTrail, CloudWatch, SNS, SES, Route 53, S3, Amazon MQ, VPC, WAF, Elasticsearch, Redis, encrypted EBS, KMS, and IAM.
- Working knowledge of Docker: setup, container images, lifecycle, registration.
- Working knowledge of GitHub: setup, installation, creating a repository, committing changes, branching.
- Working knowledge of AWS developer tools: CI/CD, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, CloudWatch.
- Experience working with the Datadog monitoring tool.
- Experience with the incident management tool Opsgenie.
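CloudWatch, Lambda, and SNS are commonly wired together for alerting in stacks like the one above. As an illustrative sketch, a Lambda-style handler that turns a CloudWatch alarm notification delivered via SNS into a one-line summary (the event shape follows the standard SNS record structure, but the summary format itself is an assumption):

```python
import json

def handler(event, context=None):
    """Summarize a CloudWatch alarm notification delivered via SNS.

    SNS delivers the alarm's JSON document as a string in the record's
    Message field, so it has to be parsed a second time.
    """
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return "{}: {} ({})".format(
        message["AlarmName"],
        message["NewStateValue"],
        message["NewStateReason"],
    )

# Simulated SNS event carrying a CloudWatch alarm state change.
event = {
    "Records": [{
        "Sns": {
            "Message": json.dumps({
                "AlarmName": "high-cpu-web",
                "NewStateValue": "ALARM",
                "NewStateReason": "Threshold crossed: 3 datapoints above 80%",
            })
        }
    }]
}
print(handler(event))
```

A real handler would typically forward the summary to an incident tool such as Opsgenie via its API rather than just returning it.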

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Office Location: Office No. 403-405, Time Square, CG Road, Ellisbridge, Ahmedabad, Gujarat - 380006
Duration & Type of Employment: Full time
Work Style: Hybrid (3 days a week in office)
Relocation: Candidate must be willing to relocate to Ahmedabad, GJ, with reasonable notice; immediate or reasonable joiner preferred

Requirements:
- Backend: Node.js (TypeScript), Express.js, REST APIs, OpenAPI, JWT, OAuth2.0, OpenID Connect
- Infrastructure & DevOps: Docker, Docker Compose, CI/CD (must), ADFS, NGINX/Traefik, IaC tools
- Monitoring & Logging: Grafana, Prometheus, Datadog, Winston, Pino
- Documentation: OpenAPI (Swagger), Confluence

Responsibilities:
- Design and maintain robust, secure, and high-performance backend services using Node.js and TypeScript.
- Build and document RESTful APIs using OpenAPI; ensure validation, monitoring, and logging are built in.
- Lead the development and management of CI/CD pipelines, enabling automated builds, tests, and deployments.
- Package and deploy applications using Docker and Docker Compose, ensuring environment consistency and isolation.
- Collaborate with the infrastructure team to configure reverse proxies (NGINX/Traefik), domain routing, and SSL certificates.
- Design secure authentication flows using OAuth2/OpenID Connect with enterprise SSO, and manage role-based permissions through JWT decoding.
- Create and maintain operational documentation, deployment runbooks, and service diagrams.
- Monitor systems using Grafana/Datadog, optimize performance, and manage alerts and structured logs.
- Actively participate in performance tuning, production debugging, and incident resolution.
- Contribute to infrastructure evolution, identifying opportunities to automate, secure, and improve delivery workflows.

Qualifications:
- Bachelor's in Computer Science, Engineering, or equivalent experience.
- 2+ years of backend development experience with Node.js and related tools/frameworks.
- Solid understanding of REST principles, the HTTP protocol, and secure token-based auth (JWT, OAuth2).
- Experience deploying and managing services with Docker and GitLab CI/CD.
- Ability to configure, manage, and troubleshoot Linux-based environments.
- Familiarity with reverse proxies and custom routing using NGINX or Traefik.
- Experience with OpenAPI specifications to generate and consume documented endpoints.
- Knowledge of Infrastructure as Code.
- Understanding of DevOps principles, environment variables, and automated release strategies.
- Hands-on experience managing logs, alerts, and performance metrics.
- Comfortable with agile processes, cross-functional collaboration, and code reviews.

Bonus Skills:
- Experience with Active Directory group-based authorization.
- Familiarity with terminal-based or legacy enterprise platforms (e.g., MultiValue systems).
- Proficiency with audit logging systems, structured log formatting, and Sentry integration.
- Exposure to security best practices in authentication, authorization, and reverse proxy configurations.

Education & Experience:
- Preferred educational background: Bachelor of Technology in Computer Science
- Alternative acceptable background: BS/MS in Computer Science
- Minimum experience required: 3 years

Ideal Candidate Traits:
- Obsessed with automation, consistency, and secure environments.
- Independent problem-solver who takes ownership of both code and environment health.
- Detail-oriented and performance-conscious, not just focused on features.
- Collaborative communicator, able to bridge the backend, DevOps, and infrastructure teams.
- Proactively modernizes existing systems without compromising stability.

Benefits:
- Hybrid working culture
- Amazing perks & medical benefits
- 5-day work week
- Mentorship programs & certification courses
- Flexible work arrangements
- Free drinks, fridge, and snacks
- Competitive salary & recognition
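Managing role-based permissions "through JWT decoding", as this posting puts it, rests on the fact that a JWT's payload is just base64url-encoded JSON. A minimal sketch (in Python rather than the posting's Node.js; the claim names are hypothetical, and real services must verify the signature before trusting any claim):

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT's payload WITHOUT verifying its signature.

    Safe only for inspection and debugging; production code must verify
    the signature (e.g. with a JOSE library) before trusting claims.
    """
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore b64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy unsigned token just for demonstration.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "user-42", "roles": ["admin"]}).encode()
).rstrip(b"=").decode()
token = f"{header}.{payload}."

print(jwt_claims(token)["roles"])  # ['admin']
```

In the Node.js stack described above, the same inspection-only decode is what libraries like jsonwebtoken do internally before (or instead of) signature verification, which is why verification must be an explicit step.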

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
• Relevant experience as an AEM Ops engineer involved in the implementation and support of multiple AEM domain sites.
• Ability to find the root cause of issues reported in a complex environment.
• Installation, configuration, and maintenance of AEM infrastructure with load-balanced, replicated, and fail-over capabilities.
• AEM Ops experience with both cloud (managed services) and on-premise deployments.
• Experience with AEM administration, including user permissions, synchronization, Sling, auditing, reporting, and workflows.
• AEM DEV or DevOps certification is desirable.
• Exposure to enterprise search (e.g., Elastic, Apache Solr, Google) and experience with a CDN such as Akamai.
• Exposure to monitoring and response using tools like AppDynamics, Datadog, Dynatrace, SCOM, and Splunk.
• Experienced in troubleshooting and working closely with development teams.
• Dispatcher module configuration.
• Understand and participate in change control and change management processes.
• Able to work independently or with minimal guidance.

Mandatory skill sets: AEM development/operations
Preferred skill sets: Exposure to monitoring and response using tools like AppDynamics, Datadog, Dynatrace, SCOM, and Splunk
Years of experience required: 4-7 years
Education qualification: B.Tech/B.E.
Degrees/Field of Study required: Bachelor of Engineering
Required Skills: Adobe Experience Manager (AEM)
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity, and 46 more
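Monitoring-and-response work of this kind often begins with log triage. As an illustrative sketch (the line format mimics the style of an AEM error.log but is an assumption here, not taken from the posting), counting log entries per severity level:

```python
import re
from collections import Counter

# AEM-style error.log lines look roughly like:
# 01.01.2025 10:00:00.123 *ERROR* [qtp-1] com.example Replication failed
LEVEL = re.compile(r"\*(DEBUG|INFO|WARN|ERROR)\*")

def severity_counts(lines):
    """Count log entries per severity level."""
    counts = Counter()
    for line in lines:
        m = LEVEL.search(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

log = [
    "01.01.2025 10:00:00.123 *ERROR* [qtp-1] com.example Replication failed",
    "01.01.2025 10:00:01.456 *INFO* [qtp-2] com.example Workflow started",
    "01.01.2025 10:00:02.789 *ERROR* [qtp-1] com.example Replication failed",
]
print(severity_counts(log))  # {'ERROR': 2, 'INFO': 1}
```

Tools like Splunk or Datadog do this aggregation at scale, but a quick script of this shape is a common first step when triaging a single instance's logs.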

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Team:
Tarana Wireless India's QA team plays a crucial role in ensuring the performance, reliability, and quality of its cloud-based distributed system architecture products. With a strong focus on cloud automation, usage of sophisticated tools, and detailed validation, the team works across cutting-edge technologies. Their work empowers product stability and innovation through advanced test automation, close collaboration with engineering, and a culture of continuous improvement.

Job Summary:
We are looking for a passionate and skilled Cloud Performance QA Engineer to evaluate the scalability, responsiveness, and resilience of our large-scale distributed system, the Tarana Cloud Suite. This includes validating cloud microservices, databases, and real-time communication with intelligent radio devices. As a key member of the QA team, you will be responsible for performance, load, stress, and soak testing, along with conducting chaos testing and fault injection to ensure system robustness under real-world and failure conditions. You'll simulate production-like environments, analyze bottlenecks, and collaborate closely with development, DevOps, and SRE teams to proactively identify and address performance issues. This role requires a deep understanding of system internals, cloud infrastructure (AWS), and modern observability tools. Your work will directly influence the quality, reliability, and scalability of our next-gen wireless platform.

Key Responsibilities:
- Understand the Tarana Cloud Suite architecture: microservices, UI, data/control flows, databases, and the AWS-hosted runtime.
- Design and implement robust load, performance, scalability, and soak tests using Locust, JMeter, or similar tools.
- Set up and manage scalable test environments on AWS to mimic production loads.
- Build and maintain performance dashboards using Grafana, Prometheus, or other observability tools.
- Analyze performance test results and infrastructure metrics to identify bottlenecks and optimization opportunities.
- Integrate performance testing into CI/CD pipelines for automated baselining and regression detection.
- Collaborate with cross-functional teams to define SLAs, set performance benchmarks, and resolve performance-related issues.
- Conduct resilience and chaos testing using fault injection tools to validate system behavior under stress and failures.
- Debug and root-cause performance degradations using logs, APM tools, and resource profiling.
- Tune infrastructure parameters (e.g., autoscaling policies, thread pools, database connections) for improved efficiency.

Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-8 years of experience in performance testing/engineering.
- Hands-on expertise with Locust, JMeter, or equivalent load testing tools.
- Strong experience with AWS services such as EC2, ALB/NLB, CloudWatch, EKS/ECS, and S3.
- Familiarity with Grafana, Prometheus, and APM tools like Datadog, New Relic, or similar.
- Strong understanding of system metrics: CPU, memory, disk I/O, network throughput, etc.
- Proficiency in scripting and automation (Python preferred) for custom test scenarios and analysis.
- Experience with testing and profiling REST APIs, web services, and microservices-based architectures.
- Exposure to chaos engineering tools (e.g., Gremlin, Chaos Mesh, Litmus) or fault injection practices.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI) and integrating performance tests into build pipelines.

Nice to Have:
- Experience with Kubernetes-based environments and container orchestration.
- Knowledge of infrastructure-as-code tools (Terraform, CloudFormation).
- Background in network performance testing and traffic simulation.
- Experience in capacity planning and infrastructure cost optimization.
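Analyzing load-test results as described usually means reducing raw latency samples to percentiles such as p50 and p95. A small sketch using the nearest-rank method (the method choice and the sample numbers are illustrative; tools like Locust and JMeter report these percentiles for you):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample >= pct% of all samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

# Latencies in milliseconds from a hypothetical load-test run; note how a
# few slow outliers dominate the tail but barely move the median.
latencies = [12, 15, 11, 230, 14, 13, 16, 12, 18, 500]
print("p50:", percentile(latencies, 50))
print("p90:", percentile(latencies, 90))
```

Comparing tail percentiles (p95/p99) rather than averages across runs is what makes regression detection in a CI/CD pipeline meaningful, since averages hide exactly the outliers users feel.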
About Us Tarana’s mission is to accelerate the deployment of fast, affordable internet access around the world. Through a decade of R&D and more than $400M of investment, the Tarana team has created a unique next-generation fixed wireless access (ngFWA) technology instantiated in its first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid-2021 and has since been embraced by more than 250 service providers in 19 countries and 41 US states. Tarana is headquartered in Milpitas, California, with additional research and development in Pune, India. Visit our website for more on G1.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Site Reliability is about combining development and operations knowledge and skills to help make the organization better. If you have an SRE or development background and experience improving the reliability of your services and products by adding observability to them, Cvent SRE can benefit from your skill set. Ultimately, we are looking for passionate people who love learning, love technology, and always want to make things better.

As a Senior SRE on the SRE Observability team, you will be responsible for helping Cvent achieve our reliability goals. We are looking for someone with the drive, ownership, and ability to take on challenging problems, both technical and process-related, in a dynamic, collaborative, and highly distributed, multi-disciplinary team environment. You will use your background as a generalist to work closely with product development teams, Cloud Infrastructure, and other SRE teams to ensure effective observability and improve the reliability of our products, SDLC, and infrastructure. You must be able to see the big picture and work collaboratively with teams to solve hard multi-disciplinary problems. Technical expertise in topics such as cloud operations, the software development lifecycle, and observability tools will be of great help to you. We use SRE principles such as blameless postmortems and a focus on automation to ensure we're constantly improving our knowledge and maintaining a good quality of life. Overall, we're passionate about continuous improvement, learning, and participating in dynamic day-to-day work where success is rewarded with recognition and upward mobility.

What You Will Be Doing:
•Enlighten, enable, and empower a fast-growing set of multi-disciplinary teams, across multiple applications and locations.
•Tackle complex development, automation, and business process problems. Champion Cvent standards and best practices.
•Ensure the scalability, performance, and resilience of Cvent products and processes.
•Work with product development teams, Cloud Automation, and other SRE teams to ensure a holistic understanding of observability gaps and their effective and efficient identification and resolution.
•Identify recurring problems and anti-patterns in development, operational, and security processes, and help the respective teams build observability for them.
•Develop build, test, and deployment automation that seamlessly targets multiple on-premises and AWS regions.
•Give back by working on and contributing to open-source projects.

What You Need for this Position

Must-have skills:
•Excellent communication skills and a track record of working in distributed teams.
•A passion for and track record of making things better for your peers.
•Experience managing AWS services and operational knowledge of managing applications in AWS, ideally via automation.
•Fluency in at least one scripting language such as TypeScript, JavaScript, Python, Ruby, or Bash.
•Experience with SDLC methodologies (preferably Agile).
•Experience with observability (logging, metrics, tracing) and SLIs/SLOs.
•Experience working with APM, monitoring, and logging tools (Datadog, New Relic, Splunk).
•Good understanding of containerization concepts: Docker, ECS, EKS, Kubernetes.
•Self-motivation and the ability to work under minimal supervision.
•Troubleshooting and responding to incidents, and setting a standard for others to prevent such issues in the future.

Good-to-have skills:
•Experience with Infrastructure as Code (IaC) tools such as CloudFormation, CDK (preferred), and Terraform.
•Experience managing three-tier application stacks.
•Understanding of basic networking concepts.
•Experience with server configuration through Chef, Puppet, Ansible, or equivalent.
•Working experience with databases such as MongoDB, Couchbase, or Postgres.
•Using APM data to troubleshoot and find performance bottlenecks.
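SLIs and SLOs like those mentioned above are usually tracked as an error budget: the fraction of allowed failures a service may still spend before violating its target. A brief sketch (the 99.9% target and the request counts are made up for illustration):

```python
def error_budget(total_requests, failed_requests, slo_target=0.999):
    """Return (availability, budget_remaining) for a request-based SLO.

    budget_remaining is the fraction of the allowed errors still unspent;
    a negative value means the SLO has been violated.
    """
    availability = 1 - failed_requests / total_requests
    allowed_errors = (1 - slo_target) * total_requests  # e.g. 0.1% of traffic
    budget_remaining = 1 - failed_requests / allowed_errors
    return availability, budget_remaining

avail, budget = error_budget(total_requests=1_000_000, failed_requests=400)
print(f"availability={avail:.4%}, budget remaining={budget:.0%}")
```

Burn-rate alerts in tools like Datadog are built on the same arithmetic: they fire when the budget is being spent faster than the SLO window allows, rather than on any single failure.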

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Site Reliability Engineering Good to have skills : DevOps, AWS AdministrationMinimum 12 year(s) of experience is required Educational Qualification : 15 years full time educationJOB SUMMARY:The Principal Site Reliability Engineer drives Vertex to implement highly reliable, scalable, and performant system across the enterprise. This is realized by relentlessly measuring the environments and finding areas that need improvement. Improvements can range from education of engineering and operational resources, creating new capabilities, providing code enhancements, or implementing processes and tools. Success is measured by data and backed by continued customer satisfaction. The Senior SRE Engineer will use their infrastructure experiences combined with development engineering best practices to build solutions to improve our environment.ESSENTIAL JOB FUNCTIONS AND RESPONSIBILITIES:Responsible for designing, developing, implementing, and optimizing the efficiency of the environment including performance, reliability, and scalability of our services.Responsible for measuring the health and performance of the environments by implementing tooling such as Datadog to achieve the proper level of visibility of the environment.Enable teams to implement observability by developing and publishing standards and best practices, and providing guidance and implementation assistance to engineering teams.Responsible for designing and implementing coding assignments related to applications, systems reliability, monitoring, alerting, and analytics.Responsible for effectively managing Incidents to quickly and efficiently restore service to Vertex customers Accountable to bridge and educate Engineering and Operations teams to ensure SRE principles are implemented consistently across the enterprise. 
Take a proactive approach to anticipate and correct a wide range of production issues including outages, processing slowdowns or stoppages, errors, and failures. Recommend engineering and operational improvements including code enhancements, process improvements, or procedural amendments. Ability to triage, isolate, and resolve complex environmental issues in an expedient and open fashion. Provide technical leadership for a wide range of projects. Assist and mentor other engineering staff. KNOWLEDGE, SKILLS AND ABILITIES: Experience with multiple software development languages including C#, Go, Python, or Java. Experience with platform monitoring tools like Datadog, AWS CloudWatch, or similar. Experience with Software as a Service (SaaS) environments, including architecture and management. Experience designing and deploying AWS services with an Infrastructure as Code (IaC) mindset, using tools like Terraform. Experience with multiple hyperscalers, most notably AWS, Azure, and OCI. Experience in Agile development methodology. Excellent written/verbal communication, presentation, and project management skills. Ability to listen to and understand information and communicate the same. Ability to network with key contacts outside own area of expertise. Ability to lead the work of others in the context of a project. Ability to work without supervision, with wide latitude for independent decision making. EDUCATION, TRAINING: Undergraduate degree, preferably in Computer Science or a similar technical degree. 10+ years of experience in technology-related roles. 4+ years of experience in a DevOps culture or production SaaS environment. Other Qualifications: The Winning Way behaviors that all Vertex employees need in order to meet the expectations of each other, our customers, and our partners. Communicate with Clarity - Be clear, concise and actionable. Be relentlessly constructive. 
Seek and provide meaningful feedback. Act with Urgency - Adopt an agile mentality: frequent iterations, improved speed, resilience. 80/20 rule: better is the enemy of done. Don't spend hours when minutes are enough. Work with Purpose - Exhibit a "We Can" mindset. Results outweigh effort. Everyone understands how their role contributes. Set aside personal objectives for team results. Drive to Decision - Cut the swirl with defined deadlines and decision points. Be clear on individual accountability and decision authority. Be guided by a commitment to and accountability for customer outcomes. Own the Outcome - Define milestones, commitments and intended results. Assess your work in context; if you're unsure, ask. Demonstrate unwavering support for decisions. COMMENTS: The above statements are intended to describe the general nature and level of work being performed by individuals in this position. Other functions may be assigned, and management retains the right to add or change the duties at any time. Qualification: 15 years full time education
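A minimal sketch of the SLO error-budget arithmetic behind the posting's "success is measured by data" principle; the SLO target and request counts below are illustrative, not taken from the role.

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Return (allowed_failures, fraction_of_budget_consumed)."""
    allowed = total_requests * (1.0 - slo_target)
    consumed = failed_requests / allowed if allowed else float("inf")
    return allowed, consumed

allowed, consumed = error_budget(0.999, 1_000_000, 400)
# A 99.9% SLO over 1M requests allows ~1000 failures; 400 failures have
# consumed ~40% of the budget, leaving headroom for risky releases.
```

The consumed fraction is what SREs typically alert on: burning the budget faster than the SLO window elapses signals a reliability problem before the SLO itself is breached.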

Posted 1 week ago

Apply

10.0 - 16.0 years

30 - 45 Lacs

Bengaluru

Remote

- AWS & SaaS architecture - Monitoring tools (Datadog, New Relic, Prometheus, Grafana) - Incident management (PagerDuty, ServiceNow, Zendesk, Opsgenie) - Experience running a 24x7 Cloud Ops team - DevOps processes, CI/CD pipelines, IaC tools (Terraform, Ansible)
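As an illustrative sketch of the CI/CD pipeline work noted above, stage ordering can be derived from declared dependencies with the standard library's graphlib; the stage names here are invented.

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages that must finish before it runs.
stages = {
    "build": set(),              # no prerequisites
    "test": {"build"},
    "scan": {"build"},
    "deploy": {"test", "scan"},  # deploy only after test and scan pass
}

order = list(TopologicalSorter(stages).static_order())
# "build" always runs first and "deploy" always runs last; "test" and
# "scan" may run in either order (or in parallel in a real pipeline).
```

Real CI systems (GitLab CI, GitHub Actions) do this resolution internally from `needs:`/`dependencies:` declarations; the sketch just makes the underlying topological sort explicit.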

Posted 1 week ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory , our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI , our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn , X , YouTube , and Facebook . Inviting applications for the role of Lead Consultant – Cloud specialist role We are seeking a highly skilled and experienced Cloud Specialist – Lead Consultant to design, implement, and manage cloud infrastructure and solutions across public cloud platforms such as AWS, Azure, or GCP. The ideal candidate will have deep technical expertise in cloud architecture, automation, security, and best practices, along with the ability to lead client engagements, drive solution delivery, and mentor team members. Responsibilities Monitor cloud environments (Azure & AWS) using tools like CloudWatch, Azure Monitor, Datadog, Dynatrace, or similar. Lead the end-to-end design, implementation, and optimization of cloud infrastructure across AWS, Azure, or GCP platforms. 
Engage with stakeholders to assess current cloud infrastructure, define cloud adoption strategies, and architect secure, scalable, and cost-effective solutions. Drive infrastructure automation using Terraform, ARM, CloudFormation, or other IaC tools. Design and implement CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, or Azure DevOps. Lead migration of on-prem workloads to the cloud using lift-and-shift or cloud-native approaches. Enforce cloud governance, compliance, and security controls including IAM, encryption, network security, logging, and monitoring. Collaborate with DevOps, Security, and Application teams to support modern deployment and release practices. Provide technical leadership, mentorship, and guidance to junior cloud engineers and consultants. Troubleshoot complex issues, conduct root cause analysis, and implement preventative solutions. Develop and maintain cloud architecture documents, standards, runbooks, and knowledge base articles. Stay current with emerging cloud technologies, trends, and certifications. Qualifications we seek in you! Minimum Qualifications / Skills Bachelor's degree in information technology, Computer Science, or a related field. Hands-on experience with AWS, Azure, or Google Cloud Platform (certification preferred). Strong expertise in Terraform, CloudFormation, or equivalent IaC tools. Experience with containerization and orchestration: Docker, Kubernetes, EKS/AKS/GKE. Proficiency in scripting (Python, Bash, PowerShell) and cloud automation. Deep understanding of cloud security best practices, identity/access management, and compliance (CIS, NIST, etc.). Experience in setting up and managing CI/CD pipelines. Good understanding of networking concepts (VPC, subnets, VPN, load balancing, DNS). Familiarity with tools like Ansible, Packer, Vault, Prometheus/Grafana, CloudWatch, Azure Monitor, or Stackdriver. Strong client-facing, consulting, and communication skills. 
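A small sketch of the VPC/subnet arithmetic behind the networking concepts listed above, using only the standard library; the CIDR blocks are example values, not any client's layout.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # 65,536 addresses
subnets = list(vpc.subnets(new_prefix=20))  # carve into equal /20s

# A /16 yields sixteen /20 subnets of 4,096 addresses each:
# 10.0.0.0/20, 10.0.16.0/20, 10.0.32.0/20, ...
first = subnets[0]
```

This kind of check is useful before writing Terraform: it confirms that a planned AZ-per-subnet layout actually fits inside the VPC CIDR without overlap.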
Preferred Qualifications/ Skills AWS Certified Solutions Architect – Professional Microsoft Certified: Azure Solutions Architect Expert Google Cloud Professional Cloud Architect HashiCorp Certified: Terraform Associate Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career —Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. 
Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 22, 2025, 6:40:50 AM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time

Posted 1 week ago

Apply

9.0 years

3 - 7 Lacs

Hyderābād

On-site

Java Developer with AWS Hyderabad Full time opportunity Job Description: Experience in Java, J2EE, Spring Boot. Experience in Design, Kubernetes, AWS (Lambda, EKS, EC2) is needed. Experience in AWS cloud monitoring tools like Datadog, CloudWatch, Lambda is needed. Experience with XACML authorization policies. Experience in NoSQL and SQL databases such as Cassandra, Aurora, Oracle. Experience with Web Services SOA (SOAP as well as RESTful with JSON formats) and with messaging (Kafka). Hands-on with development and test automation tools/frameworks (e.g. BDD and Cucumber). Job Type: Full-time Pay: ₹2,000,000.00 - ₹2,500,000.00 per month Location Type: In-person Schedule: Day shift Experience: Java: 9 years (Preferred) Spring Boot: 5 years (Preferred) Microservices: 5 years (Preferred) AWS: 1 year (Preferred) Kubernetes: 1 year (Preferred) Work Location: In person
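A hypothetical sketch of serializing an event for a Kafka-style message bus with JSON payloads, as in the messaging stack this role lists; the event shape and field names are invented for illustration.

```python
import json

event = {"type": "order.created", "orderId": "A-1001", "amount": 250.0}

# Compact separators avoid whitespace in the wire payload.
payload = json.dumps(event, separators=(",", ":")).encode("utf-8")

# A producer would typically key the message by orderId so that all
# events for one order land on the same partition, preserving ordering.
restored = json.loads(payload)
```

The same round-trip shape applies on the Java side with Jackson; the key-per-aggregate convention is what gives per-entity ordering guarantees in Kafka.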

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

gStore is GreyOrange’s flagship SaaS platform that transforms physical retail operations through real-time, AI-driven inventory visibility and intelligent in-store task execution. It integrates advanced technologies like RFID, computer vision, and machine learning to deliver 98%+ inventory accuracy with precise spatial mapping. gStore empowers store associates with guided workflows for omnichannel fulfillment (BOPIS, ship-from-store, returns), intelligent task allocation, and real-time replenishment, significantly improving efficiency, reducing shrinkage, and driving in-store conversions. The platform is cloud-native, hardware-agnostic, and built to scale across thousands of stores globally with robust integrations and actionable analytics. Roles & Responsibilities Define and drive the overall architecture for scalable, secure, and high-performance distributed systems. Write and review code for critical modules and performance-sensitive components to set quality and architectural standards. Collaborate with engineering leads and product managers to align technology strategy with business goals. Evaluate and recommend tools, technologies, and processes to ensure the highest quality product platform. Own and evolve the system design, ensuring modularity, multi-tenancy, and future extensibility. Establish and govern best practices around service design, API development, security, observability, and performance. Review code, designs, and technical documentation, ensuring adherence to architecture and design principles. Lead design discussions and mentor senior and mid-level engineers to improve design thinking and engineering quality. Partner with DevOps to optimise CI/CD, containerization, and infrastructure-as-code. Stay abreast of industry trends and emerging technologies, assessing their relevance and value. 
Skills 12+ years of experience in backend development. Strong understanding of data structures and algorithms. Good knowledge of low-level and high-level system designs and best practices. Strong expertise in Java & Spring Boot, with a deep understanding of microservice architectures and design patterns. Good knowledge of databases (both SQL and NoSQL), including schema design, sharding, and performance tuning. Expertise in Kubernetes, Helm, and container orchestration for deploying and managing scalable applications. Advanced knowledge of Kafka for stream processing, event-driven architecture, and data integration. Proficiency in Redis for caching, session management, and pub-sub use cases. Solid understanding of API design (REST/gRPC), authentication (OAuth2/JWT), and security best practices. Strong grasp of system design fundamentals: scalability, reliability, consistency, and observability. Experience with monitoring and logging frameworks (e.g. Datadog, Prometheus, Grafana, ELK, or equivalent). Excellent problem-solving, communication, and cross-functional leadership skills. Prior experience in leading architecture for SaaS or high-scale multi-tenant platforms is highly desirable.
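An illustrative sketch (not GreyOrange's actual implementation) of the stable hash-based shard routing implied by the sharding skills above, e.g. partitioning a multi-tenant database by store key; the key names are invented.

```python
import hashlib

def shard_for(key, num_shards):
    """Deterministically map a key to a shard index in [0, num_shards)."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same key always routes to the same shard, across processes and
# restarts, unlike Python's built-in hash(), which is salted per process.
shard = shard_for("store-1042", 16)
```

Note the trade-off this simple modulo scheme carries: changing `num_shards` remaps most keys, which is why resharding-heavy systems use consistent hashing instead.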

Posted 1 week ago

Apply

2.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are looking for a skilled and motivated DevOps Engineer with 2+ years of experience who has practical exposure to managing cloud-based infrastructure, setting up environments, and maintaining scalable systems. The ideal candidate should demonstrate a consistent work history (maximum of two organizations), a problem-solving mindset, and hands-on experience with DevOps tools and Linux-based systems. You will be responsible for managing deployments across development, testing, and production environments using a variety of cloud services including AWS, Azure, Google Cloud, and platforms like GoDaddy VPS, Vercel, Render, and Netlify. Key Responsibilities: Cloud Infrastructure Management o Create and manage Virtual Private Servers (VPS) on GoDaddy, including setting up Linux environments, installing services, securing access, and configuring firewalls. o Set up and maintain environments on AWS, Azure, Google Cloud, and serverless platforms like Vercel, Render, Netlify. o Provision and manage development, testing, staging, and production environments. CI/CD Pipeline & Automation o Build and maintain CI/CD pipelines using tools like GitLab CI, GitHub Actions, Jenkins, etc. o Automate infrastructure provisioning using Terraform, Ansible, or similar IaC tools. Containerization & Orchestration o Work with Docker and preferably Kubernetes to manage containers and microservices. Monitoring & Logs o Set up monitoring using tools like Prometheus, Grafana, ELK Stack, Datadog, etc. o Troubleshoot system and application-level issues proactively. Linux Systems & Scripting o Comfortable with Ubuntu, the Linux command line, and sudo/root privileges for installation, updates, and system maintenance. o Script automation tasks using Bash, Shell, or Python. Collaboration & Support o Collaborate with developers, QA, and security teams to ensure smooth deployments. o Create documentation and support processes to enable streamlined operations. 
Required Skills & Qualifications: Minimum 2 years of hands-on DevOps experience. Must have experience creating and maintaining VPS instances on GoDaddy or similar hosting providers. Working knowledge of cloud platforms (AWS, Azure, Google Cloud) and deployment platforms like Vercel, Render, Netlify. Experience with Linux (Ubuntu preferred) and basic system administration commands. Strong understanding of CI/CD pipelines, Git workflows, and build automation. Familiar with Docker, version control (Git), and deployment best practices. Experience setting up environments across dev/test/prod. Work history limited to no more than two organizations. Excellent troubleshooting, documentation, and communication skills.
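A minimal sketch, assuming a hypothetical `check()` callable, of the retry-with-backoff pattern common in the Bash/Python deployment automation this role describes.

```python
import time

def wait_until_healthy(check, retries=5, base_delay=1.0):
    """Call check() until it returns True, doubling the delay each try."""
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Usage (probe() is a placeholder for an HTTP health check):
# wait_until_healthy(lambda: probe("https://example.com/health"))
```

Exponential backoff keeps post-deploy health polling from hammering a service that is still starting up, while still failing fast enough to roll back within a pipeline timeout.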

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role Grade Level (for internal use): 10 The Team: The TechOps team is responsible for cloud infrastructure provisioning and maintenance, in addition to providing high-quality technical support across a wide suite of products within the PVR business segment. The TechOps team works closely with a highly competent Client Services team and the core project teams to resolve client issues and improve the platform. Our work helps ensure that all products provide a high-quality service and maintain client satisfaction. The team is responsible for owning and maintaining our cloud-hosted apps. The Impact: This is an extremely critical role that helps drive a positive client experience by creating and maintaining high availability of business-critical services/applications. What’s in it for you: The role provides the successful candidate the opportunity to: Interact and engage with senior technology and operations users Work on the latest technology like AWS, Terraform, Datadog, Splunk, Grafana etc. Work in an environment which allows for complete ownership and scalability What We’re Looking For Basic Required Qualifications: 7+ years of total experience, with at least 4 years in infrastructure provisioning and maintenance using IaC in AWS. Build (and support) AWS infrastructure as code to support our hosted offering. Continuously improve infrastructure components, cloud security, and reliability of services. Provide operational support for cloud infrastructure, including incident response and maintenance. The candidate needs to be an experienced technical resource (Java, Python, Oracle, PL/SQL, Unix) with a strong understanding of ITIL standards such as incident and problem management. 
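As a hedged sketch of the ITIL-style incident management referenced above, priority is commonly derived from an impact/urgency matrix; the exact mapping below is an invented example, since it varies by organization.

```python
# (impact, urgency) -> priority, with 1 = high and 3 = low on both axes.
PRIORITY = {
    (1, 1): "P1", (1, 2): "P2", (2, 1): "P2",
    (1, 3): "P3", (2, 2): "P3", (3, 1): "P3",
    (2, 3): "P4", (3, 2): "P4", (3, 3): "P4",
}

def priority(impact, urgency):
    """Look up incident priority from the matrix above."""
    return PRIORITY[(impact, urgency)]
```

Tools like ServiceNow implement the same lookup with configurable matrices; encoding it explicitly keeps triage consistent across an on-call rotation.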
Ability to understand complex release dependencies and manage them automatically by writing relevant automations. Drive and take responsibility for support and monitoring tools. Should have exposure to hands-on fault diagnosis, resolution, knowledge sharing, and delivery in a high-pressure, client-focused environment. Extensive experience of working on mission-critical systems. Drive RCA for repetitive incidents and provide solutions. Drive excellent levels of service to the business, with effective management and technology strategy development and ownership through defined process. Good knowledge of SDLC, agile methodology, CI/CD, and deployment tools like GitLab, GitHub, ADO. Knowledge of networks, databases, storage, management systems, services frameworks, and cloud technologies. Additional Preferred Qualifications Keen problem solver with an analytical nature and excellent problem-solving skillset. Able to work flexible hours, including some weekends and possibly public holidays, to meet service level agreements. Excellent communication skills, both written and verbal, with the ability to present complex technical issues/concepts to non-tech stakeholders. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH103.1 - Middle Management Tier I (EEO Job Group) Job ID: 316334 Posted On: 2025-07-09 Location: Noida, Uttar Pradesh, India

Posted 1 week ago

Apply

7.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

About The Role Grade Level (for internal use): 10 The Team: The TechOps team is responsible for cloud infrastructure provisioning and maintenance, in addition to providing high-quality technical support across a wide suite of products within the PVR business segment. The TechOps team works closely with a highly competent Client Services team and the core project teams to resolve client issues and improve the platform. Our work helps ensure that all products provide a high-quality service and maintain client satisfaction. The team is responsible for owning and maintaining our cloud-hosted apps. The Impact: This is an extremely critical role that helps drive a positive client experience by creating and maintaining high availability of business-critical services/applications. What’s in it for you: The role provides the successful candidate the opportunity to: Interact and engage with senior technology and operations users Work on the latest technology like AWS, Terraform, Datadog, Splunk, Grafana etc. Work in an environment which allows for complete ownership and scalability What We’re Looking For Basic Required Qualifications: 7+ years of total experience, with at least 4 years in infrastructure provisioning and maintenance using IaC in AWS. Build (and support) AWS infrastructure as code to support our hosted offering. Continuously improve infrastructure components, cloud security, and reliability of services. Provide operational support for cloud infrastructure, including incident response and maintenance. The candidate needs to be an experienced technical resource (Java, Python, Oracle, PL/SQL, Unix) with a strong understanding of ITIL standards such as incident and problem management. 
Ability to understand complex release dependencies and manage them automatically by writing relevant automations. Drive and take responsibility for support and monitoring tools. Should have exposure to hands-on fault diagnosis, resolution, knowledge sharing, and delivery in a high-pressure, client-focused environment. Extensive experience of working on mission-critical systems. Drive RCA for repetitive incidents and provide solutions. Drive excellent levels of service to the business, with effective management and technology strategy development and ownership through defined process. Good knowledge of SDLC, agile methodology, CI/CD, and deployment tools like GitLab, GitHub, ADO. Knowledge of networks, databases, storage, management systems, services frameworks, and cloud technologies. Additional Preferred Qualifications Keen problem solver with an analytical nature and excellent problem-solving skillset. Able to work flexible hours, including some weekends and possibly public holidays, to meet service level agreements. Excellent communication skills, both written and verbal, with the ability to present complex technical issues/concepts to non-tech stakeholders. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH103.1 - Middle Management Tier I (EEO Job Group) Job ID: 316334 Posted On: 2025-07-09 Location: Noida, Uttar Pradesh, India

Posted 1 week ago

Apply

12.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. Together with analytics team leaders, you will support our business with excellent data models to uncover trends that can drive long-term business results. How You Will Contribute You will: Work in close partnership with the business leadership team to execute the analytics agenda Identify and incubate best-in-class external partners to drive delivery on strategic projects Develop custom models/algorithms to uncover signals/patterns and trends to drive long-term business performance Execute the business analytics program agenda using a methodical approach that conveys to stakeholders what business analytics will deliver What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: Using data analysis to make recommendations to senior leaders Technical experience in roles in best-in-class analytics practices Experience deploying new analytical approaches in a complex and highly matrixed organization Savvy in using analytics techniques to create business impact The Data COE Software Engineering Capability Tech Lead will be part of the Data Engineering and Ingestion team and will be responsible for defining and implementing software engineering best practices, frameworks, and tools that support scalable data ingestion and engineering processes. This includes building robust backend services and intuitive front-end interfaces to enable self-service, observability, and governance in data pipelines across the enterprise. Key Responsibilities: Lead the development of reusable software components, libraries, and frameworks for data ingestion, transformation, and orchestration. Design and implement intuitive user interfaces (dashboards, developer portals, workflow managers) using React.js and modern frontend technologies.
Develop backend APIs and services to support data engineering tools and platforms. Define and enforce software engineering standards and practices across the Data COE for developing and maintaining data products. Collaborate closely with data engineers, platform engineers, and other COE leads to gather requirements and build fit-for-purpose engineering tools. Integrate observability and monitoring features into data pipeline tooling. Lead evaluations and implementations of tools to support continuous integration, testing, deployment, and performance monitoring. Mentor and support engineering teams in using the frameworks and tools developed. Qualifications: Bachelor’s or master’s degree in computer science, Engineering, or related discipline. 12+ years of full-stack software engineering experience, with at least 3 years in data engineering, platform, or infrastructure roles. Strong expertise in front-end development with React.js and component-based architecture. Backend development experience in Python, with exposure to microservices architecture, FastAPI, and RESTful APIs. Experience working with data engineering tools such as Apache Airflow, Kafka, Spark, Delta Lake, and DBT. Familiarity with GCP cloud platforms, containerization (Docker, Kubernetes), and DevOps practices. Strong understanding of CI/CD pipelines, testing frameworks, and software observability. Ability to work cross-functionally and influence without direct authority. Preferred Skills: Proven experience with building internal developer platforms or self-service portals. Familiarity with data catalogue, metadata, and lineage tools (e.g., Collibra). Familiarity with data quality rules and tools like Ataccama, and data observability tools like Datadog. Understanding of data governance and data mesh concepts. Agile delivery mindset with strong emphasis on automation and reusability.
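As a rough sketch of the "reusable components for data ingestion and transformation" idea mentioned above (all class and field names here are invented for illustration; this is not Mondelēz's actual framework):

```python
from abc import ABC, abstractmethod
from typing import Iterable

class Step(ABC):
    """One reusable stage of a data pipeline."""
    @abstractmethod
    def run(self, records: Iterable[dict]) -> Iterable[dict]: ...

class DropNulls(Step):
    """Filter out records missing a required field."""
    def __init__(self, field: str):
        self.field = field
    def run(self, records):
        return (r for r in records if r.get(self.field) is not None)

class RenameField(Step):
    """Standardize a field name without mutating the input records."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new
    def run(self, records):
        for r in records:
            r = dict(r)
            r[self.new] = r.pop(self.old)
            yield r

class Pipeline:
    """Compose steps lazily; materialize only at the end."""
    def __init__(self, *steps: Step):
        self.steps = steps
    def run(self, records):
        for step in self.steps:
            records = step.run(records)
        return list(records)

raw = [{"qty": 3}, {"qty": None}, {"qty": 7}]
clean = Pipeline(DropNulls("qty"), RenameField("qty", "quantity")).run(raw)
print(clean)  # [{'quantity': 3}, {'quantity': 7}]
```

Because each stage implements the same small interface, teams can share a library of steps while assembling pipelines that fit their own sources and sinks.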
Tools and Technologies Frontend Development: React.js Backend Development: Python, FastAPI, RESTful APIs, Microservices Architecture. Cloud Platform: GCP, AWS. Data warehousing tech: BigQuery. Containerization: Docker, Kubernetes. CI/CD & DevOps: CI/CD pipelines, Testing Frameworks, Software Observability. Data Governance: Data catalogue tools (e.g., Collibra), Metadata Management, Lineage Tools (Collibra), Data Quality tools (Ataccama), Data Observability tools (Datadog), Data Mesh concepts. Optional Skills Data Engineering Tools: DBT, Apache Airflow, Databricks. Familiarity with SSO tools like PingID Why Join Us? Play a strategic role in developing the engineering backbone for a next-generation enterprise Data COE. Work with cutting-edge data and software technologies in a highly collaborative and innovative environment. Drive meaningful change and enable data-driven decision-making across the business. Within-country relocation support is available, and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world.
Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type Regular Analytics & Modelling Analytics & Data Science

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Position: Team Lead-SOC, Noida Department: Information Technology | Role: Full-time | Experience: 7 to 12 Years | Number of Positions: 1 | Location: Noida Skillset: SOC Lead, Team Lead, Threat monitoring, Cyber Security, Forensics Services, Audit Trails, SIEM, ITSM Tools, Excellent English communication skills Job Description: We are seeking a SOC Lead to support threat monitoring, detection, event analysis, incident response/reporting, brand monitoring, forensics and threat hunting activities for our SOC, which is a 24/7 environment. The individual must be able to rapidly respond to security incidents and should have at least 7 years of relevant experience in cyber security incident response. Should have a deep understanding of, and some hands-on experience with, enterprise IT infra components such as advanced firewalls, IPS/IDS/WIPS/HIPS, routers/switches, TACACS, VPN, proxy, AV/EDR, DNS, DHCP, multi-factor authentication, virtualization, email systems/security, web proxy, DLP etc., along with cloud environments like AWS (must) and Azure. Responsibilities: • Should be able to manage a SOC L1/L2 team • Providing incident response/investigation and remediation support for escalated security alerts/incidents • Work with various stakeholders for communicating and remediating cyber incidents • Use emerging threat intelligence (IOCs, IOAs, etc.) to identify affected systems and the scope of the attack, and perform threat hunting across end users’ systems and AWS infrastructure • Provides support for complex computer/network exploitation and defense techniques, including deterring, identifying and investigating computer, application and network intrusions • Provides technical support for forensics services, including evidence capture, computer forensic analysis and data recovery, in support of computer crime investigation.
• Should be able to safeguard and maintain custody of audit trails in case of any security incident • Researches and maintains proficiency in open and closed source computer exploitation tools, attack techniques, procedures and trends. • Performs research into emerging threat sources and develops threat profiles. Keeps updated on the latest cyber security threats. • Demonstrates strong evidence of analytical ability and attention to detail. Has a broad understanding of all stages of incident response. • Performing comprehensive computer monitoring, identifying vulnerabilities, target mapping and profiling. • Has a sound understanding of SIEM (Splunk, Datadog, ArcSight etc.), PIM/PAM, EDR, O365 security suite and other threat detection platforms and incident response tools. • Should have knowledge of integrating security solutions with the SIEM tool and creating use cases as per best practices and customized requirements • Has knowledge of working with ITSM tools such as JIRA, ServiceNow etc. • Has a logical, disciplined and analytical approach to problem solving • Has knowledge of the current threat landscape, such as APTs • Has basic knowledge of Data Loss Prevention monitoring • Has basic knowledge of audit requirements (SOC 2, HIPAA, ISO 27001, etc.) • Should be flexible to work in a 24x7 environment Preferred qualifications: Security Certifications Preferred (but not limited to): CISSP, CHFI, CEH Additional Information: • This is a 5-days-work-from-office role. (No Hybrid/Remote options available) • There are 2-3 rounds in the interview process. • Final round will be F2F only (strictly) Required Qualification: Bachelor of Engineering - Bachelor of Technology (B.E./B.Tech.) - IT/CS/E&CE/MCA With a Top Pharmacovigilance IT Products MNC
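As a minimal illustration of the IOC-based threat hunting this role describes (the indicator IPs and log lines below are invented for the example; real hunts would pull indicators from a threat-intel feed and query the SIEM):

```python
import re

# Hypothetical known-bad IP indicators (IOCs) for illustration only.
IOC_IPS = {"203.0.113.7", "198.51.100.23"}

# Invented firewall-style log lines standing in for SIEM search results.
logs = [
    "2025-07-09T10:12:01Z ACCEPT src=10.0.4.12 dst=203.0.113.7 dport=443",
    "2025-07-09T10:12:05Z ACCEPT src=10.0.4.19 dst=93.184.216.34 dport=443",
    "2025-07-09T10:13:44Z DENY   src=198.51.100.23 dst=10.0.4.12 dport=22",
]

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def ioc_hits(lines, iocs):
    """Return (line, matched_ips) pairs where any IP in the line is a known IOC."""
    hits = []
    for line in lines:
        matched = sorted(set(IP_RE.findall(line)) & iocs)
        if matched:
            hits.append((line, matched))
    return hits

hits = ioc_hits(logs, IOC_IPS)
for line, ips in hits:
    print("ALERT:", ips, "in:", line)
```

In practice the same sweep would be expressed as a SIEM query with the IOC set loaded as a lookup table, so it runs continuously rather than over a static list.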

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Lalitpur, Uttar Pradesh, India

On-site

Who We Are Alaya is a place where dreams take shape and grow. Originally founded as Home Loan Experts in 2007, the company expanded its reach by establishing HLE Nepal in 2012, which later rebranded to Alaya. This transformation signifies Alaya's unwavering commitment to supporting not only its customers and clients but also its community and colleagues. Alaya goes beyond being a mortgage industry leader, offering a boundless place of limitless potential where individuals can dedicate their hearts, flourish, and shine. Alaya redefines the power of place, becoming “Your Place” to dedicate your heart, to flourish, and to shine. Alaya and Home Loan Experts offer more than mortgage expertise; it's a place where you can thrive. Here, we value Passion, Care, and Oneness—caring deeply for our team and embracing unity. Together, the Alaya team works as one family, united in their mission to help people realise their dreams. To know more, visit our websites: Alaya and Home Loan Experts About You As a DevOps Engineer at Alaya, you’ll be a critical member of our tech team, helping to design, build, and maintain secure, scalable, and observable AWS cloud infrastructure. You’ll play a hands-on role in improving and automating our CI/CD pipelines, collaborating closely with developers to ensure production readiness and enhancing our internal tooling and workflows. What are your deliverables? Design, develop, and optimise cloud-native AWS infrastructure with cost-efficiency and security in mind. Build and maintain Infrastructure as Code using Terraform and CloudFormation. Automate CI/CD pipelines to support seamless integration, delivery, and testing (e.g., GoCD, Jenkins, GitHub Actions). Proactively monitor infrastructure and application performance using tools like Datadog and CloudWatch. Implement and maintain robust logging, alerting, and monitoring systems to support high availability. 
Collaborate with developers to ensure production readiness and improve the developer experience. Conduct root cause analyses and post-mortems, implementing preventive improvements. Continuously assess infrastructure scalability and drive cost-optimisation strategies. Stay up-to-date with the latest industry trends and propose relevant tools and processes. Requirements Essential: Minimum 2-4 years of DevOps experience, including hands-on experience with AWS cloud infrastructure. Strong skills in Linux system administration, Build & Release Engineering, and automation scripting (Python or Bash). Proven expertise in AWS services such as EC2, RDS, S3, ELB, VPC, IAM, CloudWatch, KMS. Experience with serverless frameworks and AWS Lambda. Proficiency in tools like Terraform (preferred), CloudFormation, GoCD, CodePipeline, CodeBuild, Jenkins, GitHub Actions. Hands-on experience with Docker, container orchestration, and infrastructure monitoring. Familiarity with web/network fundamentals (DNS, HTTPS, NGINX, Apache). Solid version control practices using Git. Why should you join us? Alaya is not just another job opportunity – it's an immersive experience that empowers you to unleash your potential and make a meaningful impact in the home loan industry. We're passionate about helping you bring your dreams to life. Here, you'll find a vibrant team of individuals who celebrate your unique talents and foster an environment where you can be your authentic self. It's a place where genuine connections are formed and lifelong friendships are forged. If you're looking for a place that embraces your authenticity and encourages you to soar to new heights, Alaya is the perfect fit. We celebrate diversity, foster creativity, and provide a platform for you to make a meaningful impact. Oh, and did we mention? We only work 5 days a week, Mon-Fri.
Besides the list of benefits that the Labour Law mandates, we also offer; Attractive Salary Package Exclusive leaves and bonuses Flexible working hours Festival, profit, and book reading bonus Team building activities and social events Accident and medical insurance coverage International working environment exposure Continuous learning and development opportunities Customer Referral - Refer your friends and relatives in Australia to use our services and we’ll reward you! Disclaimer: By submitting your job application, you are consenting to the retention of your personal data in our database for recruitment purposes. Your data will be held securely and will only be accessible to authorized personnel. What’s the next step? If you’re passionate about AWS, automation, and system reliability, and enjoy working in a fast-paced, collaborative environment where your contributions truly matter, this role is for you.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

DevOps Engineer Experience: 6-10 Years Salary: Confidential Preferred Notice Period: 45 Days Shift: 9:00 AM to 6:00 PM IST Opportunity Type: Hybrid (Bengaluru) Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients.) Must-have skills required: CI/CD OR Docker OR AWS OR Jenkins AND Kubernetes UptimeAI (One of Uplers' Clients) is looking for: a DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview About Us We are a fast-growing, AI-first SaaS startup backed by top-tier investors and operating across India and the US. Our platform helps enterprises optimize critical business functions using cutting-edge AI and automation. As we scale, we’re looking for a hands-on DevOps Engineer who thrives in startup environments and can take ownership of cloud infrastructure, deployment, and CI/CD workflows.
Key Responsibilities Design, implement, and manage cloud infrastructure across AWS and Azure for both internal platforms and customer-specific deployments Configure and maintain VPCs, VPNs, and peering to enable secure, scalable, and isolated environments Build and automate CI/CD pipelines for application and ML workloads Manage multi-tenant vs single-tenant deployments based on customer requirements Implement monitoring, alerting, logging, and disaster recovery strategies Work closely with engineering to ensure seamless Dev→Prod flows and secure release management Set up and manage infrastructure as code (e.g., Terraform, Pulumi, Bicep, CloudFormation) Optimize costs, performance, and availability for both internal and customer-facing cloud workloads Enforce security best practices, access control, and compliance across infrastructure Requirements 6+ years of experience as a DevOps/SRE/Cloud Engineer in high-growth SaaS or product startups AWS Certified (at least Solutions Architect - Associate) and Azure Certified (e.g., AZ-104 or higher) Strong experience with AWS and Azure networking, including: VPC, VPNs, Subnets, Route Tables, Security Groups, NAT Gateways Site-to-site VPN setups for enterprise customers Proven experience deploying applications to customer-controlled cloud environments (BYOC) and company-controlled SaaS environments Expertise with tools like: CI/CD: GitHub Actions, GitLab CI, Azure Pipelines; IaC: Terraform, Bicep, or Pulumi; Containers: Docker, Kubernetes (EKS/AKS preferred) Familiarity with Secrets Management, IAM, Role-based Access Control, and SSO/SAML integration Strong scripting skills in Bash, Python, or PowerShell Comfortable working in a fast-paced, ambiguous startup environment Good to Have Experience with AI/ML pipeline deployment or GPU workloads Exposure to SOC2, ISO27001, or GDPR compliance in a cloud environment Familiarity with tools like Prometheus, Grafana, Datadog, ELK, or Azure Monitor Why Join Us Be part of a foundational 
team shaping infrastructure strategy from the ground up Solve real-world scaling and deployment problems for enterprise AI workloads Competitive salary + equity Fast-track learning with ownership, autonomy, and purpose How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Our Client: UptimeAI Inc. is a dynamic and innovative tech company headquartered in Bangalore, India. UptimeAI uniquely combines Artificial Intelligence with Subject Matter Knowledge from 200+ years of cumulative Heavy Industry experience to explain interrelations across upstream/downstream equipment, adapt to changes, identify problems, and give prescriptive diagnoses like a human expert would. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

You will be responsible for:
• Proposing and implementing advanced cloud architectures, in continuous alignment with key cloud providers such as AWS and Google
• Building and setting up new development tools and infrastructure
• Supporting the continuous evolution of bolttech's infrastructure-as-code approach, ensuring advanced levels of automation throughout the lifecycle of all workloads
• Ensuring there are no single points of failure in the infrastructure
• Continuously improving efficiency when setting up new cloud infrastructure and maintaining existing instances, ensuring automation, resilience, and security
For you to be successful:
• You will be passionate about the infrastructure-as-code paradigm
• You will hate doing things more than once and feel the need to automate everything
• You will have nightmares about single points of failure in the infrastructure
• You will feel the need to be challenged, and you hate being blocked from delivering your full potential
• You will proactively explore new technologies and tools to improve process automation
• You will want to be involved in the decision-making process, and not just be a pulley in a big machine
• You will feel the need to be part of an international project spanning multiple continents
You will require the following qualifications and skills:
• Experience with Docker
• Experience with Linux systems management
• Proven experience in Kubernetes
• Experience with Git, GitHub, BitBucket or Stash
• Experience with CI and CD tools (Jenkins, Bamboo or similar)
• Solid experience with at least one programming/scripting language (Python, Ruby, Bash, Go, Java, Node.js, PHP)
• Experience with application and system monitoring tools (such as Datadog, Splunk, Dynatrace, Grafana, Prometheus, Elasticsearch, Kibana)
• Experience in the configuration, management and troubleshooting of web servers (Apache, NGINX), database systems (SQL and NoSQL), caching (Cassandra, Redis, Memcached or others) and queues (RabbitMQ, ZeroMQ or similar)
• Experience working as a DevOps or Cloud Engineer in an agile and fast-paced environment
• Experience in a continuous integration/deployment environment
• Experience with AWS
• Automation experience in the context of a micro-service/SOA architecture
The following skills are not mandatory but will be valued:
• Experience with ArgoCD, Argo Rollouts, Argo Workflows, Argo Events, FluxCD, Tekton
• Knowledge of Kyverno and OPA (Open Policy Agent)
• Experience with the implementation of a service mesh (for example, Istio, Linkerd, Kong Mesh)
• Knowledge of Consul
• Experience with Kubernetes-native "infrastructure provider" tools such as Crossplane, AWS's ACK (AWS Controllers for Kubernetes), Google's Config Connector, Azure's ASO (Azure Service Operator)
• Knowledge of one or more of the following: Helm, Kustomize, Grafana's Tanka, VMware's Carvel ytt
• Experience with backup tools like Velero for Kubernetes
• Experience with Vault

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Hi All, Looking for an ITSM Manager with Change Management experience Location - Chennai Qualifications Required: • 4+ years of experience in ITSM • Strong knowledge of change management principles • Experience with CI/CD platforms (e.g., Jenkins, Spinnaker, ArgoCD) • Proficiency with monitoring and observability tools (e.g., Datadog, Splunk, Prometheus) • Excellent stakeholder management and communication skills Preferred: • Background in high-availability or regulated industries (e.g., fintech) • Experience with automated risk scoring, canary analysis, or feature flag systems • SRE training is a plus We are seeking an ITSM Manager to lead and evolve our change management strategy, ensuring software and infrastructure changes are delivered safely, reliably, and with minimal risk to business operations. You will collaborate with engineering, DevOps, SRE, security, and compliance teams to drive process maturity, automation, and cultural adoption of safe change practices. Key Responsibilities • Change Governance o Own and continuously improve the change management framework across the organization. o Lead or participate in daily/weekly Change Review Board (CRB) meetings and ensure timely approvals. • Risk & Reliability Oversight o Assess the risk of planned changes and verify readiness of rollout, rollback, and validation plans. o Track key reliability metrics such as change failure rate, MTTR, and deployment lead time. • Incident Correlation & Analysis o Investigate change-related incidents and contribute to post-incident reviews. o Identify patterns and systemic issues in failed or high-risk changes. • Automation & Tooling o Partner with DevOps/SRE teams to integrate change validation, canary rollouts, and automated approvals into CI/CD pipelines. o Champion use of observability tools to monitor live changes and detect anomalies early. • Stakeholder Communication o Provide clear and actionable reporting to leadership on change success, risk trends, and improvement areas.
o Coordinate with product, engineering, and operations teams for major releases or changes during high-risk periods. • Compliance & Audit Support o Ensure adherence to regulatory or internal audit requirements (e.g., SOX, ISO, PCI-DSS). o Maintain documentation and audit trails for all changes. • Review the day’s scheduled changes (deployments, infrastructure updates, config changes). • Identify high-risk or customer-impacting changes. • Coordinate with change owners and SRE/DevOps teams. • Host or participate in a daily CRB meeting. • Evaluate risk, rollback plans, testing coverage, and change windows. • Approve or defer changes based on readiness and risk appetite. • Analyze failed change incidents from the last 24–48 hours. • Correlate incidents with recent changes using observability tools. • Identify improvement opportunities or recurring patterns. • Work with platform/DevOps teams to improve automated change validation, canary analysis, and rollout tooling. • Identify steps to reduce manual change overhead (e.g., templated CRs, automated risk scoring). • Assist with writing rollout plans, defining blast radius, or preparing for peak hour deployments. • Update leadership on change volume, failure rates, success rate of automated validations, etc. • Create or refine dashboards (e.g., Change Failure Rate, MTTR, lead time for changes). • Ensure compliance with internal controls or regulatory requirements (SOX, ISO 27001, etc.). • Periodic audits of bypassed change processes or emergency fixes.
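The reliability metrics this role tracks (change failure rate, MTTR) can be computed directly from change records. A simplified sketch with invented data; real definitions vary by organization, and a dashboard would typically pull this from the ITSM tool:

```python
from datetime import datetime, timedelta

# Hypothetical change records: (deployed_at, failed, restored_at or None).
changes = [
    (datetime(2025, 7, 1, 10), False, None),
    (datetime(2025, 7, 2, 9),  True,  datetime(2025, 7, 2, 11)),
    (datetime(2025, 7, 3, 14), False, None),
    (datetime(2025, 7, 4, 16), True,  datetime(2025, 7, 4, 17)),
]

failed = [c for c in changes if c[1]]

# Change failure rate: share of changes that caused a failure in production.
change_failure_rate = len(failed) / len(changes)

# MTTR: mean time from failed deployment to service restoration.
mttr = sum((restored - deployed for deployed, _, restored in failed),
           timedelta()) / len(failed)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
print(f"MTTR: {mttr}")  # 1:30:00
```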

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Walk-in date: 25th July '25 (Friday) Walk-in timings: 10:30 AM to 12 PM Walk-in venue: Cigniti Technologies (A Coforge Company), 7th floor, Vega Block, International Tech Park, Plot no 17, Software Units Layout, Madhapur, Hyderabad, India 500081 If interested, please share your details to mounika.tungala@coforge.com Total Exp: Rel Exp in JMeter: Monitoring tools: Current Location: CTC: Expected CTC: Notice period: Please do reach out to us before coming for the interview; a call letter is mandatory for the walk-in. Experience in JMeter, Locust, k6 performance tools with custom coding skills. Ability to analyze the Locust framework and performance requirements Create realistic performance test scenarios and workload profiles Experience in working on API and Web/Mobile App performance testing Experience in working on AWS services, AWS CloudWatch, Lambda and AWS S3 Strong knowledge of public cloud platforms (AWS) Analyze and interpret test results to identify and address performance issues. Generate detailed performance test reports, including findings, recommendations, and performance metrics. Identify root causes of performance issues and propose solutions. Strong interpersonal, oral, and written communication skills Stay current with the industry's best practices and emerging technologies in performance testing.
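Analyzing and interpreting test results, as this role requires, typically starts with latency percentiles and SLO breaches. A small stdlib-only sketch with invented sample data (the 200 ms threshold is a hypothetical SLO, not one from this posting):

```python
import statistics

# Hypothetical response times (ms) collected from a load-test run.
samples = [112, 98, 105, 230, 101, 97, 410, 120, 108, 95,
           102, 99, 500, 115, 110, 96, 103, 100, 107, 104]

# quantiles(n=100) yields the 99 percentile cut points p1..p99.
q = statistics.quantiles(samples, n=100)
p50, p90, p95 = q[49], q[89], q[94]

slo_ms = 200  # hypothetical latency SLO
breaches = [s for s in samples if s > slo_ms]

print(f"p50={p50:.0f}ms p90={p90:.0f}ms p95={p95:.0f}ms")
print(f"{len(breaches)} of {len(samples)} samples exceeded {slo_ms}ms")
```

Reporting percentiles rather than the mean matters here: the three slow outliers barely move the average but dominate the tail, which is exactly where user-visible performance problems live.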

Posted 1 week ago

Apply