
98 CloudWatch Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 3.0 years

1 - 3 Lacs

Kochi, Chennai, Coimbatore

Hybrid

Naukri logo

Job Description: Knowledge of IaaS compute, networking, and storage. Basic understanding of security or containerization. Assist in the management of AWS services including EC2, S3, EFS, Lambda, etc. Proactively monitor cloud infrastructure performance, alerts, and warnings, and troubleshoot and mitigate incidents. Incident investigation and resolution. Provide operational inputs to L3 SMEs for streamlining operations. Collaborate with SMEs to implement security measures and compliance protocols. Document incident resolutions, processes, and configuration changes in the cloud environment. Mandatory leaf skills for hiring: AWS Infrastructure Services stack. Upskilling tracks: Scripting (PowerShell, AWS CloudWatch); Terraform (Terraform Basics, Azure CloudWatch); Windows; Linux. Useful certifications (optional): AWS Cloud Practitioner (Foundational).
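
The monitoring duties above (watching performance alerts and mitigating incidents) typically come down to CloudWatch alarms on key metrics. Below is a minimal, hedged sketch using boto3; the alarm name, instance ID, threshold, and SNS topic ARN are illustrative assumptions, not values from the posting.

```python
# Minimal sketch: create a CloudWatch alarm that notifies an SNS topic
# when an EC2 instance's average CPU exceeds 80% for two 5-minute periods.
# Instance ID, topic ARN, and thresholds are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",          # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                # evaluate 5-minute windows
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:111122223333:ops-alerts"],
    AlarmDescription="Example alarm for proactive infrastructure monitoring",
)
```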

Posted 3 weeks ago

Apply

5.0 - 7.0 years

35 - 40 Lacs

Mumbai, Pune, Gurugram

Work from Office

Naukri logo

Must have 5+ years of experience. Implement and maintain Kubernetes clusters, ensuring high availability and scalability. Establish real-time monitoring with Grafana, Prometheus, and CloudWatch. Night shift. Location: Mumbai, Gurugram, Chennai, Indore, Remote, Bangalore, Delhi, Kolkata.
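
Real-time monitoring with Prometheus and Grafana usually starts with services exposing metrics that Prometheus can scrape. A minimal sketch using the prometheus_client library follows; the metric names and values are illustrative assumptions, not anything specified in the posting.

```python
# Minimal sketch of a custom Prometheus exporter using prometheus_client.
# Grafana dashboards (or CloudWatch, via an exporter) would sit on top of metrics like these.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work-queue depth")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        REQUESTS_TOTAL.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # placeholder for a real reading
        time.sleep(5)
```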

Posted 3 weeks ago

Apply

7.0 - 10.0 years

30 - 45 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

Company Overview: We are a global, empathy-led technology services company where software and people transformations go hand in hand. Product innovation and mature software engineering are part of our core DNA. Our mission is to help our customers accelerate their digital journeys through a global, diverse, and empathetic talent pool following outcome-driven agile execution. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. Responsibilities: Design and Development: Develop robust, scalable, and maintainable backend services using Python frameworks like Django, Flask, and FastAPI. Cloud Infrastructure: Work with GCP to deploy, manage, and optimize our cloud infrastructure. Software Architecture: Participate in defining and implementing software architecture best practices, including design patterns, coding standards, and testing methodologies. Database Management: Proficiently work with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune) to design and optimize data models and queries; experience with ORM tools. Automation: Design, develop, and maintain automation scripts (primarily in Python) for various tasks, including data updates and processing, scheduling cron jobs, integrating with communication platforms like Slack and Microsoft Teams for notifications and updates, and implementing business logic through automated scripts. Monitoring and Logging: Implement and manage monitoring and logging solutions using GCP tooling. Production Support: Participate in on-call rotations and provide support for production systems, troubleshooting issues and implementing fixes; proactively identify and address potential production issues. Team Leadership and Mentorship: Lead and mentor junior backend developers, providing technical guidance and code reviews, and support their professional growth. Required Skills and Experience: 7+ years of experience in backend development, with at least 2+ years in a leadership or senior role. Strong proficiency in Python and at least two of the following frameworks: Django, Flask, FastAPI, with good experience in Artificial Intelligence. Hands-on experience with GCP. Experience with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune). Strong experience with monitoring and logging tools, specifically on GCP. Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad. Work Timings: 2:30 PM - 11:30 PM (Monday-Friday). Experience: 7+ years.
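
The automation responsibilities above mention pushing notifications to Slack and Microsoft Teams from scripts. A minimal, hedged sketch for the Slack side is shown below using an incoming webhook; the webhook URL and message text are placeholders, not real endpoints from the posting.

```python
# Minimal sketch: post an automation status update to a Slack incoming webhook.
# The webhook URL and message are placeholders for illustration only.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(text: str) -> int:
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # Slack replies "ok" on success
        return resp.status

if __name__ == "__main__":
    notify_slack("Nightly data update finished successfully.")
```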

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri logo

Platforms: AWS PaaS, AWS DevOps Engineer. Programming: Java. Monitoring tools: ThousandEyes, AppDynamics, CloudWatch, Grafana, Prometheus. Java development (coding/scripting, 5-10 years) plus AWS PaaS (minimum 3 years); SRE experience is an advantage.

Posted 3 weeks ago

Apply

3 - 5 years

5 - 7 Lacs

Pune

Work from Office

Naukri logo

We are looking for a DevOps Engineer. How do you craft the future of Smart Buildings? We're looking for the makers of tomorrow, the hardworking individuals ready to help Siemens transform entire industries, cities and even countries. Get to know us from the inside and develop your skills on the job. You'll make a difference by: Designing, deploying, and managing AWS cloud infrastructure, including compute, storage, networking, and security services. Implementing and maintaining CI/CD pipelines using tools like GitLab CI, Jenkins, or similar technologies to automate build, test, and deployment processes. Collaborating with development teams to streamline development workflows and improve release cycles. Monitoring and troubleshooting infrastructure and application issues, ensuring high availability and performance. Implementing infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and configuration management. Maintaining version control systems and Git repositories for codebase management and collaboration. Implementing and enforcing security best practices and compliance standards in cloud environments. Continuously evaluating and embracing new technologies, tools, and best practices to improve efficiency and reliability. There are a lot of learning opportunities for our new team member; an openness to learn more about data analytics (including AI) offerings is part of your motivation. Your defining qualities: A university degree in Computer Science or a comparable education; we are flexible if a high quality of code is ensured. Proven experience (3-5 years) with common DevOps practices such as CI/CD pipelines (GitLab), containers and orchestration (Docker, ECS, EKS, Helm), and infrastructure as code (Terraform). Working knowledge of TypeScript, JavaScript, and Node.js. Good exposure to AWS cloud. Thriving in working independently, i.e., you can break down high-level objectives into concrete key results and implement them. Able to work with AWS from day one; familiarity with AWS services beyond EC2 (e.g., Fargate, RDS, IAM, Lambda) is something we expect from applicants. Good knowledge of configuring logging and monitoring infrastructure with ELK, Prometheus, CloudWatch, Grafana. When it comes to methodologies, knowledge of agile software development processes would be highly valued. Having the right demeanor, allowing you to navigate a complex global organization and get things done. We need a person with an absolute willingness to support the team and a proactive, stress-resistant personality. Business fluency in English.
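
The infrastructure-as-code duty above names Terraform or CloudFormation. Terraform uses HCL, so as a language-consistent illustration only, here is a hedged sketch that drives CloudFormation from Python via boto3; the stack name and the single-bucket template are invented for the example.

```python
# Minimal sketch: provision a resource as code through CloudFormation from Python.
# Stack name and template are illustrative; real stacks would be far richer.
import json
import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {"Type": "AWS::S3::Bucket"},  # name auto-generated by CloudFormation
    },
}

cfn = boto3.client("cloudformation", region_name="eu-central-1")
cfn.create_stack(
    StackName="example-artifact-bucket",       # hypothetical stack name
    TemplateBody=json.dumps(TEMPLATE),
)
# Block until the stack reaches CREATE_COMPLETE (or raise on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="example-artifact-bucket")
```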

Posted 1 month ago

Apply

5 - 10 years

5 - 15 Lacs

Bengaluru

Work from Office

Naukri logo

Looking for an MLOps Engineer to build and scale ML pipelines on AWS using SageMaker, EKS, Docker, and Terraform. Drive CI/CD, model tracking, automation, and GPU-based training. Required candidate profile: SageMaker, EKS, EC2, IAM, CloudWatch, ECR, Docker, Kubernetes, Helm, Jenkins, Terraform, Kubeflow, MLflow, wandb, Volcano Scheduler.
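
The profile lists MLflow for model tracking. A minimal run-logging sketch is shown below; the experiment name, parameters, metrics, and artifact file are illustrative placeholders, not part of the posting.

```python
# Minimal sketch: track a training run with MLflow (named in the candidate profile).
# Experiment name, params, metrics, and artifact path are illustrative placeholders.
import mlflow

mlflow.set_experiment("example-classifier")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_param("epochs", 10)
    # ... model training would happen here ...
    mlflow.log_metric("val_accuracy", 0.93)
    mlflow.log_artifact("model_card.md")  # assumes this local file exists
```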

Posted 1 month ago

Apply

5 - 10 years

12 - 19 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Naukri logo

Job Location: Delhi NCR/Bangalore/Hyderabad/Pune/Mumbai/Chennai. Shift timings: 1:30 PM - 11:30 PM. Work mode: Hybrid. Experience: 5-8 years. We are looking for AWS experts with the following experience. Experience Requirements: candidates with 5+ years of experience in AWS and cloud services. Python knowledge is required for the role, specifically scripting to handle infrastructure and manipulate AWS services. We are looking for candidates with experience in Python scripting for AWS services, not just web application development. Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions. Responsibilities: Design and implement cloud security solutions using AWS services, ensuring compliance with industry standards. Design and implement microservice-based solutions using serverless and containerization services. Develop and maintain automation scripts and tools using Python to streamline security processes and enhance operational efficiency. Collaborate with DevOps, development, and security teams to integrate security best practices into the software development lifecycle (SDLC). Monitor cloud environments for security incidents and respond to alerts; conduct investigations and implement corrective actions as required. Stay up to date with the latest cloud security trends, threats, and best practices, and provide recommendations for continuous improvement. Create and maintain documentation related to security policies, procedures, and compliance requirements. Provide mentorship and guidance to junior engineers and team members on cloud security and compliance practices. Key Skills: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in cloud engineering, with a focus on AWS services and cloud security. Strong proficiency in Python programming for automation and scripting. Hands-on experience with Python automation testing using unit tests and BDD. In-depth knowledge of AWS security services (AWS Lambda, AWS IAM, S3, CloudWatch, SNS, SQS, Step Functions) is a must. Experience with microservices and containerization in AWS using Amazon EKS is a plus. Experience with Infrastructure as Code (IaC) tools such as AWS CloudFormation. Strong understanding of networking, encryption, and security protocols in cloud environments. Basic understanding of tools like Jenkins and Artifactory is required. Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment. Relevant certifications (e.g., AWS Certified Solutions Architect - Associate, AWS Certified Developer - Associate) are a plus. Experience with AI services in the AWS environment is a plus. Excellent communication skills. What you can expect: We're legendary for taking care of you and your family, and for helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: Freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
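
The role stresses Python scripting against AWS services for security and compliance rather than web development. A small hedged sketch of that kind of script follows: it flags S3 buckets with no default server-side encryption. This is only one illustrative control; a real compliance audit would check many more.

```python
# Minimal sketch: flag S3 buckets that have no default server-side encryption configured.
# Illustrative compliance check only; bucket contents and broader controls are out of scope.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"UNENCRYPTED: {name}")
        else:
            raise
```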

Posted 1 month ago

Apply

2 - 7 years

4 - 8 Lacs

Chennai

Work from Office

Naukri logo

Role & responsibilities: DevOps engineer with hands-on experience in architecting, deploying, and managing secure and high-performance infrastructure for PHP-based applications. Operating Systems: Ubuntu, AlmaLinux, CentOS. Web Servers: Apache, Nginx. Databases: MySQL, MariaDB. PHP Versions: 5.x to 8.x. Monitoring & Observability: AWS CloudWatch, New Relic. Development Environments: Local PHP stack configuration, EC2 instance provisioning and configuration, load balancer (ELB) setup for high availability and traffic distribution. Version Control: Git setup and handling (branching, merging, conflict resolution, hook scripting). Log Debugging & Performance Tuning: Proficient in analyzing system, web server, and application logs to identify critical errors, bottlenecks, or misconfigurations. Experienced in isolating slow or failed API/web requests using tools like AWS CloudWatch Logs, New Relic APM, and the ELK Stack. Investigates key performance metrics such as Time to First Byte (TTFB), page load time, database query latency, and PHP execution time. Identifies and fixes issues related to high TTFB by tuning PHP-FPM, optimizing Apache/Nginx configurations, and managing concurrent requests. Implements and audits caching strategies at various layers: OPcache for PHP, Redis/Memcached for object/session caching, and Varnish for full-page caching. Monitors and reduces repeated queries, long query execution, and large payload responses by analyzing SQL logs and query plans. Performs root cause analysis across infrastructure and application layers to ensure stability and reduce downtime. Follows a structured debugging process to replicate and resolve speed-related issues under real load conditions. Preferred candidate profile - Core Competencies: Infrastructure Design & Deployment: Expertise in designing and deploying scalable infrastructure for PHP applications using the LAMP stack (Linux, Apache, MySQL, PHP). Hands-on experience in provisioning EC2 instances, configuring web servers (Apache, Nginx), and setting up load balancers (ELB) for high availability. CI/CD Implementation: Skilled in implementing and managing automated CI/CD pipelines using tools like GitLab CI, Jenkins, and GitHub Actions to streamline application deployment and updates. Caching: Extensive knowledge in optimizing application performance through caching techniques, including Redis/Memcached for session and object caching, Varnish for full-page HTTP caching, and OPcache for PHP script caching to improve response times and reduce server load. CDN Integration: Proficient in integrating Cloudflare and AWS CloudFront as a Content Delivery Network (CDN) to enhance global content distribution, reduce latency, and improve site performance. Session Management: Expertise in configuring centralized session management using Redis in load-balanced environments to ensure session persistence across multiple servers. Log Aggregation & Monitoring: Strong experience in configuring log aggregation, monitoring, and alerting systems using AWS CloudWatch and New Relic, ensuring proactive issue detection and real-time response to performance bottlenecks or errors. DNS Configurations: Experienced in managing DNS records (A, CNAME, MX, TXT) using platforms like AWS Route 53 and Cloudflare, ensuring high availability; configured DNS failover strategies to minimize downtime, utilizing features like latency-based routing and geo-routing.
WAF Management: Proficient in configuring AWS WAF, Cloudflare WAF, and ModSecurity to safeguard applications from security vulnerabilities like SQL injection, XSS, and CSRF. Ability to create custom WAF rules, monitor traffic, and optimize security without impacting performance.
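
The log-debugging duties above often start with filtering CloudWatch Logs for failing requests before digging into TTFB or query latency. A minimal, hedged sketch with boto3 follows; the log group name and filter term are illustrative assumptions.

```python
# Minimal sketch: pull recent error entries from a CloudWatch Logs group.
# Log group name and filter term are illustrative assumptions; structured logs
# would normally use a more precise filter pattern or CloudWatch Logs Insights.
import time
import boto3

logs = boto3.client("logs", region_name="ap-south-1")

resp = logs.filter_log_events(
    logGroupName="/example/php-app/error",            # hypothetical log group
    filterPattern='"ERROR"',                          # crude term match
    startTime=int((time.time() - 3600) * 1000),       # last hour, in milliseconds
    limit=50,
)
for event in resp["events"]:
    print(event["timestamp"], event["message"][:120])
```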

Posted 1 month ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

The Core AI, BI & Data Platforms team has been established to create, operate, and run the enterprise AI, BI, and data platforms that reduce time to market for reporting, analytics, and data science teams to run experiments, train models, and generate insights, as well as to evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR. At Thomson Reuters, we are recruiting a team of motivated cloud professionals to transform how we build, manage, and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems. In this opportunity as a Software Engineer, you will: Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer. Innovate with new approaches to meet data management requirements. Make recommendations about platform adoption, including technology integrations, application servers, libraries, AWS frameworks, documentation, and usability by stakeholders. Contribute to improving the customer experience. Participate in code reviews to maintain a high-quality codebase. Collaborate with cross-functional teams to define, design, and ship new features. Work closely with product owners, designers, and other developers to understand requirements and deliver solutions. Effectively communicate and liaise across the data platform and management teams. Stay updated on emerging trends and technologies in cloud computing. About You: You're a fit for the role of Software Engineer if you meet all or most of these criteria: Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of relevant experience in the implementation of data lakes and data management technologies for large-scale organizations. Experience in building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability. Proficient in the Python programming language. Experience in AWS services and management, including serverless, container, queueing, and monitoring services such as Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS. Good knowledge of consuming and building APIs. Business intelligence tools like Power BI. Fluency in querying languages such as SQL. Solid understanding of software development practices such as version control via Git, CI/CD, and release management. Agile development cadence. Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.
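
The stack above centers on serverless ingestion (Lambda, SQS, S3). As a hedged illustration of that pattern, here is a minimal SQS-triggered Lambda handler in Python that archives each message to S3; the bucket name and key prefix are invented for the example.

```python
# Minimal sketch of an SQS-triggered Lambda handler that archives each message to S3.
# Bucket name and key prefix are illustrative assumptions, not values from the posting.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-platform-raw"   # hypothetical landing bucket

def handler(event, context):
    # For SQS event sources, Lambda delivers messages under event["Records"].
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        key = f"ingest/{record['messageId']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
    return {"processed": len(records)}
```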

Posted 1 month ago

Apply

2 - 3 years

4 - 5 Lacs

Noida, Gurugram

Work from Office

Naukri logo

About the Role: Grade Level (for internal use): 10. The Role: Cloud DevOps Engineer. The Impact: This role is crucial to the business as it directly contributes to the development and maintenance of cloud-based DevOps solutions on the AWS platform. What's in it for you: Drive Innovation: Join a dynamic and forward-thinking organization at the forefront of the automotive industry. Contribute to shaping our cloud infrastructure and drive innovation in cloud-based solutions on the AWS platform. Technical Growth: Gain valuable experience and enhance your skills by working with a team of talented cloud engineers. Take on challenging projects and collaborate with cross-functional teams to define and implement cloud infrastructure strategies. Impactful Solutions: Contribute to the development of solutions that directly impact the scalability, reliability, and security of our cloud infrastructure. Play a key role in delivering high-quality products and services to our clients. We are seeking a highly skilled and driven Cloud DevOps Engineer to join our team. The candidate should have experience with developing and deploying native cloud-based solutions and possess a passion for container-based technologies, immutable infrastructure, and continuous delivery practices in deploying global software. Responsibilities: Deploy scalable, highly available, secure, and fault-tolerant systems on AWS for the development and test lifecycle of AWS cloud-native solutions. Configure and manage AWS environments for usage with web applications. Engage with development teams to document and implement best-practice (low-maintenance) cloud-native solutions for new products. Focus on building Dockerized application components and integrating with AWS ECS. Contribute to application design and architecture, especially as it relates to AWS services. Manage AWS security groups. Collaborate closely with the Technical Architects by providing input into the overall solution architecture. Implement DevOps technologies and processes, i.e., containerization, CI/CD, infrastructure as code, metrics, monitoring, etc. Experience with networks, security, load balancers, DNS, and other infrastructure components and their application to cloud (AWS) environments. Passion for solving challenging issues. Promote cooperation and commitment within a team to achieve common goals. What you will need: Understanding of networking, infrastructure, and applications from a DevOps perspective. Infrastructure as code (IaC) using Terraform and CloudFormation. Deep knowledge of AWS, especially services like ECS/Fargate, ECR, S3/CloudFront, Load Balancing, Lambda, VPC, Route 53, RDS, CloudWatch, EC2, and AWS Security Center. Experience managing AWS security groups. Experience building scalable infrastructure in AWS. Experience with one or more AWS SDKs and/or the CLI. Experience in automation, CI/CD pipelines, and DevOps principles. Experience with Docker containers. Experience with operational tools and the ability to apply best practices for infrastructure and software deployment. Software design fundamentals in data structures, algorithm design, and performance analysis. Experience working in an Agile development environment. Strong written and verbal communication and presentation skills. Education and Experience: Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program. 2-3 years of experience managing AWS application environments and deployments. 5+ years of experience working in a development organization.
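
Much of the day-to-day described above is rolling Dockerized components on ECS. A hedged sketch of one common step, forcing a service to redeploy after a fresh image push, is shown below; the cluster and service names are illustrative assumptions.

```python
# Minimal sketch: roll an ECS service so it picks up a newly pushed image tag.
# Cluster and service names are illustrative assumptions.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_service(
    cluster="example-web-cluster",
    service="example-api-service",
    forceNewDeployment=True,   # re-pulls the image referenced by the task definition
)
# Wait until the rollout stabilizes before declaring the deployment done.
ecs.get_waiter("services_stable").wait(
    cluster="example-web-cluster", services=["example-api-service"]
)
```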

Posted 1 month ago

Apply

2 - 3 years

5 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

Job Overview: We are seeking an enthusiastic and highly motivated Associate Software Engineer to join our team, working on a modern data platform at massive scale. This position will focus on database management, SQL query optimization, and maintenance tasks using AWS technologies, particularly AWS Aurora RDS, and contributing to the overall health and performance of our cloud-based data infrastructure. You will be an integral part of our growing team, helping to ensure high availability, scalability, and performance of our databases while working on cutting-edge technologies in a fast-paced, dynamic environment. Key Responsibilities: Database Management & Maintenance: Manage, monitor, and optimize AWS Aurora RDS databases. Perform routine maintenance tasks such as backups, patching, and upgrades. Ensure high availability, fault tolerance, and performance of databases in production environments. SQL Development & Optimization: Write, optimize, and troubleshoot SQL queries for performance and efficiency. Work on database schema design, indexing strategies, and data migration. Perform query tuning and optimization to enhance database performance. Database Administration (DBA) Activities: Assist in database provisioning, configuration, and monitoring on AWS Aurora RDS. Handle user access management, security, and compliance tasks. Assist in database health monitoring, alerting, and disaster recovery planning. AWS Cloud Technologies: Leverage AWS services, including Aurora, RDS, S3, Lambda, and others, to support a robust cloud infrastructure. Participate in cloud-based data infrastructure management and scaling. Assist in implementing cost optimization strategies for database operations on AWS. Collaboration & Continuous Improvement: Work closely with cross-functional teams, including software engineers, data engineers, and operations, to ensure efficient database usage and high-quality code. Contribute to database-related best practices and automation initiatives. Participate in on-call rotations for database support, as needed. Modern Data Platform Support: Work with large-scale, distributed data systems and support the continuous evolution of our data platform. Support data integration, ETL pipelines, and data processing workflows. Qualifications: Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent experience). Technical Skills: Proficiency in SQL, with a strong understanding of database design, normalization, and optimization. Experience with AWS cloud services, particularly AWS Aurora RDS. Basic knowledge of database administration tasks (e.g., backups, replication, patching, performance tuning). Familiarity with AWS cloud infrastructure and services such as EC2, S3, Lambda, CloudWatch, IAM, and VPC. Experience with modern data platforms, distributed systems, and high-volume databases is a plus. Experience: 2-3 years of experience working with relational databases (preferably AWS Aurora RDS, MySQL, or PostgreSQL). Familiarity with database monitoring tools and techniques. Hands-on experience with version control tools (e.g., Git) and CI/CD pipelines is a plus. Problem Solving & Analytical Skills: Strong troubleshooting skills, especially related to databases and performance bottlenecks. Ability to analyze complex issues and come up with solutions in a timely manner. Soft Skills: Strong written and verbal communication skills. Ability to work independently and as part of a team. Strong attention to detail and a commitment to high-quality work. Nice to Have: Experience with NoSQL databases (e.g., Elasticsearch, MongoDB) or big data technologies (e.g., Apache Kafka, Hadoop, Spark). Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation. Experience with containerization and Kubernetes for deploying database services. Why Join Us? Innovative Projects: You'll work on cutting-edge technology that powers large-scale data platforms and cloud-based services. Career Growth: We provide ample opportunities for skill development, mentorship, and career advancement. Collaborative Environment: Join a supportive team that fosters knowledge sharing, learning, and growth. Impact: Your work will directly contribute to the scalability and performance of our data infrastructure.
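
The query-tuning work described above usually starts with reading an execution plan. Below is a minimal, hedged sketch using psycopg2 against an Aurora PostgreSQL endpoint; the connection parameters, table, and query are placeholders, not details from the posting.

```python
# Minimal sketch: inspect a query's execution plan on Aurora PostgreSQL with psycopg2.
# Connection parameters, table name, and the query itself are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.cluster-abc123.ap-south-1.rds.amazonaws.com",
    dbname="appdb",
    user="readonly_user",
    password="change-me",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s", (42,)
    )
    for (line,) in cur.fetchall():
        print(line)   # look for sequential scans, row-estimate skew, slow nodes
conn.close()
```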

Posted 1 month ago

Apply

6 - 11 years

10 - 20 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Naukri logo

Hi, we are hiring for a Java Developer with one of the leading MNCs for Hyderabad, Bangalore, and Mumbai locations. Experience: 6-12 years. Location: Bangalore, Hyderabad, Chennai. CTC: as per company norms. Please find the job description below. Mandatory skills: Java, Spring Boot, Microservices, AWS. Kubernetes: good to have. Description: Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services. SOA experience with SOAP as well as RESTful services with JSON formats, and messaging with Kafka. Working proficiency in enterprise development toolsets like Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc. Experience in AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience in NoSQL and SQL databases such as Cassandra, Aurora, and Oracle. Good understanding of React JS, the Photon framework, design, and Kubernetes. Working with Git/Bitbucket, Maven, Gradle, and Jenkins tools to build and deploy code to production environments. Primary Location: IN-KA-Bangalore. Schedule: Full Time. Shift: Experienced. Employee Status: Individual Contributor. Job Type: Full-time. Kindly fill in the below-mentioned details to proceed with your profile: Total Experience; Relevant experience in Java; Experience in Multithreading; Experience in Microservices; Experience in Spring Boot; Experience in Kafka; Experience in AWS; Experience in Kubernetes; Current Designation; Current Organization; Current Location; Current CTC + Variable; Any Offer in hand; Expected CTC + Variable; Notice Period / LWD; Reason for Relocation to Bangalore. If interested, kindly share your resume at nupur.tyagi@mounttalent.com

Posted 1 month ago

Apply

3 - 5 years

4 - 6 Lacs

Kolkata

Hybrid

Naukri logo

Location: Onsite / Hybrid. Job Type: Full-time. Experience Required: 3+ years. Department: DevOps / Cloud Infrastructure. About the Role: We're looking for a DevOps Engineer with solid experience in AWS, Terraform, Docker, and CI/CD automation. If you're passionate about infrastructure as code, automation, and building secure, scalable cloud platforms, this role is for you. You'll collaborate with senior DevOps engineers and developers to manage deployments, automate infrastructure, and ensure operational excellence across Linux and Windows environments. Key Responsibilities: Automate and manage AWS infrastructure using Terraform (IaC). Build and maintain CI/CD pipelines with Jenkins, Bitbucket Pipelines, or GitHub Actions. Apply Git branching strategies for efficient code integration and release management. Deploy and manage Docker containers on EKS/ECS and EC2 environments. Write and manage Bash scripts for automation, monitoring, and system maintenance. Administer Linux and Windows servers for cloud-based application environments. Set up and maintain MySQL, PostgreSQL, FileMaker Pro, and Neo4j databases. Implement monitoring and alerting solutions with CloudWatch and AWS observability tools. Tech Stack: Cloud: AWS (VPC, EKS, EC2, RDS, IAM, S3, ALB, NLB, SQS, SNS, Systems Manager, CloudWatch). IaC: Terraform. Containers: Docker, Docker Compose. CI/CD: Jenkins, GitHub Actions, Bitbucket Pipelines. Scripting: Bash (advanced), Python (preferred). Version Control: Git, GitHub, Bitbucket. Project Management: Jira. OS: Linux (Ubuntu, CentOS), Windows Server. Databases: MySQL, PostgreSQL, FileMaker Pro, Neo4j. Must-Have Skills: 3+ years of experience in DevOps or AWS cloud engineering. Proven expertise in AWS core services (VPC, EC2, RDS, IAM, EKS). Strong CI/CD pipeline experience using Jenkins, GitHub Actions, or Bitbucket. Advanced skills in Bash scripting and automation. Hands-on with Docker and container deployment. Proficient in Git workflows, version control, and Jira for task management. Comfortable managing both Linux and Windows server environments. Experience working with MySQL and PostgreSQL. Nice to Have: Python scripting for automation. Exposure to AWS Lambda, API Gateway, Bedrock, or SageMaker. Familiarity with SonarQube, AWS Config, Vault. Experience with FileMaker Pro and Neo4j. Why Join Us: Work in a modern DevOps environment with cutting-edge AWS architecture. Collaborate with experienced cloud engineers and software teams. Hybrid work flexibility. Opportunities for growth, certifications, and cloud-native projects.

Posted 1 month ago

Apply

4 - 9 years

11 - 21 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Naukri logo

4+ years of experience in performance testing. Expert in designing advanced test scripts (modular scripting, JavaScript, Scala, Python, etc.) and utilizing testing tools such as Gatling, JMeter, K6, NeoLoad, BlazeMeter, SOASTA, and LoadRunner. Experience in application monitoring and reporting using tools such as AppDynamics, Dynatrace, New Relic, CloudWatch, AppInsights, Stackdriver, Datadog, etc.
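
The posting names dedicated load-testing tools (JMeter, Gatling, K6, LoadRunner, and others). Purely as a language-consistent illustration of the underlying idea, measuring response times under concurrency, here is a tiny plain-Python sketch; the target URL and load levels are placeholders, and this is not a substitute for the listed tools.

```python
# Tiny illustration of measuring response times under concurrency in plain Python.
# NOT a replacement for JMeter/Gatling/K6; URL and load levels are placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/health"   # placeholder endpoint
REQUESTS = 100
WORKERS = 10

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = list(pool.map(timed_get, range(REQUESTS)))

print(f"p50={statistics.median(latencies):.3f}s  max={max(latencies):.3f}s")
```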

Posted 1 month ago

Apply

3 - 5 years

12 - 22 Lacs

Bengaluru, Hyderabad

Work from Office

Naukri logo

We are looking for a passionate and skilled System Development Engineer to join our dynamic engineering team. The ideal candidate will be responsible for troubleshooting, diagnosing, and fixing production software issues, developing tools, automations and monitoring solutions, performing software maintenance and configuration, implementing fixes for internally developed code (primarily in Java and Ruby on Rails), resolving technical challenges, and improving operational excellence and system readiness. Key Responsibilities: - Provide support for incoming tickets, including extensive troubleshooting tasks, across multiple products, features, and services - Work on operations and maintenance-driven coding projects, primarily in Ruby, Rails, Java, Python, shell scripts, AWS, and web technologies - Support software deployment in staging and production environments - Develop internal tools to aid operations and maintenance - Prepare system and support status reports - Take ownership of one or more digital products or components - Coordinate customer notifications and workflow follow-ups to maintain service level agreements - Collaborate with the support team for hand-offs or escalation of active support issues - Contribute to building a team-specific knowledge base and skill set A Day in the Life: - Support incoming system tickets with hands-on troubleshooting - Participate in coding and automation tasks using AWS and web technologies - Assist with software deployments in various environments - Continuously improve internal tooling and operational workflows - Maintain clear status updates and documentation of support efforts - Contribute to and enhance team knowledge-sharing practices Basic Qualifications: - Knowledge of at least one modern programming language such as C, C++, Java, or Perl - Hands-on experience in at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, or Ruby - Experience with tools for automation (building, testing, releasing, or monitoring) Preferred Qualifications: - Proficiency in Python scripting language - Experience working with highly concurrent, high throughput systems - Understanding of complex distributed systems Critical Technical Skills (must-haves): - Java, Ruby on Rails, Python, AWS, Automation tools - Troubleshooting and debugging production systems - Monitoring solutions and system reporting - Web technologies, shell scripting *Only women, including those returning to work after a career break, are eligible to apply for this job.

Posted 2 months ago

Apply

4 - 9 years

15 - 20 Lacs

Hyderabad

Work from Office

Naukri logo

Expertise in CI/CD tools (Jenkins, GitLab CI, etc.). Experience with containerization (Docker, Kubernetes). Strong scripting skills (Python, Bash, etc.). Knowledge of cloud platforms and IaC (Terraform, Ansible). Healthcare domain experience preferred.

Posted 2 months ago

Apply

5 - 9 years

12 - 17 Lacs

Chennai

Work from Office

Naukri logo

You'll make a difference by: Willingness and ability to be a technical driver/leader and a team player. Capability and willingness to program customer requirements using an object-oriented approach and to thoroughly test them before delivery. Having hands-on experience in container concepts (Docker, Kubernetes) and cloud (like AWS). Having the ability to effectively communicate in English, both written and spoken. Having good hands-on experience with tools like Eclipse, IntelliJ, or Visual Studio Code. Desirable to have: Hands-on experience in recent versions of Python. Hands-on experience in some of the following AWS services. Lambda: ability to write and deploy serverless functions. DynamoDB: familiarity with managing NoSQL databases. API Gateway: experience with creating and managing APIs. SNS/SQS: understanding of messaging services. CloudWatch: proficiency in monitoring and logging. Serverless: ability to write infrastructure as code. Working experience in NoSQL and Terraform scripts. Experience in test automation (viz. unit and integration tests). Hands-on experience with tools like RTC Jazz, Git, SonarQube. Working experience in agile software development (daily scrums, pair sessions, sprint planning, retro and review, clean code, and self-organization), configuration, testing, and release management. Working experience in test-driven development, test-first development, code refactoring, and profiling.
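
The desirable AWS skills above (Lambda, DynamoDB, API Gateway) typically combine into a small serverless handler. A hedged Python sketch of that pattern follows; the table name, key schema, and response shape are illustrative assumptions.

```python
# Minimal sketch: an API Gateway-style Lambda handler backed by DynamoDB.
# Table name, key schema, and item shape are illustrative assumptions.
import json
import boto3

table = boto3.resource("dynamodb").Table("example-devices")

def handler(event, context):
    device_id = (event.get("pathParameters") or {}).get("id")
    if not device_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    item = table.get_item(Key={"device_id": device_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```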

Posted 2 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

Job Profile/Position Overview: The candidate will support the Siemens Xcelerator platform and will be responsible for identifying, managing, improving, and reporting on availability, resiliency, reliability, and stability efficiencies. This includes providing technical guidance and leadership to drive solutions and to build and improve processes that deliver excellence. A positive relationship with the various product teams of the Xcelerator platform is vital to support core objectives. This role's success will be defined by product teams within DISW business units meeting their SLAs. Responsibilities/Tasks: Provide and own design, deployment, automation, and scripting solutions to drive new capabilities, visibility, and efficiency. Collaborate with other technical platforms and partners to engineer automated and integrated solutions between tools, services, and teams that increase availability, reliability, and performance. Own and ensure that internal and external SLAs meet and exceed expectations. Be part of maintaining a 24x7, global, highly available SaaS environment. Participate in an on-call rotation that supports our production infrastructure. Troubleshoot production availability incidents that often span multiple teams and services. Lead production incident post-mortems and contribute to solutions that prevent problem recurrence, with the goal of automated response to all non-exceptional service conditions. Communicate to business and technical partners on incidents as they occur when they impact system performance or availability at a critical level. Required Education and Experience: Education: Bachelor's degree or equivalent experience, with at least two years in IT. Experience: Automation and Scripting: over 4 years of experience in automation, including scripting and API development. Cloud Software Development: at least 3 years of experience in software development in cloud environments. Observability Tools: a minimum of 2 years of experience with observability tools such as Datadog, CloudWatch, CloudTrail, Elastic Stack, Grafana, or similar. Over 2 years of experience with containerization, specifically Kubernetes. 2+ years of expertise in Amazon Web Services (AWS) services. 2+ years of expertise with tools such as Terraform, CloudFormation, Ansible, or similar. 2+ years of proficiency with Python. Preferred Knowledge/Skills: Siemens Teamcenter software. Desired certifications include Datadog, Kubernetes, AWS, or Azure certification. More than 2 years of proficiency as a Site Reliability Engineer or equivalent role. 2+ years of experience with issue/incident tracking tools (ServiceNow, ServiceDesk, Jira, or equivalent). 2+ years with log management tools (i.e., ELK Stack). 2+ years of experience in an enterprise IT environment with distributed environments. Networking concepts, including firewalls, VPN, routing, load balancers, security, and DNS. Senior-level system administration experience, including troubleshooting, support, mentorship/training, and oversight.

Posted 2 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

The core infrastructure team is responsible for this infrastructure, spread across 10 production deployments across the globe, 24/7, with four nines of uptime. Our infrastructure is managed using Terraform (for IaC) and GitLab CI, and monitored using Prometheus and Datadog. We're looking for you if: You are a strong infrastructure engineer with a specialty in networking and site reliability. You have strong networking fundamentals (DNS, subnets, VPN, VPCs, security groups, NATs, Transit Gateway, etc.). You have extensive and deep experience (~4 years) with IaaS cloud providers; AWS is ideal, but GCP/Azure would be fine too. You have experience with running cloud orchestration technologies like Kubernetes and/or Cloud Foundry, and designing highly resilient architectures for these. You have strong knowledge of Unix/Linux fundamentals. You have experience with infrastructure-as-code tools, ideally Terraform or OpenTofu, but CloudFormation or Pulumi are fine too. You have experience designing cross cloud/on-prem connectivity and observability. You have a DevOps mindset: you build it, you run it. You care about code quality and know how to lead by example: from a clean Git history to well-thought-out unit and integration tests. Even better (but not essential!) if you have experience with: monitoring tools that we use, such as Datadog and Prometheus; CI/CD tooling such as GitLab CI; programming experience with (ideally) Golang or Python; willingness and ability to use your technical expertise to mentor, train, and lead other engineers. You'll help drive digital innovation by: Continually improving our security and operational excellence. Working directly with customers to set up connectivity between the Mendix Cloud platform and customers' backend infrastructure. Rapidly scaling our infrastructure to match our rapidly increasing customer base. Continuously improving the observability of our platform, so that we can fix problems before they occur. Improving our automation and surrounding tooling to further streamline deployments and platform upgrades. Improving the way we use AWS resources and defining cost optimization strategies. Here are many of the tools we make use of: Amazon Web Services (EC2, Fargate, RDS, S3, ELB, VPC, CloudWatch, Lambda, IAM, and more!). PaaS: (open source) Kubernetes, Docker, Open Service Broker API. Eventing: AWS MSK and Confluent WarpStream BYOK. Monitoring: Prometheus, InfluxDB, Grafana, Datadog. CI/CD: GitLab CI, ArgoCD. Automation: Terraform, Helm. Programming languages: mostly Golang and Python, with a sprinkling of Ruby and Lua. Scripting: Bash, Python. Version Control: Git + GitLab. Database: PostgreSQL.

Posted 2 months ago

Apply

6 - 8 years

8 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Role Overview / Job Description Summary: At Skyhigh Security, we are building ground-breaking technology to help enterprises enable and accelerate the safe adoption of cloud services. SSE products help the world's largest organizations unleash the power of the cloud by providing real-time protection for enterprise data and users across all cloud services. The Data Analytics team of our cloud service BU is looking for a capable, enthusiastic Big Data Test Engineer who will be a creative, innovative, and results-oriented person willing to go the extra mile in a fast-paced environment. Take ownership of major big data components/services and all backend aspects of the software life cycle in a SaaS environment. The Data Analytics team manages big data pipelines and machine learning systems pertaining to our Skyhigh Security Cloud. We are responsible for analysing more than 40 terabytes of data per day, and we inspect close to a billion user activities in real time for threat protection and monitoring. As a member of our engineering team, you'll provide technical expertise (architecture, design, development, code reviews, use of modern static analysis tools, unit testing and system integration, automated testing, etc.). The role requires frequent use of ingenuity, creativity, and thinking outside the box in order to effectively contribute to our outstanding analytics solution and capabilities. We firmly believe in our values, and that is what makes us tick as one of the successful teams within Skyhigh Security. The more these values resonate with you, the better the chance of you thriving within our environment: You find clarity and make the right decisions despite ambiguity. You are curious in general and fascinated by how things work in this world. You listen well before you respond to others. You want to make an impact on the team and the company. You are not afraid to speak your mind and are willing to put the team ahead of yourself. You are humble and genuinely want to help your team members. You can remain calm even in the most stressful situations. You aim for simplicity in whatever you do. The successful candidate possesses the excellent interpersonal and communication skills required to partner with other teams across the business to identify opportunities and risks and to develop and deliver solutions that support business strategies. This individual will report to the Senior Engineering Manager within the Cloud Business Unit and will be based in Bangalore, India. About the role: Test, automate, build, maintain, and provide production support for big data pipelines and the Hadoop ecosystem. Recognize the big picture and take the initiative to solve problems and automate. Be aware of current big data technology trends and factor this into current design and implementation. Document the test plan and automation strategy and present them to the stakeholders. Identify, recommend, coordinate, and deliver timely knowledge to the globally distributed teams regarding technologies, processes, and tools. Proactively identify and communicate roadblocks. About You - Minimum Requirements for SDET (Testing): Bachelor's degree in Computer Science or an equivalent degree; a Master's degree is a plus. Overall 6 to 8 years of experience. Individual contribution as needed, and coordination with other teams. Good exposure to test frameworks like JUnit, TestNG, Cucumber, and mocking frameworks. Test, develop, and implement automated tests using Python. Experience in any automation framework; hands-on experience with Robot Framework will be a plus. Big data experience will be a plus. Exposure to Agile development, TDD, and Lean development. Experience with AWS CloudFormation, CloudWatch, SQS, and Lambda is a plus. Company Benefits and Perks: We work hard to embrace diversity and inclusion and encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees. Retirement Plans. Medical, Dental and Vision Coverage. Paid Time Off. Paid Parental Leave. Support for Community Involvement.
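
The SDET role above calls for automated tests in Python around AWS-backed pipelines (SQS, Lambda are listed as a plus). A small pytest-style sketch follows, using unittest.mock so no AWS call is made; pytest itself is not named in the posting, and the helper function and queue URL are invented purely for illustration.

```python
# Minimal sketch: unit-test an SQS-publishing helper without touching AWS.
# The helper function and queue URL are invented for illustration; run with pytest.
import json
from unittest.mock import MagicMock

def publish_event(sqs_client, queue_url, payload):
    """Helper under test: send a JSON-encoded event to SQS."""
    return sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(payload))

def test_publish_event_sends_json_body():
    fake_sqs = MagicMock()
    publish_event(fake_sqs, "https://sqs.example/queue", {"user": "u1", "action": "login"})

    fake_sqs.send_message.assert_called_once()
    _, kwargs = fake_sqs.send_message.call_args
    assert json.loads(kwargs["MessageBody"]) == {"user": "u1", "action": "login"}
```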

Posted 2 months ago

Apply

5 - 8 years

7 - 10 Lacs

Chennai

Work from Office

Naukri logo

We are looking for a Senior Software Engineer (Golang, Angular). This position is available for the Chennai location. You'll make a difference by: Having hands-on experience with designing and developing cloud-native backend services using Golang. Having hands-on experience with Golang technology. Having the ability to write efficient, maintainable code in Golang, adhering to best practices. Having the ability to design, build, and maintain scalable and reliable backend services. Having good hands-on experience with Angular development. Participating in code and design reviews to maintain high development standards. Having the ability to design, develop, and deploy secure cloud applications using AWS services and architectures. Lambda: ability to write and deploy serverless functions. DynamoDB: familiarity with managing NoSQL databases. API Gateway: experience with creating and managing APIs. SNS/SQS: understanding of messaging services. CloudWatch: proficiency in monitoring and logging. Serverless: ability to write infrastructure as code. You'll win us over by: Having an engineering degree (B.E/B.Tech/MCA/M.Tech/M.Sc) with a good academic record. 5-8 years of demonstrable experience in software development. Writing automated unit tests and integration tests for the implemented features. Ensuring conformance to quality processes to help the project meet quality goals.

Posted 2 months ago

Apply

2 - 3 years

5 - 10 Lacs

Pune

Work from Office

Naukri logo

** Must Have: Good knowledge of the services below: I. AWS Core Services: EC2, VPC, S3, RDS. II. Cloud Security: IAM. III. Networking: VPC configuration and management. IV. Monitoring: CloudWatch for monitoring and alerting. Linux: patch management, user management, file system management, backup and restore. Should be able to follow SOPs and complete allocated tasks. Knowledge of database administration (e.g., MySQL) and web servers (e.g., Apache, Nginx). ** Need to Have: Proficiency in scripting languages (e.g., Shell, Python) for automation and configuration management. Excellent communication skills and the ability to work effectively in a team environment. ** Preferred: DevOps tools: Terraform, Docker, CI/CD, Kubernetes. Create and maintain technical documentation, including system configurations, procedures, and troubleshooting guides.

Posted 2 months ago

Apply

5 - 10 years

15 - 30 Lacs

Bengaluru

Hybrid

Naukri logo

Exp: 5-11 years. Work Location: Bangalore (Embassy Tech Village). Preferred Technical Skills: Amazon Connect: Connect Flows: essential for designing and managing contact flows. Lex, Polly, Comprehend: useful for integrating AI-driven features like chatbots, text-to-speech, and sentiment analysis. AWS Services: Kinesis: for real-time data streaming. Lambda: for serverless computing and executing code in response to events. Athena: for querying data stored in S3 using SQL. CloudWatch: for monitoring and logging. S3: for scalable storage solutions. Basics of Telephony Systems: understanding how telephony works is crucial for managing call flows and ensuring smooth operations. Amazon Bedrock: for leveraging generative AI capabilities. Integration with Applications: API (REST, GraphQL): for connecting with various applications and services. Workforce Management (WFM): for managing staff schedules and performance. Softphone Configuration: for setting up virtual phone systems. Scripting Languages: Node.js, Python, Java: for developing custom solutions and integrations. Databases: DynamoDB, Postgres: for managing data storage and retrieval.

Posted 2 months ago

Apply

7 - 12 years

10 - 20 Lacs

Bengaluru

Work from Office

Naukri logo

8+ years of experience in database technologies: AWS Aurora PostgreSQL, NoSQL, DynamoDB, MongoDB, Erwin data modeling. Experience with pg_stat_statements and query execution plans. Experience with Apache Kafka, AWS Kinesis, Airflow, Talend. Experience with CloudWatch, Prometheus, Grafana. Required candidate profile: Experience with GDPR, SOC 2, Role-Based Access Control (RBAC), and encryption standards. Experience with AWS Multi-AZ, read replicas, failover strategies, and backup automation. Experience with Erwin, Lucidchart, Confluence, JIRA.
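
The pg_stat_statements requirement above usually means pulling the heaviest statements as a starting point for tuning. A hedged psycopg2 sketch follows; the column names total_exec_time and mean_exec_time apply to PostgreSQL 13+ (older versions use total_time/mean_time), and the connection parameters are placeholders.

```python
# Minimal sketch: list the most expensive statements from pg_stat_statements.
# Assumes PostgreSQL 13+ column names (total_exec_time/mean_exec_time) and that the
# extension is installed; connection parameters are placeholders.
import psycopg2

conn = psycopg2.connect(host="example-aurora-endpoint", dbname="appdb",
                        user="dba", password="change-me")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT calls,
               round(total_exec_time::numeric, 1) AS total_ms,
               round(mean_exec_time::numeric, 2)  AS mean_ms,
               query
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
    """)
    for calls, total_ms, mean_ms, query in cur.fetchall():
        print(f"{calls:>8} calls  {total_ms:>10} ms total  {mean_ms:>8} ms avg  {query[:80]}")
conn.close()
```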

Posted 2 months ago

Apply

2 - 3 years

4 - 5 Lacs

Gurgaon, Noida

Work from Office

Naukri logo

About the Role: Grade Level (for internal use): 10. The Role: Cloud DevOps Engineer. The Impact: This role is crucial to the business as it directly contributes to the development and maintenance of cloud-based DevOps solutions on the AWS platform. What's in it for you: Drive Innovation: Join a dynamic and forward-thinking organization at the forefront of the automotive industry. Contribute to shaping our cloud infrastructure and drive innovation in cloud-based solutions on the AWS platform. Technical Growth: Gain valuable experience and enhance your skills by working with a team of talented cloud engineers. Take on challenging projects and collaborate with cross-functional teams to define and implement cloud infrastructure strategies. Impactful Solutions: Contribute to the development of solutions that directly impact the scalability, reliability, and security of our cloud infrastructure. Play a key role in delivering high-quality products and services to our clients. We are seeking a highly skilled and driven Cloud DevOps Engineer to join our team. The candidate should have experience with developing and deploying native cloud-based solutions and possess a passion for container-based technologies, immutable infrastructure, and continuous delivery practices in deploying global software. Responsibilities: Deploy scalable, highly available, secure, and fault-tolerant systems on AWS for the development and test lifecycle of AWS cloud-native solutions. Configure and manage AWS environments for usage with web applications. Engage with development teams to document and implement best-practice (low-maintenance) cloud-native solutions for new products. Focus on building Dockerized application components and integrating with AWS ECS. Contribute to application design and architecture, especially as it relates to AWS services. Manage AWS security groups. Collaborate closely with the Technical Architects by providing input into the overall solution architecture. Implement DevOps technologies and processes, i.e., containerization, CI/CD, infrastructure as code, metrics, monitoring, etc. Experience with networks, security, load balancers, DNS, and other infrastructure components and their application to cloud (AWS) environments. Passion for solving challenging issues. Promote cooperation and commitment within a team to achieve common goals. What you will need: Understanding of networking, infrastructure, and applications from a DevOps perspective. Infrastructure as code (IaC) using Terraform and CloudFormation. Deep knowledge of AWS, especially services like ECS/Fargate, ECR, S3/CloudFront, Load Balancing, Lambda, VPC, Route 53, RDS, CloudWatch, EC2, and AWS Security Center. Experience managing AWS security groups. Experience building scalable infrastructure in AWS. Experience with one or more AWS SDKs and/or the CLI. Experience in automation, CI/CD pipelines, and DevOps principles. Experience with Docker containers. Experience with operational tools and the ability to apply best practices for infrastructure and software deployment. Software design fundamentals in data structures, algorithm design, and performance analysis. Experience working in an Agile development environment. Strong written and verbal communication and presentation skills. Education and Experience: Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program. 2-3 years of experience managing AWS application environments and deployments. 5+ years of experience working in a development organization.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies