Jobs
Interviews

35 Auto Scaling Jobs - Page 2

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

7 - 14 Lacs

Bengaluru

Work from Office

We are seeking a talented DevOps Engineer with expertise in the technical skills outlined below.

Responsibilities:
- Design, deploy, and manage scalable AWS infrastructure using services such as EC2, S3, RDS, Lambda, CloudFormation, IAM, VPC, Load Balancers, and Auto Scaling.
- Build and maintain robust CI/CD pipelines using Jenkins and AWS CodePipeline to streamline development and deployment.
- Implement and manage configuration management tools, including Ansible, Chef, and Puppet, to ensure system consistency and automation.
- Deploy and orchestrate containerized applications using Docker, Kubernetes, and Amazon ECS.
- Maintain and administer version control systems, primarily Git and GitHub.
- Develop and manage Infrastructure as Code (IaC) solutions using Terraform to automate provisioning and resource management.
- Write and maintain automation scripts in Python, Bash, and PowerShell to support operational workflows.
- Configure and manage networking components such as VPCs, subnets, security groups, and Route 53 for DNS management.
- Implement and oversee monitoring and logging solutions using AWS CloudWatch, the ELK Stack (Elasticsearch, Logstash, Kibana), and Prometheus.
- Administer operating systems including Linux, Ubuntu, and Windows, ensuring optimal performance and security.
- Configure and manage web servers, including Apache Tomcat and Nginx, for application hosting and performance optimization.

Criteria:
- Proven hands-on experience with a broad range of AWS services, including EC2, S3, RDS, Lambda, CloudFormation, IAM, VPC, Load Balancers, and Auto Scaling.
- Proficient in implementing and managing CI/CD pipelines using tools like Jenkins and AWS CodePipeline.
- Strong background in configuration management with tools such as Ansible, Chef, and Puppet.
- In-depth knowledge and practical experience with containerization and orchestration technologies, including Docker, Kubernetes, and Amazon ECS.
- Skilled in version control systems, particularly Git and GitHub.
- Extensive experience developing and managing Infrastructure as Code (IaC) with Terraform.
- Advanced scripting abilities in Python, Bash, and PowerShell for automation and system management.
- Solid understanding of networking fundamentals and hands-on experience configuring VPCs, subnets, security groups, and DNS with Route 53.
- Familiarity with monitoring and logging tools such as AWS CloudWatch, the ELK Stack (Elasticsearch, Logstash, Kibana), and Prometheus.
- Experience managing and troubleshooting multiple operating systems, including Linux, Ubuntu, and Windows.
- Competency in configuring and administering web servers such as Apache Tomcat and Nginx.

Experience: 3-5 years.
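The listing above pairs Infrastructure as Code with AWS Auto Scaling. A minimal sketch of what that combination looks like in practice, here assembling a CloudFormation-style Auto Scaling group template as a plain Python dict (resource names, instance type, and sizes are illustrative; a real template would also need networking properties such as subnets):

```python
import json

def autoscaling_template(min_size=1, max_size=4, instance_type="t3.micro"):
    """Build a minimal CloudFormation-style template for an EC2
    Auto Scaling group. CloudFormation expects MinSize/MaxSize as
    strings; the launch template supplies the instance settings."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppLaunchTemplate": {
                "Type": "AWS::EC2::LaunchTemplate",
                "Properties": {
                    "LaunchTemplateData": {"InstanceType": instance_type},
                },
            },
            "AppAutoScalingGroup": {
                "Type": "AWS::AutoScaling::AutoScalingGroup",
                "Properties": {
                    "MinSize": str(min_size),
                    "MaxSize": str(max_size),
                    "LaunchTemplate": {
                        "LaunchTemplateId": {"Ref": "AppLaunchTemplate"},
                        "Version": "1",
                    },
                },
            },
        },
    }

template = autoscaling_template(min_size=2, max_size=6)
print(json.dumps(template, indent=2))
```

Generating templates programmatically like this keeps sizing parameters in one reviewable place, which is the core of the IaC workflow the role describes.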

Posted 2 months ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Chennai

Work from Office

The AWS Products (Amazon Web Services (AWS) Cloud Computing competency) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the AWS Cloud Computing domain.

Posted 2 months ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad

Work from Office

The Oracle EBS Supply Chain Management - Distribution role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Oracle EBS Supply Chain Management - Distribution domain.

Posted 2 months ago

Apply

4.0 - 6.0 years

7 - 9 Lacs

Chennai

Work from Office

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere, always. Want in? Join the #VTeamLife.

What you'll be doing...
You will play a prominent role in supporting middleware products across all business portfolios. You will be involved in engineering activities, join CMDs to resolve critical blockers, and handle capacity planning, performance fine-tuning, middleware product upgrades, etc. You will focus on developing automated self-healing solutions to make applications more resilient based on root cause analysis. The role requires good problem-solving and automation skills to dig deeper into issues and improve MTTR.
- Being responsible for the availability and stability of applications.
- Coordinating with multiple stakeholders for onboarding new applications into the cloud, application migration from on-prem to cloud, and middleware product migration.
- Performing application performance fine-tuning.
- Troubleshooting critical issues and performing root cause analysis.
- Performing middleware upgrades as part of maintaining security standards.
- Providing technical recommendations to improve application performance.
- Remediating middleware product vulnerabilities across various applications.
- Guiding and supporting fellow team members to ensure tasks, activities, and projects are tracked and completed on time.

Where you'll be working...
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Four or more years of relevant experience, demonstrated through work experience.
- Good experience in middleware technologies, including but not limited to WebLogic, Apache HTTPD, Apache Tomcat, Nginx, etc.
- Strong end-to-end middleware product management knowledge.
- Good experience in DevOps tools like Jenkins, Artifactory, and GitLab.
- Good experience in AWS cloud (EC2, ELB, Auto Scaling, RDS, S3, CloudWatch, IAM, CloudFormation templates, etc.).
- Good knowledge of automation (shell scripting, Ansible).
- Good experience with all Linux flavors and Solaris.

Even better if you have:
- A Master's degree.

Posted 2 months ago

Apply

8.0 - 12.0 years

35 - 60 Lacs

Bengaluru

Work from Office

Job Summary
Member of a software engineering team involved in the development and design of an AI Data Platform built on NetApp's flagship storage operating system, ONTAP. ONTAP is a feature-rich stack whose data management capabilities deliver tremendous value to our customers and are used in mission-critical applications across the world. You will work as part of a team responsible for the development, testing, and debugging of distributed software that drives NetApp cloud, hybrid-cloud, and on-premises solutions. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on new product development along with enhancements to existing products. This is a mid-level technical lead position that requires an individual to be broad-thinking, systems-focused, creative, team-oriented, technologically savvy, able to lead large cross-functional teams, and driven to produce results.

Job Requirements
- Proficiency in programming languages like Go (Golang).
- Experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, open-source LLMs, LangChain, etc.
- Experience working in Linux, AWS/Azure/GCP, and Kubernetes (control plane, auto scaling, orchestration, containerization) is a must.
- Experience with NoSQL document databases (e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB).
- Experience building microservices, REST APIs, and related API frameworks.
- Experience with big data technologies: understanding of platforms like Spark and Hadoop and of distributed storage systems for handling large-scale datasets and parallel processing.
- Experience with filesystems, networking, or file/cloud protocols is a must.
- Proven track record of leading mid- to large-sized projects.

This position requires an individual to be creative, team-oriented, a quick learner, and driven to produce results. Responsible for providing support in the development and testing activities of other engineers that involve several inter-dependencies. Participate in technical discussions within the team and with other groups within Business Units associated with specified projects. Willing to take on additional tasks and responsibilities that contribute towards team, department, and company goals. A strong understanding of and experience with concepts related to computer architecture, data structures, and programming practices. Experience with AI/ML frameworks like PyTorch or TensorFlow is a plus.

Education
IC: Typically requires a minimum of 8 years of related experience. Mgr & Exec: Typically requires a minimum of 6 years of related experience.

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

GKE Engineer || 8 yrs || Pan India

Containerization & Docker Expertise

Advanced Docker Knowledge:
- Deep understanding of Dockerfile optimization, multi-stage builds, and image layer caching for efficient image creation.
- Proficiency in Docker Compose for multi-container application orchestration (useful for local testing and migration planning).
- Security best practices for Docker images, including vulnerability scanning and secure registry management.

Container Runtime Understanding:
- Knowledge of container runtimes like containerd and their implications on GKE.
- Debugging container runtime issues.

Kubernetes (GKE) Mastery

GKE Cluster Architecture & Design:
- Ability to design highly available, scalable, and secure GKE clusters based on workload requirements.
- Expertise in node pool management, autoscaling, and resource optimization.
- Understanding of GKE networking, including VPCs, subnets, and network policies.
- Knowledge of GKE security features such as IAM, RBAC, and workload identity.
- Understanding of GKE Autopilot vs. Standard mode, and when to use each.

Kubernetes Core Concepts:
- In-depth knowledge of Kubernetes objects (Pods, Deployments, Services, ConfigMaps, Secrets, etc.).
- Proficiency in using kubectl for cluster management and troubleshooting.
- Understanding of Kubernetes scheduling, resource management, and service discovery.
- Advanced understanding of custom resource definitions (CRDs) and operators.

GKE Networking:
- VPC-native clusters and private clusters.
- Understanding of how to set up ingress and egress.
- Understanding of Cilium and other CNIs.

GKE Security:
- Workload Identity.
- Binary Authorization.
- Network Policies.
- Secret Management.

DevOps & Infrastructure as Code (IaC)

CI/CD Pipelines:
- Designing and implementing robust CI/CD pipelines for containerized applications using tools like Cloud Build, Jenkins, GitLab CI, or Argo CD.
- Automating image building, testing, and deployment to GKE.
- Implementing blue/green deployments or canary releases.

Infrastructure as Code (IaC):
- Proficiency in using Terraform or Deployment Manager to automate GKE cluster provisioning and configuration.
- Managing infrastructure changes through version control and code reviews.
- Using tools like Config Connector to manage GCP resources with Kubernetes.

Monitoring & Logging:
- Setting up comprehensive monitoring and logging solutions using Cloud Monitoring and Cloud Logging.
- Implementing alerting and dashboards for proactive issue detection.
- Understanding of distributed tracing.

GitOps:
- Understanding and implementation of GitOps methodologies.
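The CI/CD section above mentions canary releases to GKE. One common scheme is to split a fixed replica budget between a canary Deployment and the stable one in increasing percentage stages; a small sketch of that arithmetic (stage percentages are illustrative, and this is planning logic only, not a Kubernetes API call):

```python
def canary_steps(total_replicas, stages=(10, 25, 50, 100)):
    """For each rollout stage, compute how many replicas the canary
    Deployment gets and how many remain on the stable Deployment.
    The canary always gets at least one replica once started."""
    plan = []
    for pct in stages:
        canary = max(1, round(total_replicas * pct / 100))
        plan.append({
            "percent": pct,
            "canary": canary,
            "stable": total_replicas - canary,
        })
    return plan

for step in canary_steps(10):
    print(step)
```

In a real pipeline, each stage would be applied by scaling the two Deployments (e.g., via `kubectl scale` or an Argo Rollouts strategy) and gated on monitoring signals before advancing.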

Posted 2 months ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

GKE Engineer || 8 yrs || Pan India

Containerization & Docker Expertise

Advanced Docker Knowledge:
- Deep understanding of Dockerfile optimization, multi-stage builds, and image layer caching for efficient image creation.
- Proficiency in Docker Compose for multi-container application orchestration (useful for local testing and migration planning).
- Security best practices for Docker images, including vulnerability scanning and secure registry management.

Container Runtime Understanding:
- Knowledge of container runtimes like containerd and their implications on GKE.
- Debugging container runtime issues.

Kubernetes (GKE) Mastery

GKE Cluster Architecture & Design:
- Ability to design highly available, scalable, and secure GKE clusters based on workload requirements.
- Expertise in node pool management, autoscaling, and resource optimization.
- Understanding of GKE networking, including VPCs, subnets, and network policies.
- Knowledge of GKE security features such as IAM, RBAC, and workload identity.
- Understanding of GKE Autopilot vs. Standard mode, and when to use each.

Kubernetes Core Concepts:
- In-depth knowledge of Kubernetes objects (Pods, Deployments, Services, ConfigMaps, Secrets, etc.).
- Proficiency in using kubectl for cluster management and troubleshooting.
- Understanding of Kubernetes scheduling, resource management, and service discovery.
- Advanced understanding of custom resource definitions (CRDs) and operators.

GKE Networking:
- VPC-native clusters and private clusters.
- Understanding of how to set up ingress and egress.
- Understanding of Cilium and other CNIs.

GKE Security:
- Workload Identity.
- Binary Authorization.
- Network Policies.
- Secret Management.

DevOps & Infrastructure as Code (IaC)

CI/CD Pipelines:
- Designing and implementing robust CI/CD pipelines for containerized applications using tools like Cloud Build, Jenkins, GitLab CI, or Argo CD.
- Automating image building, testing, and deployment to GKE.
- Implementing blue/green deployments or canary releases.

Infrastructure as Code (IaC):
- Proficiency in using Terraform or Deployment Manager to automate GKE cluster provisioning and configuration.
- Managing infrastructure changes through version control and code reviews.
- Using tools like Config Connector to manage GCP resources with Kubernetes.

Monitoring & Logging:
- Setting up comprehensive monitoring and logging solutions using Cloud Monitoring and Cloud Logging.
- Implementing alerting and dashboards for proactive issue detection.
- Understanding of distributed tracing.

GitOps:
- Understanding and implementation of GitOps methodologies.

Posted 2 months ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Cloud Expertise: Google Cloud Platform (GCP) mandatory; AWS experience required.

Key Responsibilities:
- Provision, manage, and support GCP sandbox environments for testing and development.
- Ensure sandbox governance, security, and compliance with Citi policies.
- Engage with Google Cloud and AWS support teams to troubleshoot and resolve issues.
- Ensure sandbox isolation from production workloads and enforce resource lifecycle management (deletion/suspension of unused resources).
- Onboard Citi teams and developers to new or existing AWS/GCP accounts.
- Manage user access for single or multiple cloud accounts, ensuring least-privilege access.
- Assign and audit IAM roles and permissions for security and compliance.
- Remove user access to specific accounts as needed.
- Configure real-time alerts for sandbox activities and send them to the Citi Sandbox email DLs.
- Set up budget alerts (soft/hard limits) to prevent overspending.
- Monitor security incidents, unauthorized access attempts, and anomalies.
- Implement cost tracking mechanisms and automate resource cleanup to prevent cost overruns.
- Implement GCP/AWS cost control measures (budgets, quotas, auto-scaling).
- Track spending patterns and optimize resource allocation.
- Ensure compliance with financial industry regulations (SOC 2, ISO 27001, GDPR).
- Conduct periodic security and cost audits.
- Automate cloud operations using Terraform, CloudFormation, or Deployment Manager.
- Use Python/Bash scripting for process automation and cost/resource optimization.
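The soft/hard budget limits mentioned above usually reduce to a simple threshold check that an automation script applies to spend data pulled from the billing API. A minimal sketch, with illustrative threshold percentages and action strings (not Citi policy):

```python
def budget_status(spend, budget, soft_pct=80, hard_pct=100):
    """Classify current sandbox spend against soft and hard budget
    thresholds, expressed as percentages of the allocated budget.
    Returns the action an alerting script would take."""
    used = spend / budget * 100
    if used >= hard_pct:
        return "hard-limit: suspend sandbox resources"
    if used >= soft_pct:
        return "soft-limit: notify sandbox email DL"
    return "ok"

print(budget_status(850, 1000))   # 85% of budget used
print(budget_status(1200, 1000))  # over budget
```

In practice, `spend` would come from a cost export or the cloud provider's budgets API, and the returned action would trigger the real-time alerts and cleanup automation described in the listing.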

Posted 2 months ago

Apply

5 - 8 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary
Member of a software engineering team involved in the development and design of the AI Data Platform built on NetApp's flagship storage operating system, ONTAP. ONTAP is a feature-rich stack whose data management capabilities deliver tremendous value to our customers and are used in mission-critical applications across the world. You will work as part of a team responsible for the development, testing, and debugging of distributed software that drives NetApp cloud, hybrid-cloud, and on-premises AI/ML solutions. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on enhancements to existing products as well as new product development. This is a mid-level technical position that requires an individual to be broad-thinking, systems-focused, creative, team-oriented, technologically savvy, able to work in small and large cross-functional teams, willing to learn, and driven to produce results.

Job Requirements
- Proficiency in programming languages like Go and Python.
- Experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, open-source LLMs, LangChain, etc.
- Hands-on experience with REST APIs and microservices (Flask and similar API frameworks).
- Experience working in Linux, AWS/Azure/GCP, and Kubernetes (control plane, auto scaling, orchestration, containerization) is a must.
- Experience with NoSQL document databases (e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB).
- Experience building microservices, REST APIs, and related API frameworks.
- Experience with big data technologies: understanding of platforms like Spark and Hadoop and of distributed storage systems for handling large-scale datasets and parallel processing.
- Proven track record of working on mid- to large-sized projects.

Responsible for providing support in the development and testing activities of other engineers that involve several inter-dependencies. Participate in technical discussions within the team and across cross-functional teams. Willing to take on additional tasks and responsibilities that contribute towards team, department, and company goals. A strong understanding of and experience with concepts related to computer architecture, data structures, and programming practices. Experience with AI/ML frameworks like PyTorch or TensorFlow is a plus.

Education
Typically requires a minimum of 4-7 years of related experience with a bachelor's or master's degree, or a PhD with relevant experience.

Posted 2 months ago

Apply

1 - 6 years

2 - 7 Lacs

Noida, Gurugram

Work from Office

About the Role:
As a DevOps Engineer at EaseMyTrip.com, you will be pivotal in optimizing and maintaining our IT infrastructure and deployment processes. Your role involves managing cloud environments, implementing automation, and ensuring seamless deployment of applications across various platforms. You will collaborate closely with development teams to enhance system reliability, security, and efficiency, supporting our mission to provide exceptional travel experiences through robust technological solutions. This position is critical for maintaining high operational standards and driving continuous innovation.

Role & responsibilities:
- Cloud Computing Mastery: Expert in managing Amazon Web Services (AWS) environments, with skills in GCP and Azure for comprehensive cloud solutions and automation.
- Windows Server Expertise: Profound knowledge of configuring and maintaining Windows Server systems and Internet Information Services (IIS).
- Deployment of .NET Applications: Experienced in deploying diverse .NET applications such as ASP.NET, MVC, Web API, and WCF using Jenkins.
- Proficiency in Version Control: Skilled in using GitLab or GitHub for effective version control and collaboration.
- Linux Server Management: Capable of administering Linux servers with a focus on security and performance optimization.
- Scripting and Automation: Ability to write and maintain scripts to automate routine tasks, improving efficiency and reliability.
- Monitoring and Optimization: Implement monitoring tools to ensure high availability and performance of applications and infrastructure.
- Security Best Practices: Knowledge of security protocols and best practices to safeguard systems and data.
- Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines to streamline software updates and deployments.
- Collaboration and Support: Work closely with development teams to troubleshoot deployment issues and enhance overall operational efficiency.

Preferred candidate profile:
- Migration Project Leadership: Experienced in leading significant migration projects from planning through execution.
- Database Expertise: Strong foundation in both SQL and NoSQL database technologies.
- Experience with Diverse Tech Stacks: Managed projects involving various technologies, including 2-tier, 3-tier, and microservices architectures.
- Proficiency in Automation Tools: Hands-on experience with automation and deployment tools such as Jenkins, Bamboo, and AWS CodeDeploy.
- Advanced Code Management: Highly skilled in managing code revisions and maintaining code integrity across multiple platforms.
- Strategic DevOps Experience: Proven track record of developing and implementing DevOps strategies at the enterprise level.
- Configuration Management Skills: Proficient in using tools like Ansible, Chef, or Puppet for configuration management.
- Technology Versatility: Experience with a range of programming languages and frameworks, including .NET, MVC, LAMP, Python, and Node.js.
- Problem Solving and Innovation: Ability to solve complex technical issues and devise new solutions to enhance system reliability and performance.
- Effective Communication: Strong communication skills to collaborate with cross-functional teams and articulate technical challenges and solutions clearly.
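The "Scripting and Automation" and "Monitoring and Optimization" items above typically meet in small hardening helpers like a retry with exponential backoff around a deployment health check. A generic sketch (the check function and delays are illustrative):

```python
import time

def retry(check, attempts=3, base_delay=0.1):
    """Call `check` up to `attempts` times, sleeping with exponential
    backoff between failures. Returns True as soon as a call
    succeeds, False if every attempt fails."""
    for attempt in range(attempts):
        if check():
            return True
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return False

# Simulated flaky health check that succeeds on its third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return calls["n"] >= 3

print(retry(flaky, attempts=4, base_delay=0.01))  # True
```

The same wrapper works for any idempotent probe: an HTTP health endpoint after a Jenkins deploy, a service status check after an IIS restart, and so on.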

Posted 2 months ago

Apply
Page 2 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies