Jobs
Interviews

35 Auto Scaling Jobs

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the employer's job portal.

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Would being part of a digital transformation excite you? Are you passionate about infrastructure security? Join our digital transformation team. We operate at the heart of the digital transformation of our business. Our team is responsible for cybersecurity, architecture, and data protection for our global organization, and advises on the design and validation of all systems, infrastructure, technologies, and data protection.
Partner with the best. As a Staff Infrastructure Architect, you will support the design and execution of our infrastructure security roadmap. Collaborating with global teams and customers, you will help architect solutions that enable our business to grow in an agile and secure manner. You will be responsible for:
- Supporting and improving our tools and processes for continuous deployment management.
- Supporting the solution Infra Architect in deploying applications and infrastructure to customer private/public clouds.
- Debugging Docker image/container and Kubernetes cluster issues.
- Building monitoring tools around Kubernetes/AKS clusters.
- Developing process tools to track customer releases and create update plans.
- Developing processes to ensure patching/updates take place without affecting the operational SLA, and meeting availability SLAs with the Infra and application teams responsible for 24x7 operation.
- Profiling the deployment process and identifying bottlenecks.
- Writing scripts to automate tasks and implementing Continuous Integration/Deployment build principles.
- Providing expertise in quality engineering, test planning, and testing methodology for developed code/images/containers.
- Helping the business develop an overall strategy for deploying code.
To be successful in this role, you will need:
- A Bachelor's degree in Computer Science, IT, or Engineering.
- At least 4 years of production experience providing hands-on technical expertise to design, deploy, secure, and optimize cloud services.
- Hands-on experience with containerization technologies (Docker, Kubernetes); this is a must (minimum 2 years).
- A minimum of 2 years' experience creating, maintaining, and deploying automated build tools.
- In-depth knowledge of clustering, load balancing, high availability, disaster recovery, auto scaling, and Infrastructure as Code (IaC) using Terraform/CloudFormation.
- Good knowledge of application and infrastructure monitoring tools such as Prometheus, Grafana, Kibana, New Relic, and Nagios.
- Hands-on experience with CI/CD tools like Jenkins.
- Understanding of standard networking concepts such as DNS, DHCP, subnets, server load balancing, and firewalls.
- Knowledge of web-based application development.
- Strong knowledge of Unix/Linux and/or Windows operating systems.
- Experience with common scripting languages (Bash, Perl, Python, Ruby), and the ability to assess code, build it, and run applications locally.
Additionally, you should have experience with creating and maintaining automated build tools, facilitating and coaching software engineering team sessions on requirements estimation and alternative approaches to team sizing and estimation, publishing guidance and documentation to promote adoption of designs, proposing design solutions based on research and synthesis, creating general design principles that capture the vision and critical concerns for a program, and demonstrating mastery of the intricacies of interactions and dynamics in Agile teams. We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too.
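The posting above asks for scripts that profile the deployment process and flag bottlenecks. A minimal sketch of that idea, assuming hypothetical pipeline stage names and timings (not taken from any real pipeline):

```python
# Identify deployment-pipeline bottlenecks from per-stage timings.
# Stage names and durations below are hypothetical examples.

def find_bottlenecks(stage_seconds, threshold_pct=30.0):
    """Return stages consuming more than threshold_pct of total time, slowest first."""
    total = sum(stage_seconds.values())
    if total == 0:
        return []
    return sorted(
        (name for name, secs in stage_seconds.items()
         if 100.0 * secs / total > threshold_pct),
        key=lambda name: -stage_seconds[name],
    )

timings = {"build_image": 120, "push_registry": 45, "deploy_k8s": 300, "smoke_tests": 90}
print(find_bottlenecks(timings))  # ['deploy_k8s'] – ~54% of total time
```

In practice the timings would come from CI logs or pipeline APIs; the threshold is a tunable judgment call, not a standard.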
In this role, we can offer flexible working patterns, including working remotely from home or any other work location, and flexibility in your work schedule to help fit work in around life. Talk to us about your desired flexible working options when you apply. Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other. About Us: We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner, and more efficient for people and the planet. Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let's come together and take energy forward.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformations, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets. To be successful as a Cloud Data Engineer, you should have:
- Experience with AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Experience with compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Experience with storage and container services such as ECS, S3, DynamoDB, and RDS.
- Experience with management and governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Experience with analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Experience delivering data processing components in larger end-to-end projects.
Desirable skill sets / good to have:
- AWS Certified professional.
- Experience with data processing on Databricks and Unity Catalog.
- Ability to drive projects technically, with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.
This role will be based out of Pune. Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.
Accountabilities: - Build and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data. - Design and implementation of data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures. - Development of processing and analysis algorithms fit for the intended data complexity and volumes. - Collaboration with data scientists to build and deploy machine learning models. Analyst Expectations: - Will have an impact on the work of related teams within the area. - Partner with other functions and business areas. - Takes responsibility for end results of a team's operational processing and activities. - Escalate breaches of policies/procedure appropriately. - Take responsibility for embedding new policies/procedures adopted due to risk mitigation. - Advise and influence decision making within own area of expertise. - Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. - Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct. - Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organization's products, services, and processes within the function. - Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function. - Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. - Guide and persuade team members and communicate complex/sensitive information. - Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization. 
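The accountabilities above stress transferring "complete and consistent" data. A hedged sketch of the kind of completeness gate a pipeline might run before loading; the record shape and field names are invented for illustration:

```python
# Partition incoming records by required-field presence before loading.
# REQUIRED_FIELDS and the record shape are hypothetical examples.

REQUIRED_FIELDS = {"trade_id", "timestamp", "amount"}

def split_valid_invalid(records):
    """Return (valid, invalid): records with all required fields present and non-null."""
    valid, invalid = [], []
    for rec in records:
        if REQUIRED_FIELDS <= rec.keys() and all(rec[f] is not None for f in REQUIRED_FIELDS):
            valid.append(rec)
        else:
            invalid.append(rec)  # route to a quarantine table for investigation
    return valid, invalid

rows = [
    {"trade_id": 1, "timestamp": "2024-01-01T00:00:00Z", "amount": 100.0},
    {"trade_id": 2, "timestamp": None, "amount": 50.0},  # incomplete record
]
valid, invalid = split_valid_invalid(rows)
print(len(valid), len(invalid))  # 1 1
```

A real pipeline would do this with Glue data quality rules or a framework check rather than hand-rolled code, but the accept/quarantine split is the same.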
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 10 Lacs

Kolkata

Work from Office

AWS Certified required, with IAM, VPC, ELB, ALB, Auto Scaling, and Lambda. Should know EC2, EKS, ECS, ECR, Route 53, SES, ElastiCache, RDS, and Redshift. Strong in serverless development architecture. Build and maintain highly available production systems.
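Serverless development of the kind this posting describes centers on Lambda handler functions. A minimal sketch, invoked locally with a fake event; the event shape mirrors (but simplifies) an API Gateway proxy event, and the field names are illustrative:

```python
# Minimal AWS Lambda-style handler, exercised locally with a fake event.
# In a deployment this would sit behind API Gateway; here we just call it.
import json

def lambda_handler(event, context=None):
    """Read an optional ?name= query parameter and return a JSON response."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

resp = lambda_handler({"queryStringParameters": {"name": "ops"}})
print(resp["statusCode"], json.loads(resp["body"])["message"])  # 200 hello ops
```

Testing handlers locally like this, before packaging and deploying, is a common serverless workflow.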

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As an excellent hands-on engineer, you will be working on developing next-generation media server software as a part of a core team dedicated to revolutionizing video sharing technology over the internet. Your role will involve contributing significantly to the development of server-side components, providing a learning opportunity in the video streaming space. You should have good hands-on experience with AWS, solid programming skills in C/C++ and Python, along with knowledge of AWS services like Lambda, EFS, auto-scaling, and load balancing. Experience in building and provisioning dockerized applications is highly preferable, along with a good understanding of the HTTP protocol. Familiarity with Web Servers (Apache, Nginx), Databases (MySQL, Redis, MongoDB, Firebase), Python frameworks (Django, Flask), Source Control (Git), REST APIs, and strong understanding of memory management, file I/O, network I/O, concurrency, and multithreading is expected. Your specific responsibilities will include working on scalable video deployments, extending the Mobile Application Backend for customer-specific features, maintaining and extending existing software components in the Media Server software, and fostering a multi-paradigm engineering culture with a cross-functional team. To excel in this role, you should bring strong coding skills and experience with Python and cloud functions, at least 1-2 years of experience with AWS services and GitHub, 6 to 12 months of experience in S3 or other storage/CDN services, exposure to NoSQL databases for developing mobile backends, and proficiency in Agile and Jira tools. A BS or equivalent in Computer Science or Engineering is preferred. If you are ready to take on this exciting opportunity, please send your CV to careers@crunchmediaworks.com.,
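The posting above expects a strong grasp of concurrency and multithreading for media-server work. A small sketch of a worker pool draining a queue of jobs; the "transcode" step is a hypothetical stand-in for real media processing:

```python
# Worker pool draining a job queue: the concurrency pattern behind many
# server-side batch tasks. The "transcode" step here is a placeholder.
import queue
import threading

def run_workers(jobs, num_workers=4):
    """Process jobs concurrently; return the processed results."""
    q, results, lock = queue.Queue(), [], threading.Lock()
    for job in jobs:
        q.put(job)

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            processed = f"{job}.mp4"  # stand-in for actual transcoding work
            with lock:                # guard shared list across threads
                results.append(processed)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_workers(["clip1", "clip2", "clip3"])
print(sorted(out))  # ['clip1.mp4', 'clip2.mp4', 'clip3.mp4']
```

Results arrive in nondeterministic order, which is why the usage sorts them; that ordering question is exactly the kind of detail concurrency interviews probe.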

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 12 Lacs

Hyderabad

Work from Office

Role Summary: We are hiring a skilled Cloud Engineer with 4-6 years of experience in building and managing solutions on AWS and Azure. The ideal candidate must have hands-on experience with DevOps tools and CI/CD automation, and mandatory expertise in AWS. Benefits: health insurance, provident fund.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 10. One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you the opportunity to work on Ratings and Research related data. You will get to work on cutting-edge big data technologies and will be responsible for the development of both data feeds and API work. Location: Hyderabad. The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide Ratings and Research information to clients. Our work deals with content ingestion, data feed generation, and exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible. Impact: As a member of the Xpressfeed Team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application designs, write high-quality code, and innovate on how to improve overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions, and code, and understands the value of cloud solutions, we would like to talk to you. What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development.
In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server based database development. This position offers a unique opportunity to grow both as a full stack developer and as a cloud engineer, expanding your expertise across modern data platforms and backend development.
Responsibilities:
- Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the data feeds.
- Design, implement, and test solutions using AWS EMR for content ingestion.
- Work on complex SQL Server projects involving high-volume data.
- Engineer components and common services based on standard corporate development models, languages, and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders.
- Document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 3-6 years of experience in application development.
- Minimum of 2 years of hands-on experience with Scala.
- Minimum of 2 years of hands-on experience with Microsoft SQL Server.
- Solid understanding of Amazon Web Services (AWS) and cloud-based development.
- In-depth knowledge of system architecture, object-oriented programming, and design patterns.
- Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing.
Preferred Qualifications:
- Familiarity with AWS services: EMR, Auto Scaling, EKS.
- Working knowledge of Snowflake.
- Experience in Python development.
- Familiarity with the Financial Services domain and Capital Markets is a plus.
- Experience developing systems that handle large volumes of data and require high computational performance.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- Implement security and compliance controls in the AWS environment.
- Monitor and optimize AWS resource usage to ensure cost efficiency.
- Automate infrastructure deployment, configuration management, and infrastructure-as-code (IaC) practices.
- Troubleshoot and resolve AWS infrastructure issues.
- Collaborate with development teams to support their cloud-based applications.
- Stay up to date with new AWS services and best practices for cloud infrastructure design and operations.
- Knowledge of DevOps is an advantage.
- Good knowledge of automating AWS service deployment using CloudFormation/Terraform.
- Good experience with patch management, volume management, Linux firewalls, route tables, kernel management, and troubleshooting OS issues.
- Good knowledge of transport layer security (SSL, TLS, etc.).
- Experience with monitoring solutions such as CloudWatch, PagerDuty, Site24x7, the ELK stack, or any other monitoring tool is an advantage.
Preferred candidate profile:
- Strong experience with Amazon Web Services (AWS) infrastructure and services, such as EC2, S3, VPC, IAM, Lambda, CloudWatch, Config, CloudTrail, KMS, and CloudFormation.
- Knowledge of cloud security and networking concepts, including VPC networking, security groups, and network ACLs.
- Experience with cloud migrations, including the design and implementation of scalable, highly available, and secure AWS solutions.
- Knowledge of infrastructure automation tools and technologies, such as Terraform, Ansible, and Chef.
- Ability to write scripts in at least one scripting language, such as Python, Bash, or PowerShell.
- Understanding of ITIL processes and experience with incident management, change management, and problem management.
- Strong interpersonal and communication skills, with the ability to work effectively with technical and non-technical stakeholders.
- Continuously seeking to improve processes and procedures and stay up to date with the latest developments in cloud computing.
- Operating system knowledge: expertise in multiple operating systems such as Windows, Linux, and Unix; 3 years of system admin (Linux/Windows) experience.
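Security-group auditing is one of the scriptable security controls this posting describes. A hedged sketch: the rule dictionaries below mirror, in simplified form, the shape boto3's `describe_security_groups` returns, but the data here is made up, and a real audit would pull live rules from the API:

```python
# Flag security-group rules that expose sensitive ports to the internet.
# Rule dictionaries imitate (simplified) boto3 describe_security_groups output;
# the sample data is hypothetical.
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def open_to_world(rules):
    """Return (port, cidr) pairs where a sensitive port is open to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        for rng in rule.get("IpRanges", []):
            if rng.get("CidrIp") == "0.0.0.0/0" and rule.get("FromPort") in SENSITIVE_PORTS:
                findings.append((rule["FromPort"], rng["CidrIp"]))
    return findings

rules = [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(open_to_world(rules))  # SSH open to the internet; 443 is expected to be public
```

AWS Config rules can enforce the same check continuously; a script like this is the ad-hoc version.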

Posted 2 weeks ago

Apply

6.0 - 9.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Job Title: AWS Admin/AWS Cloud Engineer. Experience: 6-9 years. Location: Bangalore. Skills: AWS Admin, AWS Cloud Engineer, AWS services, CloudWatch/EC2/S3.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

3+ yrs in AWS cloud computing, 5+ yrs in DevOps. Strong in EC2, VPC, LB, Auto Scaling. Hands-on with AWS Lambda, CloudFormation, CI/CD, Python, Bash, PowerShell. Knows IAM, KMS, security, monitoring, networking, blue-green & canary deployments, Neo4j.
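The canary deployments this posting lists amount to shifting traffic to the new version in stages. A toy sketch of such a rollout schedule; the step percentages are illustrative, not a prescribed policy:

```python
# Staged traffic shift for a canary deployment: the new version's share
# steps up while the stable version's share steps down. Step sizes are
# illustrative assumptions, not a standard rollout policy.

def canary_schedule(steps=(5, 25, 50, 100)):
    """Yield (canary_pct, stable_pct) pairs for a staged rollout."""
    for pct in steps:
        yield pct, 100 - pct

plan = list(canary_schedule())
print(plan)  # [(5, 95), (25, 75), (50, 50), (100, 0)]
```

In AWS this weighting is typically expressed through weighted target groups on an ALB or weighted Route 53 records, with a health check gating each step.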

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Gurugram

Work from Office

Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, or New Relic. Required candidate profile: Must have experience in Linux administration. Must have a working knowledge of scripting (Python/Shell).

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

Immediate job openings: Performance Testing (contract), Hyderabad and Bangalore. Experience: 6+ years. Skill: Performance Testing. Location: Hyderabad, Bangalore. Notice period: Immediate. Employment type: Contract. Work mode: WFH. Job description: Strong experience using the JMeter tool. Experience with the APM tool New Relic and the log analysis tool Splunk. Experience with various protocols: HTTP/HTML, web services. Exposure to MuleSoft and AWS. Well versed in test management and requirement management tools like Jira. Knowledge of converting REST APIs to JMeter scripts. Good knowledge of autoscaling and failover load testing.
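Performance-test results like the JMeter summaries this posting mentions are usually reported as latency percentiles. A small sketch using the nearest-rank method; the sample latencies are made up:

```python
# Nearest-rank percentile over latency samples (ms), the kind of figure a
# JMeter aggregate report shows. Sample values are hypothetical.

def percentile(samples, pct):
    """Return the nearest-rank pct-th percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

latencies = [120, 85, 240, 95, 310, 150, 100, 90, 130, 2000]
print(percentile(latencies, 50), percentile(latencies, 90))  # 120 310
```

Note how the single 2000 ms outlier barely moves the p50 and p90 but would dominate the mean, which is why percentiles are the standard load-test metric.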

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Gurugram

Work from Office

Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, or New Relic. Required candidate profile: Must have experience in Linux administration.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Gurugram

Work from Office

Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, or New Relic. Must have experience in Linux administration. Must have a working knowledge of scripting (Python/Shell).

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Mumbai

Work from Office

Notice period: Immediate to 15 days. Employee type: Contract to hire.
- Production operations support experience, preferably with a cloud services provider.
- Experience working with public cloud infrastructure (AWS, Azure, GCP); MS Azure preferred.
- Experience with automation and CI/CD: Jenkins, containers, CloudFormation, ARM, Bicep, Terraform, or similar.
- Experience with Git, GitHub, Git Runner.
- Understanding of standard networking concepts such as DNS, DHCP, subnets, server load balancing, firewalls, and SAN.
- In-depth knowledge of clustering, load balancing, high availability, disaster recovery, and auto scaling.
- Experience configuring and using open-source monitoring and trending systems: Redis, Prometheus, Grafana, Kibana, New Relic, Nagios, and others.
- Experience with a configuration management tool: Ansible/Puppet/Chef.
- Work with the Product team and DevOps to build CI/CD pipelines using CodeBuild, CodeDeploy, and CodePipeline.
- Integrate third-party tools with CI/CD processes (e.g., SonarQube).
- Manage environment configuration using industry-standard DevOps tools.
- Implement scripting to extend build/deployment/monitoring processes.

Posted 1 month ago

Apply

6.0 - 11.0 years

2 - 5 Lacs

Hyderabad

Work from Office

Greetings from PVT LTD. Immediate openings: AWS Cloud Admin (pan-India, contract), 6+ years. Notice period: Immediate. Employment type: Contract. Description:
- AWS infra services: VPN, CDN, VPC, EC2, S3, RDS, IAM, SNS.
- Build AWS infrastructure with CloudFormation templates and Auto Scaling policies.
- EBS: extend/modify volumes, snapshots, restoration, and data backups.
- Daily AMI backups and backups of AWS database instances for backup and DR purposes.
- S3: create buckets, upload/download files, and back up S3 buckets.
- EFS as a file system; mount shared systems (VMs) as per the environment.
- Provision RDS instances (database instance performance, parameter groups).
- Key Management System for EBS, S3, and Git secret and access keys.
- Customized IAM roles and policies for AWS IAM user privileges.
- Virtualization with Docker, ECS, and ECR for high availability of applications.
- Monitor the AWS infrastructure with CloudWatch email alerts and SNS notifications.
- Configure and fine-tune cloud infrastructure systems (accounts/regions/zones, etc.).
- Develop scripts for automating cloud/server tasks.
- Establish and improve metrics; monitor AWS resource utilization using CloudWatch.
- Perform on-premise resource backups utilizing AWS services.
- Work as Cloud Administrator on Microsoft Azure, configuring virtual machines, storage accounts, and resource groups.
- Manage Windows 2016/2012 and Linux (RHEL, Ubuntu, and SuSE) servers; manage day-to-day activity of the cloud environment, supporting development teams with their requirements.
- Additional knowledge of VMware, Active Directory, WSUS, firewalls, and Symantec will be an added advantage.
- AWS Application Migration Service, AWS Server Migration Service, AWS ADS agents, AWS MGN agents, AWS CloudFormation, Terraform templates.
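The daily AMI/snapshot backup duty this posting describes implies a retention policy: keep recent backups, prune the rest. A minimal sketch of that pruning logic, with made-up snapshot IDs and dates; a real script would list and delete snapshots via the EC2 API:

```python
# Select expired snapshots under a simple keep-the-last-N-days policy.
# Snapshot IDs and dates are made up for illustration; in production the
# inventory would come from the EC2 API rather than a literal dict.
from datetime import date, timedelta

def expired_snapshots(snapshots, today, retention_days=7):
    """Return IDs of snapshots taken before today - retention_days."""
    cutoff = today - timedelta(days=retention_days)
    return [sid for sid, taken in snapshots.items() if taken < cutoff]

snaps = {
    "snap-001": date(2024, 1, 1),   # older than the retention window
    "snap-002": date(2024, 1, 9),   # still within the window
}
print(expired_snapshots(snaps, today=date(2024, 1, 10)))  # ['snap-001']
```

AWS's Data Lifecycle Manager automates this same policy declaratively; a script like this is what teams run when they need custom rules.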

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Design, develop, and maintain system integration solutions that connect various applications and platforms using APIs, middleware, or other integration tools. Collaborate with business analysts, architects, and development teams to gather requirements and implement robust integration workflows. Monitor and troubleshoot integration processes, ensuring data consistency, accuracy, and performance. Create technical documentation, perform testing, and resolve any integration-related issues. Ensure compliance with security and data governance standards while optimizing system connectivity and scalability. Stay updated with integration trends and tools to enhance system interoperability.

Posted 1 month ago

Apply

4.0 - 8.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Job Summary: Member of a software engineering team involved in the development and design of the AI Data Platform built on NetApp's flagship storage operating system, ONTAP. ONTAP is a feature-rich stack whose rich data management capabilities bring tremendous value to our customers and are used in mission-critical applications across the world. You will work as part of a team responsible for the development, testing, and debugging of distributed software that drives NetApp cloud, hybrid-cloud, and on-premises AI/ML solutions. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on enhancements to existing products as well as new product development. This is a mid-level technical position that requires an individual to be broad-thinking, systems-focused, creative, team-oriented, technologically savvy, able to work in small and large cross-functional teams, willing to learn, and driven to produce results.
Job Requirements:
- Proficiency in programming languages like Go/Golang.
- Experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, LLMs (open source), LangChain, etc.
- Hands-on experience working with REST APIs and microservices: Flask, API frameworks.
- Experience working in Linux, AWS/Azure/GCP, and Kubernetes (control plane, auto scaling, orchestration, containerization) is a must.
- Experience with NoSQL document databases, e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB.
- Experience building microservices, REST APIs, and related API frameworks.
- Experience with big data technologies and platforms like Spark, Hadoop, and distributed storage systems for handling large-scale datasets and parallel processing.
- Proven track record of working on mid- to large-sized projects.
- Responsible for providing support in the development and testing activities of other engineers that involve several inter-dependencies.
- Participate in technical discussions within the team and across cross-functional teams.
- Willing to work on additional tasks and responsibilities that will contribute towards team, department, and company goals.
- A strong understanding of and experience with concepts related to computer architecture, data structures, and programming practices.
- Experience with AI/ML frameworks like PyTorch or TensorFlow is a plus.
Education: Typically requires a minimum of 4-7 years of related experience with a bachelor's degree or a master's degree, or a PhD with relevant experience.
At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.
Equal Opportunity Employer: NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.
Why NetApp? We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Chennai

Work from Office

Hello, Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. We are looking for an Associate Software Architect with 8+ years of experience in AWS cloud infrastructure design, maintenance, and operations. Key Responsibilities:
Infrastructure architecture, design & management:
- Understand the existing architecture to identify and implement improvements.
- Design and execute the initial implementation of infrastructure.
- Define end-to-end DevOps architecture aligned with business goals and technical requirements.
- Architect and manage AWS cloud infrastructure for scalability, high availability, and cost efficiency, using services like EC2, Auto Scaling, Load Balancers, and Route 53 to ensure high availability and fault tolerance.
- Design and implement secure network architectures using VPCs, subnets, NAT gateways, security groups, NACLs, and private endpoints.
CI/CD pipeline management:
- Design, build, test, and maintain AWS DevOps pipelines for automated deployments across multiple environments (dev, staging, production).
Security & compliance:
- Enforce least-privilege access controls to enhance security.
Monitoring & optimization:
- Centralize monitoring with AWS CloudWatch, CloudTrail, and third-party tools, and set up metrics, dashboards, and alerts.
Infrastructure as Code (IaC):
- Write, maintain, and optimize Terraform templates, AWS CloudFormation, or AWS CDK for infrastructure provisioning.
- Automate resource deployment across multiple environments (DEV, QA, UAT & Prod) and configuration management.
- Manage the infrastructure lifecycle through version-controlled code; modular and reusable IaC design.
License management:
- Use AWS License Manager to track and enforce software license usage.
- Manage BYOL (Bring Your Own License) models for third-party tools like GraphDB.
- Integrate license tracking with AWS Systems Manager, EC2, and CloudWatch.
- Define custom license rules and monitor compliance across accounts using AWS Organizations.
Documentation & governance:
- Create and maintain detailed architectural documentation.
- Participate in code and design reviews to ensure compliance with architectural standards.
- Establish architectural standards and best practices for scalability, security, and maintainability across development and operations teams.
Interpersonal skills:
- Effective communication and collaboration with stakeholders to gather and understand technical and business requirements.
- Strong grasp of Agile and Scrum methodologies for iterative development and team coordination.
- Mentoring and guiding DevOps engineers while fostering a culture of continuous improvement and DevOps best practices.
Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. We'll support you with hybrid working opportunities, a diverse and inclusive culture, a variety of learning and development opportunities, and an attractive compensation package. Find out more about Siemens careers at www.siemens.com/careers
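The license-compliance duty this posting describes reduces to comparing in-use counts against entitlements per product. A hedged sketch of that core check; the product names and counts are hypothetical, and a real implementation would pull usage from AWS License Manager rather than literals:

```python
# Compare in-use license counts against entitlements and report overages.
# Product names and counts are hypothetical; real data would come from
# AWS License Manager inventory, not hard-coded dicts.

def license_violations(entitled, in_use):
    """Return {product: excess_seats} for products exceeding their entitlement."""
    return {
        product: used - entitled.get(product, 0)
        for product, used in in_use.items()
        if used > entitled.get(product, 0)
    }

entitled = {"GraphDB": 5, "SQLServer": 10}
in_use = {"GraphDB": 7, "SQLServer": 9}
print(license_violations(entitled, in_use))  # {'GraphDB': 2}
```

Note that a product absent from `entitled` counts as zero seats, so any usage of it is flagged, which matches how BYOL audits usually treat unregistered software.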

Posted 1 month ago

Apply

7.0 - 10.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary: Proven hands-on experience with cloud security technologies and suites across the GCP, Azure, and OCI platforms; GCP and Kubernetes experience is a must. Hands-on expertise with the Prisma Cloud suite, including the CSPM and Compute modules, CI/CD pipeline integration, and security tooling (SAST, DAST, OSS scanning). Strong understanding of Kubernetes architecture: clusters, workloads, RBAC, networking, auto scaling, and deployment. Familiarity with cloud-native DevOps environments on Azure, OCI, and GCP.

Responsibilities:
- Hands-on experience working with various cloud platforms (GCP, Azure, and OCI; GCP is a must), with an understanding of Google's native controls suite.
- Drive cloud security initiatives, particularly integrating Prisma Cloud controls into CI/CD workflows, runtime protection, and CSPM.
- Define and enforce policies for secure build and deploy processes across clouds and various enforcement points (CI/CD, CSPM, runtime, Gatekeeper policies, Azure tenant policies).
- Assess and monitor Kubernetes environments for misconfigurations and risks.
- Respond to security alerts and recommend remediation strategies.
- Partner with DevOps and engineering to strengthen the security posture across the SDLC.
- Strong understanding of cloud-native security concepts, including network security, identity and access management (IAM), container security, vulnerability scanning, threat management, and incident response.
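To make the "assess Kubernetes environments for misconfigurations" duty concrete, here is a minimal, illustrative sketch of CSPM-style rules applied to a pod spec (the field names follow the Kubernetes pod `securityContext`/`resources` schema; the rule set itself is a hypothetical example, not Prisma Cloud's):

```python
def audit_pod(spec: dict) -> list[str]:
    """Flag common Kubernetes pod misconfigurations with simple static rules."""
    findings = []
    for c in spec.get("containers", []):
        name = c.get("name", "<unnamed>")
        sec = c.get("securityContext", {})
        if sec.get("privileged"):
            findings.append(f"{name}: privileged container")
        if sec.get("runAsNonRoot") is not True:
            findings.append(f"{name}: may run as root")
        if "limits" not in c.get("resources", {}):
            findings.append(f"{name}: no resource limits")
    return findings

pod = {"containers": [{"name": "app",
                       "securityContext": {"privileged": True},
                       "resources": {}}]}
for finding in audit_pod(pod):
    print(finding)
```

Production scanners evaluate far richer policy (network policies, image provenance, RBAC bindings); this only shows the shape of the check.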

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 9 Lacs

Bengaluru

Work from Office

As an L2 Cloud Engineer at Acqueon, you will:
- Ensure the highest uptime for customers in our SaaS environment.
- Provision customer tenants and manage the SaaS platform, including memos to the staging and production environments.
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots.
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- Experience: 3-5 years of experience in CloudOps roles with a strong focus on AWS.
- Proficient in AWS services, including EC2, S3, RDS, Lambda, IAM, CloudFront, and VPC.
- Hands-on experience with Terraform, CloudFormation, or other IaC tools.
- Strong knowledge of CI/CD pipelines (e.g., AWS CodePipeline, Jenkins, GitHub Actions).
- Experience with container technologies like Docker and orchestration tools like Kubernetes (EKS).
- Scripting knowledge (e.g., Python, Bash, PowerShell) for automation and tooling.
- Monitoring & Logging: Experience with monitoring tools like AWS CloudWatch, the ELK stack, Prometheus, or Grafana.
- Security: Strong understanding of cloud security principles, including IAM, encryption, and AWS security tools (e.g., AWS WAF, GuardDuty).
- Collaboration Tools: Familiarity with tools like Git, Jira, and Confluence.
- Good knowledge of Windows Server and Linux, with the ability to troubleshoot critical issues.
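The cost-optimization duty above weighs Reserved Instances against On-Demand pricing. As an illustrative sketch (the rates below are made up, not AWS price-list figures), the trade-off reduces to comparing On-Demand cost for actual usage against the Reserved cost that accrues for every hour of the term:

```python
def ri_savings(on_demand_rate: float, reserved_rate: float,
               hours_used: float, term_hours: float) -> float:
    """On-Demand cost for the hours actually used, minus the Reserved
    Instance cost, which is incurred for the whole term regardless of use."""
    return on_demand_rate * hours_used - reserved_rate * term_hours

def breakeven_utilization(on_demand_rate: float, reserved_rate: float) -> float:
    """Fraction of the term an instance must run for the RI to pay off."""
    return reserved_rate / on_demand_rate

# Hypothetical rates: $0.10/h On-Demand vs $0.06/h reserved over a 1-year term.
print(round(ri_savings(0.10, 0.06, hours_used=8760, term_hours=8760), 2))
print(breakeven_utilization(0.10, 0.06))  # RI wins above 60% utilization
```

Real sizing also factors in upfront payment options and Savings Plans, which this model ignores.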

Posted 1 month ago

Apply

8.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Work from Office

As an L2 Cloud Engineer at Acqueon, you will:
- Ensure the highest uptime for customers in our SaaS environment.
- Provision customer tenants and manage the SaaS platform, including memos to the staging and production environments.
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots.
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- Experience: 5+ years of experience in CloudOps roles with a strong focus on AWS.
- Proficient in AWS services, including EC2, S3, RDS, Lambda, IAM, CloudFront, and VPC.
- Hands-on experience with Terraform, CloudFormation, or other IaC tools.
- Strong knowledge of CI/CD pipelines (e.g., AWS CodePipeline, Jenkins, GitHub Actions).
- Experience with container technologies like Docker and orchestration tools like Kubernetes (EKS).
- Scripting knowledge (e.g., Python, Bash, PowerShell) for automation and tooling.
- Monitoring & Logging: Experience with monitoring tools like AWS CloudWatch, the ELK stack, Prometheus, or Grafana.
- Security: Strong understanding of cloud security principles, including IAM, encryption, and AWS security tools (e.g., AWS WAF, GuardDuty).
- Collaboration Tools: Familiarity with tools like Git, Jira, and Confluence.
- Good knowledge of Windows Server and Linux, with the ability to troubleshoot critical issues.
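The "IAM policies and least privilege" requirement above is easiest to see in a concrete policy document. Here is an illustrative sketch that builds a read-only policy scoped to a single S3 prefix; the bucket and prefix names are hypothetical, and the JSON follows the standard IAM policy grammar:

```python
import json

def least_privilege_policy(bucket: str, prefix: str) -> str:
    """Build a minimal IAM policy granting read-only access to one S3
    prefix: object reads on the prefix, and listing restricted to it."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(least_privilege_policy("my-bucket", "reports"))
```

The point of the shape: no wildcard actions, and the list permission is conditioned on the same prefix the read permission covers.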

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary: Member of a software engineering team involved in the development and design of the AI Data Platform built on NetApp's flagship storage operating system, ONTAP. ONTAP is a feature-rich stack whose data management capabilities deliver tremendous value to our customers, and it is used in mission-critical applications across the world. You will work as part of a team responsible for the development, testing, and debugging of distributed software that drives NetApp cloud, hybrid-cloud, and on-premises AI/ML solutions. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on enhancements to existing products as well as new product development. This is a mid-level technical position that requires an individual who is broad-thinking, systems-focused, creative, team-oriented, and technologically savvy, able to work in both small and large cross-functional teams, willing to learn, and driven to produce results.

Job Requirements:
- Proficiency in programming languages like Go (Golang).
- Experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, open-source LLMs, LangChain, etc.
- Hands-on experience working with REST APIs and microservices (Flask and similar API frameworks).
- Experience working in Linux, AWS/Azure/GCP, and Kubernetes (control plane, auto scaling, orchestration, containerization) is a must.
- Experience with NoSQL document databases, e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB.
- Experience building microservices, REST APIs, and related API frameworks.
- Experience with big data technologies: understanding of platforms like Spark and Hadoop and of distributed storage systems for handling large-scale datasets and parallel processing.
- Proven track record of working on mid- to large-sized projects.
- Provide support in the development and testing activities of other engineers that involve several inter-dependencies.
- Participate in technical discussions within the team and across cross-functional teams.
- Willing to take on additional tasks and responsibilities that contribute towards team, department, and company goals.
- A strong understanding of, and experience with, concepts related to computer architecture, data structures, and programming practices.
- Experience with AI/ML frameworks like PyTorch or TensorFlow is a plus.

Education: Typically requires a minimum of 4-7 years of related experience with a bachelor's degree or a master's degree; or a PhD with relevant experience.
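Since the posting stresses REST API and microservice experience, one small pattern worth knowing is retry with exponential backoff for idempotent calls. As an illustrative sketch (language chosen for brevity; the posting itself asks for Go), the delay schedule doubles per attempt up to a cap:

```python
def backoff_schedule(base: float, retries: int, cap: float = 30.0) -> list[float]:
    """Exponential backoff delays for retrying idempotent REST calls:
    base, 2*base, 4*base, ... capped at `cap` seconds (jitter omitted)."""
    return [min(cap, base * (2 ** i)) for i in range(retries)]

print(backoff_schedule(0.5, 4))   # [0.5, 1.0, 2.0, 4.0]
```

Production clients usually add random jitter to avoid synchronized retry storms; that is left out here to keep the schedule deterministic.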

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 08. One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you the opportunity to work on Ratings and Research related data. You will work on cutting-edge big data technologies and be responsible for the development of both data feeds and APIs.

Location: Hyderabad

The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide Ratings and Research information to clients. Our work deals with content ingestion, data feed generation, and exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible.

Impact: As a member of the Xpressfeed team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application design, write high-quality code, and innovate on how to improve overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions, and code, and understands the value of cloud solutions, we would like to talk to you.

What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development.

Responsibilities:
- Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the data feeds.
- Design, implement, and test solutions using AWS EMR for content ingestion.
- Work on complex SQL Server projects involving high-volume data.
- Engineer components and common services based on standard corporate development models, languages, and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders.
- Document and demonstrate technical solutions through documentation, diagrams, code comments, etc.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 3-6 years of experience in application development.
- Minimum of 2 years of hands-on experience with Scala.
- Minimum of 2 years of hands-on experience with Microsoft SQL Server.
- Solid understanding of Amazon Web Services (AWS) and cloud-based development.
- In-depth knowledge of system architecture, object-oriented programming, and design patterns.
- Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing.

Preferred Qualifications:
- Familiarity with AWS services: EMR, Auto Scaling, EKS.
- Working knowledge of Snowflake.
- Experience in Python development.
- Familiarity with the Financial Services domain and Capital Markets is a plus.
- Experience developing systems that handle large volumes of data and require high computational performance.
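A recurring step in the feed-generation work described above is collapsing an ingested stream to the latest version of each record before publishing. As an illustrative sketch (the record shape with `id` and `version` fields is a hypothetical simplification, not the RatingsXpress schema):

```python
def latest_records(records: list[dict]) -> list[dict]:
    """Collapse a feed to one record per id, keeping the highest version,
    as a typical dedup step between ingestion and publication."""
    best: dict[str, dict] = {}
    for rec in records:
        cur = best.get(rec["id"])
        if cur is None or rec["version"] > cur["version"]:
            best[rec["id"]] = rec
    return sorted(best.values(), key=lambda r: r["id"])

feed = [{"id": "A", "version": 1},
        {"id": "B", "version": 2},
        {"id": "A", "version": 3}]
print(latest_records(feed))
```

At feed scale this same reduce-by-key step would run on Spark or EMR rather than in a single process; the logic is identical.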

Posted 1 month ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Coimbatore

Work from Office

Job Information: Job Opening ID: ZR_2228_JOB. Date Opened: 20/04/2024. Industry: Technology. Work Experience: 5-8 years. Job Title: Cloud Developer. City: Coimbatore. Province: Tamil Nadu. Country: India. Postal Code: 638103. Number of Positions: 4.

Cloud Skills (AWS, mandatory): Compute, Networking, Security, EC2, S3, IAM, VPC, Lambda, RDS, ECS, EKS, CloudWatch, Load Balancers, Auto Scaling, CloudFront, Route 53, Security Groups, DynamoDB, CloudTrail, REST APIs, FastAPI, Node.js. Azure (overview, optional). GCP (overview, optional).

Programming/IaC Skills: Python (mandatory), Chef (mandatory), Ansible (mandatory), Terraform (mandatory), Go (optional), Java (optional).

Candidates should have more than 4 years of cloud development experience and should currently be working in cloud development.
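CloudWatch appears in the mandatory skills above; its alarms fire when M of the last N datapoints breach a threshold. As an illustrative sketch of that evaluation (simplified, and ignoring CloudWatch's missing-data handling):

```python
def alarm_state(datapoints: list[float], threshold: float,
                m: int, n: int) -> str:
    """CloudWatch-style 'M out of N' evaluation: ALARM when at least m of
    the last n datapoints exceed the threshold, OK otherwise."""
    window = datapoints[-n:]
    breaches = sum(1 for d in window if d > threshold)
    return "ALARM" if breaches >= m else "OK"

# CPU samples; alarm on 2 of the last 3 samples above 90%.
print(alarm_state([70, 95, 92, 88], threshold=90.0, m=2, n=3))  # ALARM
```

The M-of-N form exists precisely so a single noisy sample does not page anyone.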

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary: Member of a software engineering team involved in the development and design of the AI Data Platform built on NetApp's flagship storage operating system, ONTAP. ONTAP is a feature-rich stack whose data management capabilities deliver tremendous value to our customers, and it is used in mission-critical applications across the world. You will work as part of a team responsible for the development, testing, and debugging of distributed software that drives NetApp cloud, hybrid-cloud, and on-premises AI/ML solutions. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on enhancements to existing products as well as new product development. This is a mid-level technical position that requires an individual who is broad-thinking, systems-focused, creative, team-oriented, and technologically savvy, able to work in both small and large cross-functional teams, willing to learn, and driven to produce results.

Job Requirements:
- Proficiency in programming languages like Go.
- Experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, open-source LLMs, LangChain, etc.
- Hands-on experience working with REST APIs and microservices (Flask and similar API frameworks).
- Experience working in Linux, AWS/Azure/GCP, and Kubernetes (control plane, auto scaling, orchestration, containerization) is a must.
- Experience with NoSQL document databases, e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB.
- Experience building microservices, REST APIs, and related API frameworks.
- Experience with big data technologies: understanding of platforms like Spark and Hadoop and of distributed storage systems for handling large-scale datasets and parallel processing.
- Proven track record of working on mid- to large-sized projects.
- Provide support in the development and testing activities of other engineers that involve several inter-dependencies.
- Participate in technical discussions within the team and across cross-functional teams.
- Willing to take on additional tasks and responsibilities that contribute towards team, department, and company goals.
- A strong understanding of, and experience with, concepts related to computer architecture, data structures, and programming practices.
- Experience with AI/ML frameworks like PyTorch or TensorFlow is a plus.

Education: IC: Typically requires a minimum of 4-7 years of related experience with a Bachelor's degree or a Master's degree; or a PhD with relevant experience.

Posted 2 months ago

Apply
Page 1 of 2