834 AWS Cloud Jobs - Page 11

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Requirements:
- 6+ years of experience
- Proficiency in AWS Cloud, Kubernetes, Rancher, Terraform, Elasticsearch Operator, and CI/CD (Continuous Integration/Continuous Deployment)
- Experience in building scalable, efficient, highly available infrastructure

Responsibilities:
- Maintenance of a large, modern search platform
- Handling production issues and incidents
- Assisting developers and QA engineers with their work
- Optimizing and maintaining existing infrastructure for performance and availability
- Ensuring high system performance and availability

Team information:
- Work within a SAFe, Scrum/Kanban methodology and an agile approach
- Collaborative and friendly atmosphere
- Microservice architecture and extensive CI/CD automation

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Bengaluru

Hybrid

Requirements:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience in designing data pipelines and data lakes
- Good communication skills; capable of leading and mentoring a team

Role & responsibilities: share profiles to afreen.banu@in.experis.com

Posted 1 week ago

Apply

7.0 - 12.0 years

5 - 13 Lacs

Pune

Hybrid

So, what's the role all about?

NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it is a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It is widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you will need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, customer support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills, with the ability to analyze complex issues and implement effective solutions.
- Good communication skills, with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 8 to 12 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS Cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
- Shift: 24x7 rotational shift (includes night shifts).

Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player, with the ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor

Posted 1 week ago

Apply

6.0 - 9.0 years

4 - 9 Lacs

Pune

Hybrid

So, what's the role all about?

NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it is a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It is widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you will need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, customer support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills, with the ability to analyze complex issues and implement effective solutions.
- Good communication skills, with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 5 to 7 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS Cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
- Shift: 24x7 rotational shift (includes night shifts).

Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player, with the ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 18 Lacs

Pune

Work from Office

Key responsibilities:
- Understand the business needs, define and drive roles and responsibilities for team members, groom them effectively, and make the team self-running.
- Be highly collaborative with the team and other stakeholders.
- Understand the compliance and regulatory terms/requirements of the business, learn and adapt to them, ensure the team follows them, and make sure system errors deviating from them get immediate attention and resolution.
- Be flexible about sharing extended duties when the need arises to deliver consistent results.
- Take initiative on continuous improvement and quality of operations, finding and fixing gaps to decrease turnaround times.
- Provide technical support for incoming tickets from our users, including extensive troubleshooting and root-cause assessment.
- Develop tools to aid operations and maintenance.
- Bring industry practices for documenting all of the team's processes.
- Have managed a team for at least 2 years or so and have driven technical projects.
- Document and maintain system configuration documents and process documents.

Desirable skills:
- Basic knowledge of Java.
- Very good knowledge of code repositories (Git, Bitbucket, SVN).
- Very good in MongoDB and MySQL/SQL.
- Good knowledge of the Linux environment and shell scripting.
- Implemented application monitoring for web services with ELK, Grafana, Zabbix.
- Partnered with development to enhance release, change, and configuration management orchestration capabilities, employing Puppet, Ansible, Jenkins, and Git.
- Deployed and supported multiple virtualization environments, including AWS, VMware/vSphere, Vagrant, Docker, VirtualBox, and Kubernetes.
- Worked with project management tools like JIRA.
- Excellent written and oral communication skills.
- Knowledge of incident and escalation practices.
- Exposure to cloud services like AWS is a big plus.
- Monitoring system performance related to virtual memory, swap space, disk utilization, CPU utilization, and network-related configuration.
- Manage and configure FTP, web server (Apache), Samba, and SSH servers.
- Configure DNS server and client; knowledge of configuring Dynamic Host Configuration Protocol (DHCP) in a Linux environment.
- Managing swap configuration and configuring Access Control Lists (ACLs).
- Install and configure Kernel-based Virtual Machine (KVM) as per company policy.
- Performed troubleshooting on Linux servers, resolving boot issues and maintaining servers using rescue mode and single-user mode.

Experience: 5-7 years

Posted 1 week ago

Apply

15.0 - 20.0 years

10 - 15 Lacs

Ahmedabad

Work from Office

(GenAI, Java, AI/ML, AWS, and SaaS are a must)
- 15 years of experience in software engineering, with at least 5 years in a leadership role.
- Strong technology expertise in Java, microservices architecture, the AWS cloud platform, AI, and the Angular framework.
- Solid background in building scalable and distributed systems, with expertise in technologies such as Spring Boot (Spring Core, AOP, Transactions, Data, Security), Cassandra, Kubernetes (K8s), Kafka, Docker, and others.
- Experience with security best practices and protocols (e.g., SSL/TLS, OAuth).
- Hands-on experience with architecture and design patterns.
- Practice the industry's leading guidelines/processes in building enterprise products/components.
- Proven track record of successfully leading and managing high-performing engineering teams.
- Excellent communication, interpersonal, and leadership skills.
- Ability to mentor and coach others, helping them develop their technical and leadership skills.
- Strong problem-solving and analytical skills.
- Experience with Agile development methodologies.
- Ability to prioritize effectively and manage multiple tasks simultaneously.
- Experience in building and scaling software applications.
- Experience in recruiting and hiring top-tier engineering talent.
- Ability to work effectively in a cross-functional team environment.

Skills: Angular, SSL/TLS, Agile development methodologies, Spring Boot, Kafka, AWS cloud platform, Docker, leadership, Java, Kubernetes (K8s), AI, microservices architecture, software, Cassandra, security best practices, OAuth, AWS

Posted 1 week ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Chennai

Work from Office

Hello, Visionary!

We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. We are looking for an Associate Software Architect with 8+ years of experience in AWS cloud infrastructure design, maintenance, and operations.

Key responsibilities:

Infrastructure architecture, design & management
- Understand the existing architecture to identify and implement improvements.
- Design and execute the initial implementation of infrastructure.
- Define end-to-end DevOps architecture aligned with business goals and technical requirements.
- Architect and manage AWS cloud infrastructure for scalability, high availability, and cost efficiency, using services like EC2, Auto Scaling, Load Balancers, and Route 53 to ensure high availability and fault tolerance.
- Design and implement secure network architectures using VPCs, subnets, NAT gateways, security groups, NACLs, and private endpoints.

CI/CD pipeline management
- Design, build, test, and maintain AWS DevOps pipelines for automated deployments across multiple environments (dev, staging, production).

Security & compliance
- Enforce least-privilege access controls to enhance security.

Monitoring & optimization
- Centralize monitoring with AWS CloudWatch, CloudTrail, and third-party tools, and set up metrics, dashboards, and alerts.

Infrastructure as Code (IaC)
- Write, maintain, and optimize Terraform templates / AWS CloudFormation / AWS CDK for infrastructure provisioning.
- Automate resource deployment across multiple environments (DEV, QA, UAT & Prod) and configuration management.
- Manage the infrastructure lifecycle through version-controlled code.
- Modular and reusable IaC design.

License management
- Use AWS License Manager to track and enforce software license usage.
- Manage BYOL (Bring Your Own License) models for third-party tools like GraphDB.
- Integrate license tracking with AWS Systems Manager, EC2, and CloudWatch.
- Define custom license rules and monitor compliance across accounts using AWS Organizations.

Documentation & governance
- Create and maintain detailed architectural documentation.
- Participate in code and design reviews to ensure compliance with architectural standards.
- Establish architectural standards and best practices for scalability, security, and maintainability across development and operations teams.

Interpersonal skills
- Effective communication and collaboration with stakeholders to gather and understand technical and business requirements.
- Strong grasp of Agile and Scrum methodologies for iterative development and team coordination.
- Mentoring and guiding DevOps engineers while fostering a culture of continuous improvement and DevOps best practices.

Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow.

We'll support you with:
- Hybrid working opportunities.
- A diverse and inclusive culture.
- A variety of learning & development opportunities.
- An attractive compensation package.

Find out more about Siemens careers at www.siemens.com/careers

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Requirements:
- 6+ years of experience
- Proficiency in Java, Spring Boot, object databases, and Elasticsearch/Solr
- Practice in using AWS cloud, Docker & Kubernetes, and REST APIs
- Experience in building scalable, high-performance systems
- Strong communication skills in English (B2+)
- Nice-to-have: knowledge of Python, ETL experience, and big data solutions

Responsibilities:
- Maintenance of a large, modern search platform
- Handling production issues and incidents
- Optimizing and maintaining existing code for performance and availability
- Ensuring high performance and availability of the system
- Engaging in the release process

Team information:
- Work within a SAFe, Scrum/Kanban methodology and an agile approach
- Collaborative and friendly atmosphere
- Microservice architecture and extensive CI/CD automation
- Tools used: Git, IntelliJ, Jira, Confluence, i3 by Tieto as the search backend

Skills: AWS cloud, Docker & Kubernetes, REST APIs, strong communication skills in English (B2+), Java, Spring Boot, object databases, Elasticsearch/Solr, big data solutions, Python, ETL

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Pune

Hybrid

Role & responsibilities

Description: A Cloud Engineer (DevOps) in AWS is responsible for designing, implementing, and managing AWS-based solutions. This role involves ensuring the scalability, security, and efficiency of AWS infrastructure to support business operations and development activities, and collaborating with cross-functional teams to optimize cloud services and drive innovation.

Tasks:
- Design and implement scalable, secure, and reliable AWS cloud infrastructure
- Manage and optimize AWS resources to ensure cost efficiency
- Develop and maintain Infrastructure as Code (IaC) scripts
- Monitor system performance and troubleshoot issues
- Implement security best practices and compliance measures
- Collaborate with development teams to support application deployment
- Automate operational tasks using scripting and automation tools
- Conduct regular system audits and generate reports
- Stay updated on the latest AWS features and industry trends
- Provide technical guidance and support to team members

General requirements:
- At least 5 years of experience as an AWS cloud engineer or AWS architect, preferably in the automotive sector
- Degree in Computer Science (or similar), or alternatively well-founded professional experience in the desired field
- Business-fluent English (at least C1)
- Very good communication and presentation skills

Preferred candidate profile (hard skills):
- Proficiency in AWS services (S3, ECS, Lambda, Glue, Athena, EC2, SageMaker, Batch Processing, Bedrock, API Gateway, Security Hub, AWS Inspector, etc.)
- Strong understanding of cloud architecture and best practices
- Experience with the AWS CDK as an Infrastructure as Code (IaC) tool, with a programming language like Python or TypeScript
- Knowledge of networking concepts, security protocols, and SonarQube
- Familiarity with CI/CD pipelines and DevOps practices in GitLab
- Ability to troubleshoot and resolve technical issues
- Scripting skills (Python, Bash, etc.)
- Experience with monitoring and logging tools (CloudWatch, CloudTrail)
- Understanding of containerization (Docker, ECS)
- Excellent communication and collaboration skills

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies
- Experience in developing streaming pipelines
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developer

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

You: A fearless and dynamic engineer on an upward path. You have ambitious career goals and are looking for a company and team where these goals will be fulfilled. You are not afraid to take risks. You are computer-science literate and can demonstrate competency in the qualifications you put on your resume. You can quickly figure out how things work. You can read code and learn from it; you can write code so others can learn from it. You can quickly integrate into a team and start contributing. You are comfortable working in multiple codebases and languages.

What we want you to do: You will work with a talented team to deliver market-leading FinOps-based cloud governance, optimization, and control solutions. You will be part of a team that has end-to-end ownership of multiple microservices, from design, development, and deployment through operations and interacting with customers. You will collaborate with other teams to integrate our product within the overall technical solution and ecosystem.

Required education: Bachelor's degree

Required technical and professional expertise:
- 4 to 6 years of proven experience in the design and development of enterprise-level software, and in testing and supporting software applications
- Experience working with software engineers to deliver roadmap items
- Distributed processing and distributed-store processing of large (petabyte-scale) datasets
- Proficiency with system design, software-defined infrastructure, and microservices
- Demonstrable computer-science literacy: algorithms, data structures
- Database implementations (query optimization, index generation, caching) or NoSQL DBs
- Proficiency with Java is essential
- Product development experience with SaaS solutions hosted on the AWS cloud
- Proficiency in the AWS cloud, with insight into cloud infrastructure such as pricing, cost, utilization, and AWS resources
- Exposure to system design, software-defined infrastructure, and microservices
- Experience with relational databases, schema design, and SQL
- Experience working in a DevOps model
- Proven application development skills with web or enterprise-scale software
- Strong knowledge of data structures, algorithms, and object-oriented programming
- Excellent communication skills, collaboration across teams, and critical thinking
- Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment
- Bachelor's degree in Computer Science or equivalent experience

Preferred technical and professional experience:
- Experience in Azure, GCP, and other cloud technologies
- Experience with cloud governance, cost management, and cloud pricing/cost/utilization data
- Proficiency in full-stack product development
- Experience in ReactJS
- Experience in Python and Go
- Experience with distributed source control systems such as Git
- Experience with test-driven development and frameworks (e.g., JUnit)

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Bengaluru

Work from Office

You: A fearless and dynamic engineer on an upward path. You have ambitious career goals and are looking for a company and team where these goals will be fulfilled. You are not afraid to take risks. You are computer-science literate and can demonstrate competency in the qualifications you put on your resume. You can quickly figure out how things work. You can read code and learn from it; you can write code so others can learn from it. You can quickly integrate into a team and start contributing. You are comfortable working in multiple codebases and languages.

What we want you to do: You will work with a talented team to deliver market-leading FinOps-based cloud governance, optimization, and control solutions. You will be part of a team that has end-to-end ownership of multiple microservices, from design, development, and deployment through operations and interacting with customers. You will collaborate with other teams to integrate our product within the overall technical solution and ecosystem.

Required education: Bachelor's degree
Preferred education: Bachelor's degree

Required technical and professional expertise:
- 1 to 3 years of proven experience in the design and development of enterprise-level software, and in testing and supporting software applications
- Experience working with software engineers to deliver roadmap items
- Distributed processing and distributed-store processing of large (petabyte-scale) datasets
- Demonstrable computer-science literacy: algorithms, data structures
- Database implementations (query optimization, index generation, caching) or NoSQL DBs
- Proficiency with Java is essential
- Product development experience with SaaS solutions hosted on the AWS cloud
- Proficiency in the AWS cloud, with insight into cloud infrastructure such as pricing, cost, utilization, and AWS resources
- Exposure to system design, software-defined infrastructure, and microservices
- Experience with relational databases, schema design, and SQL
- Experience working in a DevOps model
- Proven application development skills with web or enterprise-scale software
- Strong knowledge of data structures, algorithms, and object-oriented programming
- Excellent communication skills, collaboration across teams, and critical thinking
- Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment
- Bachelor's degree in Computer Science or equivalent experience

Preferred technical and professional experience:
- Experience in Azure, GCP, and other cloud technologies
- Experience with cloud governance, cost management, and cloud pricing/cost/utilization data
- Proficiency in full-stack product development
- Experience in ReactJS
- Experience in Python and Go
- Experience with distributed source control systems such as Git
- Experience with test-driven development and frameworks (e.g., JUnit)

Posted 1 week ago

Apply

1.0 - 2.0 years

1 - 2 Lacs

Kolkata

Work from Office

Job Title: Cloud Techno-Commercial Assistant
Location: Kolkata
Job Type: Full-time, Work from Office
Experience: 1-2 years

Job Summary: We are seeking a skilled and dynamic Techno-Commercial Cloud Assistant to bridge the gap between cloud technical teams and business stakeholders. The ideal candidate will support cloud proposals, pricing, and customer engagement, providing both technical insights and commercial value propositions. You will work closely with sales, engineering, and product teams to deliver cloud solutions that meet client needs while aligning with business goals.

Role & responsibilities:
- Assist in tracking client inquiries and coordinate with the sales and technical teams to ensure timely and accurate responses.
- Support cloud usage analysis by generating basic reports using tools like AWS Cost Explorer and Azure Cost Management.
- Maintain and regularly update cloud solution templates, pricing sheets, and client presentation decks to ensure accuracy and relevance.
- Benchmark cloud service pricing and features across providers (AWS, Azure, GWS) to support solution comparisons and recommendations.
- Create and manage a centralized repository of reusable cloud solution assets such as case studies, proposal templates, and FAQs.
- Monitor industry news, vendor updates, and promotional offers to inform the team about new opportunities or price changes.
- Participate in internal brainstorming sessions to contribute to the development of customized and cost-effective client cloud solutions.
- Schedule and coordinate meetings, demos, and follow-ups related to pre-sales and commercial discussions.
- Assist in the creation of customer-facing documentation, including FAQs, solution diagrams, service overviews, and how-to guides.
- Work under the guidance of senior cloud engineers to identify and suggest cost optimization strategies based on client usage patterns.

Preferred candidate profile:
- Basic understanding of cloud platforms like AWS, Azure, or GWS.
- Strong interest in a hybrid career role involving both technology and business.
- Good communication and presentation skills.
- Ability to work collaboratively with both technical and sales teams.
- Certification in AWS Cloud Practitioner or Azure Fundamentals (AZ-900) is an added advantage.
- Candidates based in Kolkata or its outskirts will be preferred.

Posted 1 week ago

Apply

5.0 - 6.0 years

6 - 6 Lacs

Chennai

Work from Office

The opening is with a telecom company based in Chennai (Navalur).

Role & responsibilities: Cloud Infra Ops Engineer

This vital role requires both deep and wide cloud technology experience in a customer-facing environment. Also critical to this role is an acute understanding of the impact of mission-critical activities in the cloud.

Required skills:
- Identifying the relevant specific components and interfaces needed for a cloud infrastructure and virtualization platform.
- Cloud management platform (VIM controller): end-to-end architectural know-how of OpenStack components (RHEL host OS and Linux components such as OVS).
- System administration of Linux (Ubuntu) / VMware environments.
- Nova (compute) and Cinder (storage), and the Ericsson Hyperscale Datacenter solution HDS 8000: hardware knowledge, NFV architecture, Atos GUI knowledge, Neutron (network), SDN, VLANs, and overlays (VXLAN, MPLS GRE, etc.).
- In-depth understanding of EO architecture and components.
- DC network virtualization based on SDN controllers and virtual switches; tenant virtual network implementation stretching across multiple NFVIs.
- Demonstrated experience in cloud administration in a datacenter.
- Demonstrated experience in engineering and virtualization in the cloud environment, including virtual machine deployment and management, cloud orchestration, service instantiation and assurance, and cloud analytics (optional).
- Red Hat 7 and any one cloud technology (OpenStack, Azure, AWS).
- Backup/restore, upgrade rollback, and new VM creation.
- Cinder, Ceilometer/Gnocchi/Aodh, Glance, Heat, Horizon, Keystone, Neutron, Nova, libvirt/KVM, Pacemaker, RabbitMQ.
- Linux distributions: Ubuntu, CentOS, Red Hat - new installation and troubleshooting.
- Thorough understanding of cloud technologies and the ecosystem; analyze issues and provide the root cause.
- In-depth understanding of OpenStack architecture and components and IaaS (Infrastructure as a Service) deployments.
- Live experience deploying cloud infrastructure using Red Hat OpenStack with TripleO.
- Senior-level OpenStack experience (minimum 3 years of OpenStack); must know the architecture and operations and be able to troubleshoot bugs within OpenStack to achieve root cause analysis.
- Troubleshooting OS memory and CPU utilization breaches and providing RCA; expert-level Linux OS troubleshooting.
- Ability to troubleshoot issues with the underlying components of OpenStack when investigating incidents or testing new features and projects.
- Demonstrated ability to use configuration languages like Puppet/Chef/Ansible/Salt/Bash to create automations and manage systems.
- Understands service deployment using the virtualization, orchestration, image, and bare-metal provisioning services, etc.
- Hands-on shell scripting knowledge.
- Any experience with cloud technologies like OpenStack / AWS is highly desirable.
- Strong fundamentals, including networking, security, OS concepts, and virtualization.
- Design, implement, and maintain cloud infrastructure and services: server hardware and software integration and testing, installation of virtualization services, cloud, server OS, and applications, and configuring storage, networking, and security functionality.
- Storage knowledge (HP, EMC, and IBM): LUN creation and allocation.
- Network knowledge: creating and assigning VLANs.
- Knowledge of ticketing tools like ITSM: incident management, problem management, change management.

Posted 1 week ago

Apply

3.0 - 7.0 years

7 - 13 Lacs

Bengaluru

Hybrid

Software Engineer - L2 Chatbot Engineering role! 3+ years of experience; Python (must), Java/Golang (good), Django, AWS, Postgres, NLP, LLMs/GenAI, ML, NLU, NLG, TTS, voice bots. Full-stack experience is a plus. C2H with TE Infotech (Exotel). Location: BLR. Apply: ssankala@toppersedge.com

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Remote

Senior Cloud Engineer Job Description

Position Title: Senior Cloud Engineer - AWS
Location: Remote

Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.

Key responsibilities:
- Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
- Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
- Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
- Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
- Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
- Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
- Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
- Participate in the on-call rotation and provide escalation support for critical cloud infrastructure issues
- Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
- Stay current with emerging cloud technologies, trends, and best practices, recommending improvements and driving innovation

Required qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6-10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root-cause-analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion

Preferred qualifications:
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 8 Lacs

Noida

Work from Office

Roles & responsibilities:
- Proficient in Python, including GitHub and Git commands
- Develop code based on functional specifications through an understanding of the project code
- Test code to verify that it meets the technical specifications and works as intended before submitting it for code review
- Experience writing tests in Python using Pytest
- Follow prescribed standards and processes applicable to the software development methodology, including planning, work estimation, solution demos, and reviews
- Read and understand basic software requirements
- Assist with the implementation of a delivery pipeline, including test automation, security, and performance
- Assist in troubleshooting and responding to production issues to ensure the stability of the application

Must-have and mandatory:
- Very good experience in Python Flask, SQLAlchemy, and Pytest
- Knowledge of cloud services such as AWS Cloud, Lambda, S3, and DynamoDB
- Database: PostgreSQL, MySQL, or any relational database; can provide suggestions for performance improvements, strategy, etc.
- Expertise in object-oriented design and multi-threaded programming

Total experience expected: 4-6 years

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 12 Lacs

Pune

Hybrid

So, what's the role all about?

We are looking for a highly skilled and motivated Senior Developer to join our team, with strong expertise in Python and deep experience in building intelligent agentic systems using AWS Bedrock Agents and AWS Q workflows. This role focuses on building end-to-end agentic task-assistance solutions that execute complex workflows and enable seamless orchestration across systems. You will play a key role in creating smart automation that bridges front-office interactions (customer-facing systems) with mid- and back-office operations (e.g., finance, fulfillment, compliance), empowering enterprise-grade digital transformation.

How will you make an impact?
- Design, develop, and maintain scalable full-stack applications using Python.
- Build intelligent task agents leveraging AWS Bedrock Agents to manage and automate multi-step workflows.
- Integrate and orchestrate AWS Q workflows to handle complex, enterprise-level task execution and decision-making processes.
- Enable contextual task handoff between front-office and mid/back-office systems, ensuring smooth operational continuity.
- Collaborate closely with cross-functional teams, including product, DevOps, and AI/ML engineers, to deliver secure, efficient, and intelligent systems.
- Write clean, maintainable code and contribute to architecture and design decisions for highly available agentic systems.
- Monitor, debug, and optimize live systems and workflows to ensure robust performance at scale.

Have you got what it takes?
- 6+ years of full-stack development experience with strong hands-on skills in Python.
- Proven expertise in designing and deploying intelligent agents using AWS Bedrock Agents.
- Solid experience with AWS Q workflows, including building and managing complex, automated workflow orchestration.
- Demonstrated ability to integrate AI-powered agents with enterprise systems and back-office applications.
- Experience building microservices and RESTful APIs within an AWS cloud-native architecture.
- Understanding of enterprise operations and workflow handoffs between business layers (front, mid, and back office).
- Familiarity with DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform or CloudFormation).
- Strong problem-solving skills, systems thinking, and attention to detail.

What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Bengaluru

Hybrid

Key skills: AWS Cloud, Terraform, Jenkins, Azure Cloud, DevOps, Kubernetes, Docker, GCP, people management, leadership

Roles and responsibilities:
- Team leadership & people management: Lead and mentor a team of cloud and DevOps engineers; provide coaching, performance feedback, and career development support.
- Strategic delivery ownership: Own the execution of cloud and DevOps initiatives, including CI/CD pipeline maturity, cloud infrastructure automation, and platform scalability.
- Cloud architecture & engineering leadership: Guide the team in designing scalable, secure, and highly available cloud-native solutions using AWS, Azure, and IaC tools (Terraform, CloudFormation).
- DevOps practice maturity: Promote DevOps and GitOps methodologies; drive automation, continuous integration, and deployment best practices across engineering teams.
- Cross-functional collaboration: Partner with development, cloud security, and product teams to align infrastructure solutions with application goals and security standards.
- Innovation & continuous improvement: Evaluate and implement new tools and practices to improve the reliability, security, and speed of software delivery.
- Incident & problem management: Guide the resolution of critical production issues and promote a culture of root cause analysis and preventive improvements.

Experience requirements:
- 10+ years of experience, including 2+ years in a technical leadership or management capacity.
- Deep expertise in cloud platforms (AWS and Azure), with hands-on experience architecting and managing cloud-native solutions.
- Strong background in Infrastructure as Code tools such as Terraform and CloudFormation.
- Proven track record with CI/CD tools such as Jenkins, GitHub Actions, AWS CodePipeline, etc.
- Proficiency in containerization and orchestration tools like Docker, Kubernetes, and EKS/AKS.
- Strong understanding of microservices architecture, cloud networking, and deployment strategies (blue/green, canary).
- Familiarity with observability tools (e.g., Splunk, ELK, Prometheus, Grafana) and API gateways like Kong.
- Exposure to security practices and tools like Orca, AquaSec, and cloud security governance.
- Excellent communication, leadership, and stakeholder management skills.

Education: B.Tech + M.Tech (Dual), MCA, B.E., B.Tech, M.Tech, M.Sc.

Posted 1 week ago

Apply

5.0 - 8.0 years

8 - 13 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Responsibilities:
- Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models
- Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery
- Automate cloud infrastructure using Terraform
- Write unit tests, integration tests, and performance tests
- Work in a team environment using agile practices
- Monitor and optimize application performance and infrastructure costs
- Collaborate with data scientists and other developers to integrate and deploy data science models into production environments
- Work closely with cross-functional teams to ensure seamless integration and operation of services

Requirements:
- Proficiency in JavaScript for full-stack development
- Strong experience with AWS cloud services, including EKS, Lambda, and S3
- Knowledge of Docker containers and orchestration tools, including Kubernetes

Posted 1 week ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

About Us: ValueLabs is a leading provider of technology solutions and services to businesses and organizations around the world. We're passionate about delivering exceptional service and support to our clients, and we're committed to building long-term relationships based on trust, respect, and mutual benefit. GenAI Product Development | Digital Technology Solutions | ValueLabs

Position: Senior/Principal Full Stack Engineer - AI & Cloud Solutions
Location: Bangalore
Work Mode: Hybrid (3 days WFO; 2 days WFH)
Experience: 8-10 years
Employment Type: Full-Time

Job Description: We are seeking a highly skilled and experienced Full Stack Engineer with a strong background in software architecture and AI-based solution development. The ideal candidate will have 8-10 years of experience in full-stack development and 3-4 years in building and deploying AI-driven applications. This role requires deep expertise in modern web technologies, cloud platforms, and scalable microservices architecture.

Roles and responsibilities:
- Design and develop scalable, secure, and high-performance full-stack applications.
- Architect and implement microservices using FastAPI, Python, and PostgreSQL.
- Build dynamic and responsive front-end interfaces using ReactJS, Redux, and the Context API.
- Integrate AI/ML models into production systems, leveraging tools for NLP, computer vision, and deep learning.
- Optimize application performance and ensure high code quality through unit testing and TDD practices.
- Collaborate with cross-functional teams to deliver end-to-end solutions.
- Implement CI/CD pipelines and containerized deployments using Docker, Kubernetes, and cloud platforms like AWS, Azure, or GCP.
- Follow secure coding practices and contribute to architectural decisions.

Required skills:
- Frontend: ReactJS, Redux, Hooks, Context API, JavaScript, HTML, CSS
- Backend: Python, FastAPI, RESTful APIs, GraphQL, asynchronous programming
- Database: PostgreSQL, SQLAlchemy (ORM)
- Testing: Jest, Mocha, PyTest, unittest, TDD
- DevOps & Cloud: Docker, Kubernetes, CI/CD, AWS, Azure, GCP
- AI/ML: Experience with AI tools, NLP, computer vision, machine learning, and deep learning frameworks (e.g., TensorFlow, PyTorch)
- Soft skills: Strong problem-solving, communication, and collaboration skills

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Mumbai, Mumbai Suburban, Virar

Hybrid

We are seeking a skilled JavaScript and .NET developer proficient in LINQ, SQL, and .NET Core. Responsibilities include database management, query optimization, and software development.

Required candidate profile: Expertise in JavaScript, .NET Core, LINQ, SQL, and stored procedures. Strong problem-solving skills, experience in serialization, and proficiency in cross-platform development.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS + Python, Spark, and Kafka for ETL)!

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using big data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work the same way.
- Coordinate with release management and other supporting teams to deploy changes in the production environment.

Qualifications we seek in you!

Minimum qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks will be an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred qualifications/skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and Cloud certifications, Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 week ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Pune

Work from Office

What You'll Do

About the team: We are developing a new product to support the embedded finance requirements of our customers across the globe. This will give great exposure to well-known ERPs and third-party systems, and we use top-notch technologies to build integrations. We have a team contributing to the company's topmost initiative, and team members get an opportunity to learn something new every week.

About the role: As an engineering manager, your role will be important. You'll build the Avalara Capital platform. Your creativity will be the driving force behind an integration revolution.

What your responsibilities will be:
- Lead, mentor, and inspire a team of experienced engineers, providing guidance on best practices, architecture, and development methodologies.
- Strategize and guide the development effort for designing frameworks and features that are instrumental in building the next-generation integration platform.
- Collaborate with teams to align development efforts with product and company goals.
- Achieve the team's hiring and retention targets.
- Define the product roadmap for feature delivery.
- Review program OKRs for success and align them with the team goals.
- Maintain high productivity, morale, engagement, and a growth culture in the team.
- Promote best practices and contribute to community presence.
- Ensure the team follows the established processes.
- Participate in design discussions and contribute to delivering high-quality, scalable products, features, and frameworks.
- Collaborate with team members and expert groups on code reviews and test plans, with an eye towards automation.
- Take the necessary corrective measures to address problems, anticipating problem areas in new designs and work.
- Focus on optimization, performance, security, observability, scalability, and telemetry.
- Provide guidance and mentorship to engineers, promoting a culture of innovation, collaboration, accountability, learning, and professional growth.
- Oversee the full software development lifecycle, from requirements gathering and design to implementation, testing, deployment, and post-release support.

What you'll need to be successful:
- Bachelor's or master's degree in computer science or equivalent.
- 12+ years of full-stack experience in a software development role, shipping software.
- Expert in the C# or Java programming language.
- Experience working with AWS Cloud and DevOps (Terraform, Docker, ECS, etc.) would be beneficial.
- Focus on automating everything.
- Experience in generative AI is a plus.
- Minimum 4 years of managerial experience.
- Experience developing the skills and careers of team members.
- A leader who understands the needs of the customer and the business and translates them into a vision for the team.
- Experience working in matrix organisations.
- Experience communicating updates and resolutions to customers and other partners.
- Knowledge of architectural styles and design patterns, with simple, intuitive design.
- Passion to see your product be the best in the business.
- Knowledge of enterprise integration patterns.
- Proficiency in CI/CD tools (Jenkins, GitLab, etc.).

#LI-Remote - This is a remote role.

How we'll take care of you:
- Total rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
- Health & wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
- Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know

Posted 2 weeks ago

Apply

7.0 - 12.0 years

0 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Cloud Application Integration Lead [8-10+ years of relevant experience]

Job purpose and primary objectives: Application Integration Lead with good hands-on knowledge of AWS integration-related native services, and experience and expertise in AWS Cloud services.

Key responsibilities / skills / knowledge:
- Define and architect solutions using various AWS cloud-native services.
- Architect and design application solutions, focusing on application integration using various serverless and cloud-native services such as SQS, SNS, pub/sub architecture, Lambda, Kinesis Firehose, EKS, AWS S3, and AWS API Gateway.
- 3+ years of direct experience architecting/designing high-throughput applications on AWS; must have experience with resiliency, reliability, and high-availability engineering.
- Proven ability to architect, design, and implement cloud-based and/or cloud-native solutions, and extensive knowledge of API Gateway and API-based integrations.
- Hands-on experience with containerization platforms and serverless computing, such as Docker, EKS, and Lambda; knowledge of Kubernetes event-driven autoscaling (KEDA) is preferred.
- Hands-on experience with AWS Elastic Beanstalk and basic experience creating RESTful services.
- Working knowledge of databases such as Amazon Aurora (PostgreSQL or MySQL), DynamoDB, and Redis.
- Experience with migrations to the cloud from both physical and virtual environments.

Experience required:
- Bachelor's degree or equivalent experience in a software engineering discipline.
- 8-10+ years of relevant experience in a professional setting as an Infrastructure Architect with cloud experience.
- Strong communication skills.
- Certifications (preferred): AWS Solutions Architect Professional and AWS Specialty certifications.

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
