
378 EKS Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8 - 13 years

25 - 30 Lacs

Bengaluru

Work from Office


About The Role
At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for digital applications, we are looking for an experienced Sr. DevOps Engineer, who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role
- Build and set up new development tools and infrastructure
- Understand the needs of stakeholders and convey them to developers
- Work on ways to automate and improve development and release processes
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Skills and Qualifications
- BSc in Computer Science, Engineering, or a relevant field
- Minimum 5 years of experience as a DevOps Engineer or in a similar software engineering role
- Proficient with Git and Git workflows
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS
- Problem-solving attitude and collaborative team spirit
- Testing and examining code written by others and analyzing results
- Identifying technical problems and developing software updates and 'fixes'
- Working with software developers and engineers to ensure that development follows established processes and works as intended
- Monitoring the systems and setting up required tools

Daily and Monthly Responsibilities
- Deploy updates and fixes
- Provide Level 3 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
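The "root cause analysis for production errors" duty above typically starts with aggregating error signatures out of logs so the noisiest failure surfaces first. A minimal Python sketch of that step; the log format, service names, and error kinds here are invented for illustration only:

```python
import re
from collections import Counter

# Hypothetical log lines; a real pipeline would pull these from a
# log aggregator rather than an in-memory list.
LOG_LINES = [
    "2024-05-01T10:00:01 ERROR payment-svc TimeoutError upstream=kong",
    "2024-05-01T10:00:03 INFO  auth-svc login ok",
    "2024-05-01T10:00:05 ERROR payment-svc TimeoutError upstream=kong",
    "2024-05-01T10:00:09 ERROR order-svc NullPointerException",
]

ERROR_RE = re.compile(r"ERROR\s+(?P<service>\S+)\s+(?P<kind>\S+)")

def top_errors(lines, n=2):
    """Count (service, error-kind) pairs so the most frequent failure
    signature is examined first during root cause analysis."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[(m.group("service"), m.group("kind"))] += 1
    return counts.most_common(n)
```

With the sample lines, `top_errors` ranks the repeated payment-service timeout ahead of the one-off exception, which is usually where the investigation begins.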

Posted 1 month ago

Apply

6 - 10 years

12 - 17 Lacs

Bengaluru

Work from Office


At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.

Position Summary
The Senior Product Manager plays a pivotal role in product development for F5 Distributed Cloud App Delivery strategies. This position requires an in-depth understanding of market dynamics in Kubernetes platforms, multicloud networking, public cloud, and SaaS platforms, as well as strong leadership, partnering, and analytical abilities, to help build a shared vision and execute on establishing a market-leading position.

Primary Responsibilities
Product Delivery:
- Drive product management activities for F5 Network Connect and F5 Distributed Apps
- Build compelling technical marketing content to drive product awareness, including reference architectures and customer case studies
- Deliver web content, whitepapers, and demonstrations to drive customer adoption, and ensure technical marketing alignment with key partners
- Ensure accountability for product success and present data-backed findings during business reviews and QBRs

Customer Engagement & Feedback:
- Engage with customers to understand their business goals, constraints, and requirements
- Prioritize feature enhancements based on customer feedback and business value
- Utilize the Digital Adoption Platform to identify areas of improvement, increase revenue, and reduce churn

Market Analysis:
- Position F5 Network Connect and Distributed Apps with a competitive edge in the market
- Validate market demand based on customer usage
- Conduct in-depth research to stay abreast of developments in multicloud networking and the Kubernetes (CaaS/PaaS) ecosystem

Team Collaboration:
- Collaborate with stakeholders to make informed decisions on product backlog prioritization
- Foster strong relationships with engineering, sales, marketing, and customer support teams
- Work with technical teams to ensure seamless product rollouts
- Work with key decision makers in marketing and sales to ensure smooth product delivery to customers

Knowledge, Skills, and Abilities
Technical Skills:
- Proficient with core networking technologies such as BGP, VPNs and tunneling, routing, NAT, etc.
- Proficient with core Kubernetes technologies and ecosystem, such as CNIs, ingress controllers, etc.
- Proficient with core public cloud networking services, especially on AWS, Azure, and GCP
- Proficient with PaaS services such as OpenShift, EKS (AWS), GKE (GCP), AKS (Azure)
- Well versed in L4/L7 load balancing and proxy technologies and protocols

Stakeholder Management:
- Demonstrate strong leadership, negotiation, and persuasion capabilities
- Effectively manage and navigate expectations from diverse stakeholder groups
- Uphold a data-driven approach amidst a fast-paced, changing environment

Analytical Skills:
- Ability to generate data-driven reports and transform complex data into actionable insights
- Proven skills in data analytics and making data-backed decisions
- Strong awareness of technology trends and their potential influence on F5's business

Qualifications
- BA/BS degree in a relevant field
- 4+ years in technical product management or a related domain
- 2+ years of product management in multicloud networking, PaaS, or an adjacent area (e.g., SSE/SD-WAN)
- Experience developing relationships with suppliers and co-marketing partners highly desirable

This description is intended to be a general representation of the responsibilities and requirements of the job; it may not be all-inclusive, and responsibilities and requirements are subject to change.

Please note that F5 only contacts candidates through an F5 email address (ending with @f5.com) or automated email notifications from Workday (ending with f5.com or @myworkday.com).

Equal Employment Opportunity
It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request by contacting accommodations@f5.com.

Posted 1 month ago

Apply

2 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office


- 12+ years of overall IT experience
- 5+ years of cloud implementation experience (AWS S3), Terraform, Docker, Kubernetes
- Expert in troubleshooting cloud implementation projects
- Expert in cloud-native technologies
- Good working knowledge of Terraform and Quarkus

Must-have skills: AWS cloud knowledge (AWS S3, load balancers, VPC/VPC peering/private and public subnets, EKS, SQS, Lambda, Docker/container services, Terraform or other IaC technologies for standard deployment), Quarkus, PostgreSQL, Flyway, Kubernetes, OpenID flow, OpenSearch/Elasticsearch, OpenAPI/Swagger, Java. Optional: Kafka, Python.

#LI-INPAS Job Segment: Developer, Java, Technology

Posted 1 month ago

Apply

2 - 6 years

8 - 12 Lacs

Bengaluru

Work from Office


NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Lead Data Architect (Streaming)

Required Skills and Qualifications
- Overall 10+ years of IT experience, of which 7+ years in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent and Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration
- Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise with technical interface design
- Use of Docker

Key Responsibilities
- Architect end-to-end data solutions using AWS services (including Lambda, SNS, S3, and EKS), Kafka, and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Evaluate and recommend new technologies to improve data architecture

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Job Segment: Developer, Computer Science, Consulting, Technology
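The "data ingestion, processing, and storage solutions using Python and AWS Lambda" responsibility above can be sketched as a Lambda handler for a Kafka event-source mapping. The event shape below mirrors what Lambda delivers for MSK/self-managed Kafka triggers (records grouped per topic-partition, base64-encoded payloads); the topic name and payload fields are invented for illustration:

```python
import base64
import json

def handler(event, _context=None):
    """Minimal Lambda-style handler for a Kafka event-source mapping:
    decode each record's base64 JSON payload and keep the ones that
    pass a trivial validity check."""
    processed = []
    for records in event.get("records", {}).values():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            # Illustrative filter: only events carrying an "id" survive.
            if "id" in payload:
                processed.append(payload)
    return {"processed": len(processed)}

def _encode(obj):
    """Build a record value the way Lambda would deliver it."""
    return base64.b64encode(json.dumps(obj).encode()).decode()

# Local smoke-test event (no AWS needed): one valid and one invalid record.
sample = {
    "records": {
        "orders-0": [
            {"value": _encode({"id": 1})},
            {"value": _encode({"note": "no id"})},
        ]
    }
}
```

In a real deployment the same handler body would sit behind an MSK or self-managed Kafka trigger; here the hand-built `sample` event lets the logic be exercised locally.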

Posted 1 month ago

Apply

2 - 6 years

8 - 12 Lacs

Bengaluru

Work from Office


NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Lead Data Architect (Warehousing)

Required Skills and Qualifications
- Overall 10+ years of IT experience, of which 7+ years in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Python
- Solid understanding of data warehousing architectures and best practices
- Strong Snowflake and data warehouse skills
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Experience with data cataloguing
- Knowledge of Apache Airflow for data orchestration
- Experience modelling, transforming and testing data in DBT
- Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications
- Familiarity with Atlan for data catalog and metadata management
- Experience integrating with IBM MQ
- Familiarity with SonarQube for code quality analysis
- AWS certifications (e.g., AWS Certified Solutions Architect)
- Experience with data modeling and database design
- Knowledge of data privacy regulations and compliance requirements
- An understanding of Lakehouses
- An understanding of Apache Iceberg tables
- SnowPro Core certification

Key Responsibilities
- Architect end-to-end data solutions using AWS services (including Lambda, SNS, S3, and EKS) as well as Snowflake, DBT and Apache Airflow, all within a larger, overarching programme ecosystem
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda and Snowflake
- Architect data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends
- Ensure data security and implement best practices using tools like Snyk
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Evaluate and recommend new technologies to improve data architecture

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Job Segment: Developer, Solution Architect, Data Warehouse, Computer Science, Database, Technology
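The "testing data in DBT" requirement above refers to declarative column checks such as `unique` and `not_null`. The same checks can be sketched in plain Python; the table name, columns, and sample rows below are made up for illustration:

```python
def unique_violations(rows, column):
    """DBT-style 'unique' test: return the values that appear more
    than once in the given column. `rows` is a list of dicts standing
    in for a warehouse table."""
    seen, dupes = set(), set()
    for row in rows:
        value = row[column]
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

def not_null_violations(rows, column):
    """DBT-style 'not_null' test: count rows where the column is None."""
    return sum(1 for row in rows if row[column] is None)

# Hypothetical sample table with one duplicate key and one null.
orders = [
    {"order_id": 1, "customer": "a"},
    {"order_id": 2, "customer": None},
    {"order_id": 2, "customer": "b"},
]
```

In DBT proper these checks are declared in YAML against the model and compiled to SQL; the Python version just makes the semantics concrete.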

Posted 1 month ago

Apply

4 - 9 years

16 - 20 Lacs

Bengaluru

Work from Office


Req ID: 301930

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Data Solution Architect

Position Overview
We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Kafka/Confluent Kafka and Python
- Experience with Snyk for security scanning and vulnerability management
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders

Preferred Qualifications
- Experience with Kafka Connect and Confluent Schema Registry
- Familiarity with Atlan for data catalog and metadata management
- Knowledge of Apache Flink for stream processing
- Experience integrating with IBM MQ
- Familiarity with SonarQube for code quality analysis
- AWS certifications (e.g., AWS Certified Solutions Architect)
- Experience with data modeling and database design
- Knowledge of data privacy regulations and compliance requirements

Key Responsibilities
- Design and implement scalable data architectures using AWS services and Kafka
- Develop data ingestion, processing, and storage solutions using Python and AWS Lambda
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS
- Design and implement data streaming pipelines using Kafka/Confluent Kafka
- Develop data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Provide technical leadership and mentorship to development teams
- Stay current with emerging technologies and industry trends

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.

Job Segment: Solution Architect, Consulting, Database, Computer Science, Technology
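The "data streaming pipelines" responsibility above centers on aggregations over unbounded event streams. A toy tumbling-window count, the kind of aggregation Kafka Streams or Flink performs at scale; the event keys and timestamps are invented:

```python
from collections import defaultdict

def tumbling_counts(events, window_s):
    """Count events per (window_start, key) over fixed, non-overlapping
    (tumbling) time windows of `window_s` seconds. `events` are
    (epoch_seconds, key) pairs, standing in for Kafka records."""
    counts = defaultdict(int)
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        window_start = (ts // window_s) * window_s
        counts[(window_start, key)] += 1
    return dict(counts)

# Illustrative stream: three clicks and a view across two 60s windows.
events = [(0, "click"), (30, "click"), (65, "click"), (70, "view")]
```

A real pipeline would additionally handle out-of-order events with watermarks and emit results incrementally; the windowing arithmetic, however, is exactly this floor-to-window-start step.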

Posted 1 month ago

Apply

10 - 15 years

17 - 22 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office


Job Roles and Responsibilities
The AWS DevOps Engineer is responsible for automating, optimizing, and managing CI/CD pipelines, cloud infrastructure, and deployment processes on AWS. This role ensures smooth software delivery while maintaining high availability, security, and scalability.
- Design and implement scalable and secure cloud infrastructure on AWS, utilizing services such as EC2, EKS, ECS, S3, RDS, and VPC
- Automate the provisioning and management of AWS resources using Infrastructure as Code tools (Terraform, CloudFormation, or Ansible) and YAML
- Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins, GitLab, or AWS CodePipeline
- Advocate for a No-Ops model, striving for console-less experiences and self-healing systems
- Experience with containerization technologies: Docker and Kubernetes

Mandatory Skills
- Overall experience of 5-8 years in AWS DevOps specialization (AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit)
- Work experience with AWS DevOps and IAM
- Expertise with IaC tooling: Terraform, Ansible, or CloudFormation, plus YAML
- Strong deployment experience with CI/CD pipelining
- Manage containerized workloads using Docker, Kubernetes (EKS), or AWS ECS, with Helm charts
- Experience with database migration
- Proficiency in scripting languages (Python, and Bash or PowerShell)
- Develop and maintain CI/CD pipelines using AWS CodePipeline, Jenkins, GitHub Actions, or GitLab CI/CD
- Experience with monitoring and logging tools (CloudWatch, ELK Stack, Prometheus, or Grafana)

Career Level - IC4
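The Infrastructure-as-Code duty above ultimately produces declarative templates. A minimal sketch that builds a CloudFormation template as a plain Python dict and serializes it to JSON; the logical ID `AppBucket` and the bucket name are placeholders, and a real pipeline would pass the resulting JSON to CloudFormation for deployment:

```python
import json

def s3_bucket_template(bucket_name):
    """Smallest useful CloudFormation template: one S3 bucket.
    'AWSTemplateFormatVersion' and the 'AWS::S3::Bucket' resource type
    are standard CloudFormation; everything else is illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

# Render the template to the JSON a deploy step would consume.
template_json = json.dumps(s3_bucket_template("demo-artifacts"), indent=2)
```

Generating templates programmatically like this (rather than hand-editing JSON/YAML) is one common way teams keep IaC reviewable and testable.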

Posted 1 month ago

Apply

6 - 11 years

20 - 30 Lacs

Bengaluru

Work from Office


Responsibilities
- Owns all technical aspects of software development for assigned applications
- Participates in the design and development of systems and application programs
- Functions as a senior member of an agile team and helps drive consistent development practices, tools, common components, and documentation
- Works with product owners to prioritize features for ongoing sprints and manage a list of technical requirements based on industry trends, new technologies, known defects, and issues

Qualifications
- In-depth experience configuring and administering EKS clusters in AWS
- In-depth experience configuring DataDog in AWS environments, especially in EKS
- In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors
- In-depth knowledge of observability concepts and strong troubleshooting experience
- Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch
- Experience with Terraform and Infrastructure as Code
- Experience with Helm
- Strong scripting skills in Shell and/or Python
- Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment
- Good understanding of cloud concepts (storage/compute/network)
- Experience collaborating with several cross-functional teams to architect observability pipelines for various AWS services such as EKS, SQS, etc.
- Experience with Git and GitHub
- Proficient in developing and maintaining technical documentation, ADRs, and runbooks
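The CloudWatch monitoring experience asked for above revolves around alarm evaluation: an alarm fires when a metric breaches its threshold for N consecutive evaluation periods, which filters out one-off spikes. A minimal sketch of that logic; the datapoint values and thresholds are made up:

```python
def breaches(datapoints, threshold, periods):
    """CloudWatch-style alarm evaluation: return True once `periods`
    consecutive datapoints exceed `threshold`, else False."""
    run = 0
    for value in datapoints:
        # Extend the consecutive-breach run, or reset it.
        run = run + 1 if value > threshold else 0
        if run >= periods:
            return True
    return False
```

With, say, CPU datapoints `[10, 95, 96, 97]`, a 90% threshold, and 3 periods, the three consecutive breaches trip the alarm, while `[95, 10, 96]` does not because the run is broken in the middle.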

Posted 1 month ago

Apply

5 - 10 years

0 - 0 Lacs

Hyderabad

Work from Office


Job Description: DevOps Engineer

Qualifications:
- Bachelor's or Master's degree in Computer Science or Computer Engineering.
- 4 to 8 years of experience in DevOps.

Key Skills and Responsibilities:
- Passionate about continuous build, integration, testing, and delivery of systems.
- Strong understanding of distributed systems, APIs, microservices, and cloud computing.
- Experience in implementing applications on private and public cloud infrastructure.
- Proficient in container technologies such as Kubernetes, including experience with public clouds like AWS, GCP, and other platforms through migrations, scaling, and day-to-day operations.
- Hands-on experience with AWS services (VPC, EC2, EKS, S3, IAM, etc.) and Elastic Beanstalk.
- Knowledge of source control management (Git, GitHub, GitLab).
- Hands-on experience with Kafka for data streaming and handling microservices communication.
- Experience in managing Jenkins for CI/CD pipelines.
- Familiarity with logging tools and monitoring solutions.
- Experience working with network load balancers (Nginx, NetScaler).
- Proficient with Kong API gateways, Kubernetes, PostgreSQL, NoSQL databases, and Kafka.
- Experience with AWS S3 buckets, including policy management, storage, and backup using S3 and Glacier.
- Ability to respond to production incidents and take on-call responsibilities.
- Experience with multiple cloud providers and designing applications accordingly.
- Skilled in owning and operating mission-critical, large-scale product operations (provisioning, deployment, upgrades, patching, and incidents) on the cloud.
- Strong commitment to ensuring high availability and scalability of production systems.
- Continuously raising the standard of engineering excellence by implementing DevOps best practices.
- Quick learner with a balance between listening and taking charge.

Responsibilities:
- Develop and implement tools to automate and streamline operations.
- Develop and maintain CI/CD pipeline systems for application development teams using Jenkins.
- Prioritize production-related issues alongside operational team members.
- Conduct root cause analysis, resolve issues, and implement long-term fixes.
- Expand the capacity and improve the performance of current operational systems.

Regards,
Mohammed Umar Farooq
HR Recruitment Team, Revest Solutions
9949051730
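The "tools to automate and streamline operations" responsibility above often reduces to a small set of reusable patterns, retry with exponential backoff for flaky operational calls being the most common. A minimal sketch (the attempt count and delays are illustrative, not a recommendation):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call `fn` until it succeeds, sleeping base_delay * 2**i between
    failed attempts; re-raise the last error when attempts run out."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```

Production versions usually add jitter to the delay and restrict the caught exception types so genuine bugs are not retried, but the control flow is the same.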

Posted 1 month ago

Apply

2 - 4 years

12 - 14 Lacs

Navi Mumbai

Work from Office


Overview GEP is a diverse, creative team of people passionate about procurement. We invest ourselves entirely in our client’s success, creating strong collaborative relationships that deliver extraordinary value year after year. Our clients include market global leaders with far-flung international operations, Fortune 500 and Global 2000 enterprises, leading government and public institutions. We deliver practical, effective services and software that enable procurement leaders to maximise their impact on business operations, strategy and financial performance. That’s just some of the things that we do in our quest to build a beautiful company, enjoy the journey and make a difference. GEP is a place where individuality is prized, and talent respected. We’re focused on what is real and effective. GEP is where good ideas and great people are recognized, results matter, and ability and hard work drive achievements. We’re a learning organization, actively looking for people to help shape, grow and continually improve us. Are you one of us? GEP is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, ethnicity, color, national origin, religion, sex, disability status, or any other characteristics protected by law. We are committed to hiring and valuing a global diverse work team. For more information please visit us on GEP.com or check us out on LinkedIn.com. Responsibilities The candidate will be responsible for creating infrastructure designs and guiding the development and implementation of infrastructure, applications, systems and processes. This position will be working directly with infrastructure, application development and QA teams to build and deploy highly available and scalable systems in private or public cloud environments along with release management. 
• Candidate must have experience on the Azure or GCP cloud platform
• Building highly scalable, highly available, private or public infrastructure
• Owning, maintaining and enhancing the infrastructure and related tools
• Helping build out an entire CI ecosystem, including automated and auto-scaling testing systems
• Designing and implementing monitoring and alerting for production systems used by DevOps staff
• Working closely with developers and other staff to solve DevOps issues with customer-facing services, tools and apps

Qualifications
REQUIREMENTS
• 2+ years of experience working in a DevOps role in a continuous integration environment, especially with Microsoft technologies
• Strong knowledge of configuration management software such as PowerShell and Ansible, and continuous integration tools such as Octopus, Azure DevOps and Jenkins
• Developing complete solutions considering sizing, infrastructure, data protection, disaster recovery, security and application requirements on cloud enterprise systems
• Experience working in an Agile development environment with iterative sprint cycles
• Familiarity with database deployment and CI/CD pipelines
• Hands-on experience with CI/CD tools like VSTS, Azure DevOps or Jenkins (experience with at least one of these tools)
• Worked on Docker, containers, Kubernetes, AWS EKS, API Gateway, Application Load Balancer, WAF and CloudFront
• Experience with Git or GitHub and the gitflow model, including administration and user management; must have worked on the AWS platform for a minimum of 2 years
• Strong understanding of Linux; strong experience with various tools related to Continuous Integration and Continuous Deployment
• Automating builds using MSBuild scripts
• Any scripting language (Ruby, Python, YAML, Terraform) or other application development experience (.NET, Java, Golang, etc.)
• Ability to write in multiple languages including Python, Java, Ruby and Bash
• Experience setting up SLAs and monitoring infrastructure and applications using tools like Nagios, New Relic, Pingdom and VictorOps/PagerDuty
• Experience with network configuration (switches, routers, firewalls) and a good understanding of routing and switching, firewalls and VPN tunnels.
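The monitoring and alerting duties above usually come down to two small decisions: how long to wait between failed probes (exponential backoff) and when a failure streak is long enough to page someone. A hedged sketch of both, in plain Python; thresholds and values are invented for illustration:

```python
# Illustrative monitoring logic; not a real Nagios/New Relic integration.

def backoff_delays(base=1.0, factor=2.0, retries=5, cap=30.0):
    """Exponential backoff schedule (capped) between health probes."""
    delay, out = base, []
    for _ in range(retries):
        out.append(min(delay, cap))
        delay *= factor
    return out

def should_alert(samples, threshold=3):
    """Alert only after `threshold` consecutive failed probes, to avoid
    paging on a single transient blip (alert flapping)."""
    streak = 0
    for ok in samples:
        streak = 0 if ok else streak + 1
        if streak >= threshold:
            return True
    return False

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0]
print(should_alert([True, False, False, False]))  # True
```

Real tools implement both ideas; the consecutive-failure rule is why a flaky probe does not wake the on-call engineer at 3 a.m.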

Posted 1 month ago

Apply

9 - 14 years

30 - 40 Lacs

Navi Mumbai

Work from Office

Naukri logo

Overview
GEP is a diverse, creative team of people passionate about procurement. We invest ourselves entirely in our client’s success, creating strong collaborative relationships that deliver extraordinary value year after year. Our clients include global market leaders with far-flung international operations, Fortune 500 and Global 2000 enterprises, and leading government and public institutions. We deliver practical, effective services and software that enable procurement leaders to maximise their impact on business operations, strategy and financial performance. That’s just some of the things that we do in our quest to build a beautiful company, enjoy the journey and make a difference. GEP is a place where individuality is prized and talent is respected. We’re focused on what is real and effective. GEP is where good ideas and great people are recognized, results matter, and ability and hard work drive achievements. We’re a learning organization, actively looking for people to help shape, grow and continually improve us. Are you one of us?

GEP is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, ethnicity, color, national origin, religion, sex, disability status, or any other characteristics protected by law. We are committed to hiring and valuing a global diverse work team. For more information please visit us on GEP.com or check us out on LinkedIn.com.

Responsibilities
The candidate will be responsible for creating infrastructure designs and guiding the development and implementation of infrastructure, applications, systems and processes. This position will work directly with infrastructure, application development and QA teams to build and deploy highly available and scalable systems in private or public cloud environments, along with release management.
• Candidate must have experience on the Azure or GCP cloud platform
• Building highly scalable, highly available, private or public infrastructure
• Owning, maintaining and enhancing the infrastructure and related tools
• Helping build out an entire CI ecosystem, including automated and auto-scaling testing systems
• Designing and implementing monitoring and alerting for production systems used by DevOps staff
• Working closely with developers and other staff to solve DevOps issues with customer-facing services, tools and apps

Qualifications
• 9+ years of experience working in a DevOps role in a continuous integration environment, especially with Microsoft technologies
• Strong knowledge of configuration management software such as PowerShell and Ansible, and continuous integration tools such as Octopus, Azure DevOps and Jenkins
• Developing complete solutions considering sizing, infrastructure, data protection, disaster recovery, security and application requirements on cloud enterprise systems
• Experience working in an Agile development environment with iterative sprint cycles
• Familiarity with database deployment and CI/CD pipelines
• Hands-on experience with CI/CD tools like VSTS, Azure DevOps or Jenkins (experience with at least one of these tools)
• Worked on Docker, containers, Kubernetes, AWS EKS, API Gateway, Application Load Balancer, WAF and CloudFront
• Experience with Git or GitHub and the gitflow model, including administration and user management; must have worked on the AWS platform for a minimum of 2 years
• Strong understanding of Linux; strong experience with various tools related to Continuous Integration and Continuous Deployment
• Automating builds using MSBuild scripts
• Any scripting language (Ruby, Python, YAML, Terraform) or other application development experience (.NET, Java, Golang, etc.)
• Ability to write in multiple languages including Python, Java, Ruby and Bash
• Experience setting up SLAs and monitoring infrastructure and applications using tools like Nagios, New Relic, Pingdom and VictorOps/PagerDuty
• Experience with network configuration (switches, routers, firewalls) and a good understanding of routing and switching, firewalls and VPN tunnels.

Posted 1 month ago

Apply

3 - 6 years

20 - 25 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
Job Title: Senior DevOps Engineer
Location: Bangalore / Hyderabad / Chennai / Coimbatore
Position: Full-time
Department: Annalect Engineering

Position Overview
Annalect is currently seeking a Senior DevOps Engineer to join our technology team remotely. We are passionate about building distributed back-end systems in a modular and reusable way. We're looking for people who have a shared passion for data and the desire to build cool, maintainable and high-quality applications to use this data. In this role you will participate in shaping our technical architecture, the design and development of software products, collaboration with back-end developers from other tracks, as well as research and evaluation of new technical solutions.

Responsibilities
Key Responsibilities:
Build and maintain cloud infrastructure through Terraform IaC.
Cloud networking and orchestration with AWS (EKS, ECS, VPC, S3, ALB, NLB).
Improve and automate processes and procedures.
Construct CI/CD pipelines.
Monitor and handle incident response for the infrastructure, platforms, and core engineering services.
Troubleshoot infrastructure, network, and application issues.
Help identify and troubleshoot problems within the environment.

Qualifications
Required Skills
5+ years of DevOps experience
5+ years of hands-on experience administering cloud technologies on AWS, especially IAM, VPC, Lambda, EKS, EC2, S3, ECS, CloudFront, ALB, API Gateway, RDS, CodeBuild, SSM, Secrets Manager, etc.
Experience with microservices, containers (Docker) and container orchestration (Kubernetes).
Demonstrable experience using Terraform to provision and configure infrastructure.
Scripting ability - PowerShell, Python, Bash, etc.
Comfortable working with Linux/Unix-based operating systems (Ubuntu preferred)
Familiarity with software development, CI/CD and DevOps tools (Bitbucket, Jenkins, GitLab, CodeBuild, CodePipeline)
Knowledge of writing Infrastructure as Code (IaC) using Terraform.
Experience with microservices, containers (Docker), container orchestration (Kubernetes), serverless computing (AWS Lambda) and distributed/scalable systems.
Possesses a problem-solving attitude.
Creative, self-motivated, a quick study, and willing to develop new skills.

Additional Skills
Familiarity with working with data and databases (SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Amazon Redshift, Google BigQuery).
Knowledge of database administration.
Experience with continuous deployment/continuous delivery (Jenkins, Bamboo).
AWS/GCP/Azure certification is a plus.
Experience in Python coding is welcome.
Passion for data-driven software. All of our tools are built on top of data and require work with data.
Knowledge of IaaS/PaaS architecture with a good understanding of infrastructure and web application security.
Experience with logging/monitoring (CloudWatch, Datadog, Loggly, ELK).
Passion for writing good documentation and creating architecture diagrams.
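The core idea behind the IaC duties above (Terraform, Helm, Kubernetes on EKS) is rendering declarative resource definitions from parameters instead of hand-editing them. A hedged Python sketch of that templating idea, producing a Kubernetes Deployment manifest as a plain dict; the service name and image are hypothetical:

```python
import json

# Sketch only: real work here would use Terraform/Helm, not hand-rolled dicts.

def deployment_manifest(name, image, replicas=2, port=8080):
    """Render a Kubernetes apps/v1 Deployment from a few parameters."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": port}],
                }]},
            },
        },
    }

manifest = deployment_manifest("orders-api",
                               "registry.example.com/orders:1.4.2",
                               replicas=3)
print(manifest["spec"]["replicas"])  # 3
print(json.dumps(manifest)[:40])     # serializable, ready for kubectl apply -f -
```

Keeping the parameters (name, image, replicas) separate from the boilerplate is what makes the same definition reusable across dev, staging, and production.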

Posted 1 month ago

Apply

5 - 9 years

11 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

We're looking for a skilled and motivated Technical Specialist with expertise in cloud technologies, security best practices, and DevOps methodologies to help shape and lead impactful initiatives across our global platforms. You will play a pivotal part in designing and implementing cutting-edge solutions, working closely with cross-functional teams worldwide. You'll be at the forefront of driving automation, enhancing system reliability, and delivering meaningful results in a fast-paced, agile environment.

You have:
BE / Master's Degree in Computer Science or a related technical discipline, or equivalent practical experience, with 6-8 years of experience in software design, development, and testing
Working experience with public or private cloud environments, including any of the following platforms: Amazon AWS EKS, Red Hat OpenShift, Google GCP GKE, Microsoft Azure AKS, VMware Tanzu, or open-source Kubernetes
Strong Python development skills, with experience in DevOps practices, working in a Jenkins-based environment, and familiarity with test frameworks like Radish and Cucumber
Experience with container technologies (Docker or Podman) and Helm charts
Expertise in container management environments (e.g., Kubernetes, service mesh, IAM, FPM)

It would be nice if you also had:
Experience in functional and system testing, software validation/reviews, and providing technical support during platform deployment and product integration
Knowledge in configuring and managing security vulnerability scans, including container vulnerability scanning (e.g., Anchore), port scanning (e.g., Tenable), and malware scanning (e.g., Symantec Endpoint Protection)
Experience in researching solutions to security vulnerabilities and applying hands-on mitigation strategies
Lead and perform development activities for medium/high-complexity features
Architect, design, develop, and test scalable software solutions
Own and lead feature development and contribute to process improvements
Collaborate with peers to resolve technical issues and review design specs
Build and automate tests using frameworks like Radish, Cucumber, etc.
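The vulnerability-scanning work described above typically ends in a triage step: filter scanner findings by severity and decide which ones block a release. A small Python sketch of that gating step; the CVE identifiers and CVSS scores are invented sample data, not real scanner output:

```python
# Hypothetical triage of container-scan findings; not Anchore/Tenable output.

def blocking_findings(findings, min_severity=7.0):
    """Return findings at or above the CVSS threshold, highest first."""
    hits = [f for f in findings if f["cvss"] >= min_severity]
    return sorted(hits, key=lambda f: f["cvss"], reverse=True)

scan = [
    {"id": "CVE-2024-0001", "cvss": 9.8},
    {"id": "CVE-2024-0002", "cvss": 5.3},
    {"id": "CVE-2024-0003", "cvss": 7.5},
]
print([f["id"] for f in blocking_findings(scan)])
# ['CVE-2024-0001', 'CVE-2024-0003']
```

A CI job would fail the build when this list is non-empty, which is the "quality gate" pattern most scanner integrations implement.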

Posted 1 month ago

Apply

4 - 8 years

9 - 13 Lacs

Bengaluru

Work from Office

Naukri logo

Experience in modernizing applications to container-based platforms using EKS, ECS, Fargate
Proven experience using DevOps tools during modernization
Solid experience with NoSQL databases
Should have used an orchestration engine like Kubernetes or Mesos
Java 8, Spring Boot, SQL, Postgres DB and AWS
Secondary skills: React, Redux, JavaScript
Working knowledge of AWS deployment services (AWS Elastic Beanstalk, AWS tools & SDKs, AWS Cloud9, AWS CodeStar, AWS Command Line Interface, etc.) and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch and AWS SNS

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Creative problem-solving skills and superb communication skills
Should have worked on at least 3 engagements modernizing client applications to container-based solutions
Should be expert in at least one programming language such as Java, .NET, Node.js, Python, Ruby or Angular.js

Preferred technical and professional experience:
Experience in distributed/scalable systems
Knowledge of standard tools for optimizing and testing code
Knowledge/experience of the development/build/deploy/test life cycle
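A recurring step when modernizing applications for ECS/EKS, as described above, is moving configuration out of files and into environment variables (the 12-factor pattern) so the same container image runs everywhere. A minimal sketch, assuming invented variable names and defaults:

```python
import os

# 12-factor style config for a containerized app; all names are illustrative.

def load_config(env=None):
    """Read settings from environment variables with explicit defaults,
    so dev, staging, and production differ only by injected env vars."""
    env = env if env is not None else os.environ
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost:5432/app"),
        "cache_host": env.get("CACHE_HOST", "localhost"),
        "port": int(env.get("PORT", "8080")),
    }

cfg = load_config({"PORT": "9090"})
print(cfg["port"])  # 9090
```

In ECS/EKS, those variables come from the task definition or pod spec, so no code change is needed between environments.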

Posted 1 month ago

Apply

5 - 10 years

13 - 23 Lacs

Mangalore, Bengaluru

Work from Office

Naukri logo

Position Overview
We are looking for a Technical Lead with hands-on experience in React, Node.js, and cloud platforms like AWS or Azure. You'll drive the development of scalable, high-performance systems using modern architectures, collaborate on migration strategies, and build robust APIs. Strong knowledge of cloud services, containerization, and IoT technologies is essential.

Job Role: Technical Lead
Job Type: Full Time
Experience: Minimum 5+ years
Job Location: Bangalore / Mangalore
Technical Skills: AWS Cloud, Azure Cloud, TypeScript, Node, React

About Us:
We are a multi-award-winning creative engineering company. Since 2011, we have worked with our customers as a design and technology enablement partner, helping them on their digital transformation journey.

Roles and Responsibilities:
Evaluate existing systems and propose enhancements to improve efficiency, security, and scalability.
Create technical documentation and architectural guidelines for the development team.
Develop software platforms using event-driven architecture.
Develop high-performance and high-throughput systems.
Define, track and deliver items to schedule.
Collaborate with cross-functional teams to define migration strategies, timelines, and milestones.

Technical Skills:
Hands-on experience in React & Node
Hands-on experience with at least one cloud provider such as AWS, GCP or Azure
Proficiency with multiple databases, including SQL and NoSQL
Highly skilled at facilitating and documenting requirements
Experience developing REST APIs with JSON and XML for data transfer
Ability to develop both internal-facing and external-facing APIs using JWT and OAuth 2.0
Good understanding of cloud technologies, such as Docker, Kubernetes, MQTT, EKS, Lambda, IoT Core, and Kafka
Good understanding of messaging systems like SQS and Pub/Sub
Ability to establish priorities and proceed with objectives without supervision
Familiarity with HA/DR, scalability, performance, and code optimization
Good organizational skills and the ability to work on more than one project at a time
Exceptional attention to detail and good communication skills
Experience with Amazon Web Services, JIRA, Confluence, Git, Bitbucket

Other Skills:
Experience working with Go & Python
Good understanding of IoT systems
Exposure to or knowledge of the energy industry

What we offer:
A competitive salary and comprehensive benefits package.
The opportunity to work on international projects and cutting-edge technology.
A dynamic work environment that promotes professional growth, continuous learning, and mentorship.

If you are passionate about working in a collaborative and challenging environment, apply now!
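The JWT requirement above rests on the token's structure: three base64url-encoded segments (header.payload.signature). As a rough illustration, here is how the payload can be inspected in Python; note this deliberately skips signature verification, which a real API must do with a library such as PyJWT, and the token below is constructed in-line, not a real credential:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_payload(token: str) -> dict:
    """Decode a JWT's claims WITHOUT verifying the signature (inspection only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a toy token in-line (hypothetical claims, fake signature).
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"sub": "device-42", "scope": "telemetry:read"}).encode())
token = f"{header}.{claims}.signature-goes-here"

print(decode_payload(token)["sub"])  # device-42
```

The same structure is what lets an API gateway read scopes from a token before routing, while signature verification remains the security boundary.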

Posted 1 month ago

Apply

6 - 10 years

15 - 20 Lacs

Kolkata

Work from Office

Naukri logo

AWS DevOps Engineer: DevOps, AWS, EKS, Datadog / Dynatrace

Posted 2 months ago

Apply

6 - 8 years

5 - 9 Lacs

Coimbatore

Hybrid

Naukri logo

Qualifications
Baseline skills/experiences/attributes:
- 5+ years of experience solving distributed systems/web development problems
- 5+ years of experience working with RDBMS and REST APIs
- Demonstrated success as a database architect, database engineer, consultant or database administrator using PostgreSQL or other enterprise relational databases
- Full working knowledge of database design and components, including storage/table spaces, schemas, indices, partitions, aliases, constraints, and triggers
- Detailed experience designing and building modern back-end cloud systems
- Bachelor's Degree in Computer Science or equivalent experience

Ideally, you also have these skills/experiences/attributes (but it's ok if you don't!):
- GraphQL and related frameworks, e.g. Relay, Apollo and urql
- Containerization and orchestration technologies such as Docker, ECS, EKS
- Building CI/CD pipelines in orchestration tools like CircleCI or Jenkins
- Web apps with live updates - pub/sub solutions (such as Redis), WebSockets
- Designing and building observability solutions, utilizing modern monitoring, logging, and metrics tools to ensure overall system health
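The "pub/sub solutions (such as Redis)" item above refers to a pattern worth seeing in miniature: publishers send to a channel without knowing who is listening, and subscribers register callbacks. A minimal in-process sketch (channel and message contents are made up); Redis or WebSockets add the network layer on top of the same idea:

```python
from collections import defaultdict

# In-memory publish/subscribe sketch; real systems use Redis, Kafka, etc.

class PubSub:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        """Deliver to every subscriber; return the delivery count."""
        for callback in self.subscribers[channel]:
            callback(message)
        return len(self.subscribers[channel])

bus = PubSub()
seen = []
bus.subscribe("orders", seen.append)
delivered = bus.publish("orders", {"id": 1, "status": "created"})
print(delivered, seen)  # 1 [{'id': 1, 'status': 'created'}]
```

The decoupling is the point: adding a second subscriber (say, a WebSocket push to the browser) requires no change to the publisher.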

Posted 2 months ago

Apply

4 - 7 years

20 - 25 Lacs

Mumbai

Work from Office

Naukri logo

Senior CloudOps Engineer:
Congratulations, you have taken the first step towards bagging a career-defining role. Join the team of superheroes that safeguard data wherever it goes.

What should you know about us?
Seclore protects and controls digital assets to help enterprises prevent data theft and achieve compliance. Permissions and access to digital assets can be granularly assigned and revoked, or dynamically set at the enterprise level, including when shared with external parties. Asset discovery and automated policy enforcement allow enterprises to adapt to changing security threats and regulatory requirements in real time and at scale. Know more about us at www.seclore.com

You would love our tribe:
If you are a risk-taker, innovator, and fearless problem solver who loves solving challenges of data security, then this is the place for you!

Role: Senior CloudOps Engineer
Experience: 4-7 Years
Location: Mumbai

A sneak peek into the role:
This position is for self-motivated and highly energetic individuals who can think of multiple solutions for a given problem and help in decision-making while working in a super-agile environment.

Here's what you will get to explore:
Serve as coach and mentor to junior engineers.
Define, implement, manage, and improve operational support processes.
Be responsible for ensuring smooth Seclore Cloud operations.
Manage and develop automation to support zero-downtime infrastructure changes across multiple globally distributed systems.
Define and implement ops reporting and support dashboards, and manage platform operations support.
Automate cloud solutions using tools standard in the Cloud / DevOps industry.
Work in a scientific way by forming hypotheses, experimenting, and delivering incremental improvements.
Follow automation best practices.
Oversee the work of your team and ensure quality outcomes related to defined KPIs.
Actively support operational teams and other stakeholder teams to maintain business continuity and customer satisfaction.
Automate monitoring tools to monitor system health and reliability to support high uptime requirements.
Ensure adherence to standards, policies, and procedures.
Work with many services on AWS and learn/work on all aspects of the SaaS offering.
Work with new tools and technologies and implement them.
Solve all business and operational problems with automation.
Gain exposure to SRE, Automation, and Cloud Operations job functions.

We can see the next Entrepreneur at Seclore if you have:
A technical degree (Engineering, MCA) from a reputed institute.
4+ years of experience working with AWS.
3+ years of experience working with Jenkins, Docker, Git, Ansible, Linux.
5-6 years of total relevant experience.
An automation-first approach/mindset.
Effective verbal and written communication skills and management of priorities and deliverables.
Experience with managing multiple production workloads on AWS.
Understanding of the software lifecycle and appreciation of DevOps/Automation principles.
Experience covering a range of the following or similar technologies and tools:
Scripting - Python and Bash.
Ansible / Puppet.
Hands-on experience with Docker (preferably ECS, EKS).
Databases - Oracle RDS - understanding performance bottlenecks and maintaining RDS.
Appreciation of building secure, scalable infrastructure.
Terraform / CloudFormation working experience.
AWS certifications will be a plus.
Knowledge about cloud security best practices/SOC will be a plus.

Why do we call Seclorites Entrepreneurs, not Employees?
We value and support those who take the initiative and calculate risks. We have the attitude of a problem solver and an aptitude that is tech agnostic. You get to work with the smartest minds in the business. We are thriving, not just living. At Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive.

Excited to be the next Entrepreneur? Apply today!
Don't have some of the above points in your resume at the moment? Don't worry. We will help you build it. Let's build the future of data security at Seclore together.

Posted 2 months ago

Apply

5 - 7 years

7 - 11 Lacs

Hyderabad

Work from Office

Naukri logo

Skills: Good experience in AWS/Azure cloud with Kubernetes & Docker architecture. A Cloud Security Administrator primarily focuses on designing and implementing the overall security architecture. Cloud Security Posture Management (CSPM).

Required Candidate Profile
Data Security Posture Management (DSPM), Hub, AWS Config, GuardDuty, Python, AWS EKS, AKS, ECR, Lambda functions, Governance, DevSecOps
Notice Period: 0-30 days
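A typical CSPM-style check behind the posture-management skills above is scanning IAM policies for over-broad grants. A hedged Python sketch that flags Allow statements with wildcard actions or resources; the policy document is a made-up example, not a real account's policy:

```python
# Toy CSPM check: flag wildcard grants in an IAM-style policy document.

def risky_statements(policy):
    """Return the Sids of Allow statements granting '*' actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resource = stmt.get("Resource", "")
        if stmt.get("Effect") == "Allow" and ("*" in actions or resource == "*"):
            flagged.append(stmt.get("Sid", "<unnamed>"))
    return flagged

policy = {"Statement": [
    {"Sid": "S3ReadOnly", "Effect": "Allow",
     "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::logs/*"},
    {"Sid": "AdminAll", "Effect": "Allow", "Action": "*", "Resource": "*"},
]}
print(risky_statements(policy))  # ['AdminAll']
```

Production tools (AWS Config rules, commercial CSPM) apply far richer rule sets, but the mechanism is the same: evaluate each resource configuration against a policy-as-code rule and report violations.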

Posted 2 months ago

Apply

3 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

Role Overview:
Trellix is looking for SDETs who are self-driven and passionate about working on the Endpoint Detection and Response (EDR) line of products. The team is the ultimate quality gate before shipping to customers. Tasks range from manual and automated testing (including automation development), non-functional testing (performance, stress, soak), solution and security testing, and much more. Work on cutting-edge technology and AI-driven analysis.

About the role:
Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas
Execute new feature and regression cases manually, as needed for a product release
Identify critical issues and communicate them effectively in a timely manner
Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job
Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency
Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous. Execute, monitor and debug automation runs
Author automation code to improve coverage across the board
Willingness to explore and increase understanding of Cloud / on-prem infrastructure

About you:
3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required
Ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality and nitty-gritty
Solid fundamentals in any programming language (preferably Python or Java) and OOP concepts
Hands-on experience with CI/CD using Jenkins or similar is a must
RESTful API testing using tools such as Postman or similar is desired
Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required
Exposure to Docker, Helm, Argo CD is an added advantage
Strong foundational knowledge of working on Linux-based systems, including setting up git repos, user management, network configurations, and use of package managers
Hands-on experience with non-functional testing, such as performance and load, is desirable. Exposure to Locust or JMeter will be an added advantage
Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have
Understanding of endpoint security concepts around Endpoint Detection and Response (EDR) would be advantageous

Company Benefits and Perks:
We work hard to embrace diversity and inclusion and encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
Retirement Plans
Medical, Dental and Vision Coverage
Paid Time Off
Paid Parental Leave
Support for Community Involvement
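The RESTful API testing mentioned above usually reduces to the same two assertions repeated per endpoint: status code and response shape. A small reusable helper an SDET might write, sketched in Python; the response payload and field names are hypothetical, not from any Trellix API:

```python
# Illustrative response-validation helper for REST API test automation.

def check_response(resp, expected_status=200, required_keys=()):
    """Return a list of problems; an empty list means the response passed."""
    problems = []
    if resp["status"] != expected_status:
        problems.append(f"status {resp['status']} != {expected_status}")
    for key in required_keys:
        if key not in resp["body"]:
            problems.append(f"missing key: {key}")
    return problems

# Example response a test might get back from an HTTP client (invented).
resp = {"status": 200, "body": {"id": 7, "state": "quarantined"}}
print(check_response(resp, 200, ("id", "state", "detected_at")))
# ['missing key: detected_at']
```

Returning a list of problems instead of asserting immediately lets one test report every schema violation at once, which cuts the fix-rerun cycle the posting complains about.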

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Bengaluru

Work from Office

Naukri logo

We are looking for an AWS DevOps Specialist
We are seeking an experienced AWS DevOps Specialist to drive automation, scalability, and continuous delivery within our AWS cloud infrastructure. The successful candidate will have in-depth knowledge of AWS services, infrastructure automation, CI/CD pipelines, and cloud-native application deployment practices. This role requires close collaboration with development, operations, and security teams to ensure smooth application lifecycle management and infrastructure optimization.

Key Responsibilities:

Infrastructure as Code (IaC):
o Design, implement, and manage cloud infrastructure using automation tools such as AWS CloudFormation, Terraform, or Ansible.
o Ensure consistency and reliability of AWS resources through automated provisioning and configuration management.

CI/CD Pipeline Management:
o Build and maintain Continuous Integration and Continuous Deployment (CI/CD) pipelines for seamless deployment of applications and services.
o Use AWS CodePipeline, Jenkins, GitLab, or similar tools to automate code builds, tests, and releases.
o Integrate security and compliance checks into CI/CD pipelines (DevSecOps).

Containerization & Orchestration:
o Design and manage containerized environments using Docker and orchestrate them with Kubernetes (EKS) or Amazon ECS.
o Optimize container orchestration for scalability, resilience, and performance.

Monitoring & Logging:
o Set up monitoring, alerting, and log management using AWS CloudWatch, ELK stack, Prometheus, Grafana or similar tools.
o Implement proactive monitoring of system performance, application health, and AWS resource utilization.

Automation & Scripting:
o Automate repetitive tasks using scripting languages (e.g., Python, Bash) and AWS Lambda for serverless automation.
o Develop tools and scripts to improve automation and reduce manual intervention in deployments and infrastructure management.

Security & Compliance:
o Ensure security best practices in AWS environments, including IAM roles, security groups, encryption (KMS), and secure networking (VPC, VPN).
o Implement automated compliance monitoring and audits using AWS Config, AWS Trusted Advisor, and other tools.

Cost Optimization:
o Monitor and optimize cloud usage and costs using AWS Cost Explorer and other cloud cost management tools.
o Recommend architectural improvements to reduce cloud expenses while maintaining high performance and reliability.

Collaboration & Support:
o Work closely with development, operations, and QA teams to ensure smooth and efficient software delivery pipelines.
o Provide DevOps best practices and technical guidance to teams in terms of automation, deployment, and scaling.

Qualifications:

Education:
o Bachelor's degree in Computer Science, Information Technology, or a related field.

Experience:
o Proven experience as a DevOps Engineer or AWS DevOps Specialist, preferably in a large-scale cloud environment.
o Strong hands-on experience with AWS services such as EC2, S3, Lambda, RDS, Route 53, VPC, and ELB.
o Experience building and managing CI/CD pipelines using AWS CodePipeline, Jenkins, GitLab, or similar tools.
o Extensive experience with IaC tools (e.g., Terraform, CloudFormation) and configuration management tools (e.g., Ansible, Puppet).

Skills:
o Expertise in containerization and orchestration technologies (Docker, Kubernetes, EKS, ECS).
o Strong Linux/Unix system administration skills.
o Proficiency in one or more scripting languages such as Python, Bash, or PowerShell.
o Familiarity with monitoring, logging, and observability tools like CloudWatch, ELK stack, Prometheus, or Grafana.
o Knowledge of cloud security principles and best practices.

Soft Skills:
o Strong problem-solving and analytical skills.
o Ability to work both independently and in a collaborative environment.
o Excellent verbal and written communication skills.
o Ability to manage multiple projects and tasks simultaneously.

Preferred Qualifications:
AWS Certified DevOps Engineer or similar AWS certifications.
Experience with serverless architectures (EMR Serverless, AWS Lambda, API Gateway).
Knowledge of multi-cloud environments (AWS, GCP, Azure) and hybrid cloud strategies.
Familiarity with DevSecOps practices.
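The cost-optimization duty above starts with the simplest possible analysis: aggregate spend by service and rank it, so effort goes where the money is. A sketch in plain Python; the usage records are invented sample data standing in for what AWS Cost Explorer would export:

```python
from collections import defaultdict

# Toy cost-report aggregation over invented usage records.

def cost_by_service(records):
    """Total cost per service, highest spender first."""
    totals = defaultdict(float)
    for r in records:
        totals[r["service"]] += r["cost"]
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

usage = [
    {"service": "EC2", "cost": 412.50},
    {"service": "S3", "cost": 58.20},
    {"service": "EC2", "cost": 99.00},
    {"service": "RDS", "cost": 120.00},
]
print(cost_by_service(usage))
```

In practice the same grouping is done by cost allocation tags rather than service names, which is why tagging discipline is usually the first cost-optimization recommendation.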

Posted 2 months ago

Apply

8 - 10 years

10 - 12 Lacs

Pune

Work from Office

Naukri logo

Siemens Experience and Platform Engineering organization is seeking a highly proactive and results-driven Cloud DevOps Release Manager to join our complementary team. The ideal candidate will be responsible for ensuring end-to-end accountability for releases, actively carrying out release criteria, and implementing robust processes and guardrails to enhance quality and reliability. As a DevOps Release Manager, you will release products software after completing the testing and deployment stages, and work closely with the application development team, testing team, and production team. You will maintain proper coordination between these teams to update the project related information. This position drives into overall interpersonal agile transformation and release train engineering activities as a part of the Cloud Operations Programs and Process organization. This includes but is not limited to the function of a Release Manager ensuring all conditions are met in accordance with the change management, test, continuous deployment policies prior to deployment to production environment. This role requires strong leadership, firm decision-making, and a hands-on approach to drive efficiency in deployments while maintaining the highest standards of quality and stability. The Release Manager will collaborate closely with multi-functional teams, including development, QA, security, and operations, to ensure seamless software releases and continuous improvement in release management practices. Seeking a confident individual looking to work in an exciting, fast-paced environment, steering the organization with a proactive and hands on approach. Key Roles and Responsibilities: Release Planning and Coordination: Actively drive release planning with development, QA, and operations teams, ensuring alignment to release achievements. Collaborate with development, QV, and operations teams to plan and coordinate continuous software releases. 
Define and implement clear release scope, schedule, and dependencies to ensure timely and smooth deployments. Create and submit change records as the need arises for process and audit compliance. Facilitation and active participation in Technical Change Advisory and Review boards required. Involvement with planning, testing, tracking, release, deployment, communication, and risk management. Supervise all aspects of release lifecyclefrom planning to executionensuring accountability across teams. Coordination and active participation end to end during planned downtimes. Active participation in root cause analysis and report out on release outages and active release upgrades Proactively identify potential blockers and risks, driving timely resolutions. Maintain strict oversight of change management and actively participate in Technical Change Advisory and Review boards. Release Execution and Enforcement Ensure rigorous adherence to change, testing, and deployment policies before approving production releases. Actively oversee planned downtimes and ensure that releases meet operational reliability standards. Maintain continuous alignment with leadership on release health and readiness, providing detailed impact assessments. Own and drive root cause analysis (RCA) efforts for any release outages or failures, implementing corrective actions to prevent recurrence. Push teams towards achieving higher efficiency and quality through automation and process improvements. Release Automation & Environment Management: Champion CI/CD standard methodologies, ensuring efficient, automated deployments via AWS CI/CD, GitLab CI/CD, or similar tools. Be responsible for version control repositories, making sure branching strategies to minimize conflicts and ensure stability. Implement infrastructure as code (IaC) practices using tools like Terraform, Morpheus, or CloudFormation. 
Manage development, testing, staging, and production environments, ensuring consistency and reliability across environments, using infrastructure as code (IaC) tools such as Morpheus, Terraform, CloudFormation, or similar.

Quality Assurance & Continuous Improvement: Establish and enforce meticulous quality gates in collaboration with QA teams. Drive continuous improvement initiatives to refine release processes, minimizing defects and deployment risks. Analyze trends from past releases, identify pain points, and implement measurable improvements. Stay current on industry-standard processes and emerging technologies to optimize release efficiency.

Communication & Stakeholder Management: Act as the central point of accountability for release readiness and execution, keeping all collaborators aligned. Provide real-time transparency into release status, risks, and mitigation plans. Ensure clear and timely communication of release schedules, changes, and impact assessments.

Incident Management: Work closely with SRE teams to address post-release incidents or issues, contributing to rapid resolution and root cause analysis. Provide immediate reports to leadership, with the ability to speak to findings and next steps.

Required Qualifications (or equivalent experience): Degree in Computer Science, Information Technology, Software Engineering, or a related field, or equivalent experience. 5-8 years of proven experience as a DevOps Release Manager or in a similar role in a fast-paced software development environment. Solid understanding of DevOps practices, continuous integration, continuous delivery, and related tools. Proficiency in CI/CD tools such as AWS CI/CD, GitLab CI/CD, or others. Hands-on experience with version control systems (AWS CodeCommit, Git, SVN) and branching strategies.
Familiarity with infrastructure as code (IaC) tools like CloudFormation, Terraform, Morpheus, or similar. Solid understanding of Agile methodologies and their application in release management. Proven ability to coordinate multi-functional teams toward task completion. Demonstrated leadership and analytical skills. Strong written and verbal communication skills are a must. In-depth knowledge of the software development lifecycle is required. Excellent problem-solving skills and the ability to adapt to constantly evolving requirements. Strong communication and partnership skills to work efficiently across teams. Attention to detail, with a focus on maintaining excellence in software releases.

Preferred Qualifications: 8-10 years of experience with containerization and orchestration technologies (EKS, ECS, Docker, Kubernetes). Relevant certifications in DevOps or related fields are a plus. SAFe RTE certification. Scaled Agile (SAFe) certification.

Posted 2 months ago

Apply

4 - 6 years

4 - 8 Lacs

Hyderabad

Work from Office


Senior AWS Cloud Engineer

What you will do

Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:

AWS Infrastructure Design & Implementation: Architect, implement, and manage highly available AWS cloud environments. Design VPCs, subnets, security groups, and IAM policies to enforce security best practices. Optimize AWS costs using reserved instances, savings plans, and auto-scaling.

Infrastructure as Code (IaC) & Automation: Develop, maintain, and enhance Terraform and CloudFormation templates for cloud provisioning. Automate deployment, scaling, and monitoring using AWS-native tools and scripting. Implement and manage CI/CD pipelines for infrastructure and application deployments.

Cloud Security & Compliance: Enforce best practices in IAM, encryption, and network security. Ensure compliance with SOC 2, ISO 27001, and NIST standards. Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.

Monitoring & Performance Optimization: Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring. Implement autoscaling, load balancing, and caching strategies for performance optimization. Troubleshoot cloud infrastructure issues and conduct root cause analysis.

Collaboration & DevOps Practices: Work closely with software engineers, SREs, and DevOps teams to support deployments. Maintain GitOps best practices for cloud infrastructure versioning. Support on-call rotation for high-priority cloud incidents.
What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Master’s degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR Bachelor’s degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience.

Must-Have Skills: Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.). Expertise in Terraform and CloudFormation for AWS infrastructure automation. Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53). Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.). Strong troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications (Good-to-Have Skills): Experience with Kubernetes (EKS) and service mesh architectures. Knowledge of AWS Lambda and event-driven architectures. Familiarity with AWS CDK, Ansible, or Packer for cloud automation. Exposure to multi-cloud environments (Azure, GCP). Familiarity with HPC, DGX Cloud.

Professional Certifications (preferred): AWS Certified Solutions Architect – Associate or Professional. AWS Certified DevOps Engineer – Professional. Terraform Associate Certification.

Soft Skills: Strong analytical and problem-solving skills. Ability to work effectively with global, virtual teams. Effective communication and collaboration with cross-functional teams. Ability to work in a fast-paced, cloud-first environment.

Shift Information: This position is onsite and participates in 24/5 and weekend on-call rotations, and may require working a later shift. Candidates must be willing and able to work off hours as required by business needs.
What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago

Apply

9 - 14 years

20 - 35 Lacs

Pune

Hybrid


Role & responsibilities

10+ years of proficient experience with AWS Cloud. 7+ years of relevant experience designing cloud infrastructure solutions and migrating applications to the cloud. Proficient in cloud networking and network configuration. Proficient in Terraform for managing infrastructure as code. Proficient in Git-based SCM and implementing CI for applications/IaC with tools like Jenkins, Jenkins X, GitHub Actions, GitLab CI, and AWS DevOps. Exposure to more of these tools is a plus.

Posted 2 months ago

Apply

14 - 20 years

40 - 50 Lacs

Bengaluru

Work from Office


Role Description

Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - You will: Architect, design, implement, test, and deliver highly scalable AI solutions: agents, AI copilots/assistants, chatbots, AI planners, and RAG solutions. Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.). Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company. Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps. Analyze and provide feedback on product strategy and technical feasibility. Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events. Actively communicate with, encourage, and motivate all levels of staff.
Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements. Troubleshoot complex production issues and work with support and customers as needed.

Required Skills: 14+ years of experience building highly scalable Software-as-a-Service applications/platforms. Experience building technical architectures that address complex performance issues. Ability to thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and the ability to adapt. Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java. Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development. High proficiency in at least one high-level programming language and web framework (Node.js, Express, Hapi, etc.). Proven understanding of web technologies such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax. Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL. Experience delivering, or partnering with teams that ship, AI products at high scale.
Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium. Demonstrated ability to drive long-term design strategies that span multiple complex projects. Experience delivering technical reports and presentations to customers and at industry events. Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues. Experience with the full software lifecycle in highly agile and ambiguous environments. Excellent interpersonal and communication skills.

Preferred Skills: Solid experience in API development, API lifecycle management, and/or client SDK development. Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE. Experience with AI/ML and data science, including predictive and generative AI. Experience with data engineering, data pipelines, or distributed systems. Experience with continuous integration (CI), continuous deployment (CD), and service ownership. Familiarity with Salesforce APIs and technologies. Ability to support and resolve production customer escalations with excellent debugging and problem-solving skills.

Posted 2 months ago

Apply

Exploring EKS Jobs in India

The job market for EKS (Elastic Kubernetes Service) professionals in India is rapidly growing as more companies are adopting cloud-native technologies. EKS is a managed Kubernetes service provided by Amazon Web Services (AWS), allowing users to easily deploy, manage, and scale containerized applications using Kubernetes.
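As a small illustration of what "deploying a containerized application" on EKS looks like in practice, the sketch below is a minimal Kubernetes Deployment manifest. The application name, image, and replica count are placeholders; on an existing EKS cluster it would typically be applied with `kubectl apply -f deployment.yaml`.

```yaml
# Minimal Deployment manifest (illustrative; names, image, and sizing are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # run three identical pods for availability
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25    # placeholder container image
          ports:
            - containerPort: 80
          resources:
            requests:          # scheduling hints; size per workload
              cpu: 100m
              memory: 128Mi
```

Because EKS runs upstream Kubernetes, standard manifests like this work unchanged; AWS manages only the control plane.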

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their strong technology sectors and have a high demand for EKS professionals.

Average Salary Range

The average salary range for EKS professionals in India varies by experience level:

  • Entry-level: ₹4-6 lakhs per annum
  • Mid-level: ₹8-12 lakhs per annum
  • Experienced: ₹15-25 lakhs per annum

Career Path

A typical career path in EKS may include roles such as:

  • Junior EKS Engineer
  • EKS Developer
  • EKS Administrator
  • EKS Architect
  • EKS Consultant

Related Skills

Besides EKS expertise, professionals in this field are often expected to have knowledge of or experience in:

  • Kubernetes
  • Docker
  • AWS services
  • DevOps practices
  • Infrastructure as Code (IaC)
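The IaC skill in particular often shows up directly in EKS work, since clusters themselves are usually described declaratively. As a sketch, the hypothetical `eksctl` ClusterConfig below (cluster name, region, and node sizing are all assumptions for illustration) provisions an EKS cluster with one managed node group:

```yaml
# Hypothetical eksctl ClusterConfig; name, region, and sizing are illustrative
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: ap-south-1
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
    minSize: 2
    maxSize: 5    # headroom for the cluster autoscaler to scale out
```

A file like this would be applied with `eksctl create cluster -f cluster.yaml`; teams standardizing on Terraform express the same resources in HCL instead.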

Interview Questions

  • What is EKS and how does it differ from self-managed Kubernetes clusters? (basic)
  • How do you monitor the performance of EKS clusters? (medium)
  • Can you explain the process of deploying a new application on EKS? (medium)
  • What are the key security considerations for EKS deployments? (medium)
  • How do you handle scaling in EKS based on varying workloads? (medium)
  • What is the difference between a Deployment and a StatefulSet in Kubernetes? (advanced)
  • How do you troubleshoot networking issues in an EKS cluster? (advanced)
  • Explain the concept of a Pod in Kubernetes and its significance in EKS. (basic)
  • What tools do you use for managing and monitoring EKS clusters? (medium)
  • How do you ensure high availability for applications running on EKS? (medium)
  • Describe the process of upgrading Kubernetes versions in an EKS cluster. (medium)
  • How do you optimize resource utilization in EKS clusters? (medium)
  • What are the advantages of using EKS over self-managed Kubernetes clusters? (basic)
  • Can you explain the concept of a Service in Kubernetes and its role in EKS? (basic)
  • How do you handle persistent storage in EKS for stateful applications? (medium)
  • What is the role of a ConfigMap in Kubernetes and how is it used in EKS? (basic)
  • How do you automate the deployment process in EKS? (medium)
  • Explain the concept of a Namespace in Kubernetes and its significance in EKS. (basic)
  • How do you ensure security compliance in EKS deployments? (medium)
  • What are the best practices for managing secrets in EKS clusters? (medium)
  • How do you implement CI/CD pipelines for applications deployed on EKS? (medium)
  • Describe a challenging issue you faced in managing an EKS cluster and how you resolved it. (advanced)
  • How do you handle rolling updates in EKS deployments? (medium)
  • What are the key considerations for disaster recovery planning in EKS? (medium)
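To make one of the questions above concrete, workload scaling in EKS is commonly handled with a HorizontalPodAutoscaler. The sketch below (the target Deployment name and thresholds are illustrative placeholders) scales a workload between 2 and 10 replicas based on average CPU utilization:

```yaml
# Illustrative HorizontalPodAutoscaler; target name and thresholds are placeholders
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

In interviews, it helps to note that pod-level scaling like this is usually paired with node-level scaling (Cluster Autoscaler or Karpenter) so the cluster has capacity for the new pods.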

Closing Remark

As you explore opportunities in the EKS job market in India, remember to showcase your expertise in EKS, Kubernetes, and related technologies during interviews. Prepare thoroughly, stay updated with industry trends, and apply confidently to secure exciting roles in this fast-growing field. Good luck!
