
378 EKS Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

15 - 20 Lacs

Chennai

Work from Office

Naukri logo

Role Summary: We are looking for a seasoned Senior Development Lead with over 10 years of experience in leading development teams and delivering high-quality technical solutions. This role involves not only technical leadership and stakeholder communication but also hands-on development. The ideal candidate will be an SME in modern cloud-native development, proficient in event-driven architecture and integration design, and capable of mentoring developers while enforcing coding standards and best practices.

Key Responsibilities:
Team Leadership and Stakeholder Engagement: Lead and mentor development teams across multiple projects. Communicate effectively with senior stakeholders, providing updates and risk assessments. Foster a collaborative and productive development environment.
Code Quality and Best Practices: Define and implement coding standards and quality assurance processes. Conduct detailed code reviews to ensure high performance, security, and maintainability standards. Encourage continuous learning and knowledge sharing among team members.
Architecture and Design: Design scalable and secure APIs, events, integrations, and system adapters. Collaborate with architects and product owners to refine technical requirements and solutions in an Agile setup. Ensure alignment of design decisions with enterprise architecture and strategy.
Cross-Functional Collaboration: Work with cross-functional teams to manage dependencies and coordinate deliverables. Facilitate integration with multiple systems and services in an agile development environment.
Hands-on Development: Act as a Subject Matter Expert (SME) in software development, providing technical guidance. Write and review code in an event-driven architecture to ensure best practices are followed. Lead by example through active participation in key development activities.

Qualifications:
Experience: 10+ years of software development experience, with at least 3 years in a team leadership role. Proven ability to lead distributed development teams and deliver enterprise-grade software processing high throughput and volumes.
Technical Skills: Strong hands-on development experience with AWS cloud, Kafka Streams, Java, and the Spring and Spring Boot frameworks for distributed systems. Proficient in API design, JSON, and Avro schemas. Familiarity with event-driven architecture and with microservices integration and scaling. Experience with DevOps practices and CI/CD pipelines. Exposure to containerization technologies (Docker, Kubernetes, ECS, EKS). Experience in arriving at cost-optimal cloud solution architectures/designs that reduce cloud spend. Experience in Node.js and AngularJS or a modern UI framework is an added plus.
Soft Skills: Strong problem-solving and decision-making skills. Excellent communication and leadership abilities. Ability to manage time, priorities, and multiple tasks efficiently.
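For context on the event-driven work this role describes (the posting itself is Java/Spring and Kafka Streams focused), here is a minimal Python sketch of producing and consuming JSON events with kafka-python; the broker address, topic, and consumer group are illustrative assumptions, not details from the posting:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Produce an event (broker/topic names are placeholders).
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 123, "status": "CREATED"})
producer.flush()

# Consume events from the same topic in a separate process.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-audit",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # react to each event as it arrives
```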

Posted 2 weeks ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Ahmedabad

Work from Office

Naukri logo

About us: Working at Tech Holding isn't just a job, it's an opportunity to be part of something bigger. We are a full-service consulting firm founded on the premise of delivering predictable outcomes and high-quality solutions to our clients. Our founders and team members have industry experience and have held senior positions in a wide variety of companies, from emerging startups to large Fortune 50 firms, and we have taken our combined experiences and developed a unique approach supported by the principles of deep expertise, integrity, transparency, and dependability.

We're looking for a Senior DevOps Engineer to implement and maintain our cloud infrastructure and automation practices across Azure and AWS platforms. This role will focus on hands-on implementation and optimization of our DevOps processes.

Experience: 4-5+ years total. Shift: 2-4 hours of overlap with the PST time zone.

Key Responsibilities: Implement and maintain CI/CD pipelines and automation workflows. Deploy and manage cloud infrastructure using Infrastructure as Code. Configure and maintain monitoring and alerting systems. Collaborate with development teams to improve deployment processes. Participate in on-call rotation and incident response. Implement security best practices and compliance requirements. Support and optimize container orchestration platforms. Contribute to documentation and knowledge sharing.

Required Qualifications: 7+ years of DevOps/infrastructure engineering experience. Programming experience with Python or Go. Strong understanding of CI/CD principles and automation tools. Expertise in Infrastructure as Code (IaC) methodologies (e.g., Terraform, Ansible). Strong expertise in Azure services and implementation patterns. Working knowledge of AWS services (ECS, EKS, Batch Services). Hands-on experience with Terraform and Infrastructure as Code.

Posted 2 weeks ago

Apply

13.0 - 18.0 years

35 - 55 Lacs

Bengaluru

Hybrid

Naukri logo

SRE Manager | About Ushur | Ushur XOS | Ushur GenAI
Location: Bangalore
Work Mode: Hybrid
Experience: 12 to 18 Years

The Role: Our fast-growing team is seeking a Manager of SRE to join us as we pioneer Customer Experience Automation™ as an industry category. As the Manager of SRE you will be responsible for two important charters: operate and manage Ushur's production cloud, and build a white-glove customer support and incident management function. The ideal candidate for this role will be passionate about building a healthy, high-performing team, and bring strong technical leadership, a customer-centric focus, and results-oriented action. You will begin as a player/coach while building and continuously improving execution, processes, tools/technology, and analytics.

Responsibilities: Build and manage a world-class SRE team. Design a 24x7 follow-the-sun organization, including seamless handover across regions. Mentor and grow a team focused on delivering white-glove support and incident management service. Drive a data-driven SRE strategy by defining and prioritizing SRE Objectives and Key Results (OKRs) aligned with the company mission; this includes setting measurable targets for key service level agreements. Manage the Enterprise Support function to deliver exceptional white-glove experiences at scale in close partnership with our Customer Success, Solution Consulting, and Engineering teams. Ensure that the Ushur platform runs reliably in production. Partner with the DevOps, Security, and Engineering teams to automate deployment, monitoring, and observability of the production cloud. Bring deep technical expertise in Ushur Customer Experience Automation. Provide customers with ongoing technical support and incident management for complex issues and support escalations. Optimize and automate support processes, including improving the reliability of on-call processes, managing incidents, updating runbooks and documentation, reviewing RCAs, and recommending solutions to reduce the recurrence and severity of incidents. Work cross-functionally to drive positive customer outcomes, engaging with Product, Sales, Customer Success, Solution Consulting, Security, and Engineering as necessary to make customers successful on our platform.

Qualifications: 5+ years of experience in an SRE/CloudOps Manager/Lead role in enterprise SaaS. Track record of developing and mentoring great talent, and building and motivating high-achieving teams. Ability to lead diverse teams across multiple time zones. Business acumen: ability to quickly grasp and adapt to a variety of customer verticals, geographies, and business structures. Excellent verbal, written, and presentation skills, with the ability to absorb complex technical concepts and communicate them to a non-technical audience. Highly organized, collaborative, and detail-oriented. Deep experience with AWS cloud services, REST APIs, and Linux. Experience with DevOps processes and with build, deployment, and orchestration technologies. Passion for technology and for being part of a fast-growing SaaS startup where we move quickly and wear many hats. Flexible approach, able to operate effectively with uncertainty and change. Driven, self-motivated, enthusiastic, and with a can-do attitude.

Benefits: Great company culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Bring your whole self to work: we are focused on building a diverse culture with innovative ideas, where you and your ideas are valued. We are a start-up and know that every person has a significant impact! Rest and relaxation: 20 days of flexible leave per year, a monthly Wellness Day (a day off to care for yourself), and more! Health benefits: preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office. Keep learning: one of our core values is Growth Mindset, and we believe in lifelong learning; certification courses are reimbursed, and the Ushur Community offers wide resources for our employees to learn and grow. Flexible work: in-office or hybrid working model, depending on position and location. We seek to create an environment where all our employees can thrive in both their professional and personal lives.

Why join us? We are passionate about Ushur, our product, and helping our employees grow and develop in their careers in a caring, collaborative environment. We offer a very competitive compensation plan and stock options for the ideal candidates.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

40 - 50 Lacs

Hyderabad, Pune

Work from Office

Naukri logo

An AWS DevOps Architect designs and manages the DevOps environment for an organization. They ensure that software development and IT operations are integrated seamlessly.

Responsibilities: DevOps strategy: develop and implement the DevOps strategy and roadmap. Automation: automate the provisioning, configuration, and management of infrastructure components. Cloud architecture: design and manage the cloud and infrastructure architecture. Security: implement security measures and compliance controls. Collaboration: foster collaboration between development, operations, and other cross-functional teams. Continuous improvement: regularly review and analyze DevOps processes and practices. Reporting: provide regular reports on infrastructure performance, costs, and security to management.

Skills and experience: Experience with AWS services like ECS, EKS, and Kubernetes. Knowledge of scripting languages like Python. Experience with DevOps tools and technologies like Jenkins, Terraform, and Ansible. Experience with CI/CD pipelines. Experience with cloud governance standards and best practices.

PS: We need strong DevOps tool implementation experts on the AWS platform (Jenkins, Terraform, and other DevOps tools).

Mandatory Key Skills: AWS DevOps Architect, DevOps, software development, IT operations, cloud architecture, infrastructure architecture, ECS, EKS, Kubernetes, AWS DevOps

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Chennai

Work from Office

Naukri logo

What you'll be doing: You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies. Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Design and configure alerts to ensure timely responses to critical events. Working with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure. Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols. Configure IAM roles, VPC security, encryption, and compliance policies. Continuously optimize cloud infrastructure for performance, scalability, and cost-effectiveness. Use tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies. Working closely with the engineering and operations teams to design and implement cloud-based solutions. Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics.

What we're looking for... You'll need to have: Bachelor's degree or one or more years of work experience. Experience in Kubernetes administration. Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI. GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory. Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards. Backend development experience with languages including Golang (preferred), Spring Boot, and Python. Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices. Familiarity with cloud cost optimization (e.g. Kubecost). Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server. Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis. Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.). Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues. Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA Gatekeeper policies. Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc. Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.). Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting. Certified Kubernetes Administrator (CKA). Demonstrated very strong troubleshooting and problem-solving skills. Excellent verbal communication and written skills.

Even better if you have one or more of the following: Certified Kubernetes Application Developer (CKAD). Red Hat Certified OpenShift Administrator. Familiarity with creating custom EnvoyFilters for the Istio service mesh and integrating with existing web application portals. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc. Database experience (RDBMS, NoSQL, etc.).
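As a rough illustration of the cluster-health inspection this posting mentions, here is a minimal sketch using the official Kubernetes Python client; it assumes a reachable kubeconfig and simply flags pods that are not Running or Succeeded (the thresholds and output format are illustrative, not from the posting):

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use config.load_incluster_config() inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Walk every pod in the cluster and report anything unhealthy.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```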

Posted 2 weeks ago

Apply

3.0 - 6.0 years

4 - 9 Lacs

Chennai

Work from Office

Naukri logo

**Position Overview:** We are seeking an experienced AWS Cloud Engineer with a robust background in Site Reliability Engineering (SRE). The ideal candidate will have 3 to 6 years of hands-on experience managing and optimizing AWS cloud environments with a strong focus on performance, reliability, scalability, and cost efficiency. **Key Responsibilities:** * Deploy, manage, and maintain AWS infrastructure, including EC2, ECS Fargate, EKS, RDS Aurora, VPC, Glue, Lambda, S3, CloudWatch, CloudTrail, API Gateway (REST), Cognito, Elasticsearch, ElastiCache, and Athena. * Implement and manage Kubernetes (K8s) clusters, ensuring high availability, security, and optimal performance. * Create, optimize, and manage containerized applications using Docker. * Develop and manage CI/CD pipelines using AWS native services and YAML configurations. * Proactively identify cost-saving opportunities and apply AWS cost optimization techniques. * Set up secure access and permissions using IAM roles and policies. * Install, configure, and maintain application environments including: * Python-based frameworks: Django, Flask, FastAPI * PHP frameworks: CodeIgniter 4 (CI4), Laravel * Node.js applications * Install and integrate AWS SDKs into application environments for seamless service interaction. * Automate infrastructure provisioning, monitoring, and remediation using scripting and Infrastructure as Code (IaC). * Monitor, log, and alert on infrastructure and application performance using CloudWatch and other observability tools. * Manage and configure SSL certificates with ACM and load balancing using ELB. * Conduct advanced troubleshooting and root-cause analysis to ensure system stability and resilience. **Technical Skills:** * Strong experience with AWS services: EC2, ECS, EKS, Lambda, RDS Aurora, S3, VPC, Glue, API Gateway, Cognito, IAM, CloudWatch, CloudTrail, Athena, ACM, ELB, ElastiCache, and Elasticsearch. * Proficiency in container orchestration and microservices using Docker and Kubernetes. * Competence in scripting (Shell/Bash), configuration with YAML, and automation tools. * Deep understanding of SRE best practices, SLAs, SLOs, and incident response. * Experience deploying and supporting production-grade applications in Python (Django, Flask, FastAPI), PHP (CI4, Laravel), and Node.js. * Solid grasp of CI/CD workflows using AWS services. * Strong troubleshooting skills and familiarity with logging/monitoring stacks.
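To illustrate the CloudWatch monitoring and alerting responsibility above, here is a minimal boto3 sketch that creates a CPU alarm on an Auto Scaling group; the region, group name, threshold, and SNS topic ARN are placeholder assumptions, not details from the posting:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # region is an assumption

# Alarm when average CPU across the group exceeds 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-asg",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```

In practice an alarm like this would be defined through IaC rather than ad hoc calls, but the parameters map one-to-one to what the console or Terraform would set.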

Posted 3 weeks ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Rajkot

Work from Office

Naukri logo

Technical Requirements: Excellent understanding of Linux commands. Thorough knowledge of CI/CD pipelines, automation, and debugging, particularly with Jenkins. Intermediate to advanced understanding of Docker and container orchestration platforms. Hands-on experience with web servers (Apache, Nginx), database servers (MongoDB, MySQL, PostgreSQL), and application servers (PHP, Node.js). Knowledge of proxies and reverse proxies is required. Good understanding and hands-on experience with site reliability tools such as Prometheus, Grafana, New Relic, Datadog, and Splunk. (Hands-on experience with at least one tool is highly desirable.) Ability to identify and fix security vulnerabilities at the OS, database, and application levels. Knowledge of cloud platforms, specifically AWS and DigitalOcean, and their commonly used services. Other Requirements: Good communication skills. Out-of-the-box problem-solving capabilities, especially in the context of technology automation and application architecture reviews. Hands-on experience with GKE, AKS, EKS, or ECS is a plus. Excellent understanding of how to craft effective AI prompts to solve specific issues.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Naukri logo

Job Summary: We are looking for an experienced AWS SysOps Engineer with 5 to 8 years of experience to manage, monitor, and optimize AWS cloud infrastructure. The ideal candidate should have a strong background in AWS services, automation, system administration, and security best practices to ensure the stability, scalability, and security of cloud-based environments.

Years of experience needed: 5+ years of experience as an AWS SysOps engineer.

Technical Skills: 5-8 years of hands-on experience in AWS system administration, cloud operations, or infrastructure management. Strong expertise in AWS core services: EC2, RDS, S3, IAM, Route 53, CloudWatch, CloudTrail, Auto Scaling, and ELB. Hands-on experience with Linux and Windows administration in AWS environments. Strong scripting skills in Python, Bash, or PowerShell for automation and monitoring. Experience with monitoring and logging solutions like AWS CloudWatch, the ELK Stack, Prometheus, or Datadog. Understanding of AWS cost optimization and billing management. Experience in backup and disaster recovery planning using AWS services. Deploy, manage, and maintain AWS cloud infrastructure, ensuring high availability, performance, and security. Manage user access, roles, and security policies using AWS IAM, AWS Organizations, and AWS SSO. Implement and manage patching, system updates, and security hardening across AWS infrastructure. Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK is a plus. Knowledge of AWS networking (VPC, VPN, Direct Connect, Security Groups, NACLs).

Good to have: Familiarity with security best practices (IAM, KMS, GuardDuty, Security Hub, WAF). Experience with containerization (Docker, Kubernetes, ECS, EKS). Knowledge of DevOps and CI/CD tools (Jenkins, GitHub Actions, AWS CodeDeploy). Exposure to multi-cloud and hybrid cloud environments. Experience with AWS Lambda, Step Functions, and other serverless services. Familiarity with secrets management using AWS Secrets Manager, HashiCorp Vault, or CyberArk.

Certifications needed: AWS Certified SysOps Administrator - Associate.
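As an example of the scripting-for-cost-visibility work this listing calls out, here is a small boto3 Cost Explorer sketch that breaks down one month's unblended cost by service; the dates are placeholder assumptions:

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # example billing period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print cost per service for the single monthly bucket returned.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```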

Posted 3 weeks ago

Apply

5.0 - 10.0 years

18 - 22 Lacs

Gurugram

Work from Office

Naukri logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking an experienced Senior Software Engineer with 7+ years of professional full stack development experience. The ideal candidate will have a solid background in React, Java, and Node.js development and possess in-depth knowledge of AWS services. The Senior Software Engineer will be responsible for supporting and mentoring junior engineers and Technical Development Program participants.

Primary Responsibilities: Design, develop, and maintain high-quality software solutions. Collaborate with cross-functional teams to gather and analyze requirements, and ensure the timely delivery of software projects. Utilize AWS services such as EKS and ECR, along with GitHub, to implement scalable and reliable applications. Implement and maintain CI/CD pipelines using Jenkins to automate the software deployment process. Containerize applications using Docker for easy deployment and scalability. Utilize Elasticsearch and Logstash to develop efficient search and log management solutions. Write comprehensive test cases using the Acceptance Test-Driven Development (ATDD) approach to ensure the quality and reliability of software solutions. Support and mentor junior engineers and Technical Development Program participants, providing guidance and knowledge transfer. Stay up-to-date with the latest industry trends and technologies, and actively participate in continuous learning and improvement initiatives. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Gen AI, microservices, Kafka, K8s, Docker, CI/CD. 7+ years of experience in software development, with a focus on Spring Boot, Java, and GraphQL. Experience with Jenkins and CI/CD pipelines. Solid knowledge of AWS services, including EKS, ECR, MSK, OpenSearch, and Logstash. Proficiency in Docker and containerization concepts. Solid understanding of software testing principles and experience with the ATDD approach. Proven excellent problem-solving and debugging skills. Solid communication and interpersonal skills. Proven ability to work effectively in a collaborative team environment. Proven ability to mentor and support junior engineers and Technical Development Program participants.

Posted 3 weeks ago

Apply

1.0 - 5.0 years

8 - 15 Lacs

Bengaluru

Work from Office

Naukri logo

Junior DevOps Engineer / DevOps Engineer
Location: Bengaluru South, Karnataka, India
Experience: 1.5-3 Years
Compensation: 8-15 LPA
Employment Type: Full-Time | Work From Office Only
________________________________________
Are you an aspiring DevOps professional ready to work on a transformative platform? Join a purpose-led team building India's most disruptive ecosystem at the intersection of technology, property, and sustainability. This role is ideal for engineers who are eager to learn, automate, and contribute to building reliable, scalable, and secure infrastructure.

Key Responsibilities: Assist in designing, implementing, and managing CI/CD pipelines using tools like Jenkins or GitLab CI to automate build, test, and deployment processes. Support the deployment and management of cloud infrastructure, primarily on AWS, with exposure to Azure or GCP. Contribute to infrastructure as code practices using Terraform, CloudFormation, or Ansible. Participate in maintaining and operating containerized applications using Docker and Kubernetes. Implement and manage monitoring and logging solutions using Grafana, Loki, Prometheus, or the ELK stack. Collaborate with engineering and QA teams to streamline release pipelines, ensuring high availability and performance. Develop basic automation scripts in Python or Bash to optimize and streamline operational tasks. Gain exposure to serverless and event-driven architectures under guidance from senior engineers. Troubleshoot infrastructure issues and contribute to system security and performance optimization.

Requirements: 1.5 to 3 years of experience in DevOps, SRE, or related infrastructure roles. Solid understanding of cloud environments (AWS preferred; Azure/GCP a plus). Basic to intermediate scripting knowledge in Python or Bash. Familiarity with CI/CD concepts and tools such as Jenkins, GitLab CI, etc. Working knowledge of Docker and introductory experience with Kubernetes. Exposure to monitoring and logging stacks (Grafana, Loki, Prometheus, ELK). Understanding of infrastructure as code using tools like Terraform or Ansible. Familiarity with networking, DNS, firewalls, and system security practices. Strong problem-solving skills and a learning mindset.

Preferred Qualifications: Certifications in AWS, Azure, or GCP. Exposure to serverless architectures and event-driven systems. Experience with additional monitoring tools or scripting languages. Familiarity with geospatial systems, virtual mapping, or sustainability-oriented platforms. Passion for eco-conscious technology and impact-driven development.

Why You Should Join: Contribute to a next-gen PropTech platform promoting sustainable and inclusive land ownership. Work closely with senior engineers committed to mentorship and ecosystem building. Join a team where your ideas are valued, your skills are sharpened, and your work has real-world impact. Be part of a vibrant, office-first culture that encourages innovation, collaboration, and growth.
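A flavour of the "basic automation scripts in Python" this role mentions: a small boto3 sketch that reports EC2 instances missing a required tag. The tag key and region are assumptions for illustration only:

```python
import boto3

REQUIRED_TAG = "Owner"   # assumption: the tagging policy enforced here
REGION = "ap-south-1"    # assumption: the account's primary region

ec2 = boto3.client("ec2", region_name=REGION)
paginator = ec2.get_paginator("describe_instances")

# Walk every instance in the region and flag the ones without the required tag.
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"{instance['InstanceId']} is missing the '{REQUIRED_TAG}' tag")
```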

Posted 3 weeks ago

Apply

5.0 - 10.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Naukri logo

We are looking for a highly skilled Java Developer to join our team and contribute to the design, development, and maintenance of scalable applications. The ideal candidate should have strong hands-on experience in Core and Advanced Java, Spring Boot, Microservices, and cloud platforms like AWS. They must possess excellent problem-solving skills, clean coding practices, and an understanding of RESTful architecture. This role requires proficiency in front-end technologies (Angular/React/Sencha), database management, and containerization (Docker/Kubernetes) to build high-performance applications. Key Responsibilities: - Develop, test, and maintain scalable Java applications with Spring Boot and Microservices architecture. - Implement OOP principles, design patterns, and clean coding practices to ensure maintainability. - Work on Spring Security, Spring Data JPA, Hibernate, and ORM frameworks for database management. - Design and develop RESTful APIs following industry best practices. - Utilize front-end frameworks (Angular, React, Sencha, JavaScript, jQuery, HTML, CSS) to build user-friendly interfaces. - Work with cloud platforms (AWS, Azure, or GCP) and containerization tools like Docker and Kubernetes. - Optimize application performance by writing efficient, scalable, and secure code. - Implement CI/CD pipelines and automate deployments using Docker, Kubernetes, or EKS. - Write unit and integration tests to ensure robust and error-free code. - Collaborate with cross-functional teams to enhance application functionality and user experience. Required Qualifications & Skills: - 5-12 years of hands-on experience in Core and Advanced Java development. - Strong knowledge of multithreading, exception handling, servlets, and filters. - Expertise in object-oriented design. - Experience in designing and developing Microservices-based architectures. - Proficiency in Spring Boot, Spring Security, Spring REST, and Hibernate (JPA). - Strong SQL scripting skills and knowledge of relational databases (MySQL, SQL Server, Oracle, etc.). - Hands-on experience with UI frameworks (Angular, React, Sencha, JavaScript, TypeScript). - Working experience with cloud platforms (AWS, Azure, or GCP). - Knowledge of CI/CD pipelines, Docker, Kubernetes (EKS), and RESTful application integration. - Understanding of OOP, SOLID principles, and clean code best practices. - Strong problem-solving, analytical, and debugging skills. - Bachelor's degree in Computer Science, Software Engineering, or a related field.

Posted 3 weeks ago

Apply

12.0 - 17.0 years

15 - 20 Lacs

Pune

Hybrid

Naukri logo

Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Play a vital role in project design to ensure scalability, reliability, and performance are met. Design and develop new features, as well as maintain existing features by adding improvements and fixing defects in complex areas (using Java). Design and maintain robust AI/ML pipelines using Python and industry-standard ML frameworks. Collaborate closely with AI researchers and data scientists to implement, test, and deploy machine learning models in production. Leverage prompt engineering techniques and implement LLM optimization strategies to enhance response quality and performance. Assist in troubleshooting complex technical problems in development and production. Implement methodologies, processes, and tools. Initiate projects and ideas to improve the team's results. Onboard and mentor new employees.

To ensure you're set up for success, you will bring the following skillset and experience: Backend Development: FastAPI, RESTful APIs, Python. Cloud Infrastructure: AWS, EKS, Docker, Kubernetes. AI/ML Frameworks: LangChain, Scikit-learn, Bedrock, Hugging Face. ML Pipelines: Python, Pandas, NumPy, joblib. DevOps & CI/CD: Git, Terraform (optional), Helm, GitHub Actions. LLM Expertise: prompt engineering, RAG (Retrieval Augmented Generation), vector databases (e.g., FAISS, Pinecone). You have 12+ years of experience in Java backend development. You have experience as a Backend Tech Lead. You have experience in Spring, Swagger, and REST APIs. You have worked with Spring Boot, Docker, and Kubernetes. You are a self-learner who's passionate about problem solving and technology. You are a team player with good communication skills in English (verbal and written).

Whilst these are nice to have, our team can help you develop in the following skills: Public Cloud (AWS, Azure, GCP). Python, Node.js, C/C++. Automation frameworks such as Robot Framework.
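To illustrate the Python ML-pipeline side of this role, here is a minimal scikit-learn pipeline persisted with joblib; the data, features, and file name are toy assumptions, not BMC's actual models:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data stands in for real features.
X = np.random.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Preprocessing and model bundled into one deployable artifact.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=500)),
])
pipeline.fit(X, y)

joblib.dump(pipeline, "model.joblib")       # artifact a CI/CD step could pick up
restored = joblib.load("model.joblib")
print(restored.predict(X[:5]))
```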

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

ABOUT THE ROLE Role Description: The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives and, visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions Adhere to best practices for coding, testing, and designing reusable code/component Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Basic Qualifications and Experience: Master’s degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience Functional Skills: Must-Have Skills Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing Experience with data warehousing platforms such as Amazon Redshift, or Snowflake. Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases. Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets. 
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps. Good-to-Have Skills: Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena). Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations. Understanding of machine learning pipelines and frameworks for ML/AI models. Professional Certifications: AWS Certified Data Engineer (preferred). Databricks Certified (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
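For illustration of the PySpark ETL work described in the must-have skills above, here is a minimal sketch that reads raw data, aggregates it, and writes a curated output; the bucket paths, column names, and filter are assumptions, not Amgen's actual pipelines:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw parquet files (path is a placeholder).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate revenue per day.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write the curated dataset back to the lake.
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")
```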

Posted 3 weeks ago

Apply

10.0 - 12.0 years

7 - 11 Lacs

Chennai

Work from Office

Naukri logo

In this pivotal role : You will be instrumental in driving the technical direction of our PHP-based projects. You will leverage your deep expertise in PHP frameworks like CodeIgniter and Laravel, coupled with your understanding of modern development practices, to deliver high-quality solutions. You will not only be hands-on in development but also play a crucial role in guiding the team, ensuring code quality, and contributing to architectural decisions. Primary Skills : - Possess 10+ years of demonstrable hands-on experience in developing web applications using PHP. - Deep familiarity and proven experience with at least one of the following PHP frameworks CodeIgniter and/or Laravel. A strong preference for candidates with experience in both. - Comprehensive understanding of the latest PHP features, best practices, and design patterns (e.g., SOLID principles, dependency injection, etc.). - In-depth knowledge of the Laravel ecosystem, including Eloquent ORM, Blade templating engine, Artisan console, routing, middleware, and security features. - Solid understanding of JavaScript and experience with at least one popular library or framework such as jQuery. Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js) is a significant plus. - Proficiency in CSS and its preprocessors (e.g., Sass, Less) for creating well-structured and visually appealing user interfaces. Familiarity with responsive design principles and CSS frameworks (e.g., Bootstrap, Tailwind CSS) is advantageous. Secondary Skills : - Proven experience in designing, implementing, and optimizing MySQL databases. Understanding of database schema design, query optimization, indexing, and data integrity. - Familiarity with Amazon Web Services (AWS) for deploying, managing, and scaling web applications. - Experience with services like EC2, S3, RDS, ECS/EKS, and Lambda is highly desirable. - Solid understanding of API design principles (RESTful, GraphQL) and experience in building and consuming APIs. Familiarity with microservices architecture and its benefits. - Experience in implementing event-driven design patterns to build scalable and decoupled systems. Knowledge of message queues (e.g., RabbitMQ, Kafka) is a plus. - Strong understanding and practical experience with Test-Driven Development methodologies using phpUnit for writing unit, integration, and functional tests. - Familiarity with the Python programming language and its web frameworks (e.g., Django, Flask) is a valuable asset. Key Responsibilities : - Lead the development, maintenance, and enhancement of complex web applications utilizing PHP frameworks (CodeIgniter and/or Laravel). - Work closely with product managers, designers, and other engineers to define, design, and deliver innovative features that meet user needs and business objectives. - Write clean, well-documented, maintainable, and efficient PHP code, adhering to coding standards and best practices. - Identify and resolve performance bottlenecks, optimizing applications for maximum speed, scalability, and responsiveness. - Seamlessly integrate with various back-end services and third-party APIs to extend application functionality and data exchange. - Design, implement, and maintain MySQL databases, ensuring data integrity, security, and optimal performance. Perform database migrations and schema updates as needed. - Architect and implement solutions using event-driven design principles to create loosely coupled and scalable systems. 
- Utilize JavaScript (jQuery or other libraries) and CSS to develop interactive and user-friendly front-end interfaces. Collaborate with UI/UX designers to implement visually appealing and accessible designs. - Leverage AWS services for deploying, monitoring, and managing applications in the cloud environment, ensuring high availability and reliability.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

11 - 21 Lacs

Bengaluru

Hybrid

Naukri logo

Java, Spring Boot, AWS, GraphQL, RDBMS (PostgreSQL), REST APIs. AWS services including EC2, EKS, S3, CloudWatch, Lambda, SNS, and SQS; JUnit/Jest; and AI tools like GitHub Copilot. Desirable: Node.js, Hasura frameworks.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

35 - 60 Lacs

Pune

Work from Office

Naukri logo

About the Role: We are seeking a skilled Site Reliability Engineer (SRE) / DevOps Engineer to join our infrastructure team. In this role, you will design, build, and maintain scalable infrastructure, CI/CD pipelines, and observability systems to ensure high availability, reliability, and security of our services. You will work cross-functionally with development, QA, and security teams to automate operations, reduce toil, and enforce best practices in cloud-native environments.

Key Responsibilities: Design, implement, and manage cloud infrastructure (GCP/AWS/Azure) using Infrastructure as Code (Terraform). Maintain and improve CI/CD pipelines using tools like CircleCI, GitLab CI, or ArgoCD. Ensure high availability and performance of services using Kubernetes (GKE/EKS/AKS) and container orchestration. Implement monitoring, logging, and alerting using Prometheus, Grafana, ELK, or similar tools. Collaborate with developers to optimize application performance and deployment processes. Manage and automate security controls such as IAM, RBAC, network policies, and vulnerability scanning.

Basic Qualifications: Strong knowledge of Linux. Experience with scripting languages such as Python, Bash, or Go. Experience with cloud platforms (GCP preferred; AWS or Azure acceptable). Proficient in Kubernetes operations, including Helm, operators, and service meshes. Experience with Infrastructure as Code (Terraform). Solid experience with CI/CD pipelines (GitLab CI, CircleCI, ArgoCD, or similar). Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, etc.). Knowledge of networking concepts (TCP/IP, DNS, load balancers, firewalls).

Preferred Qualifications: Experience with advanced networking solutions. Familiarity with SRE principles such as SLOs, SLIs, and error budgets. Exposure to multi-cluster or hybrid-cloud environments. Knowledge of service meshes (Istio). Experience participating in incident management and postmortem processes.
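As a small illustration of the "monitoring, logging, and alerting with Prometheus" responsibility, here is a minimal prometheus_client sketch that exposes a request counter and a latency histogram for scraping; metric names, the port, and the simulated workload are assumptions:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    # Time the work and count the request, labelled by endpoint.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()

if __name__ == "__main__":
    start_http_server(8000)   # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request("/healthz")
```

Alerting rules in Prometheus or Grafana would then be defined against these exposed metrics rather than in the application itself.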

Posted 3 weeks ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Mumbai

Work from Office

Naukri logo

About the role: We are looking for Cloud Engineers with experience in managing, planning, architecting, monitoring, and automating large-scale deployments to the public cloud. You will be part of a team of talented engineers solving some of the most complex and exciting challenges in IT automation and hybrid cloud deployments.

Key responsibilities: Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML technologies. Design, deploy, and maintain cloud infrastructure for clients (domestic and international). Develop tools and automation to make platform operations more efficient, reliable, and reproducible. Create container orchestration (Kubernetes, Docker), strive for fully automated solutions, and ensure the uptime and security of all cloud platform systems and infrastructure. Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools. Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners. Take the initiative to lead, drive, and solve during challenging scenarios.

Preferred qualifications: 1-3 years of experience in Cloud Infrastructure and Operations domains. Experience with Linux systems and/or Windows servers. Specialization in one or two cloud deployment platforms: AWS, GCP, Azure. Hands-on experience with AWS services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine). Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .NET). Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios. Logging and monitoring tools (ELK, Stackdriver, CloudWatch). DevOps technologies. Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef. Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos). Deep experience in customer-facing roles with a proven track record of effective verbal and written communication. Dependable and a good team player. Desire to learn and work with new technologies. Automation in your blood.

Posted 3 weeks ago

Apply

8.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Foundit logo

Responsibilities: Calling all innovators - find your future at Fiserv. We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day - quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Advisor, Software Development Engineering

What does a successful Application Specialist - Professional do? As an experienced member of our SpendTrack platform, you will be responsible for effective management and maintenance of the SpendTrack application and ensure the reliability, availability, and performance of our systems. You will collaborate with cross-functional teams to implement automation and continuous improvement processes to enhance efficiency and reduce downtime. The ideal candidate will have a strong background in AWS (EKS, Secrets, DynamoDB, AWS infrastructure, Kubernetes, etc.), Linux, Dynatrace, ServiceNow, SQL, and Splunk.

What you will do: Management activities such as incident, change, and release activities for a mission-critical application. Deployment/re-deployment/upgrades for SpendTrack. Alerts and monitoring (Splunk and Dynatrace). Collaborate with other developers and engineers to build, evolve, and maintain a scalable continuous build and deployment pipeline using mature CI/CD automation practices. Troubleshoot infrastructure and application issues and work with the development team and SMEs to resolve issues faster. Create, track, and drive problem and defect tasks to closure. Identify possible points of failure in the infrastructure/applications and improve/resolve the identified vulnerabilities. Troubleshoot application-related support requests and escalate as needed. Confluence knowledge articles/documentation. Automate repetitive tasks to minimize manual effort.

What you will need to have: Bachelor's degree, preferably in Computer Science, Electrical/Computer Engineering, or a related field. Overall, 8-14 years of experience. Experience working with Linux, AWS (EKS, Secrets, DynamoDB, AWS infrastructure, Kubernetes, etc.), SQL, and backup technologies. Experience working on monitoring tools like Splunk, Dynatrace, Moogsoft, ServiceNow, and Jira. Ability to analyze and translate requirements and development stories into automation. Experience with a modern scripting language like Python. Experience with GitLab for CI/CD pipeline creation and automation. Flexible to work in shifts or on weekends based on business demand.

Thank you for considering employment with Fiserv. Please: Apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Responsibilities: Calling all innovators - find your future at Fiserv. We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day - quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Specialist, Application Support

What does a successful AWS Full Stack specialist do? As an experienced member of our SpendTrack platform, you will be responsible for effective management and maintenance of the SpendTrack application and ensure the reliability, availability, and performance of our systems. You will collaborate with cross-functional teams to implement automation and continuous improvement processes to enhance efficiency and reduce downtime. The ideal candidate will have a strong background in AWS (EKS, Secrets, DynamoDB, AWS infrastructure, Kubernetes, etc.), Linux, Dynatrace, ServiceNow, SQL, and Splunk.

What you will do: Management activities such as incident, change, and release activities for a mission-critical application. Deployment/re-deployment/upgrades for SpendTrack. Alerts and monitoring (Splunk and Dynatrace). Collaborate with other developers and engineers to build, evolve, and maintain a scalable continuous build and deployment pipeline using mature CI/CD automation practices. Troubleshoot infrastructure and application issues and work with the development team and SMEs to resolve issues faster. Create, track, and drive problem and defect tasks to closure. Identify possible points of failure in the infrastructure/applications and improve/resolve the identified vulnerabilities. Troubleshoot application-related support requests and escalate as needed. Confluence knowledge articles/documentation. Automate repetitive tasks to minimize manual effort.

What you will need to have: Bachelor's degree, preferably in Computer Science, Electrical/Computer Engineering, or a related field. Overall, 5-10 years of experience. Experience working with Linux, AWS (EKS, Secrets, DynamoDB, AWS infrastructure, Kubernetes, etc.), SQL, and backup technologies. Experience working on monitoring tools like Splunk, Dynatrace, Moogsoft, ServiceNow, and Jira. Ability to analyze and translate requirements and development stories into automation. Experience with a modern scripting language like Python. Experience with GitLab for CI/CD pipeline creation and automation. Flexible to work in shifts or on weekends based on business demand.

Thank you for considering employment with Fiserv. Please: Apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv.
Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: Java AWS Developer
Experience: 6-12 Years
Location: Bangalore

Experience in Java, J2EE, Spring Boot. Experience in design, Kubernetes, and AWS (EKS, EC2) is needed. Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience in NoSQL and SQL databases such as Cassandra, Aurora, Oracle. Experience with Web Services / SOA (SOAP as well as RESTful with JSON formats) and with messaging (Kafka). Hands-on with development and test automation tools/frameworks (e.g. BDD and Cucumber).

Posted 3 weeks ago

Apply

6.0 - 11.0 years

2 - 5 Lacs

Hyderabad, Bengaluru

Work from Office

Naukri logo

Job Title: Java AWS
Experience: 6-12 Years
Location: Hyderabad / Bangalore

Experience in Java, J2EE, Spring Boot. Experience in design, Kubernetes, and AWS (Lambda, EKS, EC2) is needed. Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience in NoSQL and SQL databases such as Cassandra, Aurora, Oracle. Experience with Web Services / SOA (SOAP as well as RESTful with JSON formats) and with messaging (Kafka). Hands-on with development and test automation tools/frameworks (e.g. BDD and Cucumber).

Posted 3 weeks ago

Apply

6.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: DevOps SME
Experience: 6-8 Years
Location: Bangalore

Technical skills: We are looking for a Senior Continuous Integration/Continuous Delivery (CI/CD) Engineer. The ideal candidate will have worked on migration projects, including DevOps pipelines. The Sr. CI/CD Engineer will work on the transformation of the pipelines post-migration and will be responsible for the set-up, maintenance, and ongoing development of continuous build/integration infrastructure. Create and maintain fully automated CI build processes for .NET, C#, Java, and UI applications on various OS platforms (Windows and Linux based). Write build and deployment scripts. Support CI/CD tools integration, operations, change management, and maintenance. Support full automation of CI/testing through pipeline scripts. Proficient in Terraform for infrastructure as code, with experience in provisioning, managing, and automating scalable cloud resources in AWS and Azure. Skilled in writing modular, reusable code, enforcing best practices, and integrating Terraform workflows into CI/CD pipelines for efficient DevOps operations. Develop policies, standards, guidelines, governance, and related guidance both for CI/CD operations and for the work of developers. Onboard, train, and support developers across source control, build automation, merge resolution, CI, test automation, and deployment, based on tool usage, policies, and standards. Enable DevOps by moving code from Dev/Test to Staging and Production. Troubleshoot issues along the CI/CD pipeline. Expertise with AWS cloud-native DevOps tools.

Mandatory tools: AWS DevOps and Jenkins pipelines. Terraform. Containerization platforms - Docker and Kubernetes (EKS/ECS). Git, GitHub, GitHub Actions. Azure DevOps. Datadog and Splunk monitoring tools. Should be experienced with scripting to automate the CI/CD process by integrating the above tools. Should be able to work on multiple platforms. Develop new CI/CD pipelines based on project requirements. Migrate DevOps pipelines from one server to another. Rewrite AWS pipelines to Jenkins as part of the transformation. Manage build servers on-premises and on cloud platforms. Agile software development and management methods and the ability to excel within an Agile environment (i.e. user stories, iterative development, continuous integration, continuous delivery, shared ownership, test-driven development, etc.). Experience with build-time dependency management and release management. Unit testing and code-coverage tools - JUnit, Puma, Sonar, Synopsis, Zap. Good understanding of quality control and test automation in Agile-based continuous integration environments.

Non-technical skills: Attitude towards learning new technologies and solving complex technical problems. Quick learner and team player. Excellent communication skills. B.Tech / BE / MCA or equivalent technical degree from a reputed college.

Certification: AWS DevOps certification is good to have.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office


Responsibilities:
- Design and implement migration strategies for complex AWS workloads across regions
- Conduct detailed assessment of existing infrastructure and applications for migration planning
- Create and execute migration runbooks with minimal service disruption
- Implement infrastructure as code (IaC) using AWS CloudFormation/CDK
- Configure and optimize network connectivity between AWS Regions
- Monitor and optimize application performance during and after migrations
- Collaborate with application teams to ensure compatibility and resolve dependencies
- Document migration processes, configurations, and best practices

Leadership Principles: Bias for Action, Deliver Results

Mandatory Requirements:
- 3+ years of experience in systems engineering/cloud infrastructure
- 3+ years of hands-on experience with AWS services and architecture
- Strong expertise in EC2, VPC, Route 53, ELB, CloudFront
- Experience with infrastructure as code (AWS CDK, CloudFormation)
- Knowledge of networking concepts (TCP/IP, DNS, load balancing)

Preferred skills:
- AWS Professional-level certifications (Solutions Architect/DevOps Engineer)
- Experience with container orchestration (ECS, Lambda)
- Knowledge of CI/CD pipelines
- Experience with multi-region architectures
- Extensive experience with Linux/Unix environments
- Experience utilizing AWS cloud solutions in a DevOps environment
- Experience in automating, deploying, and supporting large-scale infrastructure
- Experience building services using AWS products and CDK
- Strong background with CI/CD pipelines and build processes
- Experience with container orchestration (EKS, ECS)

Education or Certification: Any Graduation
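To give a flavor of what a single cross-region runbook step might look like, here is a minimal boto3 sketch of copying an AMI from a source region to a target region. The regions, AMI ID, and image name are placeholders, not values from this posting.

```python
# Minimal sketch of one cross-region migration step: copying an AMI from a
# source region into a target region with boto3. All IDs are placeholders.
import boto3

SOURCE_REGION = "us-east-1"
TARGET_REGION = "ap-south-1"

def copy_ami(source_image_id: str, name: str) -> str:
    """Copy an AMI into the target region and return the new image ID."""
    ec2_target = boto3.client("ec2", region_name=TARGET_REGION)
    response = ec2_target.copy_image(
        Name=name,
        SourceImageId=source_image_id,
        SourceRegion=SOURCE_REGION,
    )
    return response["ImageId"]

if __name__ == "__main__":
    new_id = copy_ami("ami-0123456789abcdef0", "app-server-migrated")
    print("Copied AMI:", new_id)
```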

Posted 3 weeks ago

Apply

3.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office


About the job: Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best funded startups and established companies across the globe. Please visit our website at

What You'll Do
- Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets
- Tune Kafka for performance, reliability, and cost-efficiency
- Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC
- Automate deployments across AWS, GCP, or Azure
- Set up monitoring and alerting with Prometheus, Grafana, and JMX Exporter
- Integrate Kafka ecosystem components: Connect, Streams, Schema Registry
- Define autoscaling, resource limits, and network policies for Kubernetes workloads
- Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows

You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage)
- Proficient in Kafka setup, tuning, scaling, and topic/partition management
- Skilled in managing Kafka on Kubernetes using Strimzi, Helm, and Terraform
- Experience with CI/CD, containerization, and GitOps workflows
- Monitoring expertise using Prometheus, Grafana, and JMX
- Experience on EKS, GKE, or AKS preferred
- Strong troubleshooting and incident response mindset
- High sense of ownership and automation-first thinking
- Excellent collaboration with SREs, developers, and platform teams
- Clear communicator, documentation-driven, and eager to mentor/share knowledge
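To illustrate the kind of lightweight tooling this role leans on, here is a minimal, illustrative health check against a Kafka cluster using the confluent-kafka Python client. The bootstrap address follows the Strimzi service-naming convention as a placeholder, and TLS/SASL settings are omitted.

```python
# Minimal health-check sketch for a Kafka cluster using confluent-kafka's
# AdminClient. The bootstrap address is a Strimzi-style placeholder;
# security settings (TLS/SASL) are intentionally omitted.
from confluent_kafka.admin import AdminClient

def list_topics(bootstrap: str = "my-cluster-kafka-bootstrap:9092") -> None:
    admin = AdminClient({"bootstrap.servers": bootstrap})
    metadata = admin.list_topics(timeout=10)  # fetches cluster metadata
    print(f"Brokers: {len(metadata.brokers)}")
    for name, topic in sorted(metadata.topics.items()):
        print(f"{name}: {len(topic.partitions)} partitions")

if __name__ == "__main__":
    list_topics()
```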

Posted 3 weeks ago

Apply

8.0 - 12.0 years

15 - 19 Lacs

Pune

Work from Office


Key Responsibilities: - Architect and implement end-to-end AWS cloud solutions leveraging services such as EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, S3, CloudFront, and VPC to support scalable and resilient applications. - Define cloud infrastructure blueprints and automation pipelines using Infrastructure as Code (IaC) tools like Terraform, AWS CloudFormation, or AWS CDK for repeatable and auditable deployments. - Establish security architecture frameworks by implementing least-privilege IAM policies, encryption standards (KMS), network segmentation (VPC/Subnet/NACL/Security Groups), and compliance monitoring via AWS Config and GuardDuty. - Optimize performance, cost, and operational efficiency through advanced monitoring (CloudWatch, X-Ray), auto-scaling strategies, and cost analysis tools like AWS Cost Explorer and Trusted Advisor. - Lead architectural reviews and hands-on collaboration with development, DevOps, and data engineering teams to integrate AWS services into CI/CD pipelines, containerized workloads, and event-driven architectures.
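As one concrete illustration of the least-privilege IAM work described above, here is a minimal boto3 sketch that codifies a read-only S3 policy. The policy name and bucket ARN are placeholders only, not values from this posting.

```python
# Minimal sketch of codifying a least-privilege IAM policy with boto3.
# The policy name, bucket ARN, and action scope are illustrative only.
import json
import boto3

iam = boto3.client("iam")

READ_ONLY_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }
    ],
}

def create_policy() -> str:
    """Create the managed policy and return its ARN for attachment to roles."""
    response = iam.create_policy(
        PolicyName="example-app-s3-read-only",
        PolicyDocument=json.dumps(READ_ONLY_S3_POLICY),
    )
    return response["Policy"]["Arn"]

if __name__ == "__main__":
    print("Created policy:", create_policy())
```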

Posted 3 weeks ago

Apply

Exploring EKS Jobs in India

The job market for EKS (Elastic Kubernetes Service) professionals in India is rapidly growing as more companies are adopting cloud-native technologies. EKS is a managed Kubernetes service provided by Amazon Web Services (AWS), allowing users to easily deploy, manage, and scale containerized applications using Kubernetes.
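For example, once AWS credentials are configured, EKS-managed clusters can be inspected programmatically. The sketch below, which assumes boto3 and an illustrative region, simply lists the clusters in an account and prints their Kubernetes versions and statuses.

```python
# Minimal sketch: inspecting EKS clusters in an account with boto3.
# The region is illustrative; AWS credentials are assumed to be configured.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")

def show_clusters() -> None:
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        print(f"{name}: Kubernetes {cluster['version']}, status {cluster['status']}")

if __name__ == "__main__":
    show_clusters()
```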

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their strong technology sectors and have a high demand for EKS professionals.

Average Salary Range

The average salary range for EKS professionals in India varies based on experience level:
  • Entry-level: ₹4-6 lakhs per annum
  • Mid-level: ₹8-12 lakhs per annum
  • Experienced: ₹15-25 lakhs per annum

Career Path

A typical career path in EKS may include roles such as:
  • Junior EKS Engineer
  • EKS Developer
  • EKS Administrator
  • EKS Architect
  • EKS Consultant

Related Skills

Besides EKS expertise, professionals in this field are often expected to have knowledge or experience in:
  • Kubernetes
  • Docker
  • AWS services
  • DevOps practices
  • Infrastructure as Code (IaC)

Interview Questions

  • What is EKS and how does it differ from self-managed Kubernetes clusters? (basic)
  • How do you monitor the performance of EKS clusters? (medium)
  • Can you explain the process of deploying a new application on EKS? (medium)
  • What are the key security considerations for EKS deployments? (medium)
  • How do you handle scaling in EKS based on varying workloads? (medium; a short sketch follows this list)
  • What is the difference between a Deployment and a StatefulSet in Kubernetes? (advanced)
  • How do you troubleshoot networking issues in an EKS cluster? (advanced)
  • Explain the concept of a Pod in Kubernetes and its significance in EKS. (basic)
  • What tools do you use for managing and monitoring EKS clusters? (medium)
  • How do you ensure high availability for applications running on EKS? (medium)
  • Describe the process of upgrading Kubernetes versions in an EKS cluster. (medium)
  • How do you optimize resource utilization in EKS clusters? (medium)
  • What are the advantages of using EKS over self-managed Kubernetes clusters? (basic)
  • Can you explain the concept of a Service in Kubernetes and its role in EKS? (basic)
  • How do you handle persistent storage in EKS for stateful applications? (medium)
  • What is the role of a ConfigMap in Kubernetes and how is it used in EKS? (basic)
  • How do you automate the deployment process in EKS? (medium)
  • Explain the concept of a Namespace in Kubernetes and its significance in EKS. (basic)
  • How do you ensure security compliance in EKS deployments? (medium)
  • What are the best practices for managing secrets in EKS clusters? (medium)
  • How do you implement CI/CD pipelines for applications deployed on EKS? (medium)
  • Describe a challenging issue you faced in managing an EKS cluster and how you resolved it. (advanced)
  • How do you handle rolling updates in EKS deployments? (medium)
  • What are the key considerations for disaster recovery planning in EKS? (medium)
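
The scaling question above can be illustrated with a short sketch: adjusting the replica count of a Deployment running on EKS with the official kubernetes Python client. The deployment name and namespace are placeholders, and the kubeconfig is assumed to already point at the cluster (for example via aws eks update-kubeconfig).

```python
# Minimal sketch: manually adjusting replica counts on an EKS-hosted Deployment
# using the official kubernetes Python client. The deployment name and
# namespace are placeholders; kubeconfig is assumed to target the EKS cluster.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()  # reads the local kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_deployment("orders-api", "default", replicas=5)
```

In practice, interviewers usually expect this manual approach to be complemented by a Horizontal Pod Autoscaler and, at the node level, Cluster Autoscaler or Karpenter.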

Closing Remark

As you explore opportunities in the EKS job market in India, remember to showcase your expertise in EKS, Kubernetes, and related technologies during interviews. Prepare thoroughly, stay updated with industry trends, and apply confidently to secure exciting roles in this fast-growing field. Good luck!
