4 - 7 years
12 - 16 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled Full Stack Developer with hands-on experience in Python, GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design. On the frontend, strong skills in Angular, ReactJS, and TypeScript are required.

### Roles and Responsibilities
- Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR.
- Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
- Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
- Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
- Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
- Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate.
- Configure and manage container orchestration, scaling, and deployment strategies, and optimize container performance and resource utilization by tuning settings and configurations.
- Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
- Develop and configure monitoring, logging, and alerting systems that provide insight into application performance and health, creating dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting.
- Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow, and diagnose and troubleshoot issues related to application performance, availability, and reliability.
- Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

### Job Requirements
- Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
- Experience in developing and deploying AI solutions with Python and JavaScript.
- Strong background in machine learning, deep learning, and data modeling.
- Good understanding of Agile methodologies and version control systems like Git.
- Familiarity with container orchestration concepts and tools, including Kubernetes and Docker Swarm.
- Understanding of AWS security best practices, including IAM, KMS, and encryption.
- Observability tools: proficiency with AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
- Monitoring: experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
- Collaboration: strong teamwork and communication skills, with the ability to work effectively with cross-functional teams.
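For a sense of the backend stack this posting names (Python with FastAPI and RESTful API design), here is a minimal, hedged sketch. The service name, routes, and in-memory store are illustrative assumptions only; a real deployment would sit behind API Gateway and persist to DynamoDB or RDS as described above.

```python
# Minimal REST backend sketch with FastAPI (one of the frameworks named above).
# Routes, models, and the in-memory store are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    sku: str
    quantity: int

_orders: dict[str, Order] = {}  # stand-in for DynamoDB/RDS in a real deployment

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    _orders[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in _orders:
        raise HTTPException(status_code=404, detail="order not found")
    return _orders[order_id]
```

Run locally with `uvicorn main:app --reload`; in the architecture described above, such a service would typically be containerized for ECS/EKS or packaged as a Lambda behind API Gateway.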
Posted 1 month ago
5 - 7 years
25 - 35 Lacs
Bengaluru
Work from Office
Senior Data Scientist

Experience: 5 - 7 years | Salary: up to INR 35 Lacs per annum | Preferred Notice Period: within 30 days | Shift: 10:00 AM to 7:00 PM IST | Opportunity Type: Onsite (Bengaluru) | Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Azure DevOps or GitHub, Azure SQL DB or Cosmos DB, EKS, ETL, MLOps, supply chain, Machine Learning
Good-to-have skills: CI/CD, Databricks, Demand Forecasting, Spark

aioneers (one of Uplers' clients) is looking for a Senior Data Scientist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
As a Senior Data Scientist, you will be a pivotal member of the data science team at aioneers, leading the data science work streams within our supply chain analytics projects. These projects often involve solving complex supply chain problems such as demand forecasting, multi-echelon inventory optimization, production scheduling, intelligent order fulfillment, and rough-cut capacity planning (RCCP). You will also provide technical guidance and mentorship to junior data scientists on the team.
- Own the end-to-end life cycle of machine learning solutions, from data modelling, feature engineering, and ML solution structuring to MLOps implementation for automated model serving.
- Act as a technology architect to design efficient MLOps processes for large-scale model deployments and high-frequency serving using Azure data and ML services.
- Provide thought leadership to solution architects and project managers to shape effective solution architectures for clients' problems.
- Build heuristics and solutions based on operations research techniques (such as linear programming and discrete optimization) to solve optimization problems in the supply chain space.
- Lead the data engineering activities within projects to create the required data models and feature stores for ML implementations, along with the post-processing needed to make outputs consumable for business use cases.

Your Profile
We are looking for someone with 5+ years of relevant data science and machine learning experience solving supply chain problems.

Data Science and Machine Learning Skills
- Understanding of statistical methods (e.g., regression, hypothesis testing) and optimization techniques such as linear programming and mixed-integer programming for supply chain problems.
- Proficiency in methods like ARIMA, SARIMAX, and Prophet, or advanced techniques using neural networks (e.g., LSTMs, Temporal Fusion Transformer).
- Familiarity with supervised and unsupervised learning for classification (e.g., demand segmentation) and clustering (e.g., supplier categorization).
- Knowledge of CI/CD pipelines for ML, including retraining, deployment, and monitoring of models using Azure DevOps or GitHub Actions.

Supply Chain Domain Knowledge
- Seasoned expertise in demand forecasting using ML, including the nuances of intermittent, erratic, and lumpy demand patterns and how to address them with ML techniques.
- Knowledge of EOQ, reorder point models, safety stock modelling, and inventory simulation techniques would be a plus.

Programming and Technical Skills
- Expertise in setting up end-to-end MLOps processes: model training, deployment, and experiment tracking.
- Expertise in creating data pipelines for ETL processes and connecting supply chain data sources.
- Proficiency in integrating ERP data from systems like SAP into Azure via connectors or APIs.
- Deploying scalable ML models as APIs using AKS (Kubernetes).
- Expertise in handling large-scale supply chain datasets using Spark, Databricks, or Azure Synapse.
- Advanced query skills in Azure SQL Database or Cosmos DB for real-time analytics.
- Advanced proficiency in Python for ML modelling and data analysis, with libraries such as Scikit-learn, PyTorch, and TensorFlow.
- Version control and automated deployments using Azure DevOps or GitHub Actions.
- Ability to think through automation, pipeline design, and other MLOps processes.
- Conceptual and pragmatic knowledge of data modelling, feature engineering, fine-tuning machine learning models, and statistical model validation.

Educational Background and Experience
- Engineering degree in computer science, informatics, data analytics, or another relevant branch.
- Affinity for new technologies and a drive for independent learning.
- Affinity for an open feedback culture with flat hierarchies.

Why aioneers?
At aioneers, we are building the next generation of innovative solutions in supply chain technology. We offer a wonderful team culture, flexible work hours, respect for your ideas, open discussions and open-door policies, and attractive remuneration. Your results count, not the hours. You will have the chance to actively participate in the development and execution of innovative business strategies on an international scale.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply! and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: aioneers is a software and consulting company headquartered in Mannheim, Germany. We help businesses optimize their supply chain using our best-in-class supply chain expertise and our AI-powered technology, the AIO Platform.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
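The forecasting skills this posting asks for (ARIMA/SARIMAX-style demand models) can be illustrated with a short, hedged sketch using statsmodels; the synthetic monthly series and the seasonal order are assumptions chosen for demonstration, not client data.

```python
# Hedged demand-forecasting sketch with SARIMAX (statsmodels).
# The synthetic monthly series and the (p,d,q)(P,D,Q,s) orders are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
months = pd.date_range("2019-01-01", periods=60, freq="MS")   # 5 years of monthly demand
t = np.arange(60)
demand = 200 + 2 * t + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, 60)
series = pd.Series(demand, index=months)

train, test = series[:-6], series[-6:]                        # hold out the last 6 months
fit = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

forecast = fit.forecast(steps=6)
mape = (abs(forecast - test) / test).mean() * 100
print(f"6-month holdout MAPE: {mape:.1f}%")
```

Intermittent or lumpy demand, which the posting calls out, would typically need different treatment (for example, Croston-style methods) rather than a plain SARIMAX fit.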
Posted 1 month ago
2 - 5 years
4 - 7 Lacs
Hyderabad
Work from Office
Overview
Be part of the DevOps and cloud infrastructure team responsible for cloud security, infrastructure provisioning, and maintaining existing platforms, and provide our partner teams with guidance for building, maintaining, and optimizing integration and deployment pipelines as code for deploying our applications to AWS and Azure.

Responsibilities
- Deploy infrastructure in the AWS and Azure clouds using Terraform and infrastructure-as-code best practices.
- Triage and troubleshoot incidents related to our AWS and Azure cloud resources.
- Participate in the development of CI/CD workflows to take applications from build to deployment using modern DevOps tools such as Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Create automation and tooling using Python, Bash, or any OOP language.
- Configure monitoring and respond to incidents triggered by our existing notification systems.
- Ensure our existing CI/CD pipelines operate without interruption and are continuously optimized as needed.
- Create automation, tooling, and integrations in our Jira projects that make your life easier and benefit the entire org and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.

Qualifications
- 2+ years of experience deploying infrastructure to Azure and AWS platforms: AKS, EKS, ECR, ACR, Key Vault, IAM, Entra, VPC, VNET, etc.
- 1+ year of experience using Terraform or writing Terraform modules.
- 1+ year of experience with Git and GitLab or GitHub.
- 1+ year of experience creating CI/CD pipelines in any templatized format (YAML or Jenkins).
- 1+ year of Kubernetes experience, ideally running workloads in a production environment.
- 1+ year of Bash and any other OOP language.
- Good understanding of the software development lifecycle.
- Familiarity with monitoring tools like Datadog, Splunk, etc., and with automated build processes and tools.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Cloud Security Posture Management.
- Current skills in the following technologies: Kubernetes, Terraform.
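As a hedged illustration of the "automation and tooling using Python" responsibility above, the sketch below uses boto3 to flag EKS clusters running below a minimum Kubernetes version; the version threshold and region are assumptions, not values from the posting.

```python
# Hedged sketch: flag EKS clusters below an assumed minimum Kubernetes version.
# The threshold and region are illustrative assumptions; pagination is omitted for brevity.
import boto3

MIN_VERSION = (1, 29)  # assumed policy threshold

def outdated_eks_clusters(region: str = "us-east-1") -> list[str]:
    eks = boto3.client("eks", region_name=region)
    flagged = []
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        major, minor = (int(part) for part in cluster["version"].split("."))
        if (major, minor) < MIN_VERSION:
            flagged.append(f"{name} (v{cluster['version']})")
    return flagged

if __name__ == "__main__":
    for item in outdated_eks_clusters():
        print("Upgrade needed:", item)
```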
Posted 1 month ago
5 - 7 years
8 - 10 Lacs
Bengaluru
Work from Office
We are actively seeking an exceptionally motivated individual who thrives on continuous learning and embraces the dynamic environment of a high-velocity team. Joining the Content Productization & Delivery (CPD) organization at Thomson Reuters, you will play a pivotal role in ensuring the quality, reliability, and availability of critical systems. These systems provide a suite of infrastructure services supporting a common set of search and information retrieval capabilities necessary for Thomson Reuters's research-based applications and APIs across its core products. Your responsibilities will encompass delivering content via shared services that underpin all our Tax and Legal Research products.

About the role: In this opportunity as a Senior Software Engineer, you will:
- Actively participate and collaborate in meetings, processes, agile ceremonies, and interactions with other technology groups.
- Work with Lead Engineers and Architects to develop high-performing and scalable software solutions that meet requirement and design specifications.
- Provide technical guidance, mentoring, or coaching to software or systems engineering teams distributed across geographic locations.
- Proactively share knowledge and best practices on using new and emerging technologies across the development and testing groups.
- Assist in identifying and correcting software performance bottlenecks.
- Provide regular progress and status updates to management.
- Provide technical support to operations or other development teams by assisting in troubleshooting, debugging, and solving critical issues in the production environment promptly to minimize user and revenue impact.
- Interpret code and solve problems based on existing standards.
- Create and maintain all required technical documentation and manuals related to assigned components to ensure supportability.

About You: You're a fit for the role of Senior Software Engineer if your background includes:
- Bachelor's or master's degree in computer science, engineering, information technology, or equivalent experience
- 5+ years of professional software development experience
- 2+ years of experience with Java and REST-based services
- 2+ years of Python experience
- Ability to debug and diagnose issues
- Experience with version control (Git, GitHub)
- Experience working with various AWS technologies (DynamoDB, S3, EKS)
- Experience with Linux, Infrastructure as Code, and CI/CD pipelines
- Excellent and creative problem-solving skills
- Strong written and oral communication skills
- Knowledge of artificial intelligence: AWS Bedrock, Azure OpenAI, Large Language Models (LLMs), prompt engineering
Posted 1 month ago
7 - 11 years
9 - 11 Lacs
Bengaluru
Work from Office
Lead Software Engineer: We are seeking a highly motivated and experienced Lead Software Engineer to join our dynamic team. As a key member of our team, you will contribute to the development of innovative, cutting-edge solutions, collaborating with cross-functional teams and driving the delivery of new products and features that meet our customer needs.

About the Role:
- Work closely with business partners and stakeholders to identify requirements and prioritize new enhancements and features
- Collaborate with software engineers, architects, technical management, and business partners across geographical and organizational boundaries
- Assist in architecture direction and finalize design with Architects
- Break down deliverables into meaningful stories for the development team
- Provide technical leadership, mentoring, and coaching to software or systems engineering teams
- Share knowledge and best practices on using new and emerging technologies
- Provide technical support to operations or other development teams by troubleshooting, debugging, and solving critical issues
- Interpret code and solve problems based on existing standards
- Create and maintain technical documentation related to assigned components

About You: To be successful in this role, you should have:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or equivalent experience
- 7+ years of professional software development experience
- Strong technical skills, including:
  - Python programming (3+ years)
  - AWS experience with EKS/Kubernetes
  - Experience with LLMs, AI solutions, and evaluation
  - Understanding of agentic systems and workflows
  - Experience with retrieval systems leveraging tools like OpenSearch
  - Experience with event-driven/asynchronous programming
  - Experience with high-concurrency systems
  - Experience with CI/CD using GitHub Actions and AWS services (CodePipeline/CodeBuild)
  - Strong understanding of microservices and RESTful APIs
  - FastAPI
  - Celery
  - Data engineering background
  - Experience with AWS services (Redis, DynamoDB, S3, SQS, Kinesis, KMS, IAM, Secrets Manager, etc.)
  - Performance optimization and security practices
- Self-driven with the ability to work with minimal direction
- Strong context-switching abilities
- Problem-solving mindset
- Clear communication
- Strong documentation habits
- Attention to detail
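Since this posting names FastAPI, Celery, and event-driven/asynchronous processing together, here is a hedged sketch of the Celery side: a task that a web handler can enqueue without blocking the request. The broker URL and task body are assumptions for illustration.

```python
# Hedged sketch: a Celery task offloaded from a web request path.
# Broker/backend URLs and the task body are illustrative assumptions.
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(bind=True, max_retries=3)
def index_document(self, doc_id: str) -> str:
    """Pretend to push a document into a retrieval index (e.g., OpenSearch)."""
    try:
        # A real task would call the search/index client here.
        return f"indexed {doc_id}"
    except Exception as exc:
        raise self.retry(exc=exc, countdown=5)

# A FastAPI (or other) handler would enqueue work asynchronously:
#   index_document.delay("doc-123")
# and a worker started with `celery -A tasks worker` picks it up.
```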
Posted 1 month ago
6 - 10 years
8 - 12 Lacs
Bengaluru
Work from Office
When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What you'll be doing...
- Turn ideas into innovative products, with design, development, deployment, and support throughout the product/software development life cycle.
- Architect and develop key software components of high-quality products.
- Architect and expand GNSS network-related offerings for the Verizon Location Services products.
- Participate in requirement gathering, idea validation, and concept prototyping.
- Design end-to-end solutions to bring ideas into innovative products.
- Refine product designs to provide an excellent user experience.
- Develop/code key software components of products.
- Integrate key software components with various systems such as Thingspace, Frisco, Gizmo, Smart Home, etc.
- Work with system engineers to create system/network designs and architecture.
- Present technical product information to internal audiences and executives.
- Work with performance engineers to refine software design and code to improve performance and capacity.
- Use agile and iterative methods to demo product features and refine the user experience.

What we're looking for...
You'll need to have:
- Bachelor's degree or six or more years of work experience.
- Experience in developing software products.
- Experience working with GNSS technology (RTK, PPP, DGPS, GPS).
- Experience working with RTKLIB is greatly desired.
- Experience with agile software development.
- Advanced knowledge of application, data, and infrastructure architecture disciplines.
- Understanding of architecture and design across all systems.
- Experience with Java/J2EE, Spring Boot/MVC, JMS, and Kafka.
- Experience designing and developing APIs.
- Knowledge of databases (Oracle), Linux/Unix, and NoSQL DBs (e.g., MongoDB, HBase).
- Knowledge of microservice architecture, cloud computing, Docker containers, RESTful APIs, and EKS.
- Familiarity with developing and deploying services in AWS.
- Knowledge of object-oriented design, Agile Scrum, and test-driven development.
- Good communication skills and the ability to present technical information in a clear and concise manner.

Even better if you have:
- Experience working with RINEX and RTCM formatted data.

#TPDNONCDIO

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Diversity and Inclusion: We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our page to learn more.
Posted 1 month ago
5 - 10 years
5 - 10 Lacs
Hyderabad, Pune
Work from Office
Role & responsibilities: We require a resource with 4+ years of experience in performance testing.
- Conducting capacity planning
- Preparing impact assessments for new project requirements
- Workload modeling, Gatling scripting, and scenario creation
- Stub creation and deployment
- CI/CD using Harness and Jenkins
- Performance testing (load and soak testing)
- Cloud services: Amazon ElastiCache, RDS, ECS, EKS, CloudFormation
- Performance monitoring tools: AppDynamics, Grafana, Splunk, and AWS CloudWatch
- Analysis and reporting
- Conduct reviews and grant approval for impact assessments and test summary reports from other performance test engineers
- Version control: Git, GitHub
- Confluence, Rally, and Control Center
Interested candidates can share their updated resume at recruiter.wtr26@walkingtree.in
Posted 1 month ago
7 - 10 years
20 - 35 Lacs
Pune
Hybrid
8+ yrs of exp in S/W dev with a focus on AWS solutions and architecture. Experience architecting applications using EKS. AWS certifications - AWS Certified Solutions Architect. Design, develop, and implement microservices-based applications on AWS using Java.
Posted 1 month ago
6 - 10 years
15 - 30 Lacs
Pune
Remote
5+ yrs exp in the AWS ecosystem – EKS, EC2, DynamoDB, Lambda. Should have worked with an observability team and have Dynatrace experience. Site monitoring, trend analysis, log analysis, and implementing capacity planning strategies. Good DevOps practices.
Posted 1 month ago
3 - 8 years
13 - 15 Lacs
Bengaluru
Work from Office
Company Profile:
Job Title: Senior Software Engineer - Node JS, Terraform with AWS
Position: Senior Software Engineer | Experience: 5-8 Years | Category: Software Development/Engineering | Main location: Bangalore | Position ID: J0525-0430 | Employment Type: Full Time

Responsibilities and skills:
- Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose).
- Proficiency in using Git for version control.
- Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Design and implement efficient database schemas and ensure data integrity.
- Write clean, well-documented, and testable code.
- Participate in code reviews to ensure code quality and adherence to coding standards.
- Troubleshoot and debug issues in development and production environments.
- Knowledge of security best practices for web applications (authentication, authorization, data validation).
- Strong communication and collaboration skills, with the ability to interact with technical and non-technical stakeholders.

Required qualifications to be successful in this role:
We are looking for an experienced developer to join our team. The ideal candidate should be passionate about coding and developing scalable, high-performance applications. You will work closely with our front-end developers, designers, and other members of the team to deliver quality solutions that meet the needs of our clients.

Qualification: Bachelor's degree in Computer Science or a related field (or higher) with a minimum of 3 years of relevant experience.

Must-Have Skills:
- Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript.
- Develop and consume RESTful APIs and integrate with third-party services.
- In-depth knowledge of the AWS cloud, including services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR (Elastic MapReduce), EKS (Elastic Kubernetes Service), and Key Management Service.
- Hands-on experience with Terraform.
- Specialization in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation.
- Experience in deploying Spark-based ingestion frameworks, testing automation tools, and CI/CD pipelines.
- Knowledge of unit testing frameworks and best practices.
- Working experience with SQL and NoSQL (preferred) databases, including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement techniques.
- Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases.

Skills: Node.js, RESTful APIs (REST APIs), Terraform
Posted 1 month ago
3 - 5 years
3 - 6 Lacs
Hyderabad
Work from Office
AWS Cloud Engineer

What you will do
The AWS Cloud Engineer will be responsible for maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments.

AWS Infrastructure Design & Implementation
- Implement and manage highly available AWS cloud environments.
- Maintain VPCs, subnets, security groups, and IAM policies to enforce security best practices.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.

Infrastructure as Code (IaC) & Automation
- Maintain and enhance Terraform & CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools & scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.

Cloud Security & Compliance
- Enforce best practices in IAM, encryption, and network security.
- Ensure compliance with SOC2, ISO27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.

Monitoring & Performance Optimization
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement autoscaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.

Collaboration & DevOps Practices
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps standard processes for cloud infrastructure versioning.
- Support on-call rotation for high-priority cloud incidents.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of computer science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of computer science, IT, or related field experience; OR
- Diploma and 7 to 9 years of computer science, IT, or related field experience
- Hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform & CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications:
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.

Professional Certifications (preferred):
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with multi-functional teams.
- Ability to work in a fast-paced, cloud-first environment.

Shift Information: This position is required to be onsite and to participate in 24/5 and weekend on-call rotations, and may require you to work a later shift.
Candidates must be willing and able to work off hours, as required based on business requirements.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us at careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
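As a hedged illustration of the proactive-monitoring item in the posting above (CloudWatch alarms and alerting), the sketch below creates a CPU alarm with boto3; the instance ID, threshold, and SNS topic ARN are placeholder assumptions.

```python
# Hedged sketch: create a CloudWatch CPU alarm with boto3.
# Instance ID, threshold, and SNS topic ARN are placeholder assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute samples
    EvaluationPeriods=3,      # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    AlarmDescription="Sustained CPU above 80% - notify the on-call rotation",
)
print("alarm created")
```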
Posted 1 month ago
2 - 6 years
3 - 6 Lacs
Hyderabad
Work from Office
AWS Cloud Engineer

What you will do
The AWS Cloud Engineer will be responsible for maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments.

AWS Infrastructure Design & Implementation
- Implement and manage highly available AWS cloud environments.
- Maintain VPCs, subnets, security groups, and IAM policies to enforce security best practices.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.

Infrastructure as Code (IaC) & Automation
- Maintain and enhance Terraform & CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools & scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.

Cloud Security & Compliance
- Enforce best practices in IAM, encryption, and network security.
- Ensure compliance with SOC2, ISO27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.

Monitoring & Performance Optimization
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement autoscaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.

Collaboration & DevOps Practices
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps standard processes for cloud infrastructure versioning.
- Support on-call rotation for high-priority cloud incidents.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 0 to 3 years of computer science, IT, or related field experience; OR
- Diploma and 4 to 7 years of computer science, IT, or related field experience
- Hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform & CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications:
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.

Professional Certifications (preferred):
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with multi-functional teams.
- Ability to work in a fast-paced, cloud-first environment.

Shift Information: This position is required to be onsite and to participate in 24/5 and weekend on-call rotations, and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.
What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us at careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 month ago
8 - 12 years
27 - 30 Lacs
Maharashtra
Work from Office
- Experience in building efficient backend services using Node.js / NestJS (Mandatory)
- Experience in building infrastructure resources on AWS efficiently using Terraform (Mandatory)
- Experience in implementing CI/CD pipelines with tools like GitHub Actions / AWS CodePipeline (Preferred)
- Experience in working with Docker / EKS (Mandatory)
- Experience in developing interactive and responsive user interfaces using ReactJS and Tailwind CSS (Preferred)
- Experience with any of the following tools is an added advantage: Kafka / Keycloak / Grafana / Elasticsearch
- Strong problem-solving skills in a fast-paced environment
Posted 1 month ago
6 - 8 years
22 - 25 Lacs
Maharashtra
Work from Office
- Experience in building efficient backend services using Node.js / NestJS (Mandatory)
- Experience in building infrastructure resources on AWS efficiently using Terraform (Mandatory)
- Experience in implementing CI/CD pipelines with tools like GitHub Actions / AWS CodePipeline (Preferred)
- Experience in working with Docker / EKS (Mandatory)
- Experience in developing interactive and responsive user interfaces using ReactJS and Tailwind CSS (Preferred)
- Experience with any of the following tools is an added advantage: Kafka / Keycloak / Grafana / Elasticsearch
- Strong problem-solving skills in a fast-paced environment
Posted 1 month ago
5 - 10 years
20 - 27 Lacs
Bengaluru
Work from Office
About Zscaler
Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced DevSecOps Engineer to join our team. Reporting to a Director of Engineering, you'll be responsible for:
- Designing/architecting, implementing, and supporting end-to-end CI/CD systems for mission-critical distributed application deployments on Zscaler private and public clouds such as AWS and GCP.
- Assisting developers with merging, resolving conflicts, and creating and managing pre-commit hooks, and owning administration of DevSecOps tools such as GitLab, GitHub, Bitbucket, Bamboo, Jenkins, Grafana, Prometheus, Artifactory, ArgoCD/Flux, etc.
- Security of the code, applications, and infrastructure, with strong working experience in security scanning (SAST/SCA/DAST) tools such as SonarQube, Snyk, BlackDuck, Coverity, CheckMarx, TruffleHog, etc.
- Automating infrastructure provisioning and configuration (IaC) using tools like Terraform, Chef, Ansible, Puppet, etc.
- Tracking and monitoring build metrics such as code coverage, build times, build queue times, and usage/consumption of build agents, and charting them over time using tools such as Prometheus, Grafana, CloudWatch, Splunk, Loki, etc.

What We're Looking For (Minimum Qualifications)
- Bachelor of Engineering/Technology degree in Computer Science, Information Technology, or a related field, with at least 4 years of hands-on experience managing AWS, Google Cloud (GCP), and/or private cloud environments.
- Strong application development/automation experience with one of the OOP languages: C/C++/Java/Python/Go.
- Experience with SAST, SCA, DAST, and secret scans, and familiarity with scanning tools such as SonarQube, Snyk, Coverity, BlackDuck, CheckMarx, TruffleHog, etc.
- Experience with container orchestration technologies such as Docker, Podman, Kubernetes, and EKS/GKE, and proficiency in automation using tools such as Terraform, CloudFormation, Ansible, Chef, Puppet, etc.
- Experience with Git and GitOps-based pipelines using GitLab, GitHub, or Bitbucket, and CI automation tools like Jenkins, GitHub Actions, and Bamboo.

What Will Make You Stand Out (Preferred Qualifications)
- Experience writing and developing YAML-based CI/CD pipelines using GitLab or GitHub, and knowledge of build tools like Makefiles/Gradle/npm/Maven.
- Experience with networking, load balancers, firewalls, and web security.
- Experience with AI and ML tools in day-to-day DevSecOps activities.

#LI-Onsite #LI-AC10

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules.

Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
Posted 1 month ago
12 - 22 years
17 - 22 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and implement robust DevOps architectures for cloud-native applications, including IoT solutions, utilizing Azure and AWS services.
- Develop and execute a comprehensive DevOps strategy and roadmap aligned with company objectives.
- Architect and maintain scalable, secure, and highly available cloud infrastructure.
- Lead and mentor a team of DevOps engineers, fostering a culture of collaboration and innovation.
- Oversee the development of CI/CD pipelines using tools such as Jenkins, GitHub, GitLab, and ArgoCD to optimize the software delivery process.
- Implement best practices for code quality, testing, and deployment to facilitate rapid and reliable software delivery.
- Drive automation initiatives with Terraform and Ansible to improve operational efficiency.
- Continuously monitor cloud environments to proactively identify and address performance issues, outages, and security threats.
- Conduct security audits and implement best practices to ensure compliance with regulatory requirements.
- Collaborate with cross-functional teams to troubleshoot and resolve infrastructure issues efficiently.
- Stay abreast of industry trends and advancements to ensure a competitive technology stack.
- Architect and implement Continuous Integration and Continuous Deployment workflows, enhancing automation pipelines.
- Design and implement scalable DevOps architecture for nightly builds, pull requests, zero-downtime production releases, rollbacks, and GitFlow processes.

Requirements:
- 13+ years of experience in software development, system architecture, or IT operations, including at least 5 years in a leadership role.
- Proven expertise in designing and implementing cloud architecture in AWS and Azure.
- Demonstrated experience in implementing end-to-end CI/CD solutions, including SAST and DAST, in public and private cloud platforms.
- Excellent communication and interpersonal skills, with the ability to lead a high-performing team.
- Experience with configuration management tools (e.g., Ansible, Puppet, Chef).
- Strong expertise in Infrastructure as Code (IaC) tools, particularly Terraform.
- Proficiency in containerization and orchestration technologies (e.g., Kubernetes, AKS, EKS, KEDA).
- Experience architecting and implementing automated pipelines for OS installation, software updates, network configuration, packaging, deployments, and version management.
- Familiarity with IoT services such as Azure IoT Hub and AWS IoT Core, and their integration into cloud solutions.
- Proven experience in pre-sales activities for DevOps/CloudOps; develop and present proof-of-concept (PoC) demos for both internal and external audiences.
- Ability to thrive in a fast-paced, dynamic work environment.
- Bachelor's degree in computer science, engineering, or a related field.
Posted 1 month ago
7 - 9 years
30 - 32 Lacs
Chennai, Bengaluru
Work from Office
Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience with both monolithic and microservices architectures are preferred. Keywords: Cloud Engineer, AWS Migration, AWS MGN
Posted 1 month ago
10 - 15 years
30 - 35 Lacs
Bengaluru
Work from Office
Expertise in Spring Boot, API Gateway, OAuth, and Kubernetes (EKS) orchestration. Hands-on experience in CI/CD pipeline automation, DevSecOps best practices, and performance tuning. Strong knowledge of AWS networking, IAM policies, and security compliance. Required candidate profile: Bachelor's/Master's in Computer Science, IT, or a related field. AWS Certified Solutions Architect / Kubernetes certification. 10-15 yrs of experience in backend architecture, API security, and cloud-based microservices.
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Type: Full time | Posted: 30+ days ago | Job requisition ID: JR0034909

Job Title: SDET

About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today's most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work.

Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. The team is the ultimate quality gate before shipping to customers. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more. Work on cutting-edge technology and AI-driven analysis.

About the role:
- Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
- Execute new feature and regression cases manually, as needed for a product release.
- Identify critical issues and communicate them effectively in a timely manner.
- Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixing, is an essential trait for this job.
- Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency.
- Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous.
- Execute, monitor, and debug automation runs.
- Author automation code to improve coverage across the board.
- Willingness to explore and increase understanding of cloud/on-prem infrastructure.

About you:
- 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
- Ability to quickly learn a product or concept: its feature set, capabilities, functionality, and nitty-gritty.
- Solid fundamentals in any programming language (preferably Python or Java) and OOPS concepts; hands-on experience with CI/CD using Jenkins or similar is a must.
- RESTful API testing using tools such as Postman or similar is desired.
- Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required. Exposure to Docker, Helm, and ArgoCD is an added advantage.
- Strong foundational knowledge in working on Linux-based systems, including setting up Git repos, user management, network configurations, use of package managers, etc.
- Hands-on experience with non-functional testing, such as performance and load testing, is desirable. Exposure to Locust or JMeter will be an added advantage.
- Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have.
- Understanding of endpoint security concepts around Endpoint Detection and Response (EDR) would be advantageous.
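For the non-functional testing item above, which names Locust as an advantage, here is a minimal hedged load-test sketch; the host, endpoints, and payloads are illustrative assumptions, not Trellix APIs.

```python
# Hedged Locust load-test sketch. Host, endpoints, and payloads are assumptions.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)          # each simulated user pauses 1-3 s between tasks
    host = "https://api.example.com"   # placeholder target

    @task(3)
    def list_detections(self):
        self.client.get("/v1/detections", name="GET /v1/detections")

    @task(1)
    def create_case(self):
        self.client.post(
            "/v1/cases",
            json={"title": "load-test case", "severity": "low"},
            name="POST /v1/cases",
        )
```

A headless run such as `locust -f locustfile.py --headless -u 50 -r 5 --run-time 10m` would drive 50 simulated users for ten minutes.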
Company Benefits and Perks: We work hard to embrace diversity and inclusion and encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
We're serious about our commitment to diversity, which is why we prohibit discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
Posted 1 month ago
1 - 6 years
8 - 13 Lacs
Pune
Work from Office
Cloud Observability Administrator

Pune, India | Enterprise IT - 22685

Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS is looking for a Cloud Observability Administrator to join our team in Pune. As a Cloud Observability Administrator, you will work on the configuration of various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and utilize past experience to design a flexible and scalable solution, and collaborate with other team members (involved in the requirements gathering, testing, roll-out, and operations phases) to ensure seamless transitions.

What You'll Do:
- Deploy, manage, and operate scalable, highly available, and fault-tolerant Splunk architecture.
- Onboard various kinds of log sources, such as Windows/Linux/firewalls/network devices, into Splunk.
- Develop alerts, dashboards, and reports in Splunk.
- Write complex SPL queries.
- Manage and administer a distributed Splunk architecture.
- Very good knowledge of the configuration files used in Splunk for data ingestion and field extraction.
- Perform regular upgrades of Splunk and relevant apps/add-ons.
- Possess a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc.
- Automate manual tasks using shell/PowerShell scripting; knowledge of Python scripting is a plus.
- Good knowledge of Linux commands for server administration.

What You'll Bring:
- 1+ years of experience in Splunk development and administration, and a Bachelor's degree in CS, EE, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- 1-1.5 years of relevant consulting-industry experience working on medium-to-large-scale technology solution delivery engagements.
- Strong verbal, written, and team presentation communication skills, with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Ability to work within a virtual global team environment and contribute to the overall timely delivery of multiple projects.
- Knowledge of observability tools such as Cribl, Datadog, and PagerDuty is a plus.
- Knowledge of AWS Prometheus and Grafana is a plus.
- Knowledge of APM concepts is a plus.
- Knowledge of Linux/Python scripting is a plus.
- Splunk certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority.
While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Indore, Ahmedabad
Work from Office
Key Responsibilities:
• Design, deploy, and manage AWS infrastructure using Terraform and Docker.
• Manage and optimize Kubernetes clusters in EKS to ensure smooth and efficient operations.
• Proficiency in CI/CD tools such as Jenkins, GitHub Actions, or Bitbucket Pipelines.
• Collaborate with cross-functional teams using GitHub, Jira, and Confluence to streamline workflows and achieve project goals.
• Ensure robust security practices across all infrastructure, applications, and CI/CD pipelines.
• Leverage expertise in Datadog monitoring to track and improve system performance.
• Write and maintain strong shell scripts and Python scripts for automation and operational efficiency.
• Utilize Infrastructure as Code (IaC) with Terraform and configuration management with Ansible.
• Strong expertise in AWS core services (EC2, S3, RDS, Lambda, CloudWatch, Config, Control Tower, DynamoDB, EKS).
• Knowledge of networking and security architectures (VNets, firewalls, NATs, ACLs, security groups, routing).
• Implement best practices for infrastructure and application monitoring, scaling, and disaster recovery.

Required Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role in the IT industry.
• 5+ years of experience in cloud infrastructure engineering, with a strong focus on automation.
• 5+ years of experience in the implementation, configuration, and maintenance of DevOps and AWS environments.
• Strong proficiency in Linux and shell scripting.
• Extensive experience with AWS, including the use of Terraform for infrastructure provisioning.
• Proficiency in managing Kubernetes clusters, particularly in EKS.
• In-depth knowledge of CI/CD pipelines.
• Familiarity with Python code linting tools and best practices for clean, efficient code.
• Strong working knowledge of GitHub, Jira, and Confluence for collaboration and project management.
• Expertise in Docker for containerization and orchestration.
• Strong focus on security best practices in infrastructure and application development.
• Solid experience with Datadog for monitoring and logging.
• Excellent problem-solving, communication, and teamwork skills.
Posted 1 month ago
8 - 13 years
15 - 25 Lacs
Pune
Work from Office
Experience: 8+ years | Job location: Pune | Notice period: 30 days

Job Description: Cloud Application Developer
• 8+ years of experience in software development with a focus on AWS solutions architecture.
• Proven experience in architecting microservices-based applications using EKS.
• Relevant AWS certifications - AWS Certified Solutions Architect.

Roles & Responsibilities:
• Design, develop, and implement robust microservices-based applications on AWS using Java.
• Lead the architecture and design of EKS-based solutions, ensuring seamless deployment and scalability.
• Collaborate with cross-functional teams to gather and analyze functional requirements, translating them into technical specifications.
• Define and enforce best practices for software development, including coding standards, code reviews, and documentation.
• Identify non-functional requirements such as performance, scalability, security, and reliability; ensure these are met throughout the development lifecycle.
• Conduct architectural assessments and provide recommendations for improvements to existing systems.
• Mentor and guide junior developers in best practices and architectural principles.
• Proficiency in the Java programming language, with experience in frameworks such as Spring Boot.
• Strong understanding of RESTful APIs and microservices architecture.
• Experience with AWS services, especially EKS, Lambda, S3, RDS, DynamoDB, and CloudFormation.
• Familiarity with CI/CD pipelines and tools like Jenkins or GitLab CI.
• Ability to design data models for relational and NoSQL databases.
• Experience in designing applications for high availability, fault tolerance, and disaster recovery.
• Knowledge of security best practices in cloud environments.
• Strong analytical skills to troubleshoot performance issues and optimize system efficiency.
• Excellent communication skills to articulate complex concepts to technical and non-technical stakeholders.
Posted 1 month ago
10 - 15 years
15 - 27 Lacs
Noida, Bengaluru
Work from Office
• 10+ years of hands-on DevOps experience, with at least 3 years in a lead or senior hands-on role.
• Strong proficiency with infrastructure-as-code tools (Terraform, AWS CloudFormation).
• Experience with containerization (Docker, ECS, EKS, AKS).

Required Candidate profile:
• AWS and/or Azure certifications (e.g., AWS Solutions Architect Professional, DevOps Engineer; any Azure certification is good to have) [must have].
• GitHub and SonarQube integration is required.
Posted 1 month ago
3 - 6 years
4 - 9 Lacs
Chennai, Bengaluru, Delhi / NCR
Hybrid
Hi, urgent opening for a DevSecOps Engineer with EY GDS at a Pan India location. Please apply if available for a virtual interview on 17th May 2025. https://careers.ey.com/job-invite/1590844/ Based on your availability, we will share invites after your application.

Experience: 3-6 yrs | Location: Pan India

Mandatory Skills: Terraform (write and modify code), CI/CD, Kubernetes (AKS, EKS, GKE), Python. Ansible (write, modify, update) - good to have.

Desired Profile:
- Any Bachelor's degree
- 3-6 years of hands-on experience in Cloud and DevOps roles.
- Proficiency in Terraform for infrastructure as code.
- Strong experience with at least one major cloud platform (AWS, Azure, or GCP).
- Solid understanding and practical experience with CI/CD concepts and tools (Jenkins, GitLab CI, CircleCI, etc.).
- Hands-on experience with Kubernetes and Helm charts.
- Proficiency in Python or strong scripting skills (Bash, PowerShell, etc.).
- Experience with containerization technologies (Docker).
- Excellent problem-solving skills and a proactive attitude.
- Strong communication and collaboration skills.

Technical Skills & Certifications: Relevant OEM-level certifications, e.g., certifications in cloud platforms (AWS Certified Solutions Architect, Azure Administrator, Google Professional Cloud Architect), Terraform, Kubernetes.
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Roles and Responsibilities
- Design, develop, test, deploy, and maintain scalable cloud-based applications using AWS EKS.
- Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs.
- Ensure high availability, scalability, security, and performance of deployed applications on Amazon EC2 instances.
- Troubleshoot issues related to containerized applications running on Fargate or Lambda functions.
- Participate in code reviews to ensure adherence to coding standards and best practices.
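Since this posting mentions troubleshooting applications running on Lambda, here is a minimal hedged sketch of a Lambda handler behind an API Gateway proxy integration; the route and response body are illustrative assumptions.

```python
# Hedged sketch: minimal Lambda handler for an API Gateway proxy integration.
# The query parameter and response body are illustrative assumptions.
import json

def handler(event, context):
    """Respond to GET /hello?name=<name> with a JSON greeting."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```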
Posted 1 month ago
The job market for EKS (Elastic Kubernetes Service) professionals in India is rapidly growing as more companies are adopting cloud-native technologies. EKS is a managed Kubernetes service provided by Amazon Web Services (AWS), allowing users to easily deploy, manage, and scale containerized applications using Kubernetes.
Major technology hubs such as Bengaluru, Hyderabad, Pune, and Chennai are known for their strong technology sectors and have a high demand for EKS professionals.
The average salary range for EKS professionals in India varies based on experience level:
- Entry-level: ₹4-6 lakhs per annum
- Mid-level: ₹8-12 lakhs per annum
- Experienced: ₹15-25 lakhs per annum
A typical career path in EKS may include roles such as:
- Junior EKS Engineer
- EKS Developer
- EKS Administrator
- EKS Architect
- EKS Consultant
Besides EKS expertise, professionals in this field are often expected to have knowledge of or experience with:
- Kubernetes
- Docker
- AWS services
- DevOps practices
- Infrastructure as Code (IaC)
As you explore opportunities in the EKS job market in India, remember to showcase your expertise in EKS, Kubernetes, and related technologies during interviews. Prepare thoroughly, stay updated with industry trends, and apply confidently to secure exciting roles in this fast-growing field. Good luck!