Jobs
Interviews

638 EKS Jobs - Page 18

Set up a Job Alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onwards, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Lead AWS Cloud Engineer! In this role, you will own the vision, architecture, and governance of cloud infrastructure supporting scalable, secure, and high-performing AI/GenAI platforms across the enterprise. Your mandate includes building resilient, compliant, and cost-efficient cloud ecosystems, primarily on AWS, but with a strong foundation for multi-cloud operability.

Responsibilities: Define and maintain cloud infrastructure architecture across AWS accounts, environments, and regions. Architect multi-tenant, secure VPC and networking models supporting cross-account and hybrid integrations. Standardize the Infrastructure-as-Code (Terraform) strategy for AI/ML/GenAI workloads across teams. Govern security frameworks, including encryption, IAM boundary enforcement, secrets management, and logging. Oversee cloud automation in CI/CD pipelines and support deployment of GenAI workloads (LLM APIs, vector DBs). Design, review, and implement disaster recovery, backup, and high availability strategies. Optimize cloud cost and performance with tagging, resource planning, and usage analytics. Define and support multi-cloud readiness, including network peering, SSO/SAML, and logging across clouds. Collaborate with MLOps, Compliance, InfoSec, and Architecture teams to align infrastructure with enterprise goals. Engage in the design, development, and maintenance of data pipelines for various AI use cases. Contribute actively to key deliverables as part of an agile development team. Collaborate with others to source, analyse, test, and deploy data processes.

Qualifications we seek in you! Minimum Qualifications: Hands-on AWS infrastructure experience in production environments. Experience developing, testing, and deploying data pipelines. Clear and effective communication skills to interact with team members, stakeholders, and end users. Degree/qualification in Computer Science or a related field, or equivalent work experience. Knowledge of governance and compliance policies, standards, and procedures. Proven ability to manage enterprise-wide IaC, AWS CLI, and Python or Bash scripting, with a versioning strategy. Expertise in IAM, S3, DevOps, VPC, ECS/EKS, Lambda, and serverless computing. Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking). Hands-on experience with multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups). Deep understanding of CI/CD automation, AI workload optimization, and infrastructure governance. Hands-on experience designing or managing infrastructure in at least one other cloud (Azure or GCP). Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure. AWS Certified Solutions Architect - Professional or Advanced Networking Specialty.

Preferred Qualifications/Skills: Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP). Designed or migrated enterprise workloads to multi-cloud or hybrid setups. Experience with cross-cloud monitoring, networking (VPNs, Transit Gateways), and DR policies. Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters). Strong understanding of DevSecOps best practices and compliance requirements. In-depth exposure to regulated industries (BFSI, healthcare) requiring auditability and compliance.

Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
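The role above emphasizes cloud cost governance through tagging and usage analytics. Purely as an illustrative sketch (not taken from the posting), the following Python script uses boto3 to flag EC2 instances missing cost-allocation tags; the REQUIRED_TAGS set and region are assumptions standing in for a real tagging policy.

```python
# Illustrative sketch: report EC2 instances missing cost-allocation tags.
# Assumes AWS credentials are already configured (environment or IAM role);
# the REQUIRED_TAGS set is a hypothetical tagging policy.
import boto3

REQUIRED_TAGS = {"CostCenter", "Owner", "Environment"}  # hypothetical policy

def find_untagged_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    offenders.append((instance["InstanceId"], sorted(missing)))
    return offenders

if __name__ == "__main__":
    for instance_id, missing in find_untagged_instances():
        print(f"{instance_id} is missing tags: {', '.join(missing)}")
```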

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Experience in modernizing applications to a container-based platform using EKS, ECS, and Fargate. Proven experience using DevOps tools during modernization. Solid experience with NoSQL databases. Should have used an orchestration engine such as Kubernetes or Mesos. Java 8, Spring Boot, SQL, Postgres DB, and AWS. Secondary skills: React, Redux, JavaScript. Knowledge of AWS deployment services (AWS Elastic Beanstalk, AWS tools & SDKs, AWS Cloud9, AWS CodeStar, the AWS Command Line Interface, etc.) and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch, and AWS SNS. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be an expert in at least one programming language such as Java, .NET, Node.js, Python, Ruby, or Angular.js. Preferred technical and professional experience: Experience in distributed/scalable systems. Knowledge of standard tools for optimizing and testing code. Knowledge/experience of the development/build/deploy/test life cycle.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

Experience in modernizing applications to a container-based platform using EKS, ECS, and Fargate. Proven experience using DevOps tools during modernization. Solid experience with NoSQL databases. Should have used an orchestration engine such as Kubernetes or Mesos. Java 8, Spring Boot, SQL, Postgres DB, and AWS. Secondary skills: React, Redux, JavaScript. Knowledge of AWS deployment services (AWS Elastic Beanstalk, AWS tools & SDKs, AWS Cloud9, AWS CodeStar, the AWS Command Line Interface, etc.) and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch, and AWS SNS. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be an expert in at least one programming language such as Java, .NET, Node.js, Python, Ruby, or Angular.js. Preferred technical and professional experience: Experience in distributed/scalable systems. Knowledge of standard tools for optimizing and testing code. Knowledge/experience of the development/build/deploy/test life cycle.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Noida

Work from Office

"Ensure platform reliability and performance: Monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS). Automate operations: Develop and maintain automation for infrastructure provisioning, scaling, and incident response. Incident response & on-call support: Participate in on-call rotations to quickly detect, mitigate, and resolve production incidents. Kubernetes upgrades & management: Own and drive Kubernetes version upgrades, node pool scaling, and security patches. Observability & monitoring: Implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting. Infrastructure as Code (IaC): Manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests. Cross-functional collaboration: Work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability. Performance tuning: Analyze and optimize cloud and containerized workloads for cost efficiency and high availability. Security & compliance: Ensure platform security best practices, incident response, and compliance adherence.." Required education None Preferred education Bachelor's Degree Required technical and professional expertise Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration. Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure. Proficiency in Terraform, Helm, and Infrastructure as Code (IaC). Strong understanding of Linux systems, networking, and security best practices. Experience with monitoring & logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.). Hands-on experience with automation & scripting (Python, Bash, or Go). Preferred technical and professional experience Experience in incident management & debugging complex distributed systems. Familiarity with CI/CD pipelines and release automation.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are seeking a Senior DevOps Software Engineer to join the Labs software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications. Roles & Responsibilities: Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB. Configure and manage CI/CD pipelines in GitLab to streamline the deployment process. Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance. Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly. Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure. Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance. Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle. Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies. Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly. Functional Skills: Deep expertise in designing and maintaining CI/CD pipelines and enabling software engineering best practices and the overall software product development lifecycle. Ability to implement automated testing, build, deployment, and rollback strategies. Advanced proficiency managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking, and optimization. Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred). Experience with databases (Postgres/DynamoDB). Experience with microservices architecture and containerization (Docker, Kubernetes). Good-to-Have Skills: Familiarity with enterprise software systems in life sciences or healthcare domains. Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark). Knowledge of data security, privacy regulations, and scalable software solutions. Soft Skills: Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation. Basic Qualifications: Bachelor's degree in Computer Science, AI, Software Engineering, or a related field. 5+ years of experience in full-stack software engineering.
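The posting mentions deploying AI systems with SageMaker on the AWS stack. As a hedged illustration of what integrating a deployed model endpoint can look like, the sketch below invokes a SageMaker inference endpoint with boto3; the endpoint name and payload shape are hypothetical.

```python
# Illustrative sketch: calling a deployed SageMaker inference endpoint from a service.
# The endpoint name and payload shape are placeholders; assumes AWS credentials and
# an "application/json" serving container.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def predict(payload: dict, endpoint_name: str = "my-genai-endpoint") -> dict:
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(predict({"inputs": "Summarize the quarterly report."}))
```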

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 22 Lacs

Pune

Hybrid

So, what’s the role all about? The Senior Specialist Technical Support Engineer role is to deliver technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics, and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization. How will you make an impact? Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS. Interface with various R&D groups, customer support teams, business partners, and customers globally to address CSS Recording and Compliance application-related product issues and resolve high-level issues. Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar. Collaborate with development and operations teams to identify root causes and implement long-term solutions. Provide technical guidance and mentorship to junior support engineers. Act as an escalation point for critical customer issues, ensuring timely resolution and communication. Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency. Participate in on-call rotations and incident response efforts. Have you got what it takes? 10+ years of experience in technical support, site reliability engineering, or performance engineering roles. Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation. Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments. Strong knowledge of networking, load balancing, and distributed systems. Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Excellent communication and customer-facing skills. Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer). Experience with observability platforms (e.g., Prometheus, Grafana, Splunk). Familiarity with CI/CD pipelines and DevOps practices. Experience working in ITIL or similar support frameworks. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7554 Reporting into: Tech Manager Role Type: Individual Contributor
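Since the role above centers on analyzing logs and metrics during performance investigations, here is a small, purely illustrative Python example that runs a CloudWatch Logs Insights query to surface the slowest requests; the log group name and the @duration field are assumptions about the application's logging format.

```python
# Illustrative sketch: run a CloudWatch Logs Insights query to find the slowest
# requests in a log group during a performance investigation. Log group name and
# queried fields are assumptions about the application's log format.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

def slowest_requests(log_group="/app/api", minutes=60, limit=10):
    now = int(time.time())
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=now - minutes * 60,
        endTime=now,
        queryString=(
            "fields @timestamp, @duration, @message "
            "| sort @duration desc "
            f"| limit {limit}"
        ),
    )["queryId"]
    # Poll until the query finishes, then return the result rows.
    while True:
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result["results"]
        time.sleep(1)
```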

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Gurugram

Remote

Minimum 6 years of hands-on experience deploying, enhancing, and troubleshooting foundational AWS services (EC2, S3, RDS, VPC, CloudTrail, CloudFront, Lambda, EKS, ECS, etc.)
• 3+ years of experience with serverless technologies, services, and container technologies (Docker, Kubernetes, etc.)
o Manage Kubernetes charts using Helm.
o Managed production application deployments in a Kubernetes cluster using kubectl.
o Expertise in deploying distributed apps with containers (Docker) and orchestration (Kubernetes, EKS).
o Experience with infrastructure-as-code tools for provisioning and managing Kubernetes infrastructure.
o (Preferred) Certification in container orchestration systems and/or Certified Kubernetes Administrator.
o Experience with log management and analytics tools such as Splunk / ELK
• 3+ years of experience writing, debugging, and enhancing Terraform to write infrastructure as code and create scripts for EKS, EC2, S3, and other AWS services.
o Expertise with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation.
o Implemented cluster services using Kubernetes and Docker to manage local deployments in Kubernetes by building self-hosted Kubernetes clusters using Terraform.
o Managed provisioning of AWS infrastructure using Terraform.
o Develop and maintain infrastructure-as-code solutions using Terraform.
• Ability to write scripts in JavaScript, Bash, Python, TypeScript, or similar languages.
• Able to work independently and as part of a team to architect and implement new solutions and technologies.
• Very strong written and verbal communication skills; the ability to communicate verbally and in writing with all levels of employees and management, capable of successful formal and informal communication, and speaks and writes clearly and understandably at the right level.
• Ability to identify, evaluate, learn, and POC new technologies for implementation.
• Experience in designing and implementing highly resilient AWS solutions.
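The requirements above combine Terraform-provisioned EKS clusters with scripting. As an illustrative (not prescribed) example of the glue scripting involved, the snippet below pulls EKS cluster connection details with boto3; the cluster name and region are placeholders.

```python
# Illustrative sketch: fetch EKS cluster connection details with boto3, the kind of
# glue scripting used alongside Terraform-provisioned clusters. Cluster name and
# region are placeholders.
import boto3

def get_cluster_info(cluster_name="platform-eks", region="us-east-1"):
    eks = boto3.client("eks", region_name=region)
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]
    return {
        "endpoint": cluster["endpoint"],
        "version": cluster["version"],
        "ca_data": cluster["certificateAuthority"]["data"],
        "status": cluster["status"],
    }

if __name__ == "__main__":
    print(get_cluster_info())
```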

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Mumbai, Maharashtra, India

On-site

Design containerized & cloud-native microservices architecture. Plan & deploy modern application platforms & cloud-native platforms. Good understanding of AGILE process & methodology. Plan & implement solutions & best practices for process automation, security, alerting & monitoring, and availability solutions. Should have a good understanding of infrastructure-as-code deployments. Plan & design CI/CD pipelines across multiple environments. Support and work alongside a cross-functional engineering team on the latest technologies. Iterate on best practices to increase the quality & velocity of deployments. Sustain and improve the process of knowledge sharing throughout the engineering team. Keep updated on modern technologies & trends, and advocate the benefits. Should possess good team management skills. Ability to drive goals/milestones, while valuing & maintaining a strong attention to detail. Excellent judgement, analytical & problem-solving skills. Excellent communication skills. Experience maintaining and deploying highly-available, fault-tolerant systems at scale. Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.). Version control system experience (e.g. Git, SVN). Experience implementing CI/CD (e.g. Jenkins, TravisCI). Experience with configuration management tools (e.g. Ansible, Chef). Experience with infrastructure-as-code (e.g. Terraform, CloudFormation). Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda). Container registry solutions (Harbor, JFrog, Quay, etc.). Operational experience (e.g. HA/backups). NoSQL experience (e.g. Cassandra, MongoDB, Redis). Good understanding of Kubernetes networking & security best practices. Monitoring tools like Datadog, or other open-source tools like Prometheus, Nagios. Load balancer knowledge (AVI Networks, NGINX). Location: Pune / Mumbai [Work from Office]

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Pune, Maharashtra, India

On-site

Design containerized & cloud-native microservices architecture. Plan & deploy modern application platforms & cloud-native platforms. Good understanding of AGILE process & methodology. Plan & implement solutions & best practices for process automation, security, alerting & monitoring, and availability solutions. Should have a good understanding of infrastructure-as-code deployments. Plan & design CI/CD pipelines across multiple environments. Support and work alongside a cross-functional engineering team on the latest technologies. Iterate on best practices to increase the quality & velocity of deployments. Sustain and improve the process of knowledge sharing throughout the engineering team. Keep updated on modern technologies & trends, and advocate the benefits. Should possess good team management skills. Ability to drive goals/milestones, while valuing & maintaining a strong attention to detail. Excellent judgement, analytical & problem-solving skills. Excellent communication skills. Experience maintaining and deploying highly-available, fault-tolerant systems at scale. Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.). Version control system experience (e.g. Git, SVN). Experience implementing CI/CD (e.g. Jenkins, TravisCI). Experience with configuration management tools (e.g. Ansible, Chef). Experience with infrastructure-as-code (e.g. Terraform, CloudFormation). Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda). Container registry solutions (Harbor, JFrog, Quay, etc.). Operational experience (e.g. HA/backups). NoSQL experience (e.g. Cassandra, MongoDB, Redis). Good understanding of Kubernetes networking & security best practices. Monitoring tools like Datadog, or other open-source tools like Prometheus, Nagios. Load balancer knowledge (AVI Networks, NGINX). Location: Pune / Mumbai [Work from Office]

Posted 1 month ago

Apply

3.0 - 7.0 years

3 - 14 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for an experienced Cloud Engineer - AWS Cloud App Migration Specialist to join our team. This role involves migrating on-premises applications to AWS, optimizing cloud infrastructure, and ensuring seamless transitions across various AWS services. The ideal candidate will have hands-on experience with EC2, EKS, S3, EBS, and EFS, and a strong understanding of the Lift and Shift migration process. The candidate will lead the AWS migration efforts for monolithic and microservices applications, leveraging AWS migration tools and automation to simplify and accelerate the process. You'll collaborate with cross-functional teams to resolve migration challenges and ensure application reliability in the cloud. Key Responsibilities: Plan and execute AWS migrations of on-prem applications using AWS MGN (AWS Application Migration Service). Implement Lift and Shift migrations and utilize the 7Rs strategy (Rehost, Replatform, Repurchase, Refactor, Retire, Retain, and Relocate). Strong hands-on experience with AWS EC2, ASG, EBS, EFS, and ALB to ensure proper application performance and scalability. Design and implement automated application migrations, leveraging best practices and cloud migration tools. Collaborate with cross-functional teams to troubleshoot migration issues and provide effective solutions. Work on monolithic and microservices web application migration strategies, ensuring minimal disruption and optimal performance post-migration. Optimize cloud infrastructure and recommend improvements to ensure cost-effective and high-performing AWS environments. Qualifications: 3+ years of hands-on experience working with AWS cloud services, particularly EC2, EKS, S3, EBS, and EFS. Strong knowledge of AWS migration tools (e.g., AWS MGN) and techniques for cloud application migration. Proficiency in Terraform for cloud infrastructure automation and management. Deep understanding of AWS cloud services, architecture, and best practices for application migrations. AWS Certification (e.g., AWS Certified Solutions Architect - Associate or AWS Certified DevOps Engineer) and Terraform certification are preferred. Strong analytical and troubleshooting skills, with the ability to resolve migration challenges quickly and efficiently. Excellent collaboration and communication skills, working with multiple teams to drive migration success.
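The posting describes classifying workloads under the 7Rs migration strategy. Purely as a sketch of supporting tooling, the Python script below builds a simple EC2 inventory that could feed such an assessment; the MigrationStrategy tag is a hypothetical convention, and real assessments would also draw on AWS MGN or Migration Hub data.

```python
# Illustrative sketch: build a simple EC2 inventory to feed a 7Rs
# (rehost/replatform/refactor/...) assessment. The "MigrationStrategy" tag is a
# hypothetical convention, not an AWS or employer standard.
import boto3

def migration_inventory(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    rows = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                rows.append({
                    "id": inst["InstanceId"],
                    "type": inst["InstanceType"],
                    "platform": inst.get("PlatformDetails", "Linux/UNIX"),
                    "strategy": tags.get("MigrationStrategy", "unclassified"),
                })
    return rows

if __name__ == "__main__":
    for row in migration_inventory():
        print(row)
```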

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 13 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests, and performance tests. Work in a team environment using agile practices. Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments. Work closely with cross-functional teams to ensure seamless integration and operation of services. Proficiency in JavaScript for full-stack development. Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools, including Kubernetes.

Posted 1 month ago

Apply

6.0 - 11.0 years

35 - 45 Lacs

Noida

Remote

Job Title: Backend Engineer Python & Microservices Location: Remote Employment Type: Full-time Experience Required: 5–7+ years Industry: SaaS / Energy / Mobility / Cloud Infrastructure About the Role We are looking for a highly skilled and autonomous Backend Engineer with deep expertise in Python, microservices architecture, and API design to join a high-impact engineering team working on scalable internal tools and enterprise SaaS platforms. You will play a key role in system architecture, PoC development, and cloud-native service delivery, collaborating closely with cross-functional teams. Key Responsibilities Design and implement robust, scalable microservices using Python and related frameworks. Develop and maintain high-performance, production-grade RESTful APIs and background jobs. Lead or contribute to PoC architecture, system modularization, and microservice decomposition. Design and manage relational and NoSQL data models (PostgreSQL, MongoDB, DynamoDB). Build scalable, async batch jobs and distributed processing pipelines using Kafka, RabbitMQ, and SQS. Drive best practices around error handling, logging, security, and observability (Grafana, CloudWatch, Datadog). Collaborate across engineering, product, and DevOps to ship reliable features in cloud environments (AWS preferred). Contribute to documentation, system diagrams, and CI/CD pipelines (Terraform, GitHub Actions). Requirements 5–7+ years of hands-on experience as a backend engineer Strong proficiency in Python (Flask, FastAPI, Django, etc.) Solid experience with microservices architecture and containerized environments (Docker, Kubernetes, EKS) Proven expertise in REST API design, rate limiting, security, and performance optimization Familiarity with NoSQL & SQL databases (MongoDB, PostgreSQL, DynamoDB, ClickHouse) Experience with cloud platforms (AWS, Azure, or GCP – AWS preferred) CI/CD and Infrastructure as Code (Jenkins, GitHub Actions, Terraform) Exposure to distributed systems, data processing, and event-based architectures (Kafka, SQS) Excellent written and verbal communication skills Bonus: Experience integrating with tools like Zendesk, Openfire, or ticketing/chat systems Preferred Qualifications Bachelor’s or Master’s degree in Computer Science or related field Certifications in System Design or Cloud Architecture Experience working in agile, distributed teams with a strong ownership mindset
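Given the emphasis above on Python microservices, REST APIs, and SQS-based asynchronous processing, here is a minimal, illustrative FastAPI service that accepts a job and enqueues it to SQS; the queue URL and the Job model are placeholders, not part of the actual platform.

```python
# Illustrative sketch: a minimal FastAPI microservice that accepts a job and
# enqueues it to SQS for asynchronous processing. Queue URL and payload model
# are placeholders; run locally with `uvicorn app:app`.
import json
import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

class Job(BaseModel):
    customer_id: str
    action: str

@app.get("/healthz")
def healthz():
    # Liveness probe endpoint for Kubernetes/EKS deployments.
    return {"status": "ok"}

@app.post("/jobs")
def submit_job(job: Job):
    # Hand the work off to a queue so the API stays fast under load.
    response = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job.dict()))
    return {"message_id": response["MessageId"]}
```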

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Cloud Application Integration Lead [8-10+ years of relevant experience] Job Purpose and primary objectives: Application Integration Lead with good hands-on knowledge of AWS integration-related native services. Experience and expertise in AWS Cloud services. Key responsibilities/Skills/Knowledge
• Define and architect solutions using various AWS cloud-native services.
• Architect and design application solutions, focusing on application integration using various serverless and cloud-native services such as SQS, SNS, Pub/Sub architecture, Lambda, Kinesis Firehose, EKS, AWS S3, and AWS API Gateway.
• 3+ years of direct experience architecting/designing high-throughput applications on AWS. Must have experience with resiliency, reliability, and high availability engineering.
• Proven ability to architect, design, and implement cloud-based and/or cloud-native solutions, and extensive knowledge of API Gateway and API-based integrations.
• Hands-on experience with containerization platforms and serverless computing, such as Docker, EKS, and Lambda; knowledge of Kubernetes event-driven autoscaling (KEDA) is preferred.
• Hands-on experience with AWS Elastic Beanstalk and basic experience creating RESTful services.
• Working knowledge of databases such as Amazon Aurora (PostgreSQL or MySQL), DynamoDB, and Redis.
• Experience with migrations to the cloud from both physical and virtual environments.
Experience required:
• Bachelor's degree or equivalent experience in a software engineering discipline
• 8-10+ years of relevant experience in a professional setting as an Infrastructure Architect with cloud experience
• Strong communication skills
• Certifications (preferred) – AWS Solutions Architect Professional and AWS Specialty certifications
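The role above focuses on event-driven integration with SNS, SQS, and pub/sub patterns. As an illustrative sketch only, the following Python function publishes an event to an SNS topic that could fan out to SQS subscribers; the topic ARN and event fields are placeholders.

```python
# Illustrative sketch: publish an event to an SNS topic that fans out to SQS
# subscribers, a common serverless pub/sub pattern on AWS. Topic ARN and event
# shape are placeholders.
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"  # placeholder

def publish_order_event(order_id: str, status: str):
    return sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"orderId": order_id, "status": status}),
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": "ORDER_STATUS_CHANGED"}
        },
    )

if __name__ == "__main__":
    print(publish_order_event("ord-1001", "SHIPPED")["MessageId"])
```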

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Role & responsibilities Key Skills: Java microservices, Spring Boot, Kafka, AWS (ECS Fargate containers, S3, Lambda, Postgres/Mongo/Aurora, Redis cache, NLB, ALB, AWS Route 53), Terraform. Experience in Java/J2EE and Spring Boot. Experience in designing with Kubernetes, AWS EKS, and EC2 is needed (mandatory). Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience with NoSQL/SQL databases such as Cassandra, Aurora, Oracle. Experience with Web Services/SOA (SOAP as well as RESTful with JSON formats) and messaging with Kafka. Hands-on with development and test automation tools or frameworks, e.g., BDD and Cucumber. Interested candidates, share your updated CV at: Sanchit@mounttalent.com

Posted 1 month ago

Apply

4.0 - 7.0 years

9 - 12 Lacs

Pune

Hybrid

So, what’s the role all about? At NiCE, a Senior Software professional specializes in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce. How will you make an impact? Working knowledge of unit testing. Working knowledge of user stories or use cases. Working knowledge of design patterns or equivalent experience. Working knowledge of object-oriented software design. Team player. Have you got what it takes? Bachelor’s degree in Computer Science, Business Information Systems, or a related field, or equivalent work experience, is required. 4+ years (SE) of experience in software development. Well-established technical problem-solving skills. Experience in Java, Spring Boot, and microservices. Experience with Kafka, Kinesis, KDA, Apache Flink. Experience in Kubernetes operators, Grafana, Prometheus. Experience with AWS technology including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc. You will have an advantage if you also have: Experience with Snowflake or any DWH solution. Excellent communication skills, problem-solving skills, and decision-making skills. Experience in databases. Experience in CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments. Strong experience in SQL. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor

Posted 1 month ago

Apply

9.0 - 14.0 years

30 - 37 Lacs

Hyderabad

Hybrid

Primary Responsibilities: Design, implement, and maintain scalable, reliable, and secure infrastructure on AWS and EKS. Develop and manage observability and monitoring solutions using Datadog, Splunk, and Kibana. Collaborate with development teams to ensure high availability and performance of microservices-based applications. Automate infrastructure provisioning, deployment, and monitoring using Infrastructure as Code (IaC) and CI/CD pipelines. Build and maintain GitHub Actions workflows for continuous integration and deployment. Troubleshoot production issues and lead root cause analysis to improve system reliability. Ensure compliance with healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR). Work closely with data engineering and analytics teams to support healthcare data pipelines and analytics platforms. Mentor junior engineers and contribute to SRE best practices and culture. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Bachelor's degree in Engineering (B.Tech) or equivalent in Computer Science, Information Technology, or a related field. 10+ years of experience in Site Reliability Engineering, DevOps, or related roles. Hands-on experience with AWS services, EKS, and container orchestration. Experience with healthcare technology solutions, health data interoperability standards (FHIR, HL7), and healthcare analytics. Experience with GitHub Actions or similar CI/CD tools. Solid expertise in Datadog, Splunk, Kibana, and other observability tools. Deep understanding of microservices architecture and distributed systems. Proficiency in Python for scripting and automation. Solid scripting and automation skills (e.g., Bash, Terraform, Ansible). Proven excellent problem-solving, communication, and collaboration skills. Preferred Qualifications: Certifications in AWS, Kubernetes, or healthcare IT (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator). Experience with security and compliance in healthcare environments.
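SRE roles like the one above often automate routine Kubernetes triage in Python. As a hedged example (not the employer's tooling), the script below uses the official Kubernetes Python client to list pods that are not in a healthy phase across namespaces; it assumes a kubeconfig already points at the EKS cluster.

```python
# Illustrative sketch: list pods that are not Running/Succeeded across all
# namespaces, a quick triage an on-call engineer might run. Assumes a kubeconfig
# pointing at the target EKS cluster and the `kubernetes` client library installed.
from kubernetes import client, config

def list_unhealthy_pods():
    config.load_kube_config()  # use config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase))
    return unhealthy

if __name__ == "__main__":
    for namespace, name, phase in list_unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```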

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Pune

Hybrid

So, what’s the role all about? We are seeking a skilled and experienced Developer with expertise in .NET programming, along with knowledge of LLMs and AI, to join our dynamic team. As a Contact Center Developer, you will be responsible for developing and maintaining contact center applications, with a specific focus on AI functionality. Your role will involve designing and implementing robust and scalable AI solutions, ensuring an efficient agent experience. You will collaborate closely with cross-functional teams, including software developers, system architects, and managers, to deliver cutting-edge solutions that enhance our contact center experience. How will you make an impact? Develop, enhance, and maintain contact center applications with an emphasis on copilot functionality. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Perform system analysis, troubleshooting, and debugging to identify and resolve issues. Conduct regular performance monitoring and optimization of code to ensure optimal customer experiences. Maintain documentation, including technical specifications, system designs, and user manuals. Stay up to date with industry trends and emerging technologies in contact center, AI, LLM, and .NET development and apply them to enhance our systems. Participate in code reviews and provide constructive feedback to ensure high-quality code standards. Deliver high-quality, sustainable, maintainable code. Participate in reviewing design and code (pull requests) for other team members – again with a secure code focus. Work as a member of an agile team responsible for product development and delivery. Adhere to agile development principles while following and improving all aspects of the scrum process. Follow established department procedures, policies, and processes. Adhere to the company Code of Ethics and CXone policies and procedures. Excellent English and experience working in international teams are required. Have you got what it takes? BS or MS in Computer Science or a related degree. 5-8 years’ experience in software development. Strong knowledge of working with and developing microservices. Design, develop, and maintain scalable .NET applications specifically tailored for contact center copilot solutions using LLM technologies. Good understanding of .NET and design patterns, and experience implementing the same. Experience developing with REST APIs. Integrate various components, including LLM tools, APIs, and third-party services, within the .NET framework to enhance functionality and performance. Implement efficient database structures and queries (SQL/NoSQL) to support high-volume data processing and real-time decision-making capabilities. Utilize Redis for caching frequently accessed data and optimizing query performance, ensuring scalable and responsive application behavior. Identify and resolve performance bottlenecks through code refactoring, query optimization, and system architecture improvements. Conduct thorough unit testing and debugging of applications to ensure reliability, scalability, and compliance with specified requirements. Utilize Git or similar version control systems to manage source code and coordinate with team members on collaborative projects. Experience with Docker/Kubernetes is a must. Experience with a cloud service provider - Amazon Web Services (AWS) - is a must.
Experience with AWS Cloud on any technology (Kafka, EKS, and Kubernetes preferred). Experience with Continuous Integration workflows and tooling. Stay updated with industry trends, emerging technologies, and best practices in .NET development and LLM applications to drive innovation and efficiency within the team. You will have an advantage if you also have: Strong communication skills. Experience with a cloud service provider such as Amazon Web Services (AWS), Google Cloud, Azure, or an equivalent cloud provider. Experience with ReactJS. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7443 Reporting into: Sandip Bhattcharjee Role Type: Individual Contributor

Posted 1 month ago

Apply

2.0 - 4.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Description Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest growing and most innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. We are building teams that are designing, developing, and manufacturing next-generation energy technologies, and our work environment is fast-paced, fun, and full of exciting new projects. If you are passionate about advancing a more sustainable future, this is the perfect time to join Enphase! About the role: At Enphase, we think big. We're on a mission to bring solar energy to the next level, one where it's ready to meet the energy demands of an entire globe. As we work towards our vision for a solar-powered planet, we need visionary and talented people to join our team as Senior Back-End engineers. The Back-End engineer will develop, maintain, architect, and expand cloud microservices for the EV (Electric Vehicle) Business team. The codebase uses Java, Spring Boot, Mongo, REST APIs, and MySQL. Applications are dockerized and hosted in AWS using a plethora of AWS services. What you will be doing: Programming in Java + Spring Boot. REST APIs with JSON, XML, etc. for data transfer. Multiple database proficiency, including SQL and NoSQL (Cassandra, MongoDB). Ability to develop both internal-facing and external-facing APIs using JWT and OAuth2.0. Familiar with HA/DR, scalability, performance, and code optimizations. Experience working with high-performance and high-throughput systems. Ability to define, track, and deliver items to one's own schedule. Good organizational skills and the ability to work on more than one project at a time. Exceptional attention to detail and good communication skills. Who you are and what you bring: B.E/B.Tech in Computer Science from a top-tier college and >70% marks. More than 4 years of overall Back-End development experience. Experience with SQL + NoSQL (preferably MongoDB). Experience with Amazon Web Services, JIRA, Confluence, GIT, Bitbucket, etc. Ability to work independently and as part of a project team. Strong organizational skills, proactive, and accountable. Excellent critical thinking and analytical problem-solving skills. Ability to establish priorities and proceed with objectives without supervision. Ability to communicate effectively and accurately, with clear, concise written project status updates throughout the project lifecycle. Highly skilled at facilitating and documenting requirements. Excellent facilitation, collaboration, and presentation skills. Comfort with ambiguity, frequent change, or unpredictability. Good practice of writing clean and scalable code. Exposure to or knowledge of Renewable Tech companies. Good understanding of cloud technologies, such as Docker, Kubernetes, EKS, Kafka, AWS Kinesis, etc. Knowledge of NoSQL database systems like MongoDB or CouchDB, including graph databases. Ability to work in a fast-paced environment.

Posted 1 month ago

Apply

14.0 - 16.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Category Software Engineering Job Details About Salesforce We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too - driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good - you've come to the right place. Role Description Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team comprised of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback, and making learning fun. Your impact - you will: Architect, design, implement, test, and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, RAG solutions. Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.). Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company. Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps. Analyze and provide feedback on product strategy and technical feasibility. Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events. Actively communicate with, encourage, and motivate all levels of staff. Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements. Troubleshoot complex production issues and work with support and customers as needed. Required Skills: 14+ years of experience in building highly scalable Software-as-a-Service applications/platforms. Experience building technical architectures that address complex performance issues. Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity. Innovation/startup mindset to be able to adapt. Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java. Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development. High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.).
Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax. Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL. Experience delivering or partnering with teams that ship AI products at high scale. Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, Selenium. Demonstrated ability to drive long-term design strategies that span multiple complex projects. Experience delivering technical reports and presentations to customers and at industry events. Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues. Experience with the full software lifecycle in highly agile and ambiguous environments. Excellent interpersonal and communication skills. Preferred Skills: Solid experience in API development, API lifecycle management, and/or client SDK development. Experience with machine learning or cloud technology platforms like AWS SageMaker, Terraform, Spinnaker, EKS, GKE. Experience with AI/ML and data science, including predictive and generative AI. Experience with data engineering, data pipelines, or distributed systems. Experience with continuous integration (CI) and continuous deployment (CD), and service ownership. Familiarity with Salesforce APIs and technologies. Ability to support/resolve production customer escalations with excellent debugging and problem-solving skills. BENEFITS & PERKS Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training, with exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community. For more details, visit . Accommodations: If you require assistance due to a disability applying for open positions, please submit a request via this . Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 month ago

Apply

4.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Machine Learning & Data Pipelines: Strong understanding of Machine Learning principles, lifecycle, and deployment practices. Experience in designing and building ML pipelines. Knowledge of deploying ML models on AWS Lambda, EKS, or other relevant services. Working knowledge of Apache Airflow for orchestration of data workflows. Proficiency in Python for scripting, automation, and ML model development with Data Scientists. Basic understanding of SQL for querying and data analysis. Cloud and DevOps Experience: Hands-on experience with AWS services, including but not limited to AWS Glue, Lambda, S3, SQS, SNS. Proficient in checking and interpreting CloudWatch logs and setting up alarms. Infrastructure as Code (IaC) experience using Terraform. Experience with CI/CD pipelines, particularly using GitLab for code and infrastructure deployments. Understanding of cloud cost optimization and budgeting, with the ability to assess the cost implications of various AWS services.
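Because the posting above pairs Apache Airflow orchestration with ML model deployment, here is a minimal, illustrative Airflow DAG wiring extract, train, and deploy steps together; the DAG id, schedule, and task bodies are placeholders rather than a real pipeline.

```python
# Illustrative sketch: a small Airflow DAG that orchestrates an ML workflow
# (extract features, train, deploy). Task bodies are stubs; DAG id, schedule,
# and function names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull raw data from S3 and build features")

def train_model():
    print("train the model and write the artifact to S3")

def deploy_model():
    print("update the Lambda/EKS-served model with the new artifact")

with DAG(
    dag_id="ml_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # Run the steps strictly in order.
    extract >> train >> deploy
```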

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Strong experience with Python, SQL, pySpark, AWS Glue. Good to have - Shell Scripting, Kafka Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed) Experience of AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight etc.). Orchestration using Airflow Good to have - Streaming technologies and processing engines, Kinesis, Kafka, Pub/Sub and Spark Streaming Good debugging skills Should have strong hands-on design and engineering background in AWS, across a wide range of AWS services with the ability to demonstrate working on large engagements. Strong experience and implementation of Data lakes, Data warehousing, Data Lakehouse architectures. Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures. Monitor data systems performance and implement optimization strategies. Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership. Demonstrable knowledge of applying Data Engineering best practices (coding practices to DS, unit testing, version control, code review). Experience in Insurance domain preferred.
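The requirements above center on Python, pySpark, and AWS Glue. As an illustrative skeleton only, the following Glue pySpark job reads a Data Catalog table, applies a simple cleanup, and writes partitioned Parquet to S3; the database, table, and bucket names are placeholders.

```python
# Illustrative sketch: skeleton of an AWS Glue pySpark job that reads from the Glue
# Data Catalog, drops rows with null keys, and writes partitioned Parquet to S3.
# Database, table, column, and bucket names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table, clean it, and write curated output.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="claims_db", table_name="raw_claims"
)
df = dyf.toDF().dropna(subset=["claim_id"])
df.write.mode("overwrite").partitionBy("claim_date").parquet(
    "s3://example-bucket/curated/claims/"
)

job.commit()
```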

Posted 1 month ago

Apply

5.0 - 9.0 years

12 - 20 Lacs

Bengaluru

Work from Office

Experience: 5-8 years. Location: Bangalore. Mode: C2H. Hands-on data engineering experience. Hands-on experience with Python programming. Hands-on experience with AWS & EKS. Working knowledge of Unix, databases, and SQL. Working knowledge of Databricks. Working knowledge of Airflow and DBT.

Posted 1 month ago

Apply

10.0 - 12.0 years

1 - 2 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

As an AWS Cloud Architect, you will be responsible for designing, implementing, and managing our cloud infrastructure on Amazon Web Services (AWS), with a focus on AWS Elastic Kubernetes Service (EKS). Your expertise in DevOps tools such as Git and Jenkins will be crucial for automating deployment pipelines and managing release processes. You will work closely with development teams to ensure seamless integration, scalability, and performance of our cloud-based applications. Overall 10+ years of experience, with at least 7+ years of experience in AWS. Architect and design scalable, secure, and highly available cloud solutions on AWS, with a strong emphasis on AWS EKS. Develop and maintain architecture diagrams, documentation, and best practices. Solution design of requirements, working with domain architects, infrastructure teams, and developers. Capacity and utilization management of current infrastructure; compliance management. Implement and manage CI/CD pipelines using Jenkins for efficient deployment processes. Security scanning and automating the building, packaging, testing, and deployment of applications. Understanding the current landscape, identifying gaps, and assisting in closing these gaps. Job Description Details: Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Oversee and manage the release deployment process, ensuring a smooth transition from development to production environments. Coordinate with development and operations teams to schedule and deploy application updates and enhancements. Troubleshoot and resolve deployment issues, ensuring minimal downtime and disruption. Work closely with cross-functional teams to understand requirements and deliver effective cloud solutions. Provide technical support and guidance to developers and operations teams on AWS best practices and EKS usage. Implement and enforce security policies and controls for cloud resources. Ensure compliance with industry standards and regulatory requirements. Desired Skills and Experience: Experience with Docker containerization and clustering (Kubernetes, Docker, Helm, OpenShift/GCP/Azure/AWS experience). Relevant experience in microservices/API development and deployment. Deep technical skills in at least one core language environment (e.g., Java, Python, Go, etc.). Experience with automation tools such as Jenkins, GitHub Actions, GitLab, Helm. Experience with dev tooling and collaboration tools such as Artifactory, Confluence, Jira. Experience with application development and multi-platform architectures. Strong understanding of security principles and technologies, such as encryption, authentication, and network security. Preferred qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer Professional. Certified Kubernetes Developer or Administrator certification. Experience with other IaC tools such as Terraform. Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Thane

Work from Office

Role & responsibilities : Deploy, configure, and manage infrastructure across cloud platforms like AWS, Azure, and GCP. Automate provisioning and configuration using tools such as Terraform. Design and maintain CI/CD pipelines using Jenkins, GitLab CI, or CircleCI to streamline deployments. Build, manage, and deploy containerized applications using Docker and Kubernetes. Set up and manage monitoring systems like Prometheus and Grafana to ensure performance and reliability. Write scripts in Bash or Python to automate routine tasks and improve system efficiency. Collaborate with development and operations teams to support deployments and troubleshoot issues. Investigate and resolve technical incidents, performing root cause analysis and implementing fixes. Apply security best practices across infrastructure and deployment workflows. Maintain documentation for systems, configurations, and processes to support team collaboration. Continuously explore and adopt new tools and practices to improve DevOps workflows.

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 40 Lacs

Pune

Work from Office

Assessment & Analysis: Review CAST software intelligence reports to identify technical debt, architectural flaws, and cloud readiness. Conduct manual assessments of applications to validate findings and prioritize migration efforts. Identify refactoring needs (e.g., monolithic to microservices, serverless adoption). Evaluate legacy systems (e.g., .NET Framework, Java EE) for compatibility with AWS services. Solution Design: Develop migration strategies (rehost, replatform, refactor, retire) for each application. Architect AWS-native solutions using services like EC2, Lambda, RDS, S3, and EKS. Design modernization plans for legacy systems (e.g., .NET Framework to .NET Core, Java EE to Spring Boot). Ensure compliance with the AWS Well-Architected Framework (security, reliability, performance, cost optimization). Collaboration & Leadership: Work with cross-functional teams (developers, DevOps, security) to validate designs. Partner with clients to align technical solutions with business objectives. Mentor junior architects and engineers on AWS best practices. Roles and Responsibilities Job Title: Senior Solution Architect - Cloud Migration & Modernization (AWS) Location: [Insert Location] Department: Digital Services Reports To: Cloud SL

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies