3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description: DevOps Engineer (Onsite, Mumbai)

Location: Onsite, Mumbai, India
Experience: 3+ years

About the Role:
We are looking for a skilled and proactive DevOps Engineer with 3+ years of hands-on experience to join our engineering team onsite in Mumbai. The ideal candidate will have a strong background in CI/CD pipelines, cloud platforms (AWS, Azure, or GCP), infrastructure as code, and containerization technologies like Docker and Kubernetes. This role involves working closely with development, QA, and operations teams to automate, optimize, and scale our infrastructure.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for efficient and reliable deployment processes
- Manage and monitor cloud infrastructure (preferably AWS, Azure, or GCP)
- Build and manage Docker containers, and orchestrate with Kubernetes or similar tools
- Implement and manage Infrastructure as Code using tools like Terraform, CloudFormation, or Ansible
- Automate configuration management and system provisioning tasks
- Monitor system health and performance using tools like Prometheus, Grafana, ELK, etc.
- Ensure system security through best practices and proactive monitoring
- Collaborate with developers to ensure smooth integration and deployment

Must-Have Skills:
- 3+ years of DevOps or SRE experience in a production environment
- Experience with cloud services (AWS, GCP, Azure)
- Strong knowledge of CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar
- Proficiency with Docker and container orchestration (Kubernetes preferred)
- Hands-on with Terraform, Ansible, or other infrastructure-as-code tools
- Good understanding of Linux/Unix system administration
- Familiarity with version control systems (Git) and branching strategies
- Knowledge of scripting languages (Bash, Python, or Go)

Good-to-Have (Optional):
- Exposure to monitoring/logging stacks: ELK, Prometheus, Grafana
- Experience in securing cloud environments
- Knowledge of Agile and DevOps culture
- Understanding of microservices and service mesh tools (Istio, Linkerd)
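The scripting skills this posting asks for (Bash, Python, or Go) are typically exercised in small automation glue. As a hedged, illustrative sketch in Python (the function name and parameters are assumptions, not part of the posting), here is the kind of exponential-backoff helper a post-deployment health-check loop might use:

```python
import random

def backoff_delays(retries, base=1.0, cap=30.0, jitter=False):
    """Return a list of exponential-backoff delays (seconds) for retries.

    Doubles the delay each attempt, clamps it at `cap`, and optionally
    applies "full jitter" to avoid retry thundering herds.
    """
    delays = []
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)  # full jitter: anywhere in [0, delay]
        delays.append(delay)
    return delays

print(backoff_delays(5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

A real health check would sleep for each delay between probe attempts; the cap keeps worst-case wait bounded even after many failures.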
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview

PepsiCo eCommerce has an opportunity for a Cloud Infrastructure Security / DevSecOps engineer focused on our applications running in Azure and AWS. You will be part of the DevOps and cloud infrastructure team that is responsible for cloud security, infrastructure provisioning, and maintaining existing platforms, and that provides our partner teams with guidance for building, maintaining, and optimizing integration and deployment pipelines as code for deploying our applications to AWS and Azure. This role offers many exciting challenges and opportunities; some of the major duties are:
- Work with engineering teams to develop and improve our CI/CD pipelines that enforce proper versioning and branching practices using technologies like GitHub, GitHub Actions, ArgoCD, Kubernetes, Docker, and Terraform.
- Create, deploy, and maintain Kubernetes-based platforms for a variety of different workloads in AWS and Azure.

Responsibilities
- Deploy infrastructure in the Azure and AWS clouds using Terraform and infrastructure-as-code best practices.
- Participate in development of CI/CD workflows to take applications from build to deployment using modern DevOps tools like Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Ensure the highest possible uptime for our Kubernetes-based developer productivity platforms.
- Partner with development teams to recommend best practices for application uptime and for cloud-native infrastructure architecture.
- Collaborate in infrastructure and application architecture discussions and decision making as part of continually improving and expanding these platforms.
- Automate everything. Focus on creating tools that make your life easy and benefit the entire org and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.
- Collaborate with application development teams to perform RCA.
- Implement and manage threat detection protocols, processes, and systems.
- Conduct regular vulnerability assessments and ensure timely remediation of flagged incidents.
- Ensure compliance with internal security policies and external regulations like PCI.
- Lead the integration of security tools such as Wiz, Snyk, Datadog, and others within the PepsiCo infrastructure.
- Coordinate with PepsiCo's broader security teams to align Digital Commerce security practices with corporate standards.
- Provide security expertise and support to various teams within the organization.
- Advocate and enforce security best practices, such as RBAC and the principle of least privilege.
- Continuously review, improve, and document security policies and procedures.
- Participate in on-call rotation to support our NOC and incident management teams.

Qualifications
- 8+ years of IT experience.
- 5+ years of Kubernetes, ideally running workloads in a production environment on AKS or EKS platforms.
- 4+ years of creating CI/CD pipelines in any templatized format in GitHub, GitLab, or Azure DevOps.
- 3+ years of Python, Bash, and any other OOP language. (Please be prepared for a coding assessment in your language of choice.)
- 5+ years of experience deploying infrastructure to Azure platforms.
- 3+ years of experience using Terraform or writing Terraform modules.
- 3+ years of experience with Git, GitLab, or GitHub.
- 2+ years of experience as an SRE or supporting microservices in a containerized environment like Nomad, Docker Swarm, or K8s.
- Kubernetes certifications like KCNA, KCSA, CKA, CKAD, or CKS preferred.
- Good understanding of the software development lifecycle.
- Familiarity with:
  - Site Reliability Engineering
  - AWS, Azure, or similar cloud platforms
  - Automated build processes and tools
  - Service meshes like Istio, Linkerd
  - Monitoring tools like Datadog, Splunk, etc.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Current skills in the following technologies: Kubernetes, Terraform, AWS or Azure (Azure preferred), GitHub Actions or GitLab workflows.
- Familiar with Agile processes and tools such as Jira; good to have experience being part of Agile teams, continuous integration, automated testing, and test-driven development.
- BSc/MSc in computer science, software engineering, or a related field is a plus; alternatively, completion of a DevOps or infrastructure training course or bootcamp is acceptable as well.
- Self-starter; bias for action and for quick iteration on ideas/concepts; strong interest in proving out ideas and technologies with rapid prototyping.
- Ability to interact well across various engineering teams.
- Team player; excellent listening skills; welcoming of ideas and new ways of looking at things; able to efficiently take part in brainstorming sessions.
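The posting above calls out CI/CD pipelines that "enforce proper versioning and branching practices". One common building block is a tag gate that only releases clean semantic versions. A minimal, hypothetical sketch in Python (the simplified semver pattern and the release policy encoded here are illustrative assumptions, not PepsiCo's actual rules):

```python
import re

# Simplified semver pattern: optional leading "v", MAJOR.MINOR.PATCH,
# optional pre-release suffix like "-rc.1". Build metadata is omitted
# for brevity; this is an assumed policy, not a real pipeline's rule.
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-(rc|alpha|beta)\.(\d+))?$")

def is_releasable(tag):
    """A tag gates a production release only if it is plain semver
    with no pre-release suffix (rc/alpha/beta builds are blocked)."""
    m = SEMVER.match(tag)
    return bool(m) and m.group(4) is None

print(is_releasable("v1.4.2"))       # True
print(is_releasable("v1.4.2-rc.1"))  # False: pre-release, not promotable
print(is_releasable("latest"))       # False: mutable tags are rejected
```

In a real GitHub Actions or GitLab workflow, a check like this would run as an early job and fail the pipeline before any deployment step executes.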
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description

Join us as a Software Engineer
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It's a chance to hone your existing technical skills and advance your career
- We're offering this role at associate level

What you'll do

In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure, and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS.
- Develop and maintain Helm charts for deploying containerized applications.
- Build and manage Docker images and registries for microservices.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitor and troubleshoot Kubernetes workloads and cluster health.
- Support CI/CD pipelines for containerized applications.
- Collaborate with development and DevOps teams to ensure seamless application delivery.
- Ensure security best practices are followed in container orchestration and cloud environments.
- Optimize performance and cost of cloud infrastructure.

The skills you'll need

You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST APIs, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience in writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- Solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., ELK stack, Fluentd).
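The "monitor and troubleshoot Kubernetes workloads" duty above often starts with parsing `kubectl get pods -o json` output. A hedged sketch in Python of one such check, counting pods whose containers all report ready (the helper name is an assumption; the JSON shape mirrors the real Kubernetes PodStatus fields):

```python
def ready_pods(pod_list):
    """Count pods in which every container reports ready=True.

    `pod_list` follows the shape of `kubectl get pods -o json`:
    items[].status.containerStatuses[].ready
    """
    count = 0
    for pod in pod_list.get("items", []):
        statuses = pod.get("status", {}).get("containerStatuses", [])
        # A pod with no reported container statuses is treated as not ready.
        if statuses and all(c.get("ready") for c in statuses):
            count += 1
    return count

sample = {"items": [
    {"status": {"containerStatuses": [{"ready": True}, {"ready": True}]}},
    {"status": {"containerStatuses": [{"ready": False}]}},
]}
print(ready_pods(sample))  # 1
```

In practice the same logic runs against `json.load()` of the kubectl output, or against the kubernetes Python client's typed objects.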
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Senior Associate

Job Description & Summary

At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career as a DevOps Developer will provide you with the opportunity to help our clients leverage technology to enhance their customer experience.

Responsibilities:
1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Mandatory skill sets:
1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Preferred skill sets:
1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Years of experience required: 4+ years
Education qualification: BE/B.Tech/MBA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: CI/CD
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Hyderabad, Telangana, India | Category: Information Technology | Hire Type: Employee | Job ID 9330 | Date posted 02/24/2025

We Are:
At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:
You are a forward-thinking Cloud DevOps Engineer with a passion for modernizing infrastructure and enhancing the capabilities of CI/CD pipelines, containerization strategies, and hybrid cloud deployments. You thrive in environments where you can leverage your expertise in cloud infrastructure, distributed processing workloads, and AI-driven automation. Your collaborative spirit drives you to work closely with development, data, and GenAI teams to build resilient, scalable, and intelligent DevOps solutions. You are adept at integrating cutting-edge technologies and best practices to enhance both traditional and AI-driven workloads. Your proactive approach and problem-solving skills make you an invaluable asset to any team.

What You'll Be Doing:
- Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments.
- Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting.
- Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management.
- Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags).
- Containerizing applications using Docker and Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters).
- Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul).
- Implementing AI-enhanced observability for containerized services using AIOps-based monitoring.
- Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK.
- Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray.
- Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infra recommendations.

The Impact You Will Have:
- Enhancing the efficiency and reliability of CI/CD pipelines and deployments.
- Driving the adoption of AI-driven automation to reduce downtime and improve system resilience.
- Enabling seamless application portability across on-prem and cloud environments.
- Implementing advanced observability solutions to proactively detect and resolve issues.
- Optimizing resource allocation and job scheduling for distributed processing workloads.
- Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads.

What You'll Need:
- 5+ years of experience in DevOps, Cloud Engineering, or SRE.
- Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.).
- Strong experience with Kubernetes, container orchestration, and service meshes.
- Proficiency in Terraform, CloudFormation, Pulumi, or other Infrastructure as Code (IaC) tools.
- Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem).
- Strong scripting skills in Python, Bash, or Go.
- Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar).

Who You Are:
You are a collaborative and innovative professional with a strong technical background and a passion for continuous learning. You excel in problem-solving and thrive in dynamic environments where you can apply your expertise to drive significant improvements. Your excellent communication skills enable you to work effectively with diverse teams, and your commitment to excellence ensures that you consistently deliver high-quality results.
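The progressive delivery strategies this role mentions (Canary, Blue-Green) boil down to a schedule of traffic weights shifted toward the new version. As an illustrative, assumed sketch in Python (real tools like Argo Rollouts or Flagger configure this declaratively; the geometric ramp here is just one common choice):

```python
def canary_steps(start=5, factor=2, final=100):
    """Traffic weights (%) for a geometric canary ramp ending at full rollout.

    Each step multiplies the canary's share until it would exceed `final`,
    then cuts over completely. Between steps, a controller would check
    error-rate and latency metrics before proceeding.
    """
    weights = []
    w = start
    while w < final:
        weights.append(w)
        w *= factor
    weights.append(final)
    return weights

print(canary_steps())  # [5, 10, 20, 40, 80, 100]
```

A blue-green deployment is the degenerate case: a single step from 0% to 100% once the green environment passes its checks.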
The Team You’ll Be A Part Of: You will join a dynamic team focused on optimizing cloud infrastructure and enhancing workloads to contribute to overall operational efficiency. This team is dedicated to driving the modernization and optimization of Infrastructure CI/CD pipelines and hybrid cloud deployments, ensuring that Synopsys remains at the forefront of technological innovation. Rewards and Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
Posted 1 week ago
20.0 - 22.0 years
20 - 22 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Position Summary

As the Group Director of Software Engineering for Runtime Platforms at Walmart Global Tech, you will be at the helm of foundational platform transformation. This is a high-impact leadership role overseeing the design, development, and global delivery of Walmart's next-generation Runtime and Traffic Management platforms. You will lead a global team responsible for critical platform functions: container and VM management, configuration and secrets, and all ingress/egress traffic for Walmart applications. Your strategic and technical expertise will directly support business continuity, security, operational excellence, and customer satisfaction at scale.

About the Team

The Global Technology Platform (GTP) powers Walmart's digital transformation and is the engine behind every customer experience. The Runtime Platform (RTP) is core to this foundation, enabling scalable compute, seamless deployment, secure traffic routing, and service observability across Walmart's multicloud environments. You will focus primarily on the Traffic Management team, which handles:
- Global DNS and API Gateway
- Software load balancing
- CDN, edge computing, and eBPF-based traffic optimization
- Routing and security policies at scale
- Handling 10M+ requests per second across Walmart's digital footprint

This is a critical, highly performant system supporting mission-critical traffic flows for Walmart globally.

What You'll Do
- Lead and grow an elite engineering team responsible for Walmart's global runtime and traffic platforms.
- Drive strategy and delivery of Walmart's container/VM orchestration, ingress/egress management, and configuration and secrets storage systems.
- Define, evangelize, and execute on a unified traffic vision, combining routing, DNS, security, observability, and scalability.
- Collaborate with product, operations, security, and business leadership to align technology strategy with Walmart's digital goals.
- Architect and modernize a world-class traffic platform using service mesh, eBPF, software load balancers, and multi-cloud routing policies.
- Create a platform that supports dynamic traffic shaping (rate limiting, circuit breakers, segmentation, etc.) while improving latency, resilience, and cost efficiency.
- Establish high standards for availability, observability, automation, and disaster recovery.
- Lead platform engineering with a strong developer-first approach, ensuring self-service, reliability, and documentation.
- Convert CxO-level strategic goals into measurable engineering OKRs and deliver through strong governance and ownership.

What You'll Bring
- 20+ years of engineering leadership with a proven track record of building and operating large-scale distributed systems/platforms.
- Deep experience in platform engineering, especially in traffic management, container orchestration (Kubernetes), VMs, and networking.
- Practical expertise in:
  - Cloud-native networking (GCP, Azure, AWS)
  - Service mesh technologies (e.g., Envoy, Istio, Linkerd)
  - TCP/IP, HTTP(S), DNS, load balancing
  - Rate limiting, DDoS mitigation, routing optimization
  - Edge computing and CDN architectures
- Prior success leading engineering organizations of 100+, including senior architects and engineering managers.
- Strategic mindset with the ability to define and implement a long-term vision across globally distributed teams.
- Ability to distill complex technical architectures for executive audiences and align cross-functional stakeholders.
- Strong focus on metrics, observability, automation, and operational excellence (including fast service restoration SLAs).
- Excellent communication, collaboration, and executive influencing skills.
- Prior experience contributing to or leading open-source projects is a plus.
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
Preferred Qualifications
- Experience with:
  - eCommerce platforms
  - Open-source contributions in traffic/network domains
  - Agile at scale (SAFe or similar)
  - Security-first design in platform engineering
- Master's degree in Computer Science or equivalent
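The rate limiting mentioned among this role's traffic-shaping concerns is classically implemented as a token bucket: tokens refill at a steady rate, and short bursts are allowed up to the bucket's capacity. A minimal, hedged sketch in Python (production gateways implement this in their data plane, often per client key; the class and timestamps here are illustrative assumptions):

```python
class TokenBucket:
    """Minimal token-bucket limiter: `rate` tokens/second, bursts up to `capacity`.

    Time is passed in explicitly (seconds since an arbitrary epoch) so the
    logic is deterministic and testable; a real limiter would read a clock.
    """
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full: allow an initial burst
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, clamped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```

The trace shows the behavior that matters at a gateway: the first two requests ride the burst capacity, the third is throttled, and a later request succeeds once tokens have refilled.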
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description

Join us as a Software Engineer
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It's a chance to hone your existing technical skills and advance your career
- We're offering this role at associate level

What you'll do

In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure, and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS.
- Develop and maintain Helm charts for deploying containerized applications.
- Build and manage Docker images and registries for microservices.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitor and troubleshoot Kubernetes workloads and cluster health.
- Support CI/CD pipelines for containerized applications.
- Collaborate with development and DevOps teams to ensure seamless application delivery.
- Ensure security best practices are followed in container orchestration and cloud environments.
- Optimize performance and cost of cloud infrastructure.

The skills you'll need

You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST APIs, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience in writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- Solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., ELK stack, Fluentd).
Posted 2 weeks ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Rocketlane

Rocketlane is a fast-growing, innovative SaaS company making waves in customer onboarding and professional services automation. Our mission? To empower B2B companies with a smooth, consistent, and efficient way to onboard customers and manage client projects, reducing chaos and boosting customer satisfaction across industries. We're a close-knit team of over 100 passionate professionals, all focused on building a product that teams love to use. Our journey has been fueled by $45M in funding from top investors, including 8VC, Matrix Partners, and Nexus Venture Partners.

What will you do?
- Drive the design, development, and enhancement of core features and functionalities of our observability platform, leveraging cutting-edge technologies to deliver scalable and reliable solutions
- Act as a subject matter expert, guiding and mentoring a team of talented software engineers to achieve technical excellence and deliver high-quality code
- Collaborate with cross-functional teams to design and implement robust, scalable, and efficient systems that meet the demands of our growing customer base
- Stay ahead of industry trends and emerging technologies, constantly researching and experimenting with innovative solutions to enhance our observability platform
- Work closely with product managers, designers, and stakeholders to translate business requirements into technical solutions, advocating for best practices and promoting a collaborative work environment
- Be proactive in identifying and addressing performance bottlenecks, applying optimizations, and maintaining the stability and availability of our platform
- Encourage a culture of continuous learning, improvement, and innovation within the engineering team, sharing knowledge and promoting professional growth

You should apply if
- 6+ years of experience in backend, DevOps, or platform engineering roles.
- Proficiency in one or more of: Go, Python, Java, Rust.
- Solid understanding of data structures, algorithms, and system design patterns.
- Experience with distributed systems, microservices, and messaging queues (e.g., Kafka, RabbitMQ).
- Hands-on experience with cloud platforms like AWS, GCP, or Azure.
- Deep understanding of Linux internals, networking concepts, and containerization (e.g., Docker, Kubernetes).
- Familiarity with CI/CD pipelines and tools like GitHub Actions, Jenkins, or ArgoCD.
- Strong grip on monitoring and observability tools: Prometheus, Grafana, Datadog, or equivalent.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Proficiency with Git, version control workflows, and collaborative development tools.
- Strong debugging skills and the ability to analyze logs, metrics, and traces to identify root causes of issues.

Good to Have
- Experience with service meshes (e.g., Istio, Linkerd), API gateways, and load balancing strategies.
- Knowledge of SRE principles, incident response, and on-call practices.
- Familiarity with performance profiling tools (pprof, perf, flame graphs, etc.).
- Exposure to secrets management (e.g., Vault, AWS Secrets Manager).

Why join us?
At Rocketlane, we're all about building a great product and a great place to work. Here's why you'll actually look forward to Mondays:
- Impact and ownership: You won't just be another cog in the machine; here, you're more like a turbocharged engine part. Bring your ideas, make them happen.
- Work with the best: We're a team of passionate, quirky, and ridiculously talented people. Come for the work, stay for the memes.
- Celebrate wins: Whether we're hitting major milestones or celebrating new funding, we like to mix it up. From rap videos to team outings, we believe in celebrating big.
- Learn and grow: We're all about learning, and we're not just talking about the latest SaaS trends. You'll grow your career, pick up new skills, and maybe even learn to love Excel (or at least tolerate it).
- Flexibility and balance: While we love collaborating in the office five days a week, we know everyone has their own rhythm. That's why we offer flexibility around hours, so you can bring your best energy, whether you're an early bird or a night owl. Pajamas optional (at least outside the office).
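The observability work described above leans on analyzing latency metrics, where percentile statistics (p50, p99) matter far more than averages because a few slow requests dominate user experience. A hedged, illustrative sketch in Python using the nearest-rank method (one of several percentile definitions; monitoring backends like Prometheus compute this from histograms instead of raw samples):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile, as commonly used for latency SLOs (e.g. p99)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest-rank: the smallest value with at least p% of samples at or below it.
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

latencies_ms = [12, 15, 11, 240, 13, 14, 16, 12, 13, 15]
print(percentile(latencies_ms, 99))  # 240
print(percentile(latencies_ms, 50))  # 13
```

Note how the single 240 ms outlier leaves the median untouched but defines the p99, which is exactly why tail percentiles are the standard SLO currency.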
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Neudesic
Passion for technology drives us, but it’s innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart is an amazing collection of people who live and lead with our core values. We believe that everyone should be passionate about what they do, disciplined to the core, innovative by nature, committed to a team, and conduct themselves with integrity. If these attributes mean something to you, we'd like to hear from you.

Role Profile
We are currently looking for Azure Cloud DevOps Engineers to become members of Neudesic's Cloud Transformation - CIS team.
Experience: A minimum of 5 years of relevant experience.

Key Responsibilities
Build and manage CI/CD pipelines using Azure DevOps for efficient software delivery
Implement containerization and orchestration solutions using Docker, Kubernetes, and AKS
Configure and manage networking, security, and monitoring components within the cloud infrastructure
Develop and maintain data pipelines using Azure Data Factory and Databricks
Design and deploy highly available, scalable, and secure cloud infrastructure on Azure using industry best practices and frameworks such as the Well-Architected Framework (WAF) and Cloud Adoption Framework (CAF)
Ensure adherence to cloud governance policies, cost optimization, and security best practices

Key Technology Requirements
Expertise in AKS, Kubernetes, Docker, and Docker Compose
Experience with Prometheus, Elasticsearch, Grafana, Loki, and Dynatrace
Knowledge of Istio, Linkerd, Kong Mesh, NGINX, Traefik, and Kong ingress controllers
Understanding of networking concepts related to container orchestration and microservices
Proficiency in Azure infrastructure components (VMs, Storage Accounts, Networking, etc.)
Experience with Azure DevOps, CI/CD pipelines, and artifact repositories
Knowledge of build and test systems (CMake, Makefile, cmocka, Maven, MSBuild, NPM, SCA)
Understanding of Azure Policy, RBAC, cost management, monitoring, and alerting
Strong knowledge of DevSecOps practices and WAF/CAF application
Proficiency in Python, Golang, Shell, and Bash scripting

Additional Skills & Competencies
Must be a self-starter who requires minimal supervision
Good communication is a must
Experienced in problem solving and able to follow a methodical implementation process

Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Employment Opportunity Employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis as protected by federal, state, or local law. Neudesic has been acquired by IBM and will be integrated into the IBM organization; Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
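The monitoring and alerting responsibilities above come down to rules of the form "fire only if a metric stays above a threshold for several consecutive intervals", the semantics behind Prometheus-style alert rules. This is a minimal plain-Python sketch of that logic, not tied to any of the listed tools; the function name and sample values are illustrative only:

```python
def evaluate_alert(samples, threshold, for_intervals):
    """Fire only after the value has exceeded `threshold` for
    `for_intervals` consecutive samples (Prometheus-style `for:`),
    so brief spikes do not page anyone."""
    consecutive = 0
    for value in samples:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= for_intervals:
            return True
    return False

# Invented error-rate series: one brief spike vs a sustained breach.
spiky = [0.01, 0.09, 0.01, 0.02]      # one interval above 5%
sustained = [0.01, 0.06, 0.07, 0.08]  # three intervals above 5%
```

A single noisy sample in `spiky` never satisfies a two-interval `for` window, while `sustained` does; the same debouncing idea applies whatever the underlying metrics backend is.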
Posted 2 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
About Neudesic
Passion for technology drives us, but it’s innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart is an amazing collection of people who live and lead with our core values. We believe that everyone should be passionate about what they do, disciplined to the core, innovative by nature, committed to a team, and conduct themselves with integrity. If these attributes mean something to you, we'd like to hear from you.

Role Profile
We are currently looking for Azure Cloud DevOps Engineers to become members of Neudesic's Cloud Transformation - CIS team.
Experience: A minimum of 5 years of relevant experience.

Key Responsibilities
Build and manage CI/CD pipelines using Azure DevOps for efficient software delivery
Implement containerization and orchestration solutions using Docker, Kubernetes, and AKS
Configure and manage networking, security, and monitoring components within the cloud infrastructure
Develop and maintain data pipelines using Azure Data Factory and Databricks
Design and deploy highly available, scalable, and secure cloud infrastructure on Azure using industry best practices and frameworks such as the Well-Architected Framework (WAF) and Cloud Adoption Framework (CAF)
Ensure adherence to cloud governance policies, cost optimization, and security best practices

Key Technology Requirements
Expertise in AKS, Kubernetes, Docker, and Docker Compose
Experience with Prometheus, Elasticsearch, Grafana, Loki, and Dynatrace
Knowledge of Istio, Linkerd, Kong Mesh, NGINX, Traefik, and Kong ingress controllers
Understanding of networking concepts related to container orchestration and microservices
Proficiency in Azure infrastructure components (VMs, Storage Accounts, Networking, etc.)
Experience with Azure DevOps, CI/CD pipelines, and artifact repositories
Knowledge of build and test systems (CMake, Makefile, cmocka, Maven, MSBuild, NPM, SCA)
Understanding of Azure Policy, RBAC, cost management, monitoring, and alerting
Strong knowledge of DevSecOps practices and WAF/CAF application
Proficiency in Python, Golang, Shell, and Bash scripting

Additional Skills & Competencies
Must be a self-starter who requires minimal supervision
Good communication is a must
Experienced in problem solving and able to follow a methodical implementation process

Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Employment Opportunity Employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis as protected by federal, state, or local law. Neudesic has been acquired by IBM and will be integrated into the IBM organization; Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
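The Docker Compose and orchestration items above ultimately rest on dependency ordering: a service may start only after the services it depends on. A hedged plain-Python sketch of that resolution (the service graph is invented for illustration; real orchestrators also wait on health checks, which this omits):

```python
def start_order(depends_on):
    """Return a start order in which every service appears after its
    dependencies (a topological sort); raise on dependency cycles."""
    order, done, in_progress = [], set(), set()

    def visit(svc):
        if svc in done:
            return
        if svc in in_progress:
            raise ValueError(f"dependency cycle at {svc}")
        in_progress.add(svc)
        for dep in depends_on.get(svc, []):
            visit(dep)
        in_progress.discard(svc)
        done.add(svc)
        order.append(svc)

    for svc in depends_on:
        visit(svc)
    return order

# Hypothetical three-tier stack: web -> api -> db.
services = {"web": ["api"], "api": ["db"], "db": []}
```

With this graph, `db` always precedes `api`, which precedes `web`, regardless of the order services are declared in.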
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Job Title: DevOps Engineer
Employment Type: Full-Time, 5 Days/Week
Working Hours: Flexible
Compensation: ₹8–12 LPA
Location: Remote (India-based candidates only)

Why Join Us?
Remote-First: Work from anywhere in India with complete scheduling flexibility.
Growth & Learning: Exposure to cutting-edge cloud-native, automation, and DevSecOps technologies.
Innovative Culture: Collaborative, inclusive environment that values innovation, ownership, and continuous improvement.
Perks: Unlimited leave policy, health and wellness support, and regular virtual team-building events.

Role Overview
We’re hiring a DevOps Engineer to architect, implement, and manage the CI/CD infrastructure, cloud deployments, and automation workflows that power our AI-driven platforms. You’ll work closely with software engineers, QA, and product teams to improve development velocity, system reliability, and operational efficiency.

Key Responsibilities
Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI/CD.
Manage infrastructure as code (IaC) using Terraform, Pulumi, or CloudFormation.
Set up, monitor, and scale containerized applications using Docker and Kubernetes (EKS/GKE/AKS).
Automate build, test, and deployment processes to support agile software delivery.
Monitor system performance and uptime using Prometheus, Grafana, the ELK Stack, or Datadog.
Collaborate with development and QA teams to integrate automated testing and ensure release readiness.
Implement and manage cloud infrastructure across AWS, Azure, or Google Cloud Platform (GCP).
Ensure security and compliance by integrating DevSecOps practices and tools like Aqua Security, Snyk, or Trivy.
Respond to incidents, troubleshoot environments, and optimize system performance and resilience.

Required Skills & Qualifications
3+ years of hands-on experience in a DevOps, Site Reliability Engineer (SRE), or infrastructure automation role.
Strong experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD).
Proficiency in managing container orchestration platforms (Docker, Kubernetes).
Hands-on experience with cloud services: AWS, Azure, or GCP.
Knowledge of scripting languages like Bash, Python, or Go.
Expertise in IaC tools like Terraform or CloudFormation.
Strong grasp of Linux systems administration and networking fundamentals.
Experience with monitoring and logging frameworks (ELK, Grafana, Prometheus).
Familiarity with Git workflows, release management, and Agile development processes.

Nice to Have
Experience with service mesh technologies like Istio or Linkerd.
Exposure to serverless architectures (AWS Lambda, Google Cloud Functions).
Familiarity with security compliance frameworks (SOC 2, ISO 27001).
Relevant certifications: AWS Certified DevOps Engineer, CKA/CKAD, HashiCorp Certified: Terraform Associate.

Hiring Process
Phone Screen – Introduction and background check.
Technical Interview – Deep dive into DevOps skills, cloud systems, and live problem-solving.
Culture-Fit Interview – Meet the leadership team to assess alignment with our values and mission.

If you're passionate about automation, cloud-native architecture, and scaling AI-powered platforms, we'd love to hear from you. Apply now with your resume and a short note on your DevOps experience!
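The DevSecOps responsibility above (integrating tools like Aqua Security, Snyk, or Trivy) usually means gating a release on scan findings: fail the build if anything at or above a policy severity is found. A minimal sketch of that gate in plain Python; the finding records and IDs are invented, and real scanners emit much richer reports:

```python
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate_release(findings, fail_on="HIGH"):
    """Return (passed, blocking): blocking lists findings at or above
    the fail_on severity, the usual policy when a scanner breaks a build."""
    cutoff = SEVERITY_RANK[fail_on]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= cutoff]
    return (len(blocking) == 0, blocking)

# Invented findings for illustration only.
findings = [
    {"id": "CVE-0001", "severity": "LOW"},
    {"id": "CVE-0002", "severity": "CRITICAL"},
]
```

A pipeline step would call `gate_release` on the parsed scan report and exit non-zero when `passed` is false, which is what makes the check enforceable rather than advisory.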
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain, and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore, and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations, and more.

Job Role
Key Responsibilities:
Design and manage scalable infrastructure on Google Cloud Platform (GCP) to support various application and data workloads.
Implement and manage IAM policies, roles, and permissions to ensure secure access across GCP services.
Build and optimize workflows using Cloud Composer (Airflow) and manage data processing pipelines via Dataproc.
Provision and maintain Compute Engine VMs and integrate them into broader system architectures.
Set up and query data in BigQuery, and manage data flows securely and efficiently.
Develop and maintain CI/CD pipelines using Argo CD, Jenkins, or GitOps methodologies.
Administer Kubernetes clusters (GKE), including node scaling, workload deployments, and Helm chart management.
Create and maintain YAML files for defining infrastructure as code.
Monitor system health and performance using tools like Prometheus, Grafana, and GCP’s native monitoring stack.
Troubleshoot infrastructure issues, perform root cause analysis, and implement preventative measures.
Collaborate with development teams to integrate infrastructure best practices and support application delivery.
Document infrastructure standards, deployment processes, and operational procedures.
Participate in Agile ceremonies, contributing to sprint planning, daily stand-ups, and retrospectives.

Experience
Must-Have:
At least 3 years of hands-on experience with Google Cloud Platform (GCP).
Strong understanding and hands-on experience with IAM, Cloud Composer, Dataproc, Compute VMs, BigQuery, and networking on GCP.
Proven experience with Kubernetes cluster administration (GKE preferred).
Experience with CI/CD pipelines using tools like Argo CD, Jenkins, or GitOps workflows.
Experience writing and managing Helm charts for Kubernetes deployments.
Proficiency in scripting (Bash, Python, or similar) for automation.
Familiarity with observability and monitoring tools (Prometheus, Grafana, Cloud Monitoring).
Solid understanding of DevOps practices, infrastructure as code, and container orchestration.

Good to Have:
Experience working in an Agile environment.
Familiarity with data workflows, data engineering pipelines, or ML orchestration on GCP.
Exposure to GitHub Actions, Spinnaker, or other CI/CD tools.
Experience with service mesh (Istio, Linkerd) and secrets management tools (Vault, Secret Manager).

India VOIS Equal Opportunity Employer Commitment:
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
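The IAM responsibility in the GCP role above (managing policies, roles, and permissions) reduces to a membership check: a member holds a permission if any role bound to them grants it. A deliberately simplified sketch in plain Python; the role-to-permission map and members are invented, and real GCP roles carry hundreds of permissions resolved by the IAM service:

```python
# Invented, simplified role -> permission map for illustration.
ROLES = {
    "roles/bigquery.dataViewer": {"bigquery.tables.get",
                                  "bigquery.tables.getData"},
    "roles/bigquery.dataEditor": {"bigquery.tables.get",
                                  "bigquery.tables.getData",
                                  "bigquery.tables.updateData"},
}

def has_permission(bindings, member, permission):
    """True if any role bound to `member` grants `permission` --
    the core question an IAM policy check answers."""
    return any(
        permission in ROLES.get(b["role"], set())
        for b in bindings
        if member in b["members"]
    )

# A policy is a list of bindings: role -> set of members.
bindings = [
    {"role": "roles/bigquery.dataViewer",
     "members": {"user:analyst@example.com"}},
]
```

The analyst can read table data via the viewer role but cannot update it; granting write access means adding a binding, never editing the role itself, which mirrors how least-privilege policies are managed in practice.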
Posted 2 weeks ago
15.0 years
0 Lacs
India
On-site
At First American (India), we don’t just build software; we build the future of real estate technology. Our people-first culture empowers bold thinkers and passionate technologists to solve real-world challenges through scalable architecture and innovative design. If you're driven by impact, thrive in collaborative environments, and want to shape how world-class products are delivered, this is the place for you.

Job Title: Principal Platform Engineer (15+ Years Experience)

About the Role
We are seeking a visionary Principal Platform Engineer with 15+ years of experience to lead the transformation of our DevOps strategy, driving innovation, automation, and cloud excellence. This role is critical in reimagining our cloud infrastructure to be more resilient, scalable, and future-proof. You will spearhead initiatives to modernize our CI/CD pipelines, cloud security, and observability, ensuring a seamless and efficient development lifecycle. Working closely with engineering, security, and operations teams, you will be at the forefront of building a platform culture that accelerates business growth and technological excellence.

Key Responsibilities
Design and implement standardized, reusable, and secure CI/CD pipelines (Golden Paths) across engineering teams.
Architect, deploy, and manage Kubernetes (EKS) infrastructure with a strong focus on automation, security, and observability.
Lead the integration and extension of Backstage for enhanced service cataloguing, discoverability, and developer self-service.
Collaborate with application developers, SREs, and product teams to ensure CI/CD practices align with business and technical goals.
Define and enforce best practices around containerization, deployment strategies, secrets management, and infrastructure as code (IaC).
Implement and manage tools for monitoring, logging, and alerting to ensure system health and performance.
Demonstrate deep experience in infrastructure automation, CI/CD, data observability, and platforms.
Continuously evaluate and improve developer workflows, toolchains, and productivity platforms.
Provide guidance and mentorship to junior engineers and promote a culture of engineering excellence.
Commit to an automation-first approach and continuous improvement of platform capabilities.

Key Requirements
15+ years of hands-on experience in platform engineering, DevOps, or related fields.
Minimum 8+ years of experience in application development using the .NET and Java tech stacks.
Deep expertise in CI/CD tools like GitHub Actions, Argo CD, Jenkins, or similar.
Proven experience with Amazon EKS, Helm, and Kubernetes cluster operations in production.
Strong knowledge of IaC tools like Terraform, Pulumi, or CloudFormation.
Experience with Backstage, including customization and plugin development.
Solid understanding of cloud-native principles and microservices architecture.
Familiarity with monitoring and observability stacks (Prometheus, Grafana, Loki, ELK, etc.).
Skilled in scripting and automation using Bash, Python, PowerShell, or Go.
Good knowledge of version control tools like Git and Bitbucket.
Knowledge of configuration management tools like Ansible, Chef, or Puppet.
Comfortable working in agile environments, with strong collaboration and communication skills.
Passion for building developer platforms that boost productivity, reliability, and security.

Nice to Have
Experience with service mesh technologies (e.g., Istio, Linkerd).
Knowledge of security frameworks like Zero Trust, SSO, and OAuth.
Experience with GitOps workflows and progressive delivery strategies.
Exposure to policy-as-code frameworks (e.g., OPA/Gatekeeper).
Experience with system and IT operations: Windows and Linux OS administration.
Understanding of networking principles and technologies (DNS, load balancers, reverse proxies), Microsoft Active Directory, and Active Directory Federation Services (ADFS).

Job Title: Staff Engineer (12+ Years Experience)

About the Role
We are seeking a seasoned Platform Engineer with 12+ years of experience to join our platform engineering team. This role will play a critical part in designing, building, and maintaining the internal platforms and tools that enable software development teams to work efficiently and effectively. As a Staff Engineer, you will play a pivotal role in designing and implementing golden paths to streamline development processes across the organization. Your expertise will be crucial in enhancing application observability and ensuring robust monitoring and diagnostics capabilities. You will collaborate closely with cross-functional teams to create scalable, resilient, and efficient platforms that support the organization's strategic goals and drive innovation.

Key Responsibilities
Design, develop, and maintain robust cloud platforms (e.g., AWS, Azure, Google Cloud).
Enhance monitoring and diagnostics capabilities to ensure high availability and performance.
Assist in building the technical roadmap and advancing the technical capabilities of the platform engineering team.
Work closely with cross-functional teams to align platform capabilities with organizational goals.
Identify and resolve complex technical issues, ensuring the stability and scalability of the platforms.
Lead and mentor a team of developers and engineers, fostering a collaborative and innovative environment.

Key Requirements
12+ years of hands-on experience in platform engineering, DevOps, or related fields.
Minimum 5+ years of experience in application development using the .NET and Java tech stacks.
Technical lead who has consistently demonstrated the capability to develop high-level technical designs.
Helps build the product technical roadmap and advance technical capabilities.
Advanced knowledge of AWS and Azure infrastructure.
Advanced knowledge of scripting languages such as Python.
Advanced knowledge of IaC tools such as Terraform or CloudFormation.
Advanced knowledge of designing and implementing CI/CD pipelines with tools like Jenkins, CodePipeline, or similar.
Good knowledge of version control tools like Git, Bitbucket, or Team Foundation Server.
Good knowledge of working with containers using Docker and Kubernetes.
Good knowledge of configuration management tools like Ansible, Chef, or Puppet.
Hands-on experience with code reviews and design reviews.

Nice to Have
AWS certifications or similar.
Serverless automation.
Experience with GitOps workflows and progressive delivery strategies.
Experience with system and IT operations: Windows and Linux OS administration.
Understanding of networking principles and technologies (DNS, load balancers, reverse proxies), Microsoft Active Directory, and Active Directory Federation Services (ADFS).
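The "Golden Path" responsibility in the roles above is, mechanically, a templating problem: every team inherits a standard pipeline definition and overrides only what it must. A hedged sketch of that merge in plain Python; the `GOLDEN_PATH` defaults and stage names are invented, and real implementations (Backstage software templates, shared CI includes) add validation on top:

```python
def merge_config(base, override):
    """Deep-merge a team's overrides onto golden-path defaults:
    nested dicts merge recursively, anything else is replaced."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# Invented defaults: every pipeline gets the same baseline stages.
GOLDEN_PATH = {
    "stages": ["lint", "test", "scan", "deploy"],
    "deploy": {"strategy": "rolling", "approval": True},
}
team_overrides = {"deploy": {"strategy": "blue-green"}}
pipeline = merge_config(GOLDEN_PATH, team_overrides)
```

The team swaps its deploy strategy while still inheriting the mandatory approval gate and baseline stages, which is the point of a golden path: deviation is possible, but only where the platform team allows it.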
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Kubernetes Developer
Location: On-Site (Gurugram)
Employment Type: Full-Time
Experience Level: 7+ Years

Job Summary:
We are seeking a highly skilled Senior Kubernetes Developer with 7+ years of experience in container orchestration, cloud-native application development, and infrastructure automation. The ideal candidate will have deep expertise in Kubernetes architecture and development, and a solid understanding of cloud platforms, DevOps practices, and distributed systems. You will be instrumental in designing, building, and optimizing scalable Kubernetes-based infrastructure and applications.

Key Responsibilities:
Design, develop, and maintain Kubernetes-based infrastructure and microservices architecture.
Build and manage containerized applications using Docker and deploy them via Kubernetes.
Develop custom Kubernetes controllers/operators using Go or Python as needed.
Implement CI/CD pipelines integrated with Kubernetes for automated testing and deployment.
Collaborate with DevOps and cloud infrastructure teams to ensure secure and scalable solutions.
Troubleshoot and optimize the performance, availability, and reliability of Kubernetes clusters.
Monitor cluster health and implement observability tools (Prometheus, Grafana, etc.).
Ensure best practices around Kubernetes security, networking, and configuration management.
Contribute to internal documentation, design reviews, and knowledge-sharing sessions.

Required Qualifications:
7+ years of experience in software development or infrastructure engineering.
Minimum 4 years of hands-on experience with Kubernetes in production environments.
Proficiency with container technologies (Docker) and Kubernetes orchestration.
Experience with Helm, Kustomize, and Kubernetes Operators.
Strong knowledge of Linux systems and shell scripting.
Experience in at least one programming language (Go, Python, or Java preferred).
Familiarity with infrastructure-as-code tools like Terraform, Pulumi, or Ansible.
Working knowledge of cloud platforms such as AWS, GCP, or Azure.
Understanding of service mesh (Istio, Linkerd) and ingress controllers (NGINX, Traefik).

Preferred Qualifications:
Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD).
Experience with GitOps tools (Argo CD, Flux).
Familiarity with DevSecOps practices and security hardening.
Exposure to event-driven architectures and message brokers (Kafka, NATS).
Strong understanding of networking, DNS, load balancing, and security in Kubernetes.
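The custom controllers/operators mentioned above all implement one pattern: a reconcile loop that compares desired state against observed state and emits the actions needed to converge them. A minimal plain-Python sketch of one reconciliation pass (real controllers use the Kubernetes API and watch events; the pod names here are invented):

```python
def reconcile(desired_replicas, actual_pods):
    """One reconciliation pass: compare desired vs observed state and
    return the actions that would converge them -- the core of every
    Kubernetes controller/operator."""
    diff = desired_replicas - len(actual_pods)
    if diff > 0:
        # Too few pods: schedule creations.
        return [("create", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete", pod) for pod in actual_pods[diff:]]
    return []  # already converged: no action
```

Because the loop is level-triggered (it looks at state, not at what event arrived), it is safe to run on every watch notification or on a timer; a missed event just means convergence happens on the next pass.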
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana
On-site
Job Information
Date Opened: 05/30/2025
Job Type: Full time
Industry: Financial Services
Work Experience: 5+ years
City: Gurgaon
State/Province: Haryana
Country: India
Zip/Postal Code: 122002

About Us
indiagold has built a product and technology platform that enables regulated entities to launch or grow their asset-backed products across geographies without investing in operations, technology, or people, or taking any valuation, storage, or transit risks. Our use of deep tech is changing how asset-backed loans have traditionally been done. Some examples of our innovation are lending against digital gold, a 100% paperless/digital loan onboarding process, computer vision to test gold purity as opposed to manual testing, auto-scheduling of feet-on-street, customer self-onboarding, a gold locker model to expand TAM and launch zero-touch gold loans, a zero-network business app, and many more. We are a rapidly growing organisation, passionate about solving massive challenges around financial well-being, with empowered opportunities across Sales, Business Development, Partnerships, Sales Operations, Credit, Pricing, Customer Service, Business Product, Design, Product, Engineering, People, and Finance across several cities. We value the right aptitude and attitude more than past experience in a related role, so feel free to reach out if you believe we can be good for each other.

Job Description
About the Role
We are seeking a Staff Software Engineer to lead and mentor engineering teams while driving the architecture and development of robust, scalable backend systems and cloud infrastructure. This is a senior hands-on role with a strong focus on technical leadership, system design, and cross-functional collaboration across development, DevOps, and platform teams.

Key Responsibilities
Mentor engineering teams to uphold high coding standards and best practices in backend and full-stack development using Java, Spring Boot, Node.js, Python, and React.
Guide architectural decisions to ensure performance, scalability, and reliability of systems.
Architect and optimize relational data models and queries using MySQL.
Define and evolve cloud infrastructure using Infrastructure as Code (Terraform) across AWS or GCP.
Lead DevOps teams in building and managing CI/CD pipelines, Kubernetes clusters, and related cloud-native tooling.
Drive best practices in observability using tools like Grafana, Prometheus, OpenTelemetry, and centralized logging frameworks (e.g., ELK, CloudWatch, Stackdriver).
Provide architectural leadership for microservices-based systems deployed via Kubernetes, including tools like Argo CD for GitOps-based deployment strategies.
Design and implement event-driven systems that are reliable, scalable, and easy to maintain.
Own security and compliance responsibilities in cloud-native environments, ensuring alignment with frameworks such as ISO 27001, CISA, and CICRA.
Ensure robust design and troubleshooting of container and Kubernetes networking, including service discovery, ingress, and inter-service communication.
Collaborate with product and platform teams to define long-term technical strategies and implementation plans.
Perform code reviews, lead technical design discussions, and contribute to engineering-wide initiatives.

Requirements
Required Qualifications
7+ years of software engineering experience with a focus on backend development and system architecture.
Deep expertise in Java and Spring Boot, with strong working knowledge of Node.js, Python, and React.js.
Proficiency in MySQL and experience designing complex relational databases.
Hands-on experience with Terraform and managing infrastructure across AWS or GCP.
Strong understanding of containerization, Kubernetes, and CI/CD pipelines.
Solid grasp of container and Kubernetes networking principles and troubleshooting techniques.
Experience with GitOps tools such as Argo CD and other Kubernetes ecosystem components.
Deep knowledge of observability practices, including metrics, logging, and distributed tracing.
Experience designing and implementing event-driven architectures using modern tooling (e.g., Kafka, Pub/Sub).
Demonstrated experience in owning and implementing security and compliance measures, with practical exposure to standards like ISO 27001, CISA, and CICRA.
Excellent communication skills and a proven ability to lead cross-functional technical efforts.

Preferred (Optional) Qualifications
Contributions to open-source projects or technical blogs.
Experience leading or supporting compliance audits such as ISO 27001, SOC 2, or similar.
Exposure to service mesh technologies (e.g., Istio, Linkerd).
Experience with policy enforcement in Kubernetes (e.g., OPA/Gatekeeper, Kyverno).

Benefits
Why Join Us?
Lead impactful engineering initiatives and mentor talented developers.
Work with a modern, cloud-native stack across AWS, GCP, Kubernetes, and Terraform.
Contribute to architectural evolution and long-term technical strategy.
Competitive compensation, benefits, and flexible work options.
Inclusive and collaborative engineering culture.
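Reliable event-driven systems of the kind the role above describes must tolerate redelivery, since brokers like Kafka and Pub/Sub typically offer at-least-once delivery. The standard answer is an idempotent consumer that deduplicates by event id; a minimal sketch in plain Python (the events and handler are invented, and a production version would persist the seen-id set):

```python
class IdempotentConsumer:
    """Process each event exactly once by event id, so broker
    redelivery (at-least-once semantics) is harmless."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()  # persisted in a real system

    def consume(self, event):
        if event["id"] in self.seen:
            return False          # duplicate: already applied, skip
        self.handler(event)
        self.seen.add(event["id"])
        return True

# Invented loan-repayment events; the handler just tallies amounts.
ledger = []
consumer = IdempotentConsumer(lambda e: ledger.append(e["amount"]))
for event in [{"id": "e1", "amount": 100},
              {"id": "e1", "amount": 100},   # redelivered duplicate
              {"id": "e2", "amount": 50}]:
    consumer.consume(event)
```

The redelivered `e1` changes nothing, so downstream state (here, the ledger) stays correct no matter how many times the broker retries.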
Posted 2 weeks ago
12.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Company Celigo is one of the fastest growing, Silicon Valley profitable & funded startup companies pioneering the future of cloud-based application integration with its Integrator.io iPaaS platform and pre-built Integration Apps. Over 3500 companies rely on Celigo to synchronize data, automate processes, and streamline operations by integrating their cloud applications. Our Integrator.io iPaaS platform offers a simple and powerful platform through a guided user interface, integration templates, and other tools that empower both business users and IT to easily integrate any of their cloud applications. Our core mission at Celigo is simple: to enable independent best-of-breed applications to work together as one. We believe that every independent department and every business end-user should always have choices when it comes to picking software, and that integration challenges should never stand in the way. We are full of fresh ideas with like-minded people offering opportunities to highly-talented individuals committed to working with the highest quality products in the area of business cloud computing (SaaS). Location - Hyderabad, India. OPPORTUNITY Celigo is looking for a rockstar quality engineering architect who will be responsible for the Quality of Celigo Product suite, leading new and existing Quality Engineering initiatives. Will be instrumental in championing quality engineering process improvement, software test strategies, driving test methodologies and automation across the products. Validates quality processes by establishing product specifications and quality attributes; measuring production; documenting evidence. Develop quality assurance plans by conducting hazard analyses; identifying critical control points and preventive measures; establishing critical limits, monitoring procedures, corrective actions, and verification procedures. 
Should have the drive to improve the job knowledge by constantly studying trends in and developments in quality management. What You’ll Do Design and architect modular, reusable, scalable functional and non-functional test automation tools/frameworks with latest tools and technologies Review, define and implement test strategies to make sure we don’t compromise on the quality of the product. Wherever possible influence teams to adopt best testing strategies. Collaborate effectively with engineers and architects to solve complex problems spanning their respective areas to deliver end-to-end quality in our technology and customer experience Actively participate and contribute in functional, system, performance, and regression testing activities Work closely with the development team to analyze, debug and resolve any issues Work closely with the test team to identify new automation opportunities to improve product quality. Collaborate with DevOps team to integrate quality into in CI/CD pipeline with shift-left approach Integrate quality engineering processes within CI/CD pipelines using Jenkins, GitHub Actions, streamlining testing within DevOps workflows. 
Regularly meet with product managers and services & support leads to identify bottlenecks or gaps in the process and work on closing them
Work with the Customer Success team on customer escalations and on the overall process, providing the right guidance to both internal and external stakeholders
Design and develop test plans and test cases based on functional and design specifications
Influence development managers to ensure appropriate levels of quality on owned technologies
Ensure the team follows auditing processes and meets compliance standards
Hire, train, and mentor new joiners
Communicate excellently (written and verbal), with specific experience and demonstrable success across the full software development lifecycle using Agile development processes
Estimate and perform risk analysis for large features

What You'll Need To Succeed

Bachelor's in Engineering or Technology in an industry-related field
12-15 years of QA experience in a large-scale product development organization, including experience working on large-scale SaaS products
Strong experience designing and implementing automation frameworks from scratch for REST API and UI testing across the different layers of the product, using different tools
Strong experience understanding system architecture and building test plans that exercise end-to-end systems such as microservices, with the intention of breaking the system at full scale
Experience working closely with other architects to help design complex features and enable the scrum teams to build solid test designs
Experience driving performance metric benchmarks
Strong experience in software test automation, test planning, test design, and functional and performance testing
A good attitude and a strong aptitude and passion for software quality, with a focus on continuous improvement
Strong hands-on experience with Selenium WebDriver with Java, or Playwright with TypeScript, along with REST API testing tools such as Karate or similar
Unit testing frameworks for Node (e.g., Jest/Mocha) and Java (e.g., JUnit/TestNG) applications
Cucumber BDD framework
JMeter or another performance testing tool
JIRA, Confluence, and Zephyr
Python and other tools
Building AI solutions in a QA context: leverage AI-based tools to optimize test case generation, test data creation, defect prediction, etc.; drive innovation by introducing AI/ML-powered frameworks for smarter and faster test execution; and identify and implement tools that use natural language processing (NLP) for test scripting and result analysis
Solid understanding of cloud-native technologies; well versed in the AWS cloud platform, service meshes (Istio/Linkerd), Kubernetes, Docker/containers, and cloud log services (Splunk)
Experience testing microservices-architecture-based products for functionality, sizing, resiliency, rolling deployment, and upgrade
Expertise with continuous integration tools like Jenkins, Travis CI, or similar
Knowledge of Chaos Monkey/Gremlin for resiliency testing
Knowledge of or experience testing Kafka- and MongoDB-based applications
Experience working in an Agile development environment
Self-motivated; able to work proficiently both independently and in a team environment

The Best Candidate Is

Passionate about being part of a world-class software organization
Experienced in architecting quality assurance and testing strategies for large-scale distributed platforms
Someone who enjoys a fast-paced environment, working with a highly talented team and shifting priorities
Excellent at problem solving and analysis
Able to build strong relationships with stakeholders and key partners for the program
Strong in business and technical vision; can stay abstract or detail-oriented as the situation demands
Demonstrably able to think big, bring new ideas, and build teams and infrastructure for the future
A quick learner who knows when to listen and when to take charge
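Risk analysis for large features, mentioned in the responsibilities above, is often operationalized as risk-based test prioritization. A minimal sketch in Python follows; the scoring model (likelihood x impact, weighted by recent failures) and the sample test names are illustrative assumptions, not an actual Celigo process:

```python
# Illustrative risk-based test prioritization: order test cases so the
# riskiest features are exercised first. The scoring model is hypothetical.
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    failure_likelihood: float  # 0.0-1.0, e.g. from historical flakiness
    business_impact: int       # 1 (cosmetic) to 5 (revenue-critical)
    recent_failures: int = 0   # failures observed in the last N runs

    @property
    def risk_score(self) -> float:
        # Weight recent failures so fresh regressions bubble to the top.
        return self.failure_likelihood * self.business_impact * (1 + 0.5 * self.recent_failures)


def prioritize(cases: list) -> list:
    """Return the suite ordered from highest to lowest risk."""
    return sorted(cases, key=lambda c: c.risk_score, reverse=True)


if __name__ == "__main__":
    suite = [
        TestCase("login_flow", 0.2, 5, recent_failures=1),
        TestCase("tooltip_copy", 0.6, 1),
        TestCase("payment_webhook", 0.4, 5),
    ]
    for case in prioritize(suite):
        print(f"{case.name}: {case.risk_score:.2f}")
```

The same scores can feed a CI gate that runs only the top-risk subset on every commit and the full suite nightly.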
Posted 2 weeks ago
8.0 - 11.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us:

Airtel Payments Bank, India's first payments bank, is a completely digital and paperless bank. The bank aims to take basic banking services to the doorstep of every Indian by leveraging Airtel's vast retail network in a quick and efficient manner. At Airtel Payments Bank, we're transforming the way banking operates in the country. Our core business is banking, and we've set out to serve every unbanked and underserved Indian. Our products and technology aim to take basic banking services to the doorstep of every Indian.

We are a fun-loving, energetic, and fast-growing company that breathes innovation. We encourage our people to push boundaries and evolve from the skilled professionals of today into the risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for thinkers and doers; people with passion, curiosity, and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

Job Summary:

We are looking for a Lead TechOps Engineer to join our team in managing and scaling containerized applications using Docker, Kubernetes, and OpenShift. You will be responsible for maintaining production environments, implementing automation, and ensuring platform stability and performance.

Key Skills for a TechOps Engineer (Docker, Kubernetes, OpenShift)

1. Containerization & Orchestration
Expertise in Docker: building, managing, and debugging containers.
Proficiency in Kubernetes (K8s): deployments, services, ingress, Helm charts, namespaces.
Experience with Red Hat OpenShift: operators, templates, routes, integrated CI/CD.

2. CI/CD and DevOps Toolchain
Jenkins, GitLab CI/CD, and other CI/CD pipelines.
Familiarity with GitOps practices.

3. Monitoring & Logging
Experience with Prometheus, Grafana, the ELK stack, or similar tools.
Understanding of health checks, metrics, and alerts.

4. Infrastructure as Code
Hands-on with Terraform, Ansible, or Helm.
Version control using Git.

5. Networking & Security
K8s/OpenShift networking concepts (services, ingress, load balancers).
Role-Based Access Control (RBAC), Network Policies, and secrets management.

6. Scripting & Automation
Proficiency in Bash, Python, or Go for automation tasks.

7. Cloud Platforms (Optional but Valuable)
Experience with the managed Kubernetes services on AWS, Azure, or GCP (EKS, AKS, GKE).

Responsibilities:

Design, implement, and maintain Kubernetes/OpenShift clusters.
Build and deploy containerized applications using Docker.
Manage CI/CD pipelines for smooth application delivery.
Monitor system performance and respond to alerts and issues.
Develop infrastructure as code and automate repetitive tasks.
Work with developers and QA to support and optimize the application lifecycle.

Requirements:

8-11 years of experience in TechOps/DevOps/SRE roles.
Strong knowledge of Docker, Kubernetes, and OpenShift.
Experience with CI/CD tools like Jenkins.
Proficiency in scripting (Bash, Python) and automation tools (Ansible, Terraform).
Familiarity with logging and monitoring tools (Prometheus, ELK, etc.).
Knowledge of networking, security, and best practices in container environments.
Good communication and collaboration skills.

Nice to Have:

Certifications (CKA, Red Hat OpenShift, etc.)
Experience with public cloud providers (AWS, GCP, Azure).
GitOps and service mesh (Istio, Linkerd) experience.

Why Join Us?

Airtel Payments Bank is transforming from a digital-first bank into one of the largest fintech companies. There could not be a better time to join us and be part of this incredible journey. We at Airtel Payments Bank don't believe in an all-work-and-no-play philosophy.
For us, innovation is a way of life, and we are a happy bunch of people who have together built an ecosystem that drives financial inclusion in the country by serving India's 300 million financially unbanked, underbanked, and underserved people. Some defining characteristics of life at Airtel Payments Bank are Responsibility, Agility, Collaboration, and Entrepreneurial development; these also reflect in our core values, which we fondly call RACE.
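The monitoring responsibilities listed above ("understanding of health checks, metrics, and alerts") usually reduce to comparing scraped metric samples against thresholds, the way a Prometheus alerting rule would. A minimal sketch in Python; the metric names and threshold values are invented for illustration, not a real rule set:

```python
# Minimal alert-evaluation sketch: compare scraped metric samples against
# static thresholds. Metric names and thresholds are illustrative only.

THRESHOLDS = {
    "container_cpu_usage_ratio": 0.90,     # alert above 90% CPU
    "container_memory_usage_ratio": 0.85,  # alert above 85% memory
    "http_error_rate": 0.05,               # alert above 5% 5xx responses
}


def evaluate(samples: dict) -> list:
    """Return (metric, value, threshold) tuples for every breached threshold."""
    alerts = []
    for metric, threshold in THRESHOLDS.items():
        value = samples.get(metric)
        if value is not None and value > threshold:
            alerts.append((metric, value, threshold))
    return alerts


if __name__ == "__main__":
    scraped = {
        "container_cpu_usage_ratio": 0.97,
        "container_memory_usage_ratio": 0.40,
        "http_error_rate": 0.01,
    }
    for metric, value, threshold in evaluate(scraped):
        print(f"ALERT {metric}: {value:.2f} > {threshold:.2f}")
```

In production the equivalent logic lives in the monitoring system itself (e.g. Prometheus alerting rules routed through Alertmanager) rather than in ad-hoc scripts; the sketch only shows the evaluation step.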
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us:

Airtel Payments Bank, India's first payments bank, is a completely digital and paperless bank. The bank aims to take basic banking services to the doorstep of every Indian by leveraging Airtel's vast retail network in a quick and efficient manner. At Airtel Payments Bank, we're transforming the way banking operates in the country. Our core business is banking, and we've set out to serve every unbanked and underserved Indian. Our products and technology aim to take basic banking services to the doorstep of every Indian.

We are a fun-loving, energetic, and fast-growing company that breathes innovation. We encourage our people to push boundaries and evolve from the skilled professionals of today into the risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for thinkers and doers; people with passion, curiosity, and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

Job Summary:

We are looking for a skilled TechOps Engineer to join our team in managing and scaling containerized applications using Docker, Kubernetes, and OpenShift. You will be responsible for maintaining production environments, implementing automation, and ensuring platform stability and performance.

Key Skills for a TechOps Engineer (Docker, Kubernetes, OpenShift)

1. Containerization & Orchestration
Expertise in Docker: building, managing, and debugging containers.
Proficiency in Kubernetes (K8s): deployments, services, ingress, Helm charts, namespaces.
Experience with Red Hat OpenShift: operators, templates, routes, integrated CI/CD.

2. CI/CD and DevOps Toolchain
Jenkins, GitLab CI/CD, and other CI/CD pipelines.
Familiarity with GitOps practices.

3. Monitoring & Logging
Experience with Prometheus, Grafana, the ELK stack, or similar tools.
Understanding of health checks, metrics, and alerts.

4. Infrastructure as Code
Hands-on with Terraform, Ansible, or Helm.
Version control using Git.

5. Networking & Security
K8s/OpenShift networking concepts (services, ingress, load balancers).
Role-Based Access Control (RBAC), Network Policies, and secrets management.

6. Scripting & Automation
Proficiency in Bash, Python, or Go for automation tasks.

7. Cloud Platforms (Optional but Valuable)
Experience with the managed Kubernetes services on AWS, Azure, or GCP (EKS, AKS, GKE).

Responsibilities:

Design, implement, and maintain Kubernetes/OpenShift clusters.
Build and deploy containerized applications using Docker.
Manage CI/CD pipelines for smooth application delivery.
Monitor system performance and respond to alerts and issues.
Develop infrastructure as code and automate repetitive tasks.
Work with developers and QA to support and optimize the application lifecycle.

Requirements:

2-5 years of experience in TechOps/DevOps/SRE roles.
Strong knowledge of Docker, Kubernetes, and OpenShift.
Experience with CI/CD tools like Jenkins.
Proficiency in scripting (Bash, Python) and automation tools (Ansible, Terraform).
Familiarity with logging and monitoring tools (Prometheus, ELK, etc.).
Knowledge of networking, security, and best practices in container environments.
Good communication and collaboration skills.

Nice to Have:

Certifications (CKA, Red Hat OpenShift, etc.)
Experience with public cloud providers (AWS, GCP, Azure).
GitOps and service mesh (Istio, Linkerd) experience.

Why Join Us?

Airtel Payments Bank is transforming from a digital-first bank into one of the largest fintech companies. There could not be a better time to join us and be part of this incredible journey. We at Airtel Payments Bank don't believe in an all-work-and-no-play philosophy.
For us, innovation is a way of life, and we are a happy bunch of people who have together built an ecosystem that drives financial inclusion in the country by serving India's 300 million financially unbanked, underbanked, and underserved people. Some defining characteristics of life at Airtel Payments Bank are Responsibility, Agility, Collaboration, and Entrepreneurial development; these also reflect in our core values, which we fondly call RACE.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% of Platform Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?

We are looking for Platform Engineers focused on building scalable, high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?

Pay above market standards.
A contract-based role with project timelines of 2-12 months, or freelancing.
Membership of an elite community of professionals who can solve complex AI challenges.
Work location could be:
Remote (highly likely)
Onsite at a client location
Deccan AI's office: Hyderabad or Bangalore

Responsibilities:

Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation.
Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd).
Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments.
Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; and optimize cloud costs and security.

Required Skills:

Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python).
Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM).
Familiarity with networking (VPCs, load balancers, DNS) and performance optimization.

Nice to Have:

Experience with chaos engineering (Gremlin, LitmusChaos) and Canary or Blue-Green deployments.
Knowledge of multi-cloud environments, FinOps, and cost-optimization strategies.

What are the next steps?

1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments and interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
Posted 2 weeks ago
7.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Everything we do is powered by our customers! Featured on Deloitte's Technology Fast 500 list and G2's leaderboard, Maropost offers a connected experience that our customers anticipate, transforming marketing, merchandising, and operations with commerce tools designed to scale with fast-growing businesses. With a relentless focus on our customers' success, we are motivated by curiosity, creativity, and collaboration to power 5,000+ global brands. Driven by a customer-first mentality, we empower businesses to achieve their goals and grow alongside us. If you're ready to make a significant impact and be part of our transformative journey, Maropost is the place for you. Become a part of Maropost today and help shape the future of commerce!

Roles & Responsibilities

Build and manage a REST API stack for Maropost web apps.
Given the architecture strategy for our big data, analytics, and cloud-native product vision, work on the concrete architecture design and, when necessary, prototype it.
Understand systems architecture and design scalable, performance-driven solutions.
Drive innovation within the engineering team, identifying opportunities to improve processes, tools, and technologies.
Drive architecture and design governance for systems and products in scope, as well as code and design reviews.
Provide technical leadership to the development team and ensure they follow industry-standard best practices.
Evaluate and improve the tools and frameworks used in software development.
Design, develop, and architect complex web applications.
Integrate with ML and NLP engines.
DevOps, DBMS, and scaling on Azure or GCP.

Skills & Qualifications

B.E./B.Tech.
Hands-on experience with our tech stack: RoR and PostgreSQL.
7+ years of experience building, designing, and architecting backend applications, web apps, and analytics, preferably in the commerce cloud or marketing automation domains.
Experience deploying applications at scale in production systems.
Experience with platform security capabilities (TLS, SSL, etc.).
An excellent track record designing highly scalable big data/event-streaming/cloud architectures and putting them into production.
Advanced HLD, LLD, and design patterns knowledge is a must.
Experience with high-performance, web-scale, real-time response systems.
Knowledge of tenant data segregation techniques, such as schema-based multi-tenancy, database-per-tenant, and hybrid approaches, for ensuring data isolation and privacy.
Knowledge of networking protocols, security standards, and best practices.
Experience building and managing API endpoints for multimodal clients.
In-depth knowledge and hands-on experience architecting and optimizing large-scale database clusters, specifically MySQL and PostgreSQL, for performance, scalability, and reliability.
Proficiency in microservices architecture and containerization technologies (e.g., Docker, Kubernetes).
Experience with DevOps practices and tools (e.g., CI/CD pipelines, infrastructure as code).
Expertise in database design, including SQL and NoSQL databases, with a specific focus on MySQL and PostgreSQL.
Experience implementing advanced indexing strategies, query optimization techniques, and database tuning methodologies for MySQL and PostgreSQL.
Enthusiasm to learn and contribute to a challenging and fun-filled startup.
A knack for problem solving and efficient coding practices.
Very strong interpersonal communication and collaboration skills.

Hands-on Experience (Advantageous)

Proficiency in infrastructure-as-code tools such as Terraform or AWS CloudFormation.
Experience with containerization technologies such as Docker and container orchestration platforms like Kubernetes.
Proficiency in implementing advanced replication topologies, such as master-slave replication, multi-master replication, and synchronous replication, for MySQL and PostgreSQL databases.
Knowledge of database partitioning techniques, such as range, hash, and list partitioning, for optimizing storage and query performance in large-scale database clusters.
Familiarity with high-availability architectures, such as active-passive and active-active configurations, for ensuring continuous availability and reliability of MySQL and PostgreSQL databases.
Familiarity with microservices architecture and related tools such as Istio, Envoy, or Linkerd.
Knowledge of CI/CD pipelines and related tools such as Jenkins, GitLab CI/CD, or CircleCI.
Experience with monitoring and observability tools such as Prometheus, Grafana, the ELK stack (Elasticsearch, Logstash, Kibana), or Splunk.
Familiarity with configuration management tools like Ansible, Puppet, or Chef.
Proficiency in version control systems such as Git.
Knowledge of scripting languages such as Bash, PowerShell, Ruby, or Python for automation tasks.
Understanding of cloud-native security practices and tools such as Google Identity and Access Management (IAM), AWS Key Management Service (KMS), or Azure Active Directory.
Familiarity with network security concepts such as VPNs, firewalls, and intrusion detection/prevention systems (IDS/IPS).

What's in it for you?

You will have the autonomy to take ownership of your role and contribute to the growth and success of our brand. If you are driven to make an immediate impact, achieve results, thrive in a high-performing team, and want to grow in a dynamic and rewarding environment, you belong at Maropost!
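The partitioning and multi-tenancy techniques listed above can be illustrated with a small routing sketch. This is a simplified illustration, not Maropost's actual scheme: a real deployment would use the database's native partitioning (e.g. PostgreSQL's PARTITION BY HASH), and the shard count and DSN naming convention here are invented:

```python
# Simplified tenant-to-shard routing via hash partitioning: a stable hash of
# the tenant ID picks one of a fixed number of database shards, so every
# request for the same tenant lands on the same shard.
import hashlib

NUM_SHARDS = 4  # illustrative; real clusters size this from capacity planning


def shard_for_tenant(tenant_id: str) -> int:
    """Map a tenant to a shard deterministically via a stable hash.

    hashlib (not Python's built-in hash()) is used because built-in string
    hashing is randomized per process and would not be stable across restarts.
    """
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS


def dsn_for_tenant(tenant_id: str) -> str:
    # Hypothetical DSN naming convention, for illustration only.
    return f"postgresql://app@db-shard-{shard_for_tenant(tenant_id)}/tenants"


if __name__ == "__main__":
    for tenant in ("acme", "globex", "initech"):
        print(tenant, "->", dsn_for_tenant(tenant))
```

Range partitioning would replace the hash with a comparison against boundary values (useful for time-series data), and list partitioning with an explicit tenant-to-partition map; the routing structure stays the same.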
Posted 2 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description

We are seeking a highly skilled Senior Reliability Engineer with strong backend software engineering skills to join our team. As a Senior Reliability Engineer, you will be responsible for designing, implementing, and maintaining our cloud infrastructure, ensuring the smooth operation of our applications and services. In addition, you will contribute to the development of our backend software systems, working closely with our engineering team to design, develop, and deploy scalable and reliable software solutions. This role reports to the Senior Engineering Manager, Finance Engineering, in Pune, India.

What you'll do:

Collaborate with your peers to envision, design, and develop solutions in your respective area, with a bias toward reusability, toil reduction, and resiliency.
Surface opportunities across the broader organization for solving systemic issues.
Use a collaborative approach to make technical decisions that align with Procore's architectural vision.
Partner with internal customers, peers, and leadership in planning, prioritization, and roadmap development.
Develop teammates by conducting code reviews and providing mentorship, pairing, and training opportunities.
Serve as a subject matter expert on tools, processes, and procedures, and help guide others to create and maintain a healthy codebase.
Facilitate an "open source" mindset and culture, both across teams internally and outside of Procore, through active participation in and contributions to the greater community.
Design, develop, and deploy scalable and reliable backend software systems using languages such as Java, Python, or Go.
Work with engineering teams to design and implement microservices architectures.
Develop and maintain APIs using RESTful APIs, GraphQL, or gRPC.
Ensure high-quality code through code reviews, testing, and continuous integration.
Serve as a subject matter expert in a domain, including the processes and software design that help guide others to create and maintain a healthy codebase.
What we're looking for:

Container orchestration: Kubernetes (K8s), preferably EKS.
ArgoCD.
Terraform or similar IaC.
Observability (o11y) tooling, OpenTelemetry ideally.
Public cloud (AWS, GCP, Azure).
Cloud automation tooling (e.g., CloudFormation, Terraform, Ansible).
Kafka and Kafka connectors.
Linux systems.
Ensuring compliance with security and regulatory requirements, such as HIPAA, SOX, and FedRAMP.

Experience with the following is preferred:

Continuous integration tooling (e.g., CircleCI, Jenkins, Travis).
Continuous deployment tooling (e.g., ArgoCD, Spinnaker).
Service mesh/discovery tooling (e.g., Consul, Envoy, Istio, Linkerd).
Networking (WAF, Cloudflare).
Event-driven architecture (Event Sourcing, CQRS).
Flink or other stream-processing technologies.
RDBMS and NoSQL databases.
Experience working with and developing APIs via REST, gRPC, or GraphQL.
Professional experience in Java, GoLang, or Python preferred.

Additional Information

Perks & Benefits

At Procore, we invest in our employees and provide a full range of benefits and perks to help you grow and thrive. From generous paid time off and healthcare coverage to career enrichment and development programs, learn more about what we offer and how we empower you to be your best.

About Us

Procore Technologies is building the software that builds the world. We provide cloud-based construction management software that helps clients more efficiently build skyscrapers, hospitals, retail centers, airports, housing complexes, and more. At Procore, we have worked hard to create and maintain a culture where you can own your work and are encouraged and given resources to try new ideas. Check us out on Glassdoor to see what others are saying about working at Procore. We are an equal-opportunity employer and welcome builders of all backgrounds. We thrive in a diverse, dynamic, and inclusive environment.
We do not tolerate discrimination against candidates or employees on the basis of gender, sex, national origin, civil status, family status, sexual orientation, religion, age, disability, race, traveler community, status as a protected veteran or any other classification protected by law. If you'd like to stay in touch and be the first to hear about new roles at Procore, join our Talent Community. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact our benefits team here to discuss reasonable accommodations.
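Event-driven patterns such as the Event Sourcing named in the preferred-experience list above rebuild current state by replaying an append-only log of events, rather than storing state directly. A minimal, framework-free sketch; the bank-account domain and event names are invented for illustration:

```python
# Minimal event-sourcing sketch: state is never stored directly; it is
# derived by folding over an append-only event log. Domain is illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Deposited:
    amount: int


@dataclass(frozen=True)
class Withdrew:
    amount: int


def apply(balance: int, event) -> int:
    """Apply a single event to the current state."""
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrew):
        return balance - event.amount
    raise TypeError(f"unknown event: {event!r}")


def replay(events) -> int:
    """Rebuild state from the full event history."""
    balance = 0
    for event in events:
        balance = apply(balance, event)
    return balance


if __name__ == "__main__":
    log = [Deposited(100), Withdrew(30), Deposited(5)]
    print(replay(log))  # 75
```

CQRS pairs this write-side log with separate, denormalized read models; in a Kafka-based stack like the one described above, the event log is typically a compacted topic and the read models are materialized by consumers.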
Posted 2 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally.

At American Express, you'll be recognized for your contributions, leadership, and impact: every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and our powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together.

About Enterprise Architecture:

Enterprise Architecture is an organization within the Chief Technology Office at American Express and a key enabler of the company's technology strategy. The four pillars of Enterprise Architecture are:

1. Architecture as Code: this pillar owns and operates foundational technologies that are leveraged by engineering teams across the enterprise.
2. Architecture as Design: this pillar includes the solution and technical design for transformation programs and business-critical projects that need architectural guidance and support.
3. Governance: this pillar is responsible for defining technical standards and developing innovative tools that automate controls to ensure compliance.
4. Colleague Enablement: this pillar is focused on colleague development, recognition, training, and enterprise outreach.

Responsibilities:

· Design, implement, and maintain API gateway solutions using tools like Apigee, Gloo, Envoy, or AWS API Gateway.
· Configure and manage API traffic policies, routing, throttling, authentication, and authorization.
· Collaborate with developers and architects to ensure effective API lifecycle management (design, testing, publishing, monitoring, and retirement).
· Implement security protocols such as OAuth2, JWT, mTLS, and rate limiting.
· Develop and enforce API governance policies, versioning standards, and best practices.
· Monitor API performance, error rates, and latency, and provide insights for improvements.
· Automate deployment and configuration using CI/CD pipeline tools.
· Create and maintain documentation for API gateway configurations and processes.
· Troubleshoot API gateway issues and provide support for developers and partners.

Qualifications

Preferably a BS or MS degree in computer science, computer engineering, or another technical discipline.
3+ years of experience with API gateway technologies like Apigee, Gloo, Envoy, or similar.
Strong understanding of RESTful API concepts and OpenAPI/Swagger specs.
Proficiency in API security mechanisms (OAuth2, API keys, JWT, mTLS).
Experience with Kubernetes and service mesh technologies (Istio, Linkerd) is a strong plus.
Familiarity with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI).
Knowledge of monitoring tools (e.g., Prometheus, Grafana) for tracking API metrics.
Strong scripting or programming skills (e.g., Python, Bash, Go, or Node.js).
Excellent problem-solving and communication skills.
Experience in process management, case management, or work management is a plus.
Experience with automation, cognitive OCR, and AI/ML that drive cost savings is a plus.
Ability to effectively interpret technical and business objectives and challenges and articulate solutions.
Willingness to learn new technologies and exploit them to their optimal potential.
Extensive experience designing and implementing large-scale platforms with high resiliency, availability, and reliability.
Strong experience with high-throughput, high-performance applications.
Experience with microservices architectures and service mesh technologies is preferred.

Every member of our team must be able to demonstrate the following technical, functional, leadership, and business core competencies:

· Agile practices
· Porting/software configuration
· Programming languages and frameworks; hands-on experience with some or all of the following is preferred:
o Java, Python, Go, React, Envoy, gRPC, ProtoBuf, JSON, CouchBase, Cassandra, Redis, Consul, Jenkins, Docker, Kubernetes, OpenShift, Drools, Elastic Stack, Kafka, Spark
· Analytical thinking

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:

Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. An offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
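Rate limiting, one of the gateway traffic policies named in the responsibilities above, is commonly implemented as a token bucket: each client's bucket refills at a fixed rate, and a request is admitted only if a token is available. A generic Python sketch (not Apigee or Envoy configuration; capacity and refill rate are illustrative values):

```python
# Token-bucket rate limiter sketch. The clock is injectable so the refill
# logic can be tested deterministically without sleeping.
import time


class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float, now=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.now = now          # injectable clock, defaults to monotonic time
        self.last = now()

    def allow(self) -> bool:
        """Admit the request if a token is available, else reject it."""
        current = self.now()
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (current - self.last) * self.refill_per_sec)
        self.last = current
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


if __name__ == "__main__":
    bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
    # A burst of 5 back-to-back requests: the first 3 drain the bucket,
    # the rest are rejected until tokens refill.
    print([bucket.allow() for _ in range(5)])
```

In a real gateway the bucket state is usually keyed per API key or client ID and kept in a shared store so all gateway replicas enforce the same limit.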
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We Are:

At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:

You are a forward-thinking Cloud DevOps Engineer with a passion for modernizing infrastructure and enhancing the capabilities of CI/CD pipelines, containerization strategies, and hybrid cloud deployments. You thrive in environments where you can leverage your expertise in cloud infrastructure, distributed processing workloads, and AI-driven automation. Your collaborative spirit drives you to work closely with development, data, and GenAI teams to build resilient, scalable, and intelligent DevOps solutions. You are adept at integrating cutting-edge technologies and best practices to enhance both traditional and AI-driven workloads. Your proactive approach and problem-solving skills make you an invaluable asset to any team.

What You'll Be Doing:

Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments.
Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting.
Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management.
Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags).
Containerizing applications using Docker and Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters).
Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul).
Implementing AI-enhanced observability for containerized services using AIOps-based monitoring.
Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK.
Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray.
- Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infra recommendations.

The Impact You Will Have:
- Enhancing the efficiency and reliability of CI/CD pipelines and deployments.
- Driving the adoption of AI-driven automation to reduce downtime and improve system resilience.
- Enabling seamless application portability across on-prem and cloud environments.
- Implementing advanced observability solutions to proactively detect and resolve issues.
- Optimizing resource allocation and job scheduling for distributed processing workloads.
- Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads.

What You'll Need:
- 5+ years of experience in DevOps, Cloud Engineering, or SRE.
- Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.).
- Strong experience with Kubernetes, container orchestration, and service meshes.
- Proficiency in Terraform, CloudFormation, Pulumi, or other Infrastructure as Code (IaC) tools.
- Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem).
- Strong scripting skills in Python, Bash, or Go.
- Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar).

Who You Are:
You are a collaborative and innovative professional with a strong technical background and a passion for continuous learning. You excel in problem-solving and thrive in dynamic environments where you can apply your expertise to drive significant improvements. Your excellent communication skills enable you to work effectively with diverse teams, and your commitment to excellence ensures that you consistently deliver high-quality results.

The Team You'll Be A Part Of:
You will join a dynamic team focused on optimizing cloud infrastructure and enhancing workloads to contribute to overall operational efficiency.
This team is dedicated to driving the modernization and optimization of infrastructure CI/CD pipelines and hybrid cloud deployments, ensuring that Synopsys remains at the forefront of technological innovation.

Rewards and Benefits:
We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.
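The progressive delivery strategies this role calls for (canary, blue-green) reduce to a simple control loop: route a growing share of traffic to the new version, and roll back if the observed error rate crosses a threshold. A minimal illustrative sketch, not tied to any specific product; the step sizes, threshold, and `error_rate` callback are hypothetical stand-ins for real traffic-shifting and metrics APIs:

```python
from typing import Callable, Sequence

def canary_rollout(
    error_rate: Callable[[int], float],
    steps: Sequence[int] = (10, 25, 50, 100),  # hypothetical traffic percentages
    max_error_rate: float = 0.05,
) -> dict:
    """Ramp traffic to a canary in stages; abort on elevated errors.

    `error_rate(pct)` reports the observed error fraction while `pct`%
    of traffic hits the new version. Returns a summary dict instead of
    calling real deployment APIs, to keep the sketch self-contained.
    """
    for pct in steps:
        observed = error_rate(pct)
        if observed > max_error_rate:
            # Canary is unhealthy: revert all traffic to the stable version.
            return {"status": "rolled_back", "failed_at_pct": pct,
                    "error_rate": observed}
    return {"status": "promoted", "traffic_pct": steps[-1]}

# A healthy release is promoted to full traffic:
print(canary_rollout(lambda pct: 0.01))
# A release that degrades once it sees 50% of traffic is rolled back:
print(canary_rollout(lambda pct: 0.20 if pct >= 50 else 0.01))
```

Tools such as Argo Rollouts or Flagger implement this loop against real service-mesh traffic splits and Prometheus metrics; the sketch only shows the decision logic.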
Posted 2 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Roles & Responsibilities

GitHub Actions & CI/CD Workflows (Primary Focus)
- Design, develop, and maintain scalable CI/CD pipelines using GitHub Actions.
- Create reusable and modular workflow templates using composite actions and reusable workflows.
- Manage and optimize GitHub self-hosted runners, including autoscaling and hardening.
- Monitor and enhance CI/CD performance with caching, parallelism, and proper dependency management.
- Review and analyze existing Azure DevOps pipeline templates.
- Migrate Azure DevOps YAML pipelines to GitHub Actions, adapting tasks to equivalent GitHub workflows.

Azure Kubernetes Service (AKS)
- Deploy and manage containerized workloads on AKS.
- Implement cluster- and pod-level autoscaling, ensuring performance and cost-efficiency.
- Ensure high availability, security, and networking configurations for AKS clusters.
- Automate infrastructure provisioning using Terraform or other IaC tools.

Azure DevOps
- Design and build scalable YAML-based Azure DevOps pipelines.
- Maintain and support Azure Pipelines for legacy or hybrid CI/CD environments.

ArgoCD & GitOps
- Implement and manage GitOps workflows using ArgoCD.
- Configure and manage ArgoCD applications to sync AKS deployments from Git repositories.
- Enforce secure, auditable, and automated deployment strategies via GitOps.

Collaboration & Best Practices
- Collaborate with developers and platform engineers to integrate DevOps best practices across teams.
- Document workflow standards, pipeline configurations, infrastructure setup, and runbooks.
- Promote observability, automation, and DevSecOps principles throughout the lifecycle.

Must-Have Skills
- 8+ years of overall IT experience, with at least 5 years in DevOps roles.
- 3+ years of hands-on experience with GitHub Actions (including reusable workflows, composite actions, and self-hosted runners).
- 2+ years of experience with AKS, including autoscaling, networking, and security.
- Strong proficiency in CI/CD pipeline design and automation.
- Experience with ArgoCD and GitOps workflows.
- Hands-on experience with Terraform, ARM templates, or Bicep for IaC.
- Working knowledge of Azure DevOps pipelines and YAML configurations.
- Proficient in Docker, Bash, and at least one scripting language (Python preferred).
- Experience managing secure and auditable deployments in enterprise environments.

Good-to-Have Skills
- Exposure to monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with service meshes like Istio or Linkerd.
- Experience with secrets management (e.g., HashiCorp Vault, Azure Key Vault).
- Understanding of RBAC, OIDC, and SSO integrations in Kubernetes environments.
- Knowledge of Helm and custom chart development.
- Certifications in Azure, Kubernetes, or DevOps practices.

Skills: GitHub Actions & CI/CD, Azure Kubernetes Service, ArgoCD & GitOps, DevOps
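The pod-level autoscaling this role requires on AKS is driven by the Kubernetes Horizontal Pod Autoscaler, whose documented scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the configured replica bounds. A small self-contained sketch of that calculation (the CPU figures and bounds are illustrative):

```python
import math

def hpa_desired_replicas(
    current_replicas: int,
    current_metric: float,   # e.g. observed average CPU utilization (%)
    target_metric: float,    # e.g. target average CPU utilization (%)
    min_replicas: int = 1,
    max_replicas: int = 10,
) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    then clamped to the [min_replicas, max_replicas] range.
    """
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target scale out to 6:
print(hpa_desired_replicas(4, 90, 60))  # → 6
# 4 pods averaging 20% CPU against a 60% target scale in to 2:
print(hpa_desired_replicas(4, 20, 60))  # → 2
```

The real HPA adds tolerance bands and stabilization windows around this formula, but the core ratio is exactly what cost-efficient AKS autoscaling tunes.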
Posted 3 weeks ago
7.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
About The Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About The Role
We are looking for a highly experienced, collaborative, and detail-oriented Senior Engineer to join our growing Edge team. You will be responsible for the design, automation, optimization, and operation of our Kubernetes-based platform supporting our Galleon mobile data centers and Commander cloud integration. This is a critical role where you will leverage deep technical expertise in cloud infrastructure and Kubernetes while valuing mentorship, collaboration, and open communication. You will work on building and managing resilient, secure, and scalable Kubernetes environments across diverse edge locations and cloud infrastructure, ensuring the reliability of our distributed computing platform.

Location
This role is office-based at our Trivandrum, Kerala office.

What You'll Do (Key Responsibilities)
- Architect, design, deploy, configure, and manage highly available Kubernetes clusters across edge (Galleon data centers) and cloud (AWS, Azure, GCP) environments.
This includes designing the cluster layout, resource allocation, and storage configurations.
- Administer, maintain, and monitor the health, performance, and capacity of Kubernetes clusters and underlying infrastructure.
- Implement and manage Kubernetes networking solutions (CNI plugins, Ingress controllers) and storage solutions (PV/PVC, Storage Classes, CSI drivers).
- Maintain and monitor containerized platform services running within the clusters, along with robust monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK stack).
- Drive Infrastructure-as-Code (IaC) initiatives using tools like Terraform, Ansible, Helm, and potentially Kubernetes Operators, promoting automation, repeatability, and reliability.
- Support and troubleshoot complex issues related to the Kubernetes platform, containerized services, networking, and infrastructure.
- Implement and enforce Kubernetes security best practices (RBAC, Network Policies, Secrets Management, Security Contexts, Image Scanning).
- Automate cluster operations, deployment pipelines (CI/CD integration), and infrastructure provisioning using IaC tools (e.g., Terraform, Ansible).
- Optimize Kubernetes clusters for performance, scalability, and resource utilization, particularly in edge environments.
- Develop and maintain comprehensive documentation for cluster architecture, configurations, operational procedures, and runbooks.
- Collaborate with software engineering, DevOps, security teams, and product managers to ensure seamless integration, deployment, and secure operation of applications on Kubernetes.
- Evaluate and integrate new technologies from the Kubernetes ecosystem.
- Contribute to the operational excellence of the platform, including participating in on-call rotations, incident management, and building self-healing capabilities.

Required Qualifications
- Bachelor's degree in computer science, engineering, information technology, a related technical field, or equivalent practical experience.
- 7+ years of professional experience in infrastructure engineering, systems administration, or software development, with a strong focus (4+ years preferred) on building and maintaining production Kubernetes environments.
- 3+ years of professional experience using and administering Linux operating systems.
- Deep understanding of Kubernetes architecture, core components, operational best practices, and lifecycle management.
- Strong experience with containerization technologies (Docker).
- Hands-on experience managing Kubernetes on at least one major cloud provider (AWS, Azure, GCP).
- Strong understanding and proven experience with Infrastructure as Code (IaC) solutions, particularly Terraform and/or Ansible.
- Proficiency in scripting languages (e.g., Python, Bash) for automation.
- Experience configuring and managing monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).
- Solid understanding of the Linux operating system, networking fundamentals (TCP/IP, DNS, load balancing, firewalls, VPNs), and container networking (CNI).
- Strong understanding of Kubernetes security concepts and implementation (RBAC, Network Policies, Secrets).
- Ability to work independently and collaborate effectively with others to debug and solve problems.

Preferred Experience And Skills
- Experience with Red Hat OpenShift Container Platform (version 4+ is a plus).
- Experience deploying and maintaining CI/CD solutions for DevSecOps, such as GitLab CI or Jenkins.
- Strong development experience using Docker, docker-compose, and/or Kubernetes.
- Experience developing Ansible playbooks for process automation.
- Kubernetes certifications (CKA, CKS).
- Experience with Kubernetes Operators and Custom Resource Definitions (CRDs).
- Experience with service mesh technologies like Istio or Linkerd.
- Experience managing Kubernetes in edge computing or resource-constrained environments.

Compensation & Benefits
For India-based candidates: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.

You're a Great Fit if You're
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment over personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
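Operating Kubernetes over the intermittent links typical of edge deployments leans heavily on the self-healing patterns this role mentions, the most basic being retries with exponential backoff. A minimal sketch of the schedule logic only; the delay values and cap are illustrative, and a real probe loop would sleep between attempts rather than just recording delays:

```python
from typing import Callable, List

def retry_with_backoff(
    probe: Callable[[], bool],
    max_attempts: int = 5,
    base_delay: float = 1.0,
    max_delay: float = 30.0,
) -> List[float]:
    """Probe an unreliable endpoint with exponentially growing waits.

    Returns the list of delays that would be slept before each retry
    (doubling each time, capped at max_delay); raises TimeoutError if
    the probe never succeeds within max_attempts.
    """
    delays: List[float] = []
    for attempt in range(max_attempts):
        if probe():
            return delays
        if attempt == max_attempts - 1:
            raise TimeoutError("endpoint still unreachable after retries")
        delay = min(base_delay * (2 ** attempt), max_delay)
        delays.append(delay)  # a real loop would call time.sleep(delay) here

# An endpoint that answers on the fourth attempt waits 1s, 2s, then 4s:
attempts = iter([False, False, False, True])
print(retry_with_backoff(lambda: next(attempts)))  # → [1.0, 2.0, 4.0]
```

Production systems usually add random jitter to these delays so that many edge nodes recovering at once do not retry in lockstep; that is omitted here to keep the sketch deterministic.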
Posted 3 weeks ago