
994 GitOps Jobs - Page 40

Set up a job alert

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Vadodara, Gujarat, India

On-site

We are looking for an experienced Senior DevOps Engineer with specialized expertise in Kubernetes, GitOps, and cloud services. This individual will play a crucial role in designing and managing advanced CI/CD pipelines, guaranteeing seamless integration and deployment of software artifacts across varied environments in Kubernetes clusters.

Key Responsibilities:
- Pipeline Construction & Management: Build and maintain efficient build pipelines; deploy artifacts to Kubernetes using advanced deployment strategies.
- Docker & Helm Expertise: Develop and manage Docker images and Helm charts; maintain Helm repositories and deploy charts to Kubernetes clusters.
- GitOps Proficiency: Employ GitOps tools such as Argo CD, Argo Events, and Argo Rollouts; coordinate with development and QA teams to manage GitOps repositories.
- Kubernetes & Cloud Services: Administer Kubernetes clusters, including CSI/CNI drivers and backup/restore solutions; monitor clusters with New Relic to ensure reliability and availability; work with AWS services such as EKS, IAM, VPC, RDS/Aurora, and Load Balancer configurations.
- Security & Compliance: Uphold security standards within Kubernetes clusters.
- IaC & Deployment Tracking: Manage Infrastructure as Code (IaC) and oversee deployment tracking, linking it with CI/CD pipelines.
- Collaboration & Coordination: Work with development teams on artifact-generation pipelines and with QA teams on environment setup (DEV, QA, Staging, UAT, Production).
- Technical Expertise: Skilled with both on-prem and cloud-managed Kubernetes clusters.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-10 years of experience in DevOps with a focus on Kubernetes and GitOps.
- In-depth understanding of CI/CD principles, especially GitLab/Jenkins and Argo CD.
- Advanced skills in AWS cloud services and Kubernetes security practices.
- Good knowledge of IaC and infrastructure provisioning/configuration management tools such as Ansible and Terraform.
- Proficient with YAML files and shell scripting.
- Programming experience (Python or other relevant languages).
- Strong automation skills and an ability to streamline processes.
- Excellent problem-solving abilities and teamwork skills.
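In a GitOps setup like the one this role describes, deployments are declared as manifests in Git and a reconciler such as Argo CD continuously syncs the cluster toward them. As a rough illustration of what one such declaration contains (the service name and repo URL are hypothetical, and a real Argo CD Application would live as YAML in a Git repo):

```python
import json

def argo_application(name: str, repo_url: str, path: str,
                     target_namespace: str, revision: str = "main") -> dict:
    """Build an Argo CD Application manifest as a plain dict.

    Argo CD watches the Git repo and continuously reconciles the cluster
    to whatever the manifests at `path` declare (the GitOps loop).
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,
                "targetRevision": revision,
                "path": path,
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": target_namespace,
            },
            "syncPolicy": {
                # Automated sync with pruning: Git is the single source of truth.
                "automated": {"prune": True, "selfHeal": True},
            },
        },
    }

if __name__ == "__main__":
    app = argo_application(
        name="payments-api",                            # hypothetical service
        repo_url="https://example.com/ops/deploy.git",  # hypothetical repo
        path="charts/payments-api",
        target_namespace="payments",
    )
    print(json.dumps(app, indent=2))
```

With `selfHeal` enabled, manual edits to the cluster are reverted automatically, which is what makes the Git repository, not `kubectl`, the deployment interface.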

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Be Part of Building the Future

Dremio is the unified lakehouse platform for self-service analytics and AI, serving hundreds of global enterprises, including Maersk, Amazon, Regeneron, NetApp, and S&P Global. Customers rely on Dremio for cloud, hybrid, and on-prem lakehouses to power their data mesh, data warehouse migration, data virtualization, and unified data access use cases. Based on open source technologies, including Apache Iceberg and Apache Arrow, Dremio provides an open lakehouse architecture enabling the fastest time to insight and platform flexibility at a fraction of the cost. Learn more at www.dremio.com.

About The Role

Dremio's SREs ensure that internal and externally visible services have reliability and uptime appropriate to users' needs, and a fast rate of improvement. You will be joining a small but mighty team of experienced SREs helping to deliver a world-class experience to Dremio Cloud customers. Our systems, like many, are joint-cognitive, made up of both people and software: complex and therefore intrinsically hazardous. We understand and expect that catastrophe is always just around the corner.

What You'll Be Doing
- Drive continuous improvements to our usage of Kubernetes, our Operators, and the GitOps deployment paradigm.
- Extend our networking, service mesh, and Kubernetes systems to support connectivity between GCP, AWS, and Azure.
- Collaborate with Engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, monitoring/alerting, capacity planning, production readiness, and service reviews.
- Help define and instrument Service Level Indicators and Objectives (SLIs/SLOs) with service owners in the Engineering teams. Develop SLO-based on-call strategies for service owners and their teams.
- Collaborate within our virtual Observability team: develop and improve observability (tracing, events, metrics, profiling, logging, and exceptions) of the Dremio Cloud product.
- Debug and optimize code written by others and automate routine tasks. You recognize complexity and are familiar with multiple techniques to manage it, but recognize the folly in complete rewrites.
- Evangelize and advocate for resilience engineering and reliability practices across our organization.
- Scale systems sustainably through automation, and evolve systems by pushing for changes that improve reliability and velocity.
- Join an on-call rotation for systems and services that the SRE team owns. Practice sustainable incident response and post-incident investigation analysis.
- Drive the cultural, technical, and process changes needed to move toward a true continuous delivery model within the company.

What We're Looking For
- 10+ years of relevant experience across SRE, DevOps, distributed systems, cloud operations, or software engineering.
- Expertise in Kubernetes, Istio, Terraform, Terragrunt, and ArgoCD/Flux.
- Expertise with software-defined networking infrastructure: dedicated and partner interconnects, VPNs, BGP.
- Excellent command of cloud services on GCP/AWS/Azure and CI/CD pipelines.
- Moderate-to-advanced experience in Python/Go, and at least reading knowledge of Java.
- Interest in designing, analyzing, and troubleshooting large-scale distributed systems.
- A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership, drive, and determination.
- A great ability to debug and optimize code and automate routine tasks.
- A solid background in software development and in architecting resilient and reliable applications.

Bonus points if you have:
- Hands-on experience with large-scale production Kubernetes clusters.
- Developed SLIs/SLOs for production systems.

Return to Office Philosophy
- Workplace Wednesdays, to break down silos, build relationships, and improve cross-team communication.
- Lunch catering / meal credits provided in the office, with local socials aligned to Workplace Wednesdays.
- In general, Dremio will remain a hybrid work environment. We will not be implementing a 100% (5 days a week) return-to-office policy for all roles.

What We Value

At Dremio, we hold ourselves to high standards when it comes to People, Thinking, and Action. Our Gnarlies (that's what we call our employees) communicate with clarity, drive accountability, and are respectful towards each other. We confront brutal facts and focus on results while operating with a sense of urgency and building a "flywheel". People who like to jump in and drive momentum will thrive in our #GnarlyLife.

Dremio is an equal opportunity employer supporting workforce diversity. We do not discriminate on the basis of race, religion, color, national origin, gender identity, sexual orientation, age, marital status, protected veteran status, disability status, or any other unlawful factor. Dremio is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request accommodation due to a disability, please inform your recruiter.

Dremio has policies in place to protect the personal information that employees and applicants disclose to us. Please click here to review the privacy notice.

Important Security Notice for Candidates

At Dremio, we uphold trust and transparency as paramount values in all our interactions with customers, partners, employees, and the general public. We have been targeted by individuals creating fake domains similar to ours to scam prospects and candidates. Please note that all official communications from us will be from an @dremio.com domain. If you suspect you've been targeted by a scam, it's imperative to report the incident to your local law enforcement agencies. For more information about this type of scam, please refer to Dremio's official statement here.

Dremio is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.
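Defining SLIs/SLOs, as this role calls for, typically reduces to tracking the ratio of good events to total events against a target and watching how much of the remaining error budget has been consumed. A minimal sketch (the numbers and target are illustrative, not Dremio's):

```python
def error_budget_report(good: int, total: int, slo_target: float) -> dict:
    """Summarize an availability SLI against an SLO target.

    The error budget is the fraction of requests allowed to fail
    (1 - slo_target); we report how much of that budget is consumed.
    """
    if total == 0:
        raise ValueError("no events observed")
    sli = good / total               # measured availability
    budget = 1.0 - slo_target       # allowed failure fraction
    burned = (total - good) / total  # actual failure fraction
    return {
        "sli": sli,
        "slo_target": slo_target,
        "budget_consumed": burned / budget if budget > 0 else float("inf"),
        "slo_met": sli >= slo_target,
    }

if __name__ == "__main__":
    # 999,000 successful requests out of 1,000,000 against a 99.9% SLO:
    report = error_budget_report(good=999_000, total=1_000_000, slo_target=0.999)
    print(report)  # budget_consumed is ~1.0: the budget is essentially spent
```

SLO-based on-call strategies then key off `budget_consumed`: a fast burn rate pages immediately, a slow one becomes a ticket.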

Posted 2 months ago

Apply

0 years

0 Lacs

India

On-site

Senior Software Engineer I - DevOps Engineer

Exceptional software engineering is challenging. Amplifying it to ensure that multiple teams can concurrently create and manage a vast, intricate product escalates the complexity. As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your task will be to develop and sustain automated tooling for the release processes of all our services. You will contribute significantly to establishing automated delivery pipelines, empowering autonomous teams to create independently deployable services. Your role is integral to our overarching strategy of enhancing software delivery and progressing Sumo Logic's internal Platform-as-a-Service.

What You Will Do
- Own the delivery pipeline and release automation framework for all Sumo services.
- Educate and collaborate with teams during both design and development phases to ensure best practices.
- Mentor a team of engineers (junior to senior) and improve software development processes.
- Evaluate, test, and provide technology and design recommendations to executives.
- Write detailed design documents and documentation on system design and implementation.
- Ensure the engineering teams are set up to deliver quality software quickly and reliably.
- Enhance and maintain infrastructure and tooling for development, testing, and debugging.

What You Already Have
- B.S. or M.S. in Computer Science or a related discipline.
- Ability to influence: understand people's values and motivations and influence them towards making good architectural choices.
- Collaborative working style: you can work with other engineers to come up with good decisions.
- Bias towards action: you need to make things happen. It is essential you don't become an inhibitor of progress, but an enabler.
- Flexibility: you are willing to learn and change, and to admit past approaches might not be the right ones now.

Technical Skills
- 4+ years of experience in the design, development, and use of release automation tooling, DevOps, CI/CD, etc.
- 2+ years of experience in software development in Java/Scala/Golang or similar.
- 3+ years of experience with software delivery technologies like Jenkins, including writing and developing CI/CD pipelines, and knowledge of build tools like make/gradle/npm.
- Experience with cloud technologies such as AWS/Azure/GCP.
- Experience with Infrastructure-as-Code and tools such as Terraform.
- Experience with scripting languages such as Groovy, Python, Bash, etc.
- Knowledge of monitoring tools such as Prometheus/Grafana or similar.
- Understanding of GitOps and ArgoCD concepts/workflows.
- Understanding of security and compliance aspects of DevSecOps.

About Us

Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com. Sumo Logic Privacy Policy. Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.
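Release automation of the kind this role owns often boils down to running ordered pipeline stages with fail-fast semantics: a failed stage stops the release before later stages can ship a broken artifact. A toy sketch (the stage names are invented, not Sumo Logic's actual pipeline):

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run named stages in order; stop at the first failure.

    Returns the list of stages that completed successfully, so the
    caller can report exactly where the release stopped.
    """
    completed: list[str] = []
    for name, action in stages:
        if not action():
            print(f"stage failed: {name}")
            break
        completed.append(name)
    return completed

if __name__ == "__main__":
    done = run_pipeline([
        ("build", lambda: True),
        ("unit-tests", lambda: True),
        ("deploy-staging", lambda: False),  # simulated failure
        ("deploy-prod", lambda: True),      # never reached
    ])
    print(done)  # ['build', 'unit-tests']
```

Real systems (Jenkins, GitHub Actions) add retries, parallel fan-out, and artifact promotion on top of this same ordered, fail-fast core.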

Posted 2 months ago

Apply

0 years

0 Lacs

Delhi, India

On-site

About AlphaSense

The world's most sophisticated companies rely on AlphaSense to remove uncertainty from decision-making. With market intelligence and search built on proven AI, AlphaSense delivers insights that matter from content you can trust. Our universe of public and private content includes equity research, company filings, event transcripts, expert calls, news, trade journals, and clients' own research content. The acquisition of Tegus by AlphaSense in 2024 advances our shared mission to empower professionals to make smarter decisions through AI-driven market intelligence. Together, AlphaSense and Tegus will accelerate growth, innovation, and content expansion, with complementary product and content capabilities that enable users to unearth even more comprehensive insights from thousands of content sets. Our platform is trusted by over 4,000 enterprise customers, including a majority of the S&P 500. Founded in 2011, AlphaSense is headquartered in New York City with more than 2,000 employees across the globe and offices in the U.S., U.K., Finland, India, Singapore, Canada, and Ireland. Come join us!

About The Role

You will join our team of world-class experts who are developing the AlphaSense platform. The team is right at the very core of what we do and is responsible for implementing cutting-edge technology for scalable, distributed processing of millions of documents. We are seeking a highly skilled Software Engineer II to join our dynamic team responsible for building and maintaining data ingestion systems at scale. As a key member of our team, you will play a crucial role in designing, implementing, and optimizing robust solutions for ingesting millions of documents per month, including multimedia content such as audio and video from the public web. You'll also play a key role in integrating cutting-edge AI models, enabling intelligent suggestions and content synchronization. You are a good fit if you're a proactive problem-solver with a "go-getter" attitude, startup experience, and a readiness to learn whatever comes your way!

Responsibilities
- Design, develop, and maintain high-performance, scalable applications using Python.
- Solve complex technical challenges with innovative solutions that enhance product features and operational efficiencies.
- Collaborate across teams to integrate applications, optimize system performance, and streamline data flows.
- Take full ownership of projects from inception to deployment, delivering high-quality solutions that improve user experience.
- Lead or support data ingestion processes, ensuring seamless data flow and management.
- Continuously learn and adapt to new tools, frameworks, and technologies as they arise, embracing a growth mindset.
- Mentor and guide junior developers, fostering a collaborative, innovative culture.

Requirements
- 2+ years of professional Python development experience, with a strong understanding of Python frameworks (Django, Flask, FastAPI, etc.).
- Proven success working in a startup environment, demonstrating adaptability and flexibility in fast-changing conditions.
- Proactive problem-solving with a keen eye for tackling challenging technical issues.
- A willingness to learn and adapt to new technologies and challenges as they arise.
- Strong team player with a go-getter attitude, comfortable working both independently and within cross-functional teams.

Nice-to-Have
- Experience with media processing and live-streaming techniques is a major plus.
- Familiarity with Crossplane and/or ArgoCD for GitOps-based infrastructure management.
- Experience working with Docker and Kubernetes.

AlphaSense is an equal-opportunity employer. We are committed to a work environment that supports, inspires, and respects all individuals. All employees share in the responsibility for fulfilling AlphaSense's commitment to equal employment opportunity. AlphaSense does not discriminate against any employee or applicant on the basis of race, color, sex (including pregnancy), national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any other non-merit factor. This policy applies to every aspect of employment at AlphaSense, including recruitment, hiring, training, advancement, and termination. In addition, it is the policy of AlphaSense to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.

Recruiting Scams and Fraud

We at AlphaSense have been made aware of fraudulent job postings and individuals impersonating AlphaSense recruiters. These scams may involve fake job offers, requests for sensitive personal information, or demands for payment. Please note: AlphaSense never asks candidates to pay for job applications, equipment, or training. All official communications will come from an @alpha-sense.com email address. If you're unsure about a job posting or recruiter, verify it on our Careers page. If you believe you've been targeted by a scam or have any doubts regarding the authenticity of any job listing purportedly from or on behalf of AlphaSense, please contact us. Your security and trust matter to us.
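Ingestion pipelines at the scale described above typically wrap each document fetch in bounded retries with exponential backoff, so one flaky source cannot stall a batch. A simplified sketch (the fetch callback and limits are illustrative, not AlphaSense's implementation):

```python
import time
from typing import Callable, Optional

def ingest_with_retry(fetch: Callable[[str], bytes], doc_id: str,
                      max_attempts: int = 3,
                      base_delay: float = 0.01) -> Optional[bytes]:
    """Fetch one document, retrying transient errors with backoff.

    Returns the document bytes, or None if all attempts fail, letting
    the caller quarantine the document instead of blocking the batch.
    """
    for attempt in range(max_attempts):
        try:
            return fetch(doc_id)
        except IOError:
            if attempt == max_attempts - 1:
                return None
            time.sleep(base_delay * (2 ** attempt))  # 10 ms, 20 ms, ...
    return None

if __name__ == "__main__":
    calls = {"n": 0}

    def flaky_fetch(doc_id: str) -> bytes:
        # Simulated source that fails twice before succeeding.
        calls["n"] += 1
        if calls["n"] < 3:
            raise IOError("transient network error")
        return b"document body for " + doc_id.encode()

    print(ingest_with_retry(flaky_fetch, "doc-42"))  # succeeds on 3rd attempt
```

In production the failed IDs would go to a dead-letter queue rather than being silently dropped, but the retry-then-give-up shape is the same.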

Posted 2 months ago

Apply

3 years

0 Lacs

Pune, Maharashtra, India

On-site

## Position Overview

We are seeking an experienced Platform Engineering Manager to lead and grow our platform engineering team. This role combines technical leadership in cloud-native technologies with people management skills to drive developer productivity and platform reliability at scale.

## Key Responsibilities

- Lead and mentor a team of platform engineers, fostering a culture of innovation, collaboration, and continuous improvement
- Oversee the development, operation, and evolution of our developer platforms, focusing on container orchestration with Kubernetes
- Collaborate with architecture teams and product management to inform platform decisions that enhance developer experience
- Partner with development teams to understand their needs and optimize the developer experience
- Establish and track meaningful metrics to measure and improve developer productivity across the organization
- Manage team priorities, resource allocation, and project delivery while maintaining high operational standards
- Build strong relationships across engineering teams to ensure platform solutions meet organizational needs
- Facilitate technical design reviews and contribute insights on platform utilization best practices
- Help define and drive the platform engineering roadmap in collaboration with stakeholders

## Required Qualifications

- 5+ years of experience in platform engineering or DevOps, with at least 3 years in a management role
- Proven track record of building and operating production-grade Kubernetes platforms
- Deep understanding of container technologies, CI/CD pipelines, and cloud-native architectures
- Experience implementing and managing developer productivity tools and metrics
- Strong background in automation, infrastructure as code, and site reliability engineering
- Demonstrated success in leading technical teams and managing stakeholder relationships
- Experience with agile methodologies and project management
- Excellent problem-solving skills and ability to balance technical debt with business needs

## Leadership Competencies

- Strong mentorship and coaching abilities to develop team members' technical and soft skills
- Excellent communication skills with the ability to translate complex technical concepts to various audiences
- Strategic thinking with a focus on long-term platform scalability and sustainability
- Proven ability to influence and drive consensus across multiple teams
- Experience in recruitment, performance management, and career development
- Demonstrated ability to manage competing priorities and make data-driven decisions

## Preferred Qualifications

- Experience with major cloud providers (AWS, GCP, or Azure)
- Experience with GitOps practices and tools such as ArgoCD
- Understanding of security best practices and compliance requirements
- Contribution to open-source projects or developer tools
- Experience with developer experience (DevX) optimization
- Background in implementing platform observability and monitoring solutions
- Knowledge of cost optimization strategies for cloud infrastructure

## Impact & Scope

- Lead a team responsible for critical developer infrastructure supporting multiple engineering teams
- Partner with architecture and product teams to align platform implementation with technical strategy
- Collaborate with engineering leadership to align platform capabilities with business objectives
- Champion developer experience improvements through effective platform solutions
- Play a key role in shaping the platform engineering roadmap and strategic initiatives
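Developer-productivity metrics of the kind this listing asks managers to track are often approximated with DORA-style measures such as deployment frequency and change failure rate. A toy calculation (the deploy history is invented for illustration):

```python
from datetime import date

def dora_summary(deploys: list[tuple[date, bool]]) -> dict:
    """Compute two DORA-style metrics from (deploy_date, succeeded) records.

    - deploys_per_week: how often the team ships
    - change_failure_rate: fraction of deploys that failed
    """
    if not deploys:
        raise ValueError("no deploys recorded")
    days = (max(d for d, _ in deploys) - min(d for d, _ in deploys)).days + 1
    failures = sum(1 for _, ok in deploys if not ok)
    return {
        "deploys_per_week": len(deploys) / (days / 7),
        "change_failure_rate": failures / len(deploys),
    }

if __name__ == "__main__":
    history = [
        (date(2024, 5, 1), True),
        (date(2024, 5, 3), True),
        (date(2024, 5, 7), False),  # one failed deploy
        (date(2024, 5, 14), True),
    ]
    print(dora_summary(history))  # 4 deploys over 14 days, 25% failure rate
```

In practice these numbers come from CI/CD and incident tooling rather than hand-entered lists, but the definitions stay this simple.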

Posted 2 months ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world's most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble, and Metro.

We're looking for a Backend Engineer (.NET) who expects more from their career. This role offers an opportunity to build scalable, high-performance solutions in the platform engineering team, enhance code quality and engineering excellence, and contribute to critical architecture decisions within a data-driven environment.

What We Expect From You
- 6+ years of hands-on experience in backend development, focusing on performance, scalability, security, and maintainability.
- Strong proficiency in C# and .NET Core, with expertise in developing RESTful APIs and microservices.
- A drive for code quality, ensuring adherence to best practices, design patterns, and SOLID principles.
- Experience with cloud platforms (Google Cloud Platform & Azure), implementing cloud-native best practices for scalability, security, and resilience.
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm).
- Strong focus on non-functional requirements (NFRs) such as performance tuning, observability (monitoring/logging/alerting), and security best practices.
- Experience implementing unit testing, integration testing, and automated testing frameworks.
- Proficiency in CI/CD automation, with experience in GitOps workflows and Infrastructure-as-Code (Terraform, Helm, or similar).
- Experience working in Agile methodologies (Scrum, Kanban) and DevOps best practices.
- Ability to identify dependencies, risks, and bottlenecks early, working proactively with engineering leads to resolve them.
- Staying updated with emerging technologies and industry best practices to drive continuous improvement.

Key Technical Skills
- Strong proficiency in C#, .NET Core, and RESTful API development.
- Experience with asynchronous programming, concurrency control, and event-driven architecture (Pub/Sub, Kafka, etc.).
- Deep understanding of object-oriented programming, data structures, and algorithms.
- Experience with unit testing frameworks and a TDD approach to development.
- Hands-on experience with Docker and Kubernetes (K8s) for containerized applications.
- Strong knowledge of performance tuning, security best practices, and observability (monitoring/logging/alerting).
- Experience with CI/CD pipelines, GitOps workflows, and infrastructure-as-code (Terraform, Helm, or similar).
- Exposure to multi-tenant architectures and high-scale distributed systems.
- Proficiency in relational databases (PostgreSQL preferred) and exposure to NoSQL solutions.

Preferred Skills
- Exposure to and experience working with front-end technologies such as React.js.
- Knowledge of gRPC, GraphQL, event-driven, or other modern API technologies.
- Familiarity with feature flagging, blue-green deployments, and canary releases.

For further information about how we collect and use your personal information, please see our Privacy Notice, which can be found (here).

What You Can Expect From Us

We won't just meet your expectations. We'll defy them. So you'll enjoy the comprehensive rewards package you'd expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus thoughtful perks, like flexible working hours and your birthday off. You'll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment, and learn. And we don't just talk about diversity and inclusion. We live it every day, with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.

Our Approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.
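Canary releases, one of the preferred skills above, shift traffic to a new version in small increments and roll back as soon as error rates regress. A back-of-the-envelope sketch of the ramp decision (the threshold and step size are invented for illustration):

```python
def next_canary_weight(current: int, error_rate: float,
                       threshold: float = 0.01, step: int = 10) -> int:
    """Decide the canary's next traffic share (percent).

    Healthy canary: ramp up by `step` until it serves 100%.
    Error rate above threshold: roll back to 0% immediately.
    """
    if error_rate > threshold:
        return 0                     # abort the rollout
    return min(100, current + step)  # keep ramping

if __name__ == "__main__":
    weight = 0
    for observed_error_rate in [0.001, 0.002, 0.004, 0.002]:
        weight = next_canary_weight(weight, observed_error_rate)
        print(f"canary traffic: {weight}%")  # 10%, 20%, 30%, 40%
```

Tooling such as Argo Rollouts or a service mesh automates exactly this loop, pairing the weight decision with live metrics instead of a hard-coded list.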

Posted 2 months ago

Apply

6 - 8 years

16 - 20 Lacs

Bengaluru

Work from Office

Senior DevOps Engineer

Location: Bengaluru South, Karnataka, India
Experience: 6-8 Years
Compensation: 16-20 LPA
Industry: PropTech | AgriTech | Cloud Infrastructure | Platform Engineering
Employment Type: Full-Time | On-Site/Hybrid

Are you a DevOps Engineer passionate about building scalable and efficient infrastructure for innovative platforms? If you're excited by the challenge of automating and optimizing cloud infrastructure for a mission-driven PropTech platform, this opportunity is for you. We are seeking a seasoned DevOps Engineer to be a key player in scaling a pioneering property-tech ecosystem that reimagines how people discover, trust, and own their dream land or property. Our ideal candidate thrives in dynamic environments, embraces automation, and values security, performance, and reliability. You'll be working alongside a passionate and agile team that blends technology with sustainability, enabling seamless experiences for both property buyers and developers.

Key Responsibilities
- Architect, deploy, and maintain highly available, scalable, and secure cloud infrastructure, preferably on AWS.
- Design, develop, and optimize CI/CD pipelines for automated software build, test, and deployment.
- Implement and manage Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
- Set up and manage robust monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, etc.).
- Proactively monitor and improve system performance, availability, and resilience.
- Ensure compliance, access control, and secrets management across environments using best-in-class DevSecOps practices.
- Collaborate closely with development, QA, and product teams to streamline software delivery lifecycles.
- Troubleshoot production issues, identify root causes, and implement long-term solutions.
- Optimize infrastructure costs while maintaining performance SLAs.
- Build and maintain internal tools and automation scripts to support development workflows.
- Stay updated with the latest in DevOps practices, cloud technologies, and infrastructure design.
- Participate in an on-call support rotation for critical incidents and infrastructure health.

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6-8 years of hands-on experience in DevOps, SRE, or infrastructure roles.
- Strong proficiency in AWS (EC2, S3, RDS, Lambda, ECS/EKS).
- Expert-level scripting skills in Python, Bash, or Go.
- Solid experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc.
- Expertise in Docker, Kubernetes, and container orchestration at scale.
- Experience with configuration management tools like Ansible, Chef, or Puppet.
- Solid understanding of networking, DNS, SSL, firewalls, and load balancing.
- Familiarity with relational and non-relational databases (PostgreSQL, MySQL, etc.) is a plus.
- Excellent troubleshooting and analytical skills with a performance- and security-first mindset.
- Experience working in agile, fast-paced startup environments is a strong plus.

Nice to Have
- Experience working on PropTech, AgriTech, or sustainability-focused platforms.
- Exposure to geospatial mapping systems, virtual land visualization, or real-time data platforms.
- Prior work with DevSecOps, service meshes like Istio, or secrets management with Vault.
- Passion for building tech that positively impacts people and the planet.

Why Join Us?
- Join India's first revolutionary PropTech platform, blending human-centric design with cutting-edge technology to empower property discovery and ownership.
- Be part of a company that doesn't just build products, it builds ecosystems: for urban buyers, rural farmers, and the environment.
- Work with a forward-thinking leadership team from one of India's most respected sustainability and land stewardship organizations.
- Collaborate across cross-disciplinary teams solving real-world challenges at the intersection of tech, land, and sustainability.
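Monitoring and alerting setups like those described usually reduce to evaluating metric samples against a threshold over a window, much as a Prometheus alert rule with a `for:` clause does. A minimal stand-in (the threshold and window length are illustrative):

```python
from collections import deque

class CpuAlert:
    """Fire when CPU usage stays above a threshold for `window` samples.

    This mimics the `for:` clause of a Prometheus alerting rule: a single
    spike is ignored; only sustained pressure raises the alert.
    """

    def __init__(self, threshold: float = 0.9, window: int = 3):
        self.threshold = threshold
        self.samples: deque = deque(maxlen=window)

    def observe(self, cpu: float) -> bool:
        self.samples.append(cpu)
        return (len(self.samples) == self.samples.maxlen
                and all(s > self.threshold for s in self.samples))

if __name__ == "__main__":
    alert = CpuAlert()
    for sample in [0.95, 0.50, 0.92, 0.93, 0.97]:
        print(alert.observe(sample))  # False, False, False, False, True
```

The windowing is what keeps on-call pages actionable: transient load spikes self-resolve, while sustained saturation is worth waking someone for.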

Posted 2 months ago

Apply

7 years

0 Lacs

Bengaluru, Karnataka

Work from Office

About the Job: DevOps + Python

Location: Bangalore
Mode: Hybrid (2-3 days/week)
Experience: 7+ Years

Tech Skills (relevant development experience is a must)
- Experience with Python scripting.
- Understanding of code management and release approaches (must have).
- Understanding of CI/CD pipelines, GitFlow and GitHub, and GitOps (Flux, ArgoCD); must have, with Flux good to have.
- Good understanding of functional programming (Python primary, Golang secondary, as used in the IaC platform).
- Understanding of ABAC / RBAC / JWT / SAML / AAD / OIDC authorization and authentication (hands-on).
- NoSQL databases, i.e., DynamoDB (SCC heavy).
- Event-driven architecture: queues, streams, batches, pub/sub.
- Understanding of functional programming with list/map/reduce/compose; familiarity with monads is a plus.
- Fluent in operating Kubernetes clusters from a developer perspective: creating custom CRDs, operators, and controllers.
- Experience creating serverless workloads on AWS & Azure (both needed).
- Monorepo / multirepo: understanding of code management approaches.
- Understanding of scalability and concurrency.
- Understanding of networking, Direct Connect connectivity, and proxies.
- Deep knowledge of AWS cloud (org / networks / security / IAM), with a basic understanding of Azure cloud.
- Understanding of SDLC and development principles such as DRY, KISS, and SOLID.
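The functional-programming vocabulary this listing asks for (list/map/reduce/compose) is compact to demonstrate in Python:

```python
from functools import reduce

def compose(*funcs):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs)

if __name__ == "__main__":
    double_then_inc = compose(lambda x: x + 1, lambda x: x * 2)
    print(double_then_inc(5))                      # (5 * 2) + 1 = 11

    # map/reduce over a list: sum of squares of the even numbers.
    nums = [1, 2, 3, 4, 5]
    evens = filter(lambda n: n % 2 == 0, nums)
    squares = map(lambda n: n * n, evens)
    print(reduce(lambda a, b: a + b, squares, 0))  # 4 + 16 = 20
```

The same shapes recur in Go (albeit more verbosely) and in pipeline-style IaC code, which is presumably why the listing calls them out.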

Posted 2 months ago

Apply

3 - 8 years

10 - 20 Lacs

Bengaluru

Remote

About the Team/Role: We are seeking a highly skilled DevOps Engineer with in-depth knowledge and hands-on experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker. The ideal candidate will be responsible for containerizing all technology applications and ensuring seamless integration and deployment across our infrastructure.

How you'll make an impact: Provide strong technical guidance and leadership in DevOps practices. Design, implement, and maintain Kubernetes clusters for scalable application deployment. Utilize GitOps methodologies for continuous delivery and operational efficiency. Develop and manage CI/CD pipelines using GitHub Actions and Argo CD. Implement a service mesh using Istio. Containerize applications using Docker to ensure consistency across different environments. Collaborate with development and operations teams to deliver services quickly and efficiently. Monitor and optimize the performance, scalability, and reliability of applications. Implement and manage comprehensive monitoring and logging solutions to ensure proactive issue detection and resolution. Utilize advanced debugging and troubleshooting skills to address complex issues across the infrastructure and application stack. Lead the architecture and design of scalable, reliable infrastructure solutions, ensuring alignment with organizational goals and industry best practices.

Experience you'll bring: Proven experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker. Strong background in containerizing technology applications. Demonstrated ability to deliver services quickly without compromising quality. Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication skills and the ability to provide technical guidance to team members. Prior experience in a similar role and a development background are essential.
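To illustrate the Argo CD workflow the role centers on, here is a sketch that builds an Application manifest as a plain Python dict. The repo URL, chart path, and names are invented; the field names follow Argo CD's Application CRD, and since JSON is a subset of YAML, the serialized output can be applied directly with kubectl.

```python
import json

def argo_application(name: str, repo_url: str, path: str,
                     namespace: str, revision: str = "main") -> dict:
    """Build an Argo CD Application manifest as a plain dict.

    Declares the desired state: sync this Git path into this namespace,
    with automated sync (prune removed resources, self-heal drift).
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {"repoURL": repo_url, "path": path,
                       "targetRevision": revision},
            "destination": {"server": "https://kubernetes.default.svc",
                            "namespace": namespace},
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

# Hypothetical service and deployment repository, for illustration only.
app = argo_application("payments", "https://github.com/example/deploy.git",
                       "charts/payments", "payments")
print(json.dumps(app, indent=2))
```

The GitOps point is that this manifest, not a pipeline script, is the source of truth: CI (e.g., GitHub Actions) only updates the Git repository, and Argo CD reconciles the cluster toward it.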

Posted 2 months ago

Apply

0 - 10 years

0 Lacs

Noida, Uttar Pradesh

Work from Office

Our Company: Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences. We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity: At Adobe, we offer an outstanding opportunity to work on new and emerging technology that crafts the digital experiences of millions. As the Infrastructure Engineering team of Developer Platforms in Adobe, we provide industry-leading application hosting capabilities. Our solutions support high-traffic, highly visible applications with immense amounts of data, numerous third-party integrations, and exciting scalability and performance problems. As a platform engineer on the Ethos team, you will work closely with our senior engineers to develop, deploy, and maintain our Kubernetes-based infrastructure. This role offers an excellent opportunity to grow your skills in cloud-native technologies and DevOps practices.

What you'll do: Contribute to the design, development, and maintenance of the Kubernetes platform for container orchestration. Partner with other development teams across Adobe to ensure applications are crafted to be cloud-native and scalable. Perform day-to-day operational tasks such as upgrades and patching of the Kubernetes platform. Develop and implement CI/CD pipelines for application deployment on Kubernetes. Handle tasks and projects with Agile methodologies such as Scrum. Supervise the health of the platform and applications using tools like Prometheus and Grafana. Solve issues within the platform and collaborate with development teams to resolve application issues. Contribute to upstream CNCF projects, including Cluster API, ACK, and ArgoCD, among several others. Stay updated with the latest industry trends and technologies in container orchestration and cloud-native development. Participate in the on-call rotation to resolve incidents and identify root causes as part of Incident & Problem management.

What you need to succeed: B.Tech degree in Computer Science or equivalent practical experience. 5-10 years of experience working with Kubernetes. Certified Kubernetes Administrator and/or Developer/Security certifications encouraged. Strong software development skills in Python, Node.js, Go, Bash, or similar languages. Experience with AWS, Azure, or other cloud platforms (AWS/Azure certifications encouraged). Understanding of cloud network architectures (VNET, VPC, NAT Gateway, Envoy, etc.). A solid understanding of time-series monitoring tools (such as Prometheus, Grafana, etc.). Familiarity with the 12-factor principles and the software development lifecycle. Knowledge of GitOps, ArgoCD, and Helm, or equivalent experience, will be an advantage.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users.
If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
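As a taste of the monitoring work described above, here is a small illustrative calculation of availability and remaining error budget from health-check samples. In practice these numbers would come from Prometheus queries rather than Python lists; the probe data below is invented.

```python
def availability(samples):
    """Fraction of successful health-check samples (1 = up, 0 = down)."""
    return sum(samples) / len(samples) if samples else 0.0

def error_budget_remaining(slo: float, samples) -> float:
    """Remaining error budget as a fraction of the total budget.

    slo is the availability target, e.g. 0.999; the budget is 1 - slo,
    and unavailability observed so far burns it down.
    """
    budget = 1.0 - slo
    burned = 1.0 - availability(samples)
    return max(0.0, (budget - burned) / budget)

probes = [1] * 997 + [0] * 3           # 99.7% observed availability
print(round(availability(probes), 3))  # 0.997
```

Against a 99.9% SLO the example above has already exhausted its budget (0.3% downtime vs. a 0.1% allowance), which is exactly the signal an on-call platform team would alert on.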

Posted 2 months ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Role: GCP Enterprise Architect. Required technical skill set: GCP Enterprise Architect. Desired experience range: 8-10 years. Location of requirement: Kolkata/Delhi. Notice period: Immediate. We are currently planning a virtual interview on 17th May 2025 (Saturday).

Job Description: We are seeking a Google Cloud Platform (GCP) Enterprise Architect to design, implement, and optimize cloud solutions that drive business transformation. The ideal candidate will have deep expertise in GCP services, cloud architecture best practices, and enterprise IT infrastructure. You will collaborate with cross-functional teams to ensure scalability, security, and cost-efficiency while aligning cloud strategies with business goals.

Required Skills & Experience: Cloud expertise: deep understanding of GCP services, including Compute Engine, Kubernetes Engine (GKE), Cloud Functions, BigQuery, Cloud Storage, and IAM. Architecture patterns: strong knowledge of microservices, serverless, and containerization (Kubernetes, Docker). Security & compliance: familiarity with cloud security best practices, compliance frameworks (ISO 27001, SOC 2, HIPAA, etc.), and identity management. Networking & connectivity: experience with hybrid cloud networking, VPC design, VPN, Cloud Interconnect, and DNS configurations. DevOps & automation: hands-on experience with Terraform, Ansible, CI/CD pipelines, and GitOps methodologies. Data & analytics: understanding of data lake architectures, ETL pipelines, and analytics solutions on GCP. Programming & scripting: proficiency in Python, Go, Java, or similar languages for cloud automation and scripting. Business acumen: ability to translate business needs into technical solutions and articulate cloud ROI to stakeholders.
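One small, concrete slice of the security and identity-management work described here: scanning an IAM policy for overly broad primitive roles. This is an illustrative sketch; the policy shape mirrors GCP's IAM policy JSON (a list of bindings, each with a role and members), but the data is invented.

```python
# Broad "primitive" roles that least-privilege reviews typically flag.
PRIMITIVE_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}

def broad_bindings(policy: dict) -> list:
    """Return (role, member) pairs that grant project-wide primitive roles."""
    flagged = []
    for binding in policy.get("bindings", []):
        if binding["role"] in PRIMITIVE_ROLES:
            flagged.extend((binding["role"], m) for m in binding["members"])
    return flagged

# Invented example policy: one broad grant, one narrowly scoped one.
policy = {
    "bindings": [
        {"role": "roles/editor", "members": ["user:dev@example.com"]},
        {"role": "roles/storage.objectViewer",
         "members": ["serviceAccount:ci@example.iam.gserviceaccount.com"]},
    ]
}
print(broad_bindings(policy))  # [('roles/editor', 'user:dev@example.com')]
```

An architect would typically run checks like this across all projects in the organization and replace flagged grants with predefined or custom roles.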

Posted 2 months ago

Apply

0 - 20 years

0 Lacs

Perungudi, Chennai, Tamil Nadu

Remote

Position: Principal Architect. Domain: Finance, Retail & Additional Verticals. Experience: 12+ years. Location: Chennai; applications from other locations are acceptable if you are willing to relocate (no remote work).

Overview: GMIndia seeks a skilled Technical Project Manager with 5-20 years of experience to lead complex projects in the BFSI, eCommerce, and IT domains. The ideal candidate is adept in Agile and Waterfall methodologies, with strong technical expertise in Java, full-stack, and MERN technologies, and a proven record of delivering $10M+ projects on time.

Key Responsibilities: Architectural strategy: design microservices, APIs, event-driven systems, and data models for high-throughput web and mobile applications; establish standards for coding, CI/CD, infrastructure as code, observability, and automated testing. Solution delivery: lead development of core platform components, balancing cost, performance, and security trade-offs; oversee PoCs for emerging tech (AI/ML, real-time streaming, Web 3.0), driving rapid innovation. Collaboration & governance: partner with Product, Engineering, Security, and Compliance to align roadmaps and review designs; chair architecture review boards; maintain up-to-date documentation and run regular design audits. Scalability & resilience: define SLAs and SLOs; own capacity planning, load/stress testing, and disaster-recovery strategies. Domain expertise: apply deep knowledge of finance (trading platforms, payment gateways, regulatory compliance) and retail (omnichannel, inventory systems, loyalty engines) to architecture decisions.

Qualifications: 12+ years in software engineering, including 5+ years in technical leadership/architecture, ideally in startups or high-growth firms. Preferred: startup exposure (Seed to Series A); knowledge of AI/ML, blockchain, IoT, Kafka, and green IT. Certifications required: AWS/GCP/Azure Architect, CKA. Certifications preferred: TOGAF 9, CISSP, CSP.

Designed and operated enterprise-scale web/mobile apps with deep expertise in AWS/GCP/Azure, microservices, containers (Docker/Kubernetes), and serverless architectures. Strong backend skills (Java, Go, Node.js); familiar with modern frontend/mobile frameworks (React, Angular, Flutter, Swift/Kotlin). Hands-on DevOps (Terraform, GitOps, CI/CD, Prometheus, ELK, Datadog); solid understanding of security protocols (OAuth, encryption, GDPR, SOC 2). Excellent communicator and mentor; experienced in stakeholder engagement.

Personal Attributes: Visionary thinker with attention to detail. Strong sense of ownership and bias for action. Collaborative leadership style and a passion for mentoring. Adaptable in fast-paced, ambiguous startup environments.

Job Types: Full-time, Permanent. Pay: Up to ₹2,302,920.20 per year. Benefits: Health insurance, Provident Fund. Schedule: Day shift, Monday to Friday. Work Location: In person. Speak with the employer: +91 9966099006

Posted 2 months ago

Apply

4 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

This role is for one of Weekday's clients Salary range: Rs 500000 - Rs 2400000 (ie INR 5-24 LPA) Min Experience: 4 years Location: Hyderabad, Coimbatore, Bengaluru JobType: full-time Requirements We are looking for a talented and experienced DevOps Engineer with expertise in Google Cloud Platform (GCP) to join our infrastructure and operations team. This is a key technical role where you will be responsible for automating, scaling, and optimizing our cloud infrastructure, CI/CD pipelines, and deployment processes. As a DevOps Engineer - GCP, you will work closely with development, QA, and security teams to build and maintain robust infrastructure that supports continuous integration, automated deployments, and high availability for mission-critical applications. You will champion DevOps best practices and ensure operational excellence in cloud-based environments. Key Responsibilities: Design, build, and manage infrastructure on Google Cloud Platform (GCP) with a focus on scalability, reliability, and security. Develop and maintain Infrastructure as Code (IaC) using tools such as Terraform, Deployment Manager, or equivalent. Set up and manage CI/CD pipelines using tools like Jenkins, GitLab CI, Cloud Build, or Spinnaker to enable fast, safe, and consistent releases. Monitor system performance, availability, and security using tools such as Stackdriver, Prometheus, Grafana, and integrate alerts with incident management systems. Automate repetitive operational tasks and deployments using Python, Bash, or Shell scripting. Implement and maintain containerized applications using Docker and orchestration with Kubernetes (GKE preferred). Manage secrets, configuration, and access control using tools like HashiCorp Vault, Google Secret Manager, and IAM policies. Ensure compliance with security standards, perform system hardening, and assist with audits and vulnerability assessments. 
Collaborate with engineering teams to troubleshoot infrastructure issues and support development workflows. Document infrastructure, processes, and procedures to ensure transparency and knowledge sharing within the team. Required Skills & Experience: Minimum of 4 years of hands-on DevOps experience, preferably in cloud-native environments. Strong experience working with Google Cloud Platform (GCP) and familiarity with core services like Compute Engine, GKE, Cloud Functions, Cloud Storage, Pub/Sub, BigQuery, and Cloud SQL. Proficiency with Infrastructure as Code tools like Terraform, and scripting languages like Bash or Python. Hands-on experience with CI/CD pipeline development, version control systems (Git), and deployment automation. Solid understanding of Linux-based systems, networking, load balancing, DNS, firewalls, and system security. Experience deploying and managing Kubernetes clusters (GKE experience preferred). Knowledge of monitoring, logging, and alerting tools and their integration in a production environment. Strong troubleshooting skills and the ability to diagnose and resolve infrastructure and application-level issues. Excellent communication and documentation skills, with a proactive and collaborative mindset. Nice to Have: Google Cloud Professional certifications (e.g., Professional Cloud DevOps Engineer, Associate Cloud Engineer). Experience with multi-cloud or hybrid cloud environments. Exposure to GitOps workflows and tools like ArgoCD or FluxCD
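Automating operational tasks against cloud APIs, as described above, usually means handling transient failures. A common pattern is capped exponential backoff with full jitter, sketched below; the flaky function and the retry parameters are illustrative, and the sleep function is injectable so the schedule can be exercised without real delays.

```python
import random
import time

def call_with_backoff(fn, retries=5, base=0.5, cap=8.0, sleep=time.sleep):
    """Retry fn() on exception with capped exponential backoff and jitter.

    Waits a random amount up to min(cap, base * 2**attempt) between
    attempts ("full jitter"), and re-raises after the final attempt.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            delay = min(cap, base * (2 ** attempt))
            sleep(random.uniform(0, delay))

# Demo: a call that fails twice with a transient error, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(call_with_backoff(flaky, sleep=lambda s: None))  # ok
```

In production code the except clause would be narrowed to the cloud SDK's retryable error types rather than bare Exception.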

Posted 2 months ago

Apply

4 years

0 Lacs

Hyderabad, Telangana, India

On-site

This role is for one of Weekday's clients Salary range: Rs 500000 - Rs 2400000 (ie INR 5-24 LPA) Min Experience: 4 years Location: Hyderabad, Coimbatore, Bengaluru JobType: full-time Requirements We are looking for a talented and experienced DevOps Engineer with expertise in Google Cloud Platform (GCP) to join our infrastructure and operations team. This is a key technical role where you will be responsible for automating, scaling, and optimizing our cloud infrastructure, CI/CD pipelines, and deployment processes. As a DevOps Engineer - GCP, you will work closely with development, QA, and security teams to build and maintain robust infrastructure that supports continuous integration, automated deployments, and high availability for mission-critical applications. You will champion DevOps best practices and ensure operational excellence in cloud-based environments. Key Responsibilities: Design, build, and manage infrastructure on Google Cloud Platform (GCP) with a focus on scalability, reliability, and security. Develop and maintain Infrastructure as Code (IaC) using tools such as Terraform, Deployment Manager, or equivalent. Set up and manage CI/CD pipelines using tools like Jenkins, GitLab CI, Cloud Build, or Spinnaker to enable fast, safe, and consistent releases. Monitor system performance, availability, and security using tools such as Stackdriver, Prometheus, Grafana, and integrate alerts with incident management systems. Automate repetitive operational tasks and deployments using Python, Bash, or Shell scripting. Implement and maintain containerized applications using Docker and orchestration with Kubernetes (GKE preferred). Manage secrets, configuration, and access control using tools like HashiCorp Vault, Google Secret Manager, and IAM policies. Ensure compliance with security standards, perform system hardening, and assist with audits and vulnerability assessments. 
Collaborate with engineering teams to troubleshoot infrastructure issues and support development workflows. Document infrastructure, processes, and procedures to ensure transparency and knowledge sharing within the team. Required Skills & Experience: Minimum of 4 years of hands-on DevOps experience, preferably in cloud-native environments. Strong experience working with Google Cloud Platform (GCP) and familiarity with core services like Compute Engine, GKE, Cloud Functions, Cloud Storage, Pub/Sub, BigQuery, and Cloud SQL. Proficiency with Infrastructure as Code tools like Terraform, and scripting languages like Bash or Python. Hands-on experience with CI/CD pipeline development, version control systems (Git), and deployment automation. Solid understanding of Linux-based systems, networking, load balancing, DNS, firewalls, and system security. Experience deploying and managing Kubernetes clusters (GKE experience preferred). Knowledge of monitoring, logging, and alerting tools and their integration in a production environment. Strong troubleshooting skills and the ability to diagnose and resolve infrastructure and application-level issues. Excellent communication and documentation skills, with a proactive and collaborative mindset. Nice to Have: Google Cloud Professional certifications (e.g., Professional Cloud DevOps Engineer, Associate Cloud Engineer). Experience with multi-cloud or hybrid cloud environments. Exposure to GitOps workflows and tools like ArgoCD or FluxCD

Posted 2 months ago

Apply

4 years

0 Lacs

Greater Kolkata Area

Job Summary: In this role, you will lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI.

Company Overview: Outsourced is a leading ISO-certified India & Philippines offshore outsourcing company that provides dedicated remote staff to some of the world's leading international companies. Outsourced is recognized as one of the Best Places to Work and has achieved Great Place to Work Certification. We are committed to providing a positive and supportive work environment where all staff can thrive. As an Outsourced staff member, you will enjoy a fun and friendly working environment, competitive salaries, opportunities for growth and development, work-life balance, and the chance to share your passion with a team of over 1000 talented professionals.

Job Responsibilities: Lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI, establishing best practices for scalability, reliability, and maintainability while actively contributing to relevant open source communities. Design and develop robust, production-grade features focused on AI trustworthiness, including model monitoring. Drive technical decision-making around system architecture, technology selection, and implementation strategies for key MLOps components, with a focus on open source technologies. Define and implement technical standards for model deployment, monitoring, and validation pipelines, while mentoring team members on MLOps best practices and engineering excellence. Collaborate with product management to translate customer requirements into technical specifications, architect solutions that address scalability and performance challenges, and provide technical leadership in customer-facing discussions. Lead code reviews, architectural reviews, and technical documentation efforts to ensure high code quality and maintainable systems across distributed engineering teams. Identify and resolve complex technical challenges in production environments, particularly around model serving, scaling, and reliability in enterprise Kubernetes deployments. Partner with cross-functional teams to establish technical roadmaps, evaluate build-vs-buy decisions, and ensure alignment between engineering capabilities and product vision. Provide technical mentorship to team members, including code review feedback, architecture guidance, and career development support, while fostering a culture of engineering excellence.

Required Qualifications: 5+ years of software engineering experience, with at least 4 years focusing on ML/AI systems in production environments. Strong expertise in Python, with demonstrated experience building and deploying production ML systems. Deep understanding of Kubernetes and container orchestration, particularly in ML workload contexts. Extensive experience with MLOps tools and frameworks (e.g., KServe, Kubeflow, MLflow, or similar). Track record of technical leadership in open source projects, including significant contributions and community engagement. Proven experience architecting and implementing large-scale distributed systems. Strong background in software engineering best practices, including CI/CD, testing, and monitoring. Experience mentoring engineers and driving technical decisions in a team environment.

Preferred Qualifications: Experience with Red Hat OpenShift or similar enterprise Kubernetes platforms. Contributions to ML/AI open source projects, particularly in the MLOps/GitOps space. Background in implementing ML model monitoring. Experience with LLM operations and deployment at scale. Public speaking experience at technical conferences. Advanced degree in Computer Science, Machine Learning, or a related field. Experience working with distributed engineering teams across multiple time zones.

What we Offer: Health insurance: medical coverage up to 20 lakh per annum, covering you, your spouse, and a set of parents; available after one month of successful engagement. Professional development: a monthly upskill allowance of ₹5,000 for continued education and certifications to support your career growth. Leave policy: Vacation Leave (VL): 10 days per year, available after probation; you can carry over or encash up to 5 unused days. Casual Leave (CL): 8 days per year for personal needs or emergencies, available from day one. Sick Leave: 12 days per year, available after probation. Flexible work hours or remote work opportunities, depending on the role and project. Outsourced benefits such as paternity leave, maternity leave, etc.
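Model monitoring, mentioned several times in this listing, often starts with a simple drift statistic. Below is a self-contained sketch of the Population Stability Index (PSI) in plain Python; the 0.2 threshold is a common rule of thumb rather than a standard, and a production system would compute this over feature or prediction distributions logged from serving, not hand-built lists.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins the baseline ("expected") sample, computes bin proportions for
    both samples, and sums (a - e) * ln(a / e). Small epsilon terms
    avoid division by zero for empty bins.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def proportions(data):
        counts = [0] * bins
        for x in data:
            i = min(bins - 1, max(0, int((x - lo) / width)))
            counts[i] += 1
        return [(c + 1e-6) / (len(data) + 1e-6 * bins) for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]     # mass pushed upward
print(psi(baseline, baseline) < 0.01, psi(baseline, shifted) > 0.2)  # True True
```

A monitoring pipeline would evaluate this per feature on a schedule and page or retrain when the index crosses the chosen threshold.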

Posted 2 months ago

Apply

0.0 - 3.0 years

0 Lacs

Madgaon, Goa

On-site

About the Role: We are seeking an experienced Senior DevOps Engineer with strong expertise in AWS to join our growing team. You will be responsible for designing, implementing, and managing scalable, secure, and reliable cloud infrastructure. This role demands a proactive, highly technical individual who can drive DevOps practices across the organization and work closely with development, security, and operations teams. Key Responsibilities: Design, build, and maintain highly available cloud infrastructure using AWS services. Implement and manage CI/CD pipelines for automated software delivery and deployment. Collaborate with software engineers to ensure applications are designed for scalability, reliability, and performance. Manage Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or similar. Optimize system performance, monitor production environments, and ensure system security and compliance. Develop and maintain system and application monitoring, alerting, and logging using tools like CloudWatch, Prometheus, Grafana, or ELK Stack. Manage containerized applications using Docker and orchestration platforms such as Kubernetes (EKS preferred). Conduct regular security assessments and audits, ensuring best practices are enforced. Mentor and guide junior DevOps team members. Continuously evaluate and recommend new tools, technologies, and best practices to improve infrastructure and deployment processes. Required Skills and Qualifications: 5+ years of professional experience as a DevOps Engineer, with a strong focus on AWS. Deep understanding of AWS core services (EC2, S3, RDS, IAM, Lambda, ECS, EKS, etc.). Expertise with Infrastructure as Code (IaC) – Terraform, CloudFormation, or similar. Strong experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Hands-on experience with containerization (Docker) and orchestration (Kubernetes, EKS). Proficiency in scripting languages (Python, Bash, Go, etc.). 
Solid understanding of networking concepts (VPC, VPN, DNS, Load Balancers, etc.). Experience implementing security best practices (IAM policies, KMS, WAF, etc.). Strong troubleshooting and problem-solving skills. Familiarity with monitoring and logging frameworks. Good understanding of Agile/Scrum methodologies.

Preferred Qualifications: AWS Certified DevOps Engineer – Professional or other AWS certifications. Experience with serverless architectures and AWS Lambda functions. Exposure to GitOps practices and tools like ArgoCD or Flux. Experience with configuration management tools (Ansible, Chef, Puppet). Knowledge of cost optimization strategies in cloud environments.

Job Type: Full-time. Pay: ₹60,000.00 - ₹70,000.00 per month. Benefits: Provident Fund. Schedule: Day shift. Application Question(s): Are you based in Goa? Experience: DevOps: 4 years (required); AWS: 3 years (required). Work Location: In person. Speak with the employer: +91 8275022406
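A small example of the compliance and cost-optimization automation such a role involves: checking a resource inventory for required cost-allocation tags. The inventory shape and tag keys below are invented for illustration; real data would be flattened from AWS describe/list API responses.

```python
# Tag keys assumed mandatory by a hypothetical tagging policy.
REQUIRED_TAGS = {"owner", "env", "cost-center"}

def untagged(resources):
    """Return ids of resources missing any required tag key.

    resources: iterable of dicts shaped like {"id": ..., "tags": {...}}.
    Set inclusion (<=) checks that every required key is present.
    """
    return [r["id"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [
    {"id": "i-0a1", "tags": {"owner": "core", "env": "prod",
                             "cost-center": "cc-42"}},
    {"id": "i-0b2", "tags": {"owner": "core"}},  # missing env, cost-center
]
print(untagged(inventory))  # ['i-0b2']
```

Run on a schedule, a check like this feeds cost reports and can gate deployments that would create untagged infrastructure.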

Posted 2 months ago

Apply

9 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Java Full Stack Cloud Architect. Locations: Pune, Bangalore, Chennai, Hyderabad, Kochi, Trivandrum. Experience Level: 9+ years.

Key Responsibilities: We are seeking a highly skilled Java Full Stack Cloud Architect to design and implement scalable, secure, and high-performance cloud-native solutions. The ideal candidate will bring deep expertise in Java, microservices, modern front-end frameworks, cloud platforms (preferably AWS), and DevOps practices.

Your role will involve: Architecting cloud-native solutions: design scalable, resilient cloud architectures using Java (Spring Boot). Cloud strategy & implementation: deploy and optimize applications on AWS (preferred), GCP, or Azure. Microservices development: build RESTful APIs and event-driven microservices using Kafka or RabbitMQ. Containerization & orchestration: leverage Docker and Kubernetes for service deployment and scalability. Security & compliance: implement secure authentication and authorization using OAuth2, JWT, and IAM; ensure compliance with standards like SOC 2 and ISO 27001. DevOps enablement: build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab; automate infrastructure via Terraform or CloudFormation. Database & performance optimization: work with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases; implement caching (Redis) for performance. Front-end collaboration: collaborate with UI teams using React, JavaScript/TypeScript, and modern UI frameworks. Stakeholder collaboration: engage with cross-functional teams to align architectural goals with business needs.

Required Skills & Qualifications: Java proficiency: strong hands-on experience with Java 8+, Spring Boot, and Jakarta EE. Cloud expertise: proficient in AWS (preferred), Azure, or GCP. DevOps skills: experience with Kubernetes, Helm, Jenkins, GitOps, and IaC tools like Terraform. Security & identity: practical knowledge of OAuth 2.0, SAML, and Zero Trust Architecture. Database knowledge: expertise in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB). Scalable system design: ability to design and implement low-latency, high-availability systems. UI/front-end understanding: familiarity with React, TypeScript, and front-end integration in full-stack applications. Leadership & communication: excellent presentation skills, stakeholder management, and team mentoring ability.
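The caching responsibility above (Redis for performance) typically follows the read-through pattern: check the cache, and on a miss or expiry, load from the backing store and cache the result. A minimal in-process sketch is shown here in Python for illustration; a real deployment would use Redis, and the clock is injectable so expiry can be demonstrated without waiting.

```python
import time

class TTLCache:
    """Minimal read-through cache with per-entry time-to-live."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, stored_at)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        now = self.clock()
        if entry and now - entry[1] < self.ttl:
            return entry[0]                 # fresh hit
        value = loader(key)                 # miss or expired: reload
        self._store[key] = (value, now)
        return value

# Demo with a fake clock and a loader that records its invocations.
t = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: t[0])
loads = []
def load(k):
    loads.append(k)
    return k.upper()

cache.get_or_load("user:1", load)   # miss -> loads from source
cache.get_or_load("user:1", load)   # fresh hit -> served from cache
t[0] = 31.0
cache.get_or_load("user:1", load)   # expired -> reloads
print(loads)  # ['user:1', 'user:1']
```

The same get-or-load shape maps directly onto Redis GET/SETEX, with the TTL enforced by the server instead of a local clock.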

Posted 2 months ago

Apply

10 - 20 years

30 - 37 Lacs

Bengaluru

Remote

Direct link to apply : https://jobs.lever.co/pythian/2cec65bb-46eb-4ca0-8f94-1c9afd36bb38 Role & responsibilities Build and maintain client relationships, providing technical leadership and guidance for current projects. Collaborate with stakeholders to understand business requirements, assist in project planning, and document project plans for both small and medium-sized projects. Participate in and support sprint planning activities with the Project Manager, including story point estimation and ceremonies such as standups, backlog grooming, and retrospectives. Design and implement technical solutions for customer projects, ensuring scalability and efficiency. Create or contribute to building technical design documents and other necessary documentation for projects. Write testable, high-performance, reliable, and maintainable code for CI/CD pipelines and infrastructure-as-code frameworks (e.g., Terraform, CloudFormation). Design and implement security and network software components for multi-cloud solutions and architectures. Research, evaluate, and recommend third-party software and technology packages based on project requirements. Provide performance optimization recommendations and document best practices. Create cloud migration strategies and plans, following best practices and ensuring smooth transitions to cloud architectures. Develop automated provisioning solutions for servers, environments, containers, and data centers. Preferred candidate profile Experience with engineering solutions on major cloud provider platforms, preferably Google Cloud Platform (GCP), and one or both of Amazon Web Services (AWS) and Microsoft Azure. Hands-on experience with operating system platform configuration, tuning, and administration for Linux or Windows, with a preference for both. Strong understanding of application performance and design best practices to ensure applications and services are highly available, performant, scalable, and secure. 
High proficiency with open-source tools, including HashiCorp solutions such as Terraform, Packer, and Vault, along with other deployment frameworks like Pulumi. Proficiency in at least one popular programming language (e.g., Go, Java, Python, Ruby, Rust). Solid understanding of testing techniques and frameworks, including test- and behavior-driven development, with experience in writing test suites, mocks, and fixtures. Capability to write scripts for maintenance, automation, and data processing using scripting languages such as Bash, Groovy, JavaScript, Perl, PHP, PowerShell, or R. Experience with common configuration management tools (e.g., Ansible, Chef, Puppet). Strong knowledge of automating deployment, scaling, and management of containerized applications, ideally with hands-on experience using Kubernetes and tools like Helm. Exposure to Anthos is a plus. Skilled in common CI/CD tools, patterns, and techniques, with familiarity in pipeline enablement products such as ArgoCD, Azure DevOps, Cloud Build, GitLab, or Jenkins. Understanding of development methods, workflows, and patterns, particularly Agile and DevOps practices. Experience with stream-processing platforms and services, such as Kafka and Cloud Pub/Sub. Solid understanding of data security principles, including encryption, access control, and identity management, and their technical application to enforce data custodianship and compliance. Experience with on-premise architectures and virtualization applications such as vCenter. Experience in MLOps is a plus.
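Stream-processing platforms like Kafka and Cloud Pub/Sub, mentioned above, commonly aggregate events into windows. A tumbling window (fixed-size, non-overlapping) count can be sketched in a few lines of plain Python for illustration; stream processors do the same grouping continuously and with event-time semantics.

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows.

    Each event lands in the window starting at ts - (ts % window_seconds);
    returns {window_start: Counter of keys}.
    """
    windows = {}
    for ts, key in events:
        start = ts - (ts % window_seconds)
        windows.setdefault(start, Counter())[key] += 1
    return windows

# Invented event stream: (timestamp in seconds, event type).
events = [(1, "login"), (4, "login"), (11, "click"), (14, "login")]
print(tumbling_window_counts(events, 10))
# {0: Counter({'login': 2}), 10: Counter({'login': 1, 'click': 1})}
```

Real pipelines add watermarks and late-event handling on top of this basic assignment rule, but the window arithmetic is the same.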

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description: Join us as a Software Engineer. This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do: In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure, and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud. You'll also: Design, deploy, and manage Kubernetes clusters using Amazon EKS. Develop and maintain Helm charts for deploying containerized applications. Build and manage Docker images and registries for microservices. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Monitor and troubleshoot Kubernetes workloads and cluster health. Support CI/CD pipelines for containerized applications. Collaborate with development and DevOps teams to ensure seamless application delivery. Ensure security best practices are followed in container orchestration and cloud environments. Optimize the performance and cost of cloud infrastructure.

The skills you'll need: You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka, and API development. You'll also need: 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch. Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments. Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
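Maintaining Helm charts means comparing chart versions, and semantic versions must be compared numerically, not lexicographically ("1.10.0" is newer than "1.4.2" even though it sorts earlier as a string). A minimal sketch, assuming plain MAJOR.MINOR.PATCH versions without pre-release suffixes:

```python
def parse_semver(v: str):
    """Parse 'MAJOR.MINOR.PATCH' (optionally 'v'-prefixed) into an int tuple."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def needs_upgrade(installed: str, available: str) -> bool:
    """True when the available chart version is newer than the installed one.

    Tuple comparison gives the correct component-wise numeric ordering.
    """
    return parse_semver(available) > parse_semver(installed)

print(needs_upgrade("1.4.2", "1.10.0"))  # True: numeric, not lexicographic
```

Full SemVer also defines pre-release precedence (e.g., 1.0.0-rc.1 < 1.0.0), which this sketch deliberately ignores.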

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

