
156 GKE Jobs - Page 2

JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 8.0 years

16 - 20 Lacs

Mumbai

Work from Office

We are looking for an experienced Senior DevOps Cloud Engineer to design, build, and manage large-scale cloud-native systems. You will work across multiple products, ensuring high availability, scalability, and security in production environments.

Responsibilities:
- Architect and manage multi-cloud infrastructure (AWS, GCP, Azure).
- Deploy and maintain containerized applications on Kubernetes (EKS/GKE) with Helm/Kustomize.
- Automate provisioning and scaling using Terraform and related IaC tools.
- Implement GitOps workflows with ArgoCD or FluxCD.
- Build and optimize CI/CD pipelines with GitLab CI, Jenkins, or GitHub Actions.
- Monitor and secure infrastructure using Prometheus, Grafana, and the Elastic Stack.
- Collaborate with engineering teams to deliver reliable and secure platforms.
- Drive DevSecOps practices and handle incident management and performance optimization.

Requirements:
- 5+ years of hands-on experience in DevOps/cloud architecture.
- Expertise in at least two cloud providers (AWS, GCP, Azure).
- Proficiency in Kubernetes for production workloads.
- Strong Terraform and Helm skills for automation.
- Solid experience with CI/CD tools (GitLab CI, Jenkins, GitHub Actions).
- Practical knowledge of GitOps (ArgoCD/FluxCD).
- Good understanding of Linux environments.

Nice to have:
- Scripting skills (Python, Bash, Go).
- Familiarity with Packer, Rancher, OpenShift, or k3s.
- Exposure to on-prem deployments and hybrid cloud.
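
As a flavour of the Terraform automation this role describes, here is a minimal sketch of driving Terraform provisioning from Python; the workspace path is an illustrative assumption, not something taken from the listing.

```python
"""Minimal sketch: driving Terraform provisioning from Python.

Assumes the `terraform` CLI is installed and INFRA_DIR points at a working
configuration; both are illustrative placeholders, not from the posting.
"""
import subprocess

INFRA_DIR = "infra/gke-cluster"  # hypothetical Terraform workspace


def run(args: list[str]) -> None:
    # Run a Terraform command in the workspace, failing loudly on error.
    subprocess.run(["terraform", *args], cwd=INFRA_DIR, check=True)


if __name__ == "__main__":
    run(["init", "-input=false"])   # download providers and modules
    run(["plan", "-out=tfplan"])    # record the proposed changes
    run(["apply", "tfplan"])        # applying a saved plan never prompts
```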

Posted 1 week ago

Apply

5.0 - 8.0 years

17 - 20 Lacs

Mumbai

Work from Office

Requirements:
- An understanding of product development methodologies and microservices architecture.
- Hands-on experience with at least two major cloud providers (AWS, GCP, Azure); multi-cloud experience is a strong advantage.
- Expertise in designing, implementing, and managing cloud architectures focusing on scalability, security, and resilience.
- Understanding of and experience with cloud fundamentals such as networking, IAM, and compute, and managed services such as databases, storage, GKE/EKS, and KMS.
- Hands-on experience with cloud architecture design and setup.
- An in-depth understanding of Infrastructure as Code tools such as Terraform and Helm is a must.
- Practical experience in deploying, maintaining, and scaling applications on Kubernetes clusters using Helm charts or Kustomize.
- Hands-on experience with CI/CD tools such as GitLab CI, Jenkins, or GitHub Actions; GitOps tools such as ArgoCD or FluxCD are a must.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and the Elastic Stack.
- Experience working with PaaS is a plus.
- Experience deploying to on-prem data centres.
- Experience with k3s OSS, OpenShift, or Rancher Kubernetes clusters is a plus.

What are we looking for:
- Learn, architect, and build with the skills and technologies highlighted above.
- Product-oriented delivery.
- Design, build, and operate cloud architecture and the DevOps pipeline.
- Build on open source technologies.
- Collaboration with teams across 5 products.
- GitOps philosophy.
- DevSecOps mindset - a highly secure platform.
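
The GitOps requirement above centres on ArgoCD-style reconciliation. Below is a minimal sketch of checking ArgoCD Application sync and health status through the Kubernetes API; the `argocd` namespace is the conventional default and an assumption here.

```python
"""Minimal GitOps health check: read ArgoCD Application status via the
Kubernetes custom objects API. Namespace is an assumed default."""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

apps = api.list_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argocd", plural="applications",
)
for app in apps["items"]:
    status = app.get("status", {})
    print(app["metadata"]["name"],
          status.get("sync", {}).get("status"),    # e.g. Synced / OutOfSync
          status.get("health", {}).get("status"))  # e.g. Healthy / Degraded
```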

Posted 1 week ago

Apply

14.0 - 20.0 years

35 - 60 Lacs

Chennai

Hybrid

Greetings from Peoplefy!

Role: IT Architect (GCP)
Location: Chennai
Experience: 10+ years

Position Summary: Our client is building a next-generation Warehouse-as-a-Service platform leveraging Google Cloud Platform (GCP), microservices architecture, and AI-driven capabilities. We are seeking a Senior GCP IT Architect with 10+ years of overall IT experience and proven expertise in architecting enterprise-grade cloud solutions. This is a hands-on architecture role requiring deep mastery of GKE, App Engine, Cloud Run, Cloud Functions, microservices, and advanced databases such as AlloyDB, Cloud Spanner, and Cloud SQL, combined with up-to-date knowledge of GCP innovations, best practices, and emerging services. The architect will work closely with global teams to design, guide, and oversee secure, scalable, and high-performing deployments.

Key Responsibilities:
- Cloud architecture ownership - design, document, and implement end-to-end GCP architectures for large-scale, enterprise logistics platforms.
- GKE and microservices expertise - define Google Kubernetes Engine architecture, service mesh, scaling policies, and microservices deployment patterns.
- Latest GCP adoption - evaluate, recommend, and implement new GCP services and capabilities (e.g., Duet AI, latest networking features, new database offerings) for competitive advantage.
- Application deployment strategy - architect deployment models using App Engine, Cloud Run, and Cloud Functions for optimal performance and cost.
- DevOps integration - establish CI/CD pipelines using Gemini Code Assist, CLI tools, and Git workflows to enable continuous delivery.
- Database architecture - design resilient, high-performance data models on AlloyDB, Cloud Spanner, and Cloud SQL.
- Security and compliance - build security into the architecture using IAM, VPC Service Controls, encryption, and compliance frameworks.
- AI integration - leverage Google Vertex AI, NLP APIs, and other AI tools to embed intelligence into core applications.
- Performance optimization - conduct architecture reviews, performance testing, and cost optimization on a regular basis.
- Technical guidance - collaborate with front-end, back-end, and DevOps teams to ensure adherence to architecture principles.

Experience & Qualifications:
- Experience currently or previously working in the retail, e-commerce, logistics, or SCM domain.
- 10+ years in IT, with 5+ years in cloud architecture and 3+ years on GCP.
- Proven success in designing GKE-based microservices for production at scale.
- Strong experience with App Engine, Cloud Run, and Cloud Functions.
- Expertise in AlloyDB, Cloud Spanner, and Cloud SQL database solutions.
- Knowledge of the latest GCP services, AI/ML tools, and modern architecture patterns.
- Proficiency in API architecture, service integration, and distributed system design.
- Strong grasp of Git, DevOps automation, and CI/CD pipelines.
- Experience embedding AI/ML capabilities into cloud-native applications.
- Up to date on GCP security best practices and compliance requirements.
- Excellent communication skills for working with global, distributed teams.

If interested, please share your updated resume at amruta.bu@peoplefy.com
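
To ground the GKE focus above, here is a small sketch of enumerating GKE clusters with the google-cloud-container client; the project ID is a placeholder, and credentials are assumed to come from Application Default Credentials (e.g. `gcloud auth application-default login`).

```python
"""Sketch: enumerating GKE clusters with google-cloud-container.
PROJECT_ID is a hypothetical placeholder."""
from google.cloud import container_v1

PROJECT_ID = "my-gcp-project"  # hypothetical

client = container_v1.ClusterManagerClient()
# "-" as the location lists clusters across all zones and regions.
response = client.list_clusters(parent=f"projects/{PROJECT_ID}/locations/-")
for cluster in response.clusters:
    print(cluster.name, cluster.location, cluster.current_node_count)
```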

Posted 1 week ago

Apply

10.0 - 20.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Work from Office

Key Responsibilities:
- Develop, test, and deploy Python-based applications with a focus on scalability, reliability, and performance.
- Design and implement cloud-native solutions using Google Kubernetes Engine (GKE) and Anthos.
- Collaborate with DevOps teams to manage Kubernetes clusters, ensuring high availability and optimal performance.
- Automate deployment pipelines and CI/CD workflows for efficient application delivery.
- Monitor and troubleshoot application performance, ensuring seamless operation in production environments.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Stay updated with the latest trends and best practices in Kubernetes, Anthos, and cloud-native development.

Required Skills and Qualifications:
- Strong proficiency in Python programming, with experience in developing microservices and APIs.
- Hands-on experience with Google Cloud Platform (GCP) services, including GKE, Anthos, IAM, and networking.
- In-depth knowledge of Kubernetes concepts such as pods, services, deployments, and ingress controllers.
- Experience with containerization tools like Docker and managing containerized applications.
- Familiarity with Infrastructure as Code (IaC) tools such as Terraform or Deployment Manager.
- Proficient in building CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Cloud Build.
- Strong understanding of cloud security, networking, and monitoring tools (e.g., Stackdriver, Prometheus, Grafana).
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills to work effectively in a team environment.
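
Illustrating the Python-plus-GKE combination this posting asks for, a minimal sketch using the official Kubernetes Python client to report deployment readiness; it assumes kubeconfig access to a cluster.

```python
"""Sketch: inspecting workload readiness with the Kubernetes Python client.
Assumes a reachable cluster configured in the local kubeconfig."""
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    ready = dep.status.ready_replicas or 0  # None until pods become ready
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
          f"{ready}/{dep.spec.replicas} replicas ready")
```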

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Cloud Computing Solutions Developer at our Chennai location, with 4-6 years of experience and a CTC of 20 LPA, you will be responsible for designing and implementing AIOps solutions to optimize IT operations efficiency. You will collaborate with cross-functional teams to integrate AIOps tools, build machine learning models for predictive analysis, and develop dashboards for actionable insights. You will stay updated on emerging technologies, recommend innovative solutions, and leverage GenAI solutions for operational automation. Additionally, you will mentor junior team members in AIOps principles and best practices.

We are looking for someone with a passion for leveraging AI and ML in IT operations, strong analytical and problem-solving skills, excellent communication and collaboration abilities, a proactive and self-directed mindset, and an eagerness to learn new technologies. You should have a degree in Computer Science or a related field, 5+ years of experience in IT operations or DevOps, proficiency in AIOps platforms and tools, an understanding of machine learning algorithms, and proficiency in programming languages like Python or Java. Experience with cloud platforms and containerization technologies is a plus.

In this role, you will drive the development, implementation, and maintenance of cloud computing solutions. You will work independently to build scalable grid systems on the cloud platform, conduct testing and maintenance, meet with clients to discuss requirements, evaluate and develop cloud-based systems, migrate existing systems, conduct tests for software components, and install, configure, and maintain cloud-based systems to enhance performance and automate processes.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

The selected intern will be responsible for the following day-to-day tasks:
- Deploying and managing Kubernetes clusters such as EKS, AKS, GKE, or on-prem.
- Administering Linux systems including Ubuntu, RHEL, and CentOS to ensure performance and security.
- Automating deployments using tools like Helm, Terraform, or Ansible.
- Monitoring clusters using Prometheus/Grafana and troubleshooting any issues that arise.
- Implementing CI/CD pipelines with Jenkins, GitLab CI, and ArgoCD.
- Ensuring adherence to security best practices, including RBAC, network policies, and secrets management.
- Collaborating with DevOps/development teams to deliver cloud-native solutions.

About the Company: Bizlem is based in Navi Mumbai and focuses on developing digital solutions for the real estate sector in India and the Gulf region. The company specializes in revolutionizing business processes through a deep-learning-powered digital workforce. Bizlem is dedicated to advancing artificial intelligence (AI) and natural language processing (NLP) technologies, leading the way in creating innovative products that address unmet business challenges using core AI and NLP capabilities.
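
The monitoring task above typically means querying Prometheus. A minimal sketch against the Prometheus HTTP API follows; the service URL and PromQL query are illustrative assumptions.

```python
"""Sketch: a simple cluster health check via the Prometheus HTTP API.
The URL and query are placeholders, not from the posting."""
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical address

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'sum(kube_pod_status_phase{phase="Pending"})'},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["data"]["result"]  # instant-vector result list
pending = float(result[0]["value"][1]) if result else 0.0
print(f"Pending pods: {pending:.0f}")
```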

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Principal Software Engineer, you will play a vital role in the design, development, and deployment of advanced AI and generative AI-based products. Your main responsibilities will include driving technical innovation, leading complex projects, and working closely with cross-functional teams to deliver high-quality, scalable, and maintainable solutions. To excel in this role, you must possess a strong background in software development, AI/ML techniques, and DevOps practices. Mentoring junior engineers and contributing to strategic technical decisions are also key aspects of this position.

Your primary responsibilities will involve advanced software development: designing, developing, and optimizing high-quality code for complex software applications and systems. It will be crucial to maintain high standards of performance, scalability, and maintainability while driving best practices in code quality, documentation, and test coverage. Furthermore, you will lead the end-to-end development of generative AI solutions, from data collection and model training to deployment and optimization. Experimenting with cutting-edge generative AI techniques to enhance product capabilities and performance will be a key part of your role.

As a technical leader, you will take ownership of architecture and technical decisions for AI/ML projects. You will mentor junior engineers, review code for adherence to best practices, and ensure that the team maintains a high standard of technical excellence. Project ownership will also be a significant part of your responsibilities: you will lead the execution and delivery of features, manage project scope, timelines, and priorities in collaboration with product managers, and proactively identify and mitigate risks.

You will contribute to the architectural design and planning of new features, ensuring that solutions are scalable, reliable, and maintainable. Engaging in technical reviews with peers and stakeholders to promote a product-suite mindset will be essential. You will conduct rigorous code reviews to ensure adherence to industry best practices, maintainability, and performance optimization, providing feedback that supports team growth and technical improvement.

In addition, you will design and implement robust test suites to ensure code quality and system reliability, advocating for test automation and the use of CI/CD pipelines to streamline testing processes and maintain service health. You will also monitor and maintain the health of deployed services, using telemetry and performance indicators to proactively address potential issues, performing root cause analysis for incidents, and driving preventive measures for improved system reliability.

Taking end-to-end responsibility for features and services in a DevOps model to deploy and manage software in production will be part of your role. Ensuring efficient incident response and maintaining a high level of service availability are key components of this responsibility. You will also create and maintain thorough documentation for code, processes, and technical decisions, and contribute to knowledge sharing within the team to enable continuous learning and improvement.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related technical field, with a Master's degree preferred.
You should also have at least 6 years of professional software development experience, including significant experience with AI/ML or GenAI applications. Demonstrated expertise in building scalable, production-grade software solutions is essential. Advanced proficiency in Python, FastAPI, PyTest, Celery, and other Python frameworks, along with deep knowledge of software design patterns, object-oriented programming, and concurrency, is required. Extensive experience with cloud technologies (e.g., GCP, AWS, Azure), containerization (e.g., Docker, Kubernetes), CI/CD practices, version control systems (e.g., GitHub), and work tracking tools (e.g., JIRA) is also necessary. Familiarity with GenAI frameworks (e.g., LangChain, LangGraph), MLOps, AI lifecycle management, and experience with model deployment and monitoring in cloud environments is preferred. Additionally, hands-on experience with advanced ML algorithms, including generative models, NLP, and transformers, and knowledge of industry-standard AI frameworks (e.g., TensorFlow, PyTorch) are advantageous. Proficiency with relational and NoSQL databases (e.g., MongoDB, MSSQL, PostgreSQL), analytics platforms (e.g., BigQuery, Snowflake, Tableau), and messaging systems (e.g., Kafka) is a plus. Experience with test automation tools (e.g., PyTest, xUnit) and CI/CD tooling such as Terraform and GitHub Actions, with a strong emphasis on building resilient and testable software, is also beneficial. Proficiency with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, and Dataflow, focusing on deploying AI models at scale, is an advantage.

In conclusion, as a Principal Software Engineer at our organization, you will play a critical role in driving technical innovation, leading complex projects, and collaborating with cross-functional teams to deliver high-quality, scalable, and maintainable AI and generative AI-based products. Your expertise in software development, AI/ML techniques, and DevOps practices, along with your ability to mentor junior engineers and contribute to strategic technical decisions, will be instrumental to your success in this role.
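
Since the requirements name FastAPI specifically, here is a minimal sketch of the style of service involved; the routes and model are hypothetical, and the generate endpoint merely stands in for a model call.

```python
"""Sketch: a minimal, testable FastAPI service. Names are illustrative."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Prompt(BaseModel):
    text: str


@app.get("/health")
def health() -> dict:
    # Simple liveness endpoint for orchestrators like Kubernetes.
    return {"status": "ok"}


@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Placeholder for a GenAI model call (e.g. a Vertex AI endpoint).
    return {"echo": prompt.text}
```

Run locally with `uvicorn main:app`; FastAPI serves interactive docs at /docs, which suits the testable-service emphasis in the posting.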

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You will be expected to have hands-on project experience with GCP core products and services, including GCP networking, VPCs, VPC Service Controls (VPC-SC), and Google Artifact Registry. Extensive experience in Infrastructure as Code is essential, including custom Terraform modules and the Terraform module registry. Moreover, you should possess hands-on experience with GCP data products such as BigQuery, Dataproc, Dataflow, and Vertex AI. Familiarity with Kubernetes and managing container infrastructure, specifically GKE, is also required for this role. The role involves automation using programming languages like Python, Groovy, etc. Additionally, an understanding of infrastructure security, threat modeling, and guardrails would be beneficial for this position.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Senior Software Engineer - DevOps at INVIDI Technologies Corporation in Bangalore, India, you will be part of a globally acclaimed software development company that revolutionizes television broadcasting. Our Emmy Award-winning technology is utilized by leading cable, satellite, and telco operators worldwide, delivering targeted ads seamlessly across various devices and platforms. INVIDI's innovative solutions have played a pivotal role in shaping the addressable television industry, with clients including major operators, networks, advertising agencies, and prominent brands.

In this dynamic and fast-paced environment, you will be at the forefront of commercial television innovation, contributing to the development of a unified video ad tech platform. Your role as a DevOps Engineer is essential to supporting and enhancing a remote product development team. Operating within a modern agile product organization, you will maintain and deploy scalable, performant backend services in Java and Kotlin, ensuring high availability and operational efficiency. Collaborating closely with peers and product owners, you will play a key role in evolving deployment pipelines, troubleshooting issues, and mentoring team members. Your responsibilities will also include active participation in on-call rotations, responding to alarms and maintaining critical services as needed. With a focus on simplicity, elegance, and continuous learning, you will be expected to excel in a collaborative and agile work environment.

To excel in this role, you should possess a Master's degree in computer science or equivalent, along with at least 4 years of industry experience. Strong development skills, experience with high-volume systems, and proficiency in technologies such as Dropwizard, Kafka, Google Cloud, Terraform, and Docker are highly desirable. Additionally, expertise in infrastructure maintenance, CI/CD tools, and cloud services like GCP and AWS will be advantageous.

INVIDI offers a supportive and organized office environment, where your contributions will be valued and recognized. If you are a proactive and motivated DevOps Engineer with a passion for innovation and a commitment to excellence, we invite you to apply and be part of our talented team at INVIDI Technologies Corporation.
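
Given the Kafka experience this posting highlights, here is a minimal producer sketch using the confluent-kafka Python client; the broker address and topic name are assumptions for illustration.

```python
"""Sketch: a minimal Kafka producer with confluent-kafka.
Broker and topic are hypothetical placeholders."""
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "kafka:9092"})  # hypothetical


def on_delivery(err, msg):
    # Called once per message when the broker acks (or fails) it.
    if err is not None:
        print(f"delivery failed: {err}")


producer.produce("ad-events", value=b'{"impression": 1}',
                 callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```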

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Cloud & DevOps Engineer at Omniful, you will play a crucial role in designing, implementing, and maintaining scalable cloud infrastructure on platforms like AWS, GCP, and Azure. Your responsibilities will include automating infrastructure provisioning using Terraform, building and managing CI/CD pipelines with tools like Jenkins, GitHub Actions, and GitLab, and deploying and managing containerized applications using Docker, Kubernetes (EKS, GKE), and AWS ECS. Your expertise will ensure system reliability, security, and performance through modern observability tools such as Prometheus and Grafana. Collaboration with engineering teams will be essential to support fast and reliable operations.

We are looking for candidates with a minimum of 3 years of hands-on experience in cloud and DevOps engineering, with strong expertise in AWS, GCP, Azure, and Terraform. Proficiency in CI/CD tools like Jenkins, GitHub Actions, GitLab, and ArgoCD, along with experience in Docker, Kubernetes, and managing AWS ECS environments, is required. Scripting knowledge in Bash, Python, and YAML is essential, as is familiarity with observability stacks and a strong understanding of Git and version control systems. Experience in incident response and system troubleshooting, plus an understanding of cloud security best practices, will help you excel in this role at Omniful.

If you are passionate about building and managing scalable cloud infrastructure, automating systems, optimizing CI/CD pipelines, and implementing infrastructure as code, this position offers an exciting opportunity to contribute to our fast-growing B2B SaaS startup and make a significant impact in the field of operations and supply chain management.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

This position is for a Senior Software Engineer responsible for developing and deploying a Node.js backend in a long-term software project for a US client. You will work in Trivandrum, India, collaborating with the existing project team on technical and management aspects. Your responsibilities will include requirement elicitation, software architecture design, implementation, code reviews, and supporting deployment in a cloud environment. It is crucial to take each assigned task to completion while ensuring the quality of deliverables. Self-initiative, decision-making skills, self-directing capabilities, and a go-getter attitude are essential for success in this role.

You will be responsible for performing software requirements analysis, determining functional and non-functional requirements, analyzing requirements to create solutions and software architecture designs, writing high-quality code, and deploying applications in a cloud environment by selecting relevant cloud services. Effective communication with stakeholders to clarify requirements and expectations is vital, and timely delivery of the product with high quality is a key aspect of this role. Collaboration with stakeholders, including customers, is necessary to ensure the successful execution of the project. Managing priority changes and conflicts gracefully and addressing customer escalations promptly are part of your responsibilities. Proactively suggesting tools and systems to enhance quality and productivity is encouraged, along with staying updated on relevant technology and process advancements.

To qualify for this role, you should have more than three years of experience in Node.js development, expertise in developing Web APIs and RESTful services, familiarity with relational databases like MySQL and PostgreSQL and non-relational databases such as MongoDB, and experience with code quality tools and unit testing. You should also bring knowledge of Kubernetes, Docker, and GCP services (e.g., Cloud Healthcare API, GKE, Cloud Run, Cloud Functions, Firestore, Cloud SQL); experience deploying, scaling, and monitoring applications in GCP; proficiency in code versioning tools like Git; an understanding of software development lifecycles (SDLC), version control, and traceability; experience in Agile development methodology; and proficiency in various development tools for designing, coding, debugging, testing, bug tracking, collaboration, and source control. Additional knowledge of the healthcare domain and protocols like DICOM and HL7 will be an added advantage.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for designing and implementing cloud-native and hybrid solutions using GCP services such as Compute Engine, Kubernetes (GKE), Cloud Functions, BigQuery, Pub/Sub, Cloud SQL, and Cloud Storage. Additionally, you will define cloud adoption strategies, migration plans, and best practices for performance, security, and scalability. You will also implement and manage Terraform, Cloud Deployment Manager, or Ansible for automated infrastructure provisioning.

The ideal candidate is a GCP data architect with network-domain skills in GCP (Dataproc, Cloud Composer, Dataflow, BigQuery), Python, and Spark/PySpark, plus hands-on experience in the network domain, specifically 4G, 5G, LTE, and RAN technologies; knowledge and work experience in these areas are preferred. You should also be well-versed in ETL architecture and data pipeline management.

This is a full-time position with a day shift schedule from Monday to Friday. The work location is remote, and the application deadline is 15/04/2025.
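
As a small illustration of the Pub/Sub piece of the stack above, here is a sketch of publishing a message with the google-cloud-pubsub client; the project, topic, and payload are placeholders.

```python
"""Sketch: publishing to Cloud Pub/Sub. Project/topic IDs are placeholders;
credentials are assumed to come from Application Default Credentials."""
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "network-events")

# Attributes are passed as string keyword arguments alongside the payload.
future = publisher.publish(topic_path, data=b'{"cell_id": "A1"}',
                           source="ran-probe")
print("published message id:", future.result())  # blocks until acked
```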

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the role

Do you want to help build Roku's next-generation unified cloud-agnostic hosting platform? Are you experienced with Terraform, Kubernetes, and Istio? Can you write applications and automation in Golang, Python, or Shell? Are you interested in being part of a multinational team designing and creating the platform? If so, this role is for you!

About the team

The central Infrastructure Engineering team is looking for highly skilled infrastructure and software engineers to help develop and drive Roku's service mesh hosting architecture. Our team is responsible for building and scaling the platform (Kubernetes, Istio, Envoy, operators, and more) to effect Roku's transition towards a single, unified, cloud-agnostic system where all teams speak the same infrastructure language. We are engaging with Roku's engineering teams to migrate hundreds of workloads to our common platform, including helping augment and automate CI/CD flows. We are looking for engineers who love working collaboratively across teams to achieve results that impact the entire company.

What you'll be doing:
- Help architect, design, build, and deploy Roku's next-generation service mesh and cloud infrastructure.
- Contribute to evolving our deployments by building solutions using Docker, Kubernetes, Istio/Envoy, and Terraform.
- Join efforts to investigate new technology and tools to be adopted by Roku.
- Help build and integrate security as part of the infrastructure.
- Collaborate on internal customer engagements as we migrate workloads to Kubernetes + Istio + open-source observability tools and technologies.
- Work closely with the Observability team to integrate and scale existing and new observability tools as part of a holistic solution.
- Work closely with the SRE team to maintain availability of our services and improve onboarding workflows.
- Mentor other team members to define and adopt new, or improve existing, processes and procedures.

We're excited if you have:
- Strong hands-on experience in cloud technologies; AWS, ECS, and Kubernetes (EKS, GKE, AKS, or other) preferred. Knowledge of another cloud platform like GCP or Azure is a plus but not required.
- A demonstrated understanding of overall infrastructure design and of developing tools to enable and automate the infrastructure.
- Experience with a high-level scripting language (such as Python) and a system programming language (such as Go).
- Strong experience with Kubernetes.
- Production experience in testing and deploying applications via modern CI/CD tools and concepts.
- Familiarity with observability tools like Prometheus, Thanos, Loki, Grafana, etc.
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment.
- The ability to work independently.
- A demonstrated ability to communicate clearly with both technical and non-technical project stakeholders.
- Experience with integrating AI tools to improve processes and reduce toil (a plus).
- A Master's degree or equivalent experience (8+ years).
- Experience trying Gen AI in or outside of work, or curiosity about Gen AI and some exploration of it.

Benefits

Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture

Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002.

To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Req ID: 327318

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Job Title / Role: GCP & GKE - Sr Cloud Engineer

Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ years
Total Experience: 4+ years

Mandatory Skills - Technical Qualification/Knowledge:
- Expertise in assessing, designing, and implementing GCP solutions, covering aspects such as compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
- Must have GCP Solution Architect Certification.
- Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
- Prior experience using industry-leading or native discovery, assessment, and migration tools.
- Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility.
- Good knowledge of GCP technologies and associated components and variations, including the Anthos application platform.
- Working knowledge of GCE, GAE, GKE, and GCS.
- Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK.
- Creating databases in GCP and in VMs.
- Knowledge of data analysis tooling (BigQuery).
- Knowledge of cost analysis and cost optimization.
- Knowledge of Git and GitHub, and of Terraform and Jenkins.
- Monitoring VMs and applications using Stackdriver.
- Working knowledge of VPN and Interconnect setup.
- Hands-on experience setting up HA environments and creating VM instances in Google Cloud Platform.
- Hands-on experience with Cloud Storage and retention policies in storage.
- Managing users in Google IAM and granting them appropriate permissions.

GKE:
- Install tools - set up Kubernetes tools.
- Administer a cluster.
- Configure Pods and containers - perform common configuration tasks for Pods and containers.
- Monitoring, logging, and debugging.
- Inject data into applications - specify configuration and other data for the Pods that run your workload.
- Run applications - run and manage both stateless and stateful applications.
- Run Jobs - run Jobs using parallel processing (see the sketch after this listing).
- Access applications in a cluster.
- Extend Kubernetes - understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
- Manage cluster daemons - perform common tasks for managing a DaemonSet, such as performing a rolling update.
- Extend kubectl with plugins - extend kubectl by creating and installing kubectl plugins.
- Manage HugePages - configure and manage huge pages as a schedulable resource in a cluster.
- Schedule GPUs - configure and schedule GPUs for use as a resource by nodes in a cluster.

Certification: GCP Engineer & GKE
Academic Qualification: B.Tech (or equivalent) or MCA

Process/Quality Knowledge:
- Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
- Knowledge of quality processes.
- Knowledge of security processes.

Soft Skills:
- Good communication skills and the capability to work directly with global customers.
- Timely and accurate communication.
- Demonstrated ownership of technical issues, engaging the right stakeholders for timely resolution.
- Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
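
Touching on the "Run Jobs using parallel processing" task listed above, here is a minimal sketch of creating a parallel Kubernetes Job with the Python client; the image, command, and namespace are illustrative.

```python
"""Sketch: a parallel Kubernetes Job via the Python client.
Image, command, and namespace are illustrative placeholders."""
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="parallel-demo"),
    spec=client.V1JobSpec(
        parallelism=3,   # run up to 3 pods at a time
        completions=6,   # until 6 pods have succeeded in total
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="worker",
                    image="busybox",
                    command=["sh", "-c", "echo processing && sleep 5"],
                )],
            ),
        ),
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```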

Posted 2 weeks ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Hybrid

Hands-on experience with GKE, GCVE, landing zone design, and DevOps automation. Working knowledge of networking, firewalls, and security architecture in GCP. GCP certifications such as Associate Cloud Engineer, Professional Cloud Architect, or Security Engineer preferred.

Responsibilities:
- Lead and execute VMware to GCVE migration projects, ensuring minimal disruption and optimal performance.
- Design and implement landing zones on GCP, incorporating security, governance, and scalability best practices.
- Deploy and manage Google Kubernetes Engine (GKE) clusters for containerized workloads.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

A career at HARMAN Automotive is an opportunity to be part of a global, multi-disciplinary team dedicated to leveraging technology to shape the future. Join us and gain the tools to accelerate your professional growth. Engineer cutting-edge audio systems and integrated technology platforms that enhance the driving experience by combining creativity, thorough research, and a collaborative spirit with design and engineering excellence. Contribute to the advancement of in-vehicle infotainment, safety, efficiency, and enjoyment.

We are currently looking for an experienced Cloud Platform and Data Engineering Specialist well-versed in GCP (Google Cloud Platform) or Azure to join our team. The ideal candidate should possess a solid background in cloud computing, data engineering, and DevOps. Your responsibilities will include managing and optimizing cloud infrastructure (GCP) to ensure scalability, security, and performance; designing and implementing data pipelines, data warehousing, and data processing solutions; developing and deploying applications using Kubernetes and Google Kubernetes Engine (GKE); and creating and maintaining scripts and applications using Python.

To excel in this role, you should have 3-6 years of experience in cloud computing, data engineering, and DevOps. Technical proficiency in GCP or Azure, Kubernetes and GKE, and Python programming is essential. Additionally, you should demonstrate strong problem-solving abilities, attention to detail, and effective communication and collaboration skills. Experience with GCP services like Compute Engine, Storage, and BigQuery, data engineering tools such as Apache Beam, Dataflow, or BigQuery, and DevOps tools like Jenkins, GitLab CI/CD, or Cloud Build will be considered a bonus.

We offer a competitive salary and benefits package, opportunities for professional growth and development, a collaborative work environment, exposure to cutting-edge technologies, recognition for outstanding performance through BeBrilliant, and the chance to collaborate with a renowned German OEM. The standard work schedule is five days a week.

At HARMAN, we are dedicated to creating an inclusive and supportive environment where every employee is valued. We encourage you to share your ideas, perspectives, and unique qualities within a culture that celebrates diversity. We also provide opportunities for training, development, and continuing education to help you thrive in your career.

HARMAN is a pioneer in unleashing next-level technology that amplifies the sense of sound. Our integrated technology platforms make the world smarter, safer, and more connected, catering to automotive, lifestyle, and digital transformation needs. Under 16 iconic brands such as JBL, Mark Levinson, and Revel, we deliver innovative solutions that transform ordinary moments into extraordinary experiences, setting high engineering and design standards for our customers, partners, and employees. If you are ready to innovate, make a lasting impact, and join a community of talented individuals, we invite you to explore opportunities with us at HARMAN Automotive.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be responsible for managing and supporting multiple SAS platforms (standalone, Grid, VA, Viya) hosted on UNIX/Linux servers. This includes providing SAS platform security management and SAS application and underlying infrastructure support (OS, storage, SAS 9.x EBI applications, web, and databases) while ensuring processes are in line with organizational policies. Your role will involve monitoring the overall availability and performance of the current SAS server environments and taking corrective action as necessary to ensure peak operational performance. Good knowledge of SAS ACLs and UNIX/Linux security is essential, as is monitoring usage logs to assist in performance tuning of systems and servers. You will also need to know how to apply hotfixes and renew SAS licenses across the platforms. Interacting directly with various IT teams on SAS administration is a key aspect of the role. You will be responsible for scheduling and coordinating all planned outages on the SAS server and serving as the point of contact for unplanned outages or other urgent issues related to the SAS environment. Installation, upgrade, and configuration of SAS technology products on all supported server environments will be part of your responsibilities.

To qualify for this role, you should have 4+ years of hands-on experience as a SAS Administrator of the 9.4 suite in Linux environments, performing system monitoring, log analysis, performance analysis, performance tuning, and capacity planning. You should also possess 3+ years of hands-on experience as a SAS Grid Administrator of the 9.4 suite in a Linux environment, with good knowledge of LSF and Grid Manager, plus 2+ years of hands-on experience administering a SAS Visual Analytics environment. An understanding of the Linux commands that enable monitoring of SAS jobs, the ability to navigate Linux environments for basic SAS functioning, and a good understanding of the integration between Linux and SAS are also necessary. Knowledge of schedulers like cron, Autosys, and the SAS schedule manager is preferred. A thorough understanding of SAS products and how they work together in an environment is essential, along with experience troubleshooting SAS platform and client issues to root cause. Proficiency in working with the SAS Management Console (SMC) and SAS configuration files is also required. Strong analytical, communication, teamwork, problem-solving, and interpersonal skills are important for this role.

In addition to the SAS-specific skills, you are expected to have at least 7 years of hands-on experience with GCP services, including GKE, Filestore, IAM, and networking, or other similar cloud technologies. You should have at least 5 years of experience building Kubernetes clusters and container orchestration, with deep expertise in GKE, and experience using Terraform for infrastructure as code, including complex cluster provisioning in production settings. Furthermore, you should have at least 3 years of experience configuring and managing scalable storage solutions, including GCP Filestore. Experience in cloud migration projects, including operating in a hybrid cloud mode, will be an added advantage.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

We are seeking passionate technologists who are eager to lead client engagements and oversee the delivery of complex technical projects. Your responsibilities will include developing test strategies, plans, and test cases. You will also automate tests using homegrown and open source test frameworks. Collaborating across teams to create solution-based test plans, incorporating feedback from various stakeholders, is a key aspect of this role, as is maintaining current QA processes while introducing new ones. You will define system test plans to validate all existing feature functionality configured on system testbeds and automate all test cases, and you will verify endurance and scale testing to ensure robust feature functionality under stress.

To be successful in this role, you should have 3 to 8 years of test engineering experience. Strong scripting or programming skills in languages such as Golang, Python, Node.js, or Ruby are required, along with knowledge of test frameworks like Selenium, WebDriver, Watir, and PyUnit/JUnit. Experience testing container technologies such as Docker, Kubernetes, Mesos, Nomad, ECS, EKS, and GKE is also necessary, as is proficiency in debugging and troubleshooting issues at various levels of the software stack. Experience with performance testing using tools like JMeter, and with scale and reliability testing, is important. You should also have experience with different types of testing, including functional, integration, regression, system, installation and upgrade, and sanity/smoke testing. Hands-on experience with cloud computing platforms like AWS, Azure, or GCP is preferred. Familiarity with Agile methodologies and the ability to work in small teams are advantageous. A minimum of 3 years of experience automating tests for web-based applications and proficiency in REST API testing are required. Experience working with US-based startups and enterprises, along with strong communication skills, will be beneficial in this role.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at Zebra, your primary responsibility will be to understand the technical requirements of clients and design and build data pipelines to meet those requirements. In addition to developing solutions, you will oversee the development of other engineers. Strong verbal and written communication skills are essential, as you will communicate with clients and internal teams. Success in this role requires a deep understanding of databases, SQL, cloud technologies, and modern data integration and orchestration tools like GCP Dataflow, GKE, Workflows, Cloud Build, and Airflow.

You will play a critical role in designing and implementing data platforms for AI products, developing productized and parameterized data pipelines, and creating efficient data transformation code in various languages such as Python, Scala, Java, and Dask. You will also build workflows to automate data pipelines using Python, Argo, and Cloud Build, develop data validation tests, and conduct performance testing and profiling of the code.

In this role, you will guide Data Engineers in delivery teams to follow best practices in deploying data pipeline workflows, build data pipeline frameworks to automate high-volume and real-time data delivery, and operationalize scalable data pipelines to support data science and advanced analytics. You will also optimize customer data science workloads, manage cloud services costs and utilization, and develop sustainable data-driven solutions with cutting-edge data technologies.

To qualify for this position, you should have a Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering, along with at least 5 years of experience programming in languages like Python, Scala, or Go. You should also have extensive experience in SQL, data transformation, developing distributed systems using open-source technologies like Spark and Dask, and working with relational or NoSQL databases. Experience in AWS, Azure, or GCP environments is highly desired, as is knowledge of data models in the retail and consumer products industry and agile project methodologies. Strong communication skills, the ability to learn new technologies quickly and independently, and the capacity to work in a diverse and fast-paced environment are key competencies for this role. You should be able to work collaboratively in a team setting and achieve stretch goals while teleworking. Travel is not expected for this position.
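
For the Airflow orchestration mentioned above, here is a minimal DAG sketch in the Airflow 2.x style; the task logic, DAG ID, and schedule are illustrative placeholders.

```python
"""Sketch: a minimal two-step Airflow DAG (Airflow 2.4+ `schedule` kwarg).
DAG ID, schedule, and task bodies are illustrative."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling source data")


def transform():
    print("validating and transforming")


with DAG(dag_id="retail_pipeline", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # run transform only after extract succeeds
```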

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. We are looking for a big data engineer to work in our team deploying and maintaining applications on the big data platform. In this role, you will:
- Bring strong knowledge of Linux administration.
- Apply experience in big data technologies, managing workloads by following industry best practices for big data on Cloudera.
- Scan big data platforms and fix vulnerabilities where required.
- Work with the existing cloud automation team, following established processes and procedures to deliver automation and maintain the existing cloud environments.
- Work with different teams (spread across multiple locations) to develop scalable, reliable, and resilient software running in GKE.

Requirements

To be successful in this role, you should meet the following requirements:
- Experience in big data/GKE; knowledge of Istio will be an added advantage.
- Experience in different deployment strategies, and in cloud monitoring and logging.
- Knowledge of Vault for secrets storage (an added advantage).
- Strong Linux administration experience within a complex multi-tier enterprise infrastructure environment (OAT, NAS, NetBackup and Puppet).
- Knowledge of and hands-on experience with Ansible Tower.
- Hands-on DevOps tooling experience: GitHub, Jenkins, Ansible & Ansible Tower, Groovy, etc.
- In-depth automation and scripting skills, including use of scripting languages such as Bash, Python and YAML; PowerShell would also be advantageous.
- Knowledge of CI/CD (Continuous Integration/Continuous Deployment).
- Experience working in an Agile team and knowledge of Jira/Confluence (an added advantage).
- Good analytical skills in issue diagnosis, and good communication skills.
- Identify, plan, and deliver IT initiatives; coordinate planning and delivery of business changes.
- Oversee production support, and manage the support roster and xMatters setup.
- Carry out periodic renewal of BIA, DSS, Data Clinic, BRETT, and DR Exemption.
- Manage Change Order implementation; manage incident communication, SOTN updates, and representation in Retro calls and PIR meetings.
- Review and approve CRs raised by the platform team for patching and SSP activities; perform impact assessments and plan activities accordingly.
- Requirement gathering and developing PySpark, Pentaho, and Control-M jobs.
- BFF SPOC for the Awards & Recognition pillar.
- Any degree, with a strong finance and accounting background; PySpark and big data knowledge is preferable.
- Key skills: Big Data, Pentaho, PySpark, Linux shell scripting, Python programming, Control-M scheduling.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count.
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by - HSDI

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will:
- Design and engineer software with the customer/user experience as a key objective.
- Actively contribute to the Technology Engineering Practice and adhere to all best practices, standards and policies.
- Ensure service resilience, service sustainability and recovery time objectives for all the solutions delivered.
- Keep up to date and maintain expertise on current tools, technologies and areas like cyber security, and on regulations pertaining to aspects like data privacy, consent and data residency, ensuring compliance with all relevant controls and standards.
- Design and implement solutions using the technologies listed, delivering high-performing solutions to complex problems.
- Develop the most appropriate IT solutions, in line with the solution design, to meet customer needs.
- Ensure continuous improvement, with responsibility for writing the unit, integration and automation tests.
- Work closely with Agile leads, Product Owners, Technical Leads, data analysts, QA engineers and Business Analysts throughout the project lifecycle.
- Perform and lead deployments to various environments using DevOps tools and CI/CD pipelines.
- Provide technical leadership to guide and mentor junior developers.

Requirements

To be successful in this role, you should meet the following requirements:
- Strong knowledge of Java, Spring Boot and SQL/PL SQL; a microservices developer with SOLID design principles.
- Experience in UI frameworks (React JS/Angular).
- Bash/shell scripting and Python.
- Experience in cloud technology adoption such as AWS/GCP.
- Databases (Postgres, MySQL, BigQuery).
- Containers (Docker, Kubernetes/GKE).
- Security (IAM, roles, service accounts, entitlements, code and container scanning).
- CI/CD pipelines, DevOps principles and automation tools (Terraform, Jenkins, Ansible, Nexus).
- Agile development principles (Scrum, Jira, Confluence).
- Ability to work on a large IT project, with development experience in a Banking and/or Financial Crime Risk project.
- Willingness to adapt, learn new technologies and take ownership of tasks.
- Strong collaboration skills and experience working in diverse, global teams.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Good to have: NetReveal/Detica.

Essential skills (non-technical):
- Excellent communication skills and the ability to explain complex ideas.
- Ability to work as part of a team located across multiple regions/time zones.

You'll achieve more when you join HSBC.
www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by - HSDI

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

pune, maharashtra, india

On-site

Job description Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Kubernetes Platform Engineer/Consultant Specialist.

In this role, you will: Build and manage the HSBC GKE Kubernetes platform so that application teams can deploy to Kubernetes easily. Mentor and guide support engineers, and represent the platform technically through talks, blog posts and discussions. Engineer solutions on the HSBC GKE Kubernetes platform using coding, automation and Infrastructure as Code methods (e.g. Python, Tekton, Flux, Helm, Terraform). Manage a fleet of GKE clusters from a centrally provided solution. Ensure compliance with centrally defined security controls and with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency and containers). Ensure good change management practice is implemented as specified by central standards, and provide impact assessments where requested for changes proposed on the HSBC GCP core platform. Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities. Carry out engineering activities to implement patches for VMs and containers provided centrally. Support non-functional testing. Update support and operational documentation as required. Fault-find and support application teams. On a rotational on-call basis, provide out-of-hours support as part of our 24x7 coverage.

Requirements To be successful in this role, you should meet the following requirements: Demonstrable Kubernetes and cloud-native experience building, configuring and extending Kubernetes platforms. Automation and Infrastructure as Code scripting (e.g. Python, Terraform). Experience working with continuous integration (CI), continuous delivery (CD) and continuous testing tools. Experience with Kubernetes resource configuration tooling (Helm, Kustomize, kpt). Experience working within an Agile environment. Programming experience in one or more of the following languages: Python or Go. Ability to quickly acquire new skills and tools.

You'll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by - HSBC Software Development India
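As a hedged sketch of one small slice of "managing a fleet of GKE clusters", the snippet below enumerates every cluster in a project with the official google-cloud-container client; the project ID is a placeholder and credentials are assumed to come from Application Default Credentials.

```python
# Hedged fleet-inventory sketch using the official google-cloud-container
# client; the project ID is a placeholder.
from google.cloud import container_v1


def list_gke_clusters(project_id: str) -> None:
    client = container_v1.ClusterManagerClient()
    # "-" is the location wildcard: return clusters across all zones and regions.
    response = client.list_clusters(parent=f"projects/{project_id}/locations/-")
    for cluster in response.clusters:
        print(cluster.name, cluster.location, cluster.current_master_version, cluster.status)


if __name__ == "__main__":
    list_gke_clusters("example-platform-project")  # placeholder project ID
```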

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

The role of Technical-Specialist Big Data (PySpark) Developer, based in Pune, India, involves designing, developing and unit-testing software applications in an Agile development environment. As an Engineer, you are responsible for delivering high-quality, maintainable, scalable and high-performing software applications. You are expected to have a strong technological background with good working experience in Python and Spark. The role requires hands-on experience and the ability to work independently with minimal guidance, while also providing technical guidance and mentorship to junior team members. You will play a key role in upholding design and development best practices within the team and will actively apply Continuous Integration tools and practices as part of Deutsche Bank's digitalization journey.

As part of the benefits package, you will enjoy a best-in-class leave policy, gender-neutral parental leave, a childcare assistance benefit, sponsorship for industry-relevant certifications and education, an employee assistance program, comprehensive hospitalization insurance, accident and term life insurance, and complimentary health screening for individuals above 35 years.

Your key responsibilities will include designing solutions for user stories; developing and unit-testing software; integrating, deploying, maintaining and improving software; performing peer code reviews; participating in sprint activities and ceremonies; applying continuous integration best practices; collaborating with team members; reporting progress using Agile team management tools; managing task priorities and deliverables; ensuring the quality of the solutions provided; and contributing to planning and continuous improvement activities.

To be successful in this role, you should have at least 5 years of development experience on Big Data platforms; hands-on experience in Spark and Python programming; familiarity with BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL and Cloud Functions; experience in setting up and maintaining continuous build/integration infrastructure; knowledge of development platforms and SDLC processes and tools; strong analytical and communication skills; proficiency in English; the ability to work in virtual teams and matrixed organizations; a willingness to learn and keep pace with technical innovation; and the ability to share knowledge and expertise with team members.

You will receive training and development opportunities, coaching and support from experts in your team, and a culture of continuous learning to aid your career progression. The company strives for a culture of empowerment, responsibility, commercial thinking, initiative and collaboration, celebrating the successes of its people as part of the Deutsche Bank Group. Applications from all individuals are welcome, and the company promotes a positive, fair and inclusive work environment. For further information about the company, please visit: [Deutsche Bank Company Website](https://www.db.com/company/company.htm)
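To make the Spark/Python requirement concrete, here is a minimal PySpark sketch in the spirit of the role; column names and GCS paths are invented for illustration.

```python
# Minimal PySpark sketch: the transformation is a pure function of DataFrames
# so it can be unit-tested without touching storage. Column names and paths
# are invented for illustration.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def daily_totals(txns: DataFrame) -> DataFrame:
    """Aggregate transaction amounts per account per calendar day."""
    return (
        txns.withColumn("day", F.to_date("event_ts"))
        .groupBy("account_id", "day")
        .agg(F.sum("amount").alias("total_amount"))
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-totals").getOrCreate()
    txns = spark.read.parquet("gs://example-bucket/transactions/")  # hypothetical path
    daily_totals(txns).write.mode("overwrite").parquet("gs://example-bucket/daily_totals/")
    spark.stop()
```

Because daily_totals takes and returns DataFrames, it can be exercised in a unit test against a tiny in-memory DataFrame, which is what makes the unit-testing responsibilities above practical.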

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 24 Lacs

bengaluru

Work from Office

Senior Backend Developer:
- Hands-on experience in backend software development using Java 11+ and Spring Boot.
- Strong experience in GCP, GKE, gRPC and REST APIs.
- Well-versed in configuring and maintaining CI/CD pipelines using Jenkins.
Benefits: office cab/shuttle, health insurance, food allowance, provident fund, annual bonus.
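The role's stack is Java/Spring Boot, but the standard gRPC health-checking protocol such services typically expose is language-neutral; as an illustration only, here is a minimal Python probe against such a service. The target address and service name are placeholders.

```python
# Illustration only: probes the standard gRPC health-checking protocol.
# Target address and service name are placeholders.
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc  # pip install grpcio-health-checking


def check_health(target: str = "localhost:9090", service: str = "") -> str:
    """Return the serving status of a gRPC service; "" asks about the whole server."""
    with grpc.insecure_channel(target) as channel:
        stub = health_pb2_grpc.HealthStub(channel)
        response = stub.Check(health_pb2.HealthCheckRequest(service=service))
        return health_pb2.HealthCheckResponse.ServingStatus.Name(response.status)


if __name__ == "__main__":
    print(check_health())  # e.g. "SERVING" when the server is healthy
```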

Posted 3 weeks ago

Apply

7.0 - 12.0 years

14 - 24 Lacs

chennai

Work from Office

Bachelor's degree with 7 or more years of experience in the IT industry, including 5 or more years of relevant experience in Kubernetes platform automation. Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI. GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model. Very strong development and engineering expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory. Have written Terraform modules and code in a GitOps setting for K8s lifecycle management (any K8s flavour is fine). Strong proficiency with monitoring/observability tools such as New Relic and Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Fluent Bit/OTEL/ADOT/Splunk), including creating/customizing metrics and/or logging dashboards. Familiarity with cloud cost optimization (e.g. Kubecost, CloudHealth). Strong experience with infra components such as ArgoCD, Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, metrics-server and KEDA. Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis. Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.). Strong Bash scripting experience, including automation scripting (netshoot, RBAC lookup, etc.). Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Istio service mesh lifecycle management, and with configuring and troubleshooting applications deployed on the service mesh and service-mesh-related issues. Expertise creating and modifying RBAC and Pod Security Standards, quotas, LimitRanges, and OPA & Gatekeeper policies. Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc. Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.). Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting. Certified Kubernetes Administrator (CKA). Terraform certified. Demonstrated very strong troubleshooting and problem-solving skills. Excellent verbal and written communication skills.
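As one concrete example of the "RBAC lookup" automation the listing mentions, here is a hedged Python sketch of the equivalent of `kubectl auth can-i`, using a SelfSubjectAccessReview via the official Kubernetes client; the verb, resource and namespace below are examples only.

```python
# Hedged sketch of an RBAC lookup: the Python equivalent of
# `kubectl auth can-i`, via a SelfSubjectAccessReview.
from kubernetes import client, config


def can_i(verb: str, resource: str, namespace: str) -> bool:
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                verb=verb, resource=resource, namespace=namespace
            )
        )
    )
    result = client.AuthorizationV1Api().create_self_subject_access_review(review)
    return bool(result.status.allowed)


if __name__ == "__main__":
    print(can_i("delete", "pods", "kube-system"))  # example query
```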

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies