0 years
0 Lacs
Noida
On-site
Join our Team

About this opportunity: We are seeking a Senior OpenShift Engineer to lead the migration, modernization, and management of enterprise container platforms using Red Hat OpenShift. This role involves migrating legacy applications to OpenShift, optimizing workloads, and ensuring high availability across hybrid and multi-cloud environments. The ideal candidate will be skilled in container orchestration, DevOps automation, and cloud-native transformation.

What you will do:
Lead migration projects to move workloads from legacy platforms (on-prem KVM/VMware/OpenStack, on-prem Kubernetes, OpenShift 3.x) to OpenShift 4.x.
Assess and optimize monolithic applications for containerization and microservices architecture.
Develop strategies for stateful and stateless application migrations with minimal downtime.
Work with developers and architects to refactor or replatform applications for cloud-native environments.
Implement migration automation using Ansible, Helm, or OpenShift GitOps (ArgoCD/FluxCD).
Design, deploy, and manage scalable, highly available OpenShift clusters across on-prem and cloud.
Implement multi-cluster, hybrid cloud, and multi-cloud OpenShift architectures.
Define resource quotas, auto-scaling policies, and workload optimizations for performance tuning.
Oversee OpenShift upgrades, patching, and lifecycle management.

The skills you bring:
Deep hands-on experience with Red Hat OpenShift (OCP 4.x+), Kubernetes, and Docker.
Strong knowledge of application migration strategies (lift and shift, replatforming, refactoring).
Proficiency in cloud-native application development and microservices.
Expertise in cloud platforms (AWS, Azure, GCP) with OpenShift deployments.
Advanced scripting and automation using Bash, Python, Ansible, or Terraform.
Experience with GitOps methodologies (ArgoCD, FluxCD) and Infrastructure as Code (IaC).

Certifications (preferred but not mandatory):
Red Hat Certified Specialist in OpenShift Administration (EX280)
Certified Kubernetes Administrator (CKA)
AWS/Azure/GCP Kubernetes/OpenShift-related certifications

Strong problem-solving skills with a strategic mindset for complex migrations.
Experience in leading technical projects and mentoring engineers.
Excellent communication and documentation skills.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
You can find all you need to know about our typical hiring process on our careers site.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 766908
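As a rough illustration of the kind of upgrade and migration automation this role describes, here is a minimal sketch (not Ericsson's actual tooling) that checks OpenShift ClusterOperator health before a 4.x upgrade. It assumes the `oc` CLI is installed and already logged in to the target cluster.

```python
import json
import subprocess

def degraded_cluster_operators():
    """Return OpenShift ClusterOperators that are Degraded or not Available.

    Assumes the `oc` CLI is installed and logged in to the target cluster.
    """
    out = subprocess.run(
        ["oc", "get", "clusteroperators", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    problems = []
    for co in json.loads(out).get("items", []):
        conditions = {c["type"]: c["status"] for c in co["status"].get("conditions", [])}
        if conditions.get("Available") != "True" or conditions.get("Degraded") == "True":
            problems.append(co["metadata"]["name"])
    return problems

if __name__ == "__main__":
    bad = degraded_cluster_operators()
    print("All cluster operators healthy" if not bad else f"Check before upgrade: {bad}")
```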
Posted 3 weeks ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Only AWS-certified candidates will be considered.
Experience: 6 to 12 years
Location: Noida
Linux (CentOS, Ubuntu, RHEL & Rocky)
Kubernetes with on-premise deployment in both IPv4 & IPv6
Calico networking in Kubernetes
Helm charts
Able to create Dockerfiles, docker-compose files & Kubernetes deployment files
AWS & GCP services
Shell scripting
Ansible automation
Jenkins for CI/CD
Monitoring using Grafana & Prometheus
Cassandra & TimescaleDB with HA deployment
Redis & RabbitMQ (basic)
PCS cluster
High-availability deployment for all services
DC-DR configuration
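As a small illustration of the monitoring side of this stack, the sketch below queries the Prometheus HTTP API for scrape targets that are currently down; it would typically sit alongside the Grafana/Prometheus setup mentioned above. The Prometheus URL is a placeholder assumption.

```python
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # assumption: replace with your Prometheus endpoint

def down_targets():
    """List scrape targets that Prometheus currently reports as down (up == 0)."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    return [
        (r["metric"].get("job", "unknown"), r["metric"].get("instance", "unknown"))
        for r in resp.json()["data"]["result"]
    ]

if __name__ == "__main__":
    for job, instance in down_targets():
        print(f"DOWN: job={job} instance={instance}")
```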
Posted 3 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are seeking a skilled Java MFA Developer with 6–10 years of experience to join our dynamic team. This opportunity is perfect for professionals who are passionate about Identity and Access Management (IAM) and looking to deepen their expertise in Java, microservices, Docker, and cloud-native development. In this role, you'll work hands-on with tools and platforms such as Helm, Kubernetes, Docker, and other containerization technologies. You'll also apply your Java development and DevOps knowledge to design and deliver secure, scalable solutions. We understand that not every candidate will have experience with all the technologies listed. If you're passionate, committed, and eager to learn, we're here to support your growth and help you succeed.

Key Responsibilities
· 6–10 years of experience in Java, Spring, Spring Boot, and Microservices
· Cloud proficiency (AWS, Azure, or GCP preferred)
· Hands-on with Helm, Kubernetes, Docker, and containerization tools
· Agile/Scrum working experience
· Strong troubleshooting and solution-oriented mindset
· Experience with OpenShift platform
· Exposure to Oracle DB (SQL, Stored Procedures, Capacity Planning)
· Familiarity with MFA products (any)
· Regular updates to technical leads and proactive issue resolution
· Ability to integrate and innovate across components

Additional Skills (Nice to Have)
· Full-stack development exposure
· Maven / Gradle
· GitLab CI/CD, Vault, and test automation integration
· Shell scripting on Linux
· Monitoring tools: Splunk, DX-APM or similar
· Understanding of networking and system monitoring fundamentals
· Git flow knowledge

Soft Skills
· Strong analytical and problem-solving abilities
· Excellent communication and collaboration skills
· Self-starter, capable of handling multiple priorities independently

Educational Qualifications
· Bachelor's degree in Computer Science, Information Technology, or related field

Salary & Benefits
· Salary: Based on experience and expertise
· Competitive benefits package
· Health insurance, retirement plans, and paid time off
· Support for certifications and continuous professional development

Why Join Us?
· Work on cutting-edge IAM and MFA solutions
· Collaborate in a high-performing, supportive team
· Exposure to impactful projects with enterprise-level clients
· Opportunity to grow and learn new technologies in a fast-paced, innovative environment

Interested candidates can send their CVs to: careers@trevonix.com
Posted 3 weeks ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
DevOps Engineer - Kubernetes Specialist
Experience: 4 - 8 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients)
Must-have skills: Kubernetes, CI/CD, Google Cloud

Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence.

Key Responsibilities & Required Skills

Kubernetes Infrastructure & Deployment
Responsibilities:
Design, deploy, and manage production-grade Kubernetes clusters
Perform cluster upgrades, patching, and maintenance with minimal downtime
Deploy and manage multiple microservices with ingress controllers and networking
Configure storage solutions and persistent volumes for stateful applications
Required Skills:
3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE)
Strong experience with Google Cloud Platform (GCP) and GKE-specific features
Deep understanding of Docker, container orchestration, and GCP networking concepts
Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies

CI/CD, Monitoring & Automation
Responsibilities:
Design and implement robust CI/CD pipelines for Kubernetes deployments
Implement comprehensive monitoring, logging, and alerting solutions
Leverage AI tools and automation to improve team efficiency and task speed
Create dashboards and implement GitOps workflows
Required Skills:
Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms
Experience with Prometheus, Grafana, ELK stack, or similar monitoring solutions
Knowledge of Infrastructure as Code tools (Terraform, Ansible)
Familiarity with AI/ML tools for DevOps automation and efficiency improvements

Cost Optimization & Application Management
Responsibilities:
Analyze and optimize resource utilization across Kubernetes workloads
Implement right-sizing strategies for services and batch jobs
Deploy and manage Java-based applications and MySQL databases
Configure horizontal/vertical pod autoscaling and resource management
Required Skills:
Experience with resource management, capacity planning, and cost optimization
Understanding of Java application deployment and MySQL database administration
Knowledge of database operators, StatefulSets, and backup/recovery solutions
Proficiency in scripting languages (Bash, Python, or Go)

Preferred Qualifications
Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build)
Knowledge of GKE advanced features (Workload Identity, Binary Authorization, Config Connector)
Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus
Knowledge of container security tools and chaos engineering
Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd)
Familiarity with AI-powered monitoring and predictive analytics platforms

Key Competencies
Strong problem-solving skills with an innovative mindset toward AI-driven solutions
Excellent communication and collaboration abilities
Ability to work in fast-paced, agile environments with attention to detail
Proactive approach to identifying issues using modern tools and AI assistance
Ability to mentor team members and promote AI adoption for team efficiency

Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload your updated resume & complete the screening form
3. Increase your chances of getting shortlisted & meet the client for the interview!

About Our Client:
Ripplehire is a recruitment SaaS for companies to identify the right candidates from employees' social networks and gamify the employee referral program with contests and referral bonuses to engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) Staff Pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is a tool available on a subscription-based pricing model.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this one on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
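For the cost-optimization and right-sizing work mentioned above, a common first step is finding workloads that declare no resource requests at all. The sketch below, using the official Kubernetes Python client, flags such Deployments; it is a hedged example that assumes local kubeconfig access to a GKE (or any) cluster.

```python
from kubernetes import client, config

def deployments_missing_requests():
    """Flag Deployments whose containers set no CPU/memory requests.

    Missing requests make right-sizing, autoscaling and cost attribution harder.
    Assumes a local kubeconfig with read access to the cluster.
    """
    config.load_kube_config()
    apps = client.AppsV1Api()
    offenders = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        for c in dep.spec.template.spec.containers:
            requests_ = (c.resources.requests or {}) if c.resources else {}
            if "cpu" not in requests_ or "memory" not in requests_:
                offenders.append((dep.metadata.namespace, dep.metadata.name, c.name))
    return offenders

if __name__ == "__main__":
    for ns, dep, container in deployments_missing_requests():
        print(f"{ns}/{dep} container={container}: no cpu/memory requests set")
```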
Posted 3 weeks ago
2.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Your Impact:
As a QA Engineer, your mission will be to use the latest technology to develop complex module/component designs and to automate and maintain efficient, flexible, and fault-tolerant cloud solutions. You should be a highly motivated and talented engineer with a strong desire to learn and grow your skills in developing solutions for cloud offerings and deployment. You will be responsible for all aspects of development, load balancing, disaster recovery, etc., and work continuously with cloud solutions.

What the role offers:
Using software quality assurance tools and processes.
Software testing methodology, including writing and execution of test plans, debugging, and testing scripts and tools.
Strong analytical and problem-solving skills.
Experience with test management and bug tracking tools.
Excellent written and verbal communication skills; mastery of English and the local language.
Ability to effectively communicate product architectures and design proposals and negotiate options at management levels.
Core Python/Java; Selenium preferred.
Strong experience in Robot Framework for test automation.
Experience with Git and CI/CD tools (Jenkins, GitHub, etc.) preferred.
Knowledge/experience of Docker containers/microservices and any cloud is a plus.

What you need to succeed:
Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.
Typically 2-5 years of experience.
Microservices, Kubernetes, Helm, Resiliency Testing.
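The role calls for Robot Framework and Selenium; as a language-neutral illustration of the same testing idea, here is a minimal pytest-style API smoke test against a hypothetical service health endpoint. The base URL and response shape are assumptions.

```python
import requests

BASE_URL = "https://service.example.test"  # hypothetical endpoint for illustration

def test_health_endpoint_returns_ok():
    """Smoke test: the service health endpoint responds 200 with status 'ok'."""
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

def test_unknown_route_returns_404():
    """Negative test: an unknown route is rejected rather than silently accepted."""
    resp = requests.get(f"{BASE_URL}/does-not-exist", timeout=5)
    assert resp.status_code == 404
```

Run with `pytest` as part of the CI pipeline so failing checks block a release.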
Posted 3 weeks ago
8.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Your Impact:
We are seeking a highly skilled and experienced Lead Software Engineer to design, implement, and manage robust and scalable CI/CD pipelines using GitLab CI and other DevOps tools. The ideal candidate will have a deep understanding of infrastructure as code (IaC), cloud platforms (AWS, GCP, Azure), and automation techniques to streamline deployment and infrastructure management. This role requires expertise in Terraform, Ansible, Kubernetes, and Python to enhance operational efficiency and security.

What the role offers:
Define and lead the architectural vision for Helm-based installers and upgrade frameworks for Kubernetes applications.
Design and optimize Helm charts to streamline installation, upgrades, and rollbacks.
Establish best practices for Kubernetes-based deployments, scalability, and fault tolerance.
Lead the evaluation and adoption of new tools and technologies in the Kubernetes ecosystem.
Oversee security, compliance, and performance considerations in installation and upgrade processes.
Provide technical leadership and mentorship to engineers in the Helm/Kubernetes domain.
Troubleshoot and resolve complex deployment and upgrade issues.
Stay up to date with emerging trends and advancements in Kubernetes, Helm, and cloud-native technologies.

What you need to succeed:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
8 to 12 years of experience in software engineering or DevOps/CD.
Strong understanding of cloud platforms: AWS, Azure, GCP, OpenShift.
Proficiency in Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation.
Hands-on experience with CI/CD pipelines, GitOps methodologies, and automation frameworks.
DevOps tools: GitLab CI/CD, ArgoCD/FluxCD, Helm, Maven, Node.js, JFrog Artifactory, SonarQube.
Database: PostgreSQL.
Experience with monitoring and logging tools like Prometheus and Grafana.
Automation and scripting: shell scripting, Python.
Deep knowledge of security, networking, and performance optimization in Kubernetes environments.
Strong problem-solving, leadership, and communication skills.
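As a small illustration of the Helm-based install/upgrade/rollback work described above, the sketch below wraps `helm upgrade --install --atomic` so a failed upgrade rolls back automatically. The release, chart and values file names are hypothetical, and helm access to the cluster is assumed.

```python
import subprocess

def deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Install or upgrade a Helm release atomically.

    --atomic rolls the release back automatically if the upgrade fails,
    which keeps upgrades and rollbacks predictable.
    Assumes helm is installed and the kube context is already set.
    """
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file, "--atomic", "--timeout", "10m",
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical release/chart names for illustration only.
    deploy("my-app", "./charts/my-app", "staging", "values-staging.yaml")
```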
Posted 3 weeks ago
1.0 - 3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Urban Company is a tech-enabled home services marketplace. Customers use our platform to book services, such as home cleaning, live-out helper, aircon servicing, mani-pedi, massage for women, pet grooming and more, which are delivered in the comfort of their home and at a time of their choosing. We promise our customers a high quality, standardized and reliable service experience. To fulfill this promise, we work closely with our hand-picked service partners, enabling them with technology, training, products, tools, financing, insurance, and brand, helping them succeed and deliver on this promise.

About The Role & Team
What will you do? Are you ready to shape the future of a dynamic business unit? As a Senior Associate at UC, you'll be at the helm of driving growth and steering business success. Collaborating with diverse teams and agency partners, you'll craft and execute strategies that propel our brand forward. With a diverse portfolio of marketing projects, you'll showcase your leadership prowess while delving into consumer insights to unlock the next level of growth. You will get hands-on experience building a D2C product brand.

Here's what you'll do:
A. Crafting Solutions:
● Product/Offering: Pioneering new service offerings that resonate with our target audience. From introducing innovative variants like female waxing in South India to exploring video consulting for Appliance Repairs, you'll lead the charge in driving product evolution.
● Price: Determining optimal pricing strategies to maximize revenue while staying competitive. Whether it's analysing cost structures or benchmarking against rivals, you'll ensure our pricing reflects value and drives profitability.
● Channel Marketing: Crafting compelling storefront (in-app flow) experiences that address consumer queries and showcase our brand's essence. From refining app/marketplace interfaces and visibility to communicating luxury through visuals and language, you'll elevate our brand presence.
B. Overcoming Deployment Challenges:
● Proposition: Articulating the unique value proposition that sets us apart from the competition. By understanding consumer insights and structuring persuasive messaging, you'll drive consumer engagement and loyalty.
● Place: Strategically aligning supply with demand across key markets to optimize reach and accessibility. Ensuring a seamless match between supply depth and target audience presence, you'll strengthen our market position.
● Promotions: Strategically allocating marketing resources across a blend of online and offline channels. Leveraging insights to deploy budgets effectively, you'll forge strategic partnerships and maximize brand exposure.

What We Need
● Graduation from a Tier 1/2 institute
● SQL is a must
● 1-3 years of relevant experience with high-scale startups / FMCGs / direct-to-consumer eCommerce brands / media agencies
● Hands-on practitioner with strong analytical skills: likes to get their hands dirty with data & numbers and spend time exploring data and building models. We eat, sleep & breathe Excel & Google Sheets. Comfort with Excel / Google Sheets is an absolute must. SQL skills are preferable.
● Strong interpersonal skills to manage stakeholders (business teams, brand manager counterparts) and liaise with agencies (brand marketing, performance marketing, creative production).
● High on business outcomes and ambition: looking to make a trajectory-changing impact at UC
● Outcome-first and customer-first rather than solution-first: at UC, we pride ourselves on being outcome focused, i.e. "the customer doesn't care what algorithm powers the backend, as long as his job gets done."

At Urban Company, we are committed to providing equal and fair opportunities in employment and creating an inclusive work environment. We endeavor to maintain a work environment free from harassment based on age, colour, physical ability, marital status, parental status, ethnic origin, religion, sexual orientation, or gender identity.
Posted 3 weeks ago
0.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India | Job ID 766541

Join our Team

With the introduction of 5G and cloud, the role of IT Managed Services has evolved to become an enabler of new revenue opportunities, in addition to delivering efficient cloud and IT operations for service providers on their 5G journey. Join us to understand how different technologies come together to build a best-in-class solution which has made Ericsson lead the 5G evolution. We will also explain how you can be part of this outstanding culture and advance your career while creating a global impact. We believe in trust – we trust each other to do the right things! Therefore, we believe in taking decisions as close to the product and technical expertise as possible. We believe in creativity – trying new things and learning from our mistakes. We believe in sharing our insights and helping one another to build an even better user plane. We truly believe in happiness; we enjoy and feel passionate about what we do and value each other's technical competence deeply.

What you will do
Back-End Development: Develop server-side logic using Java and Spring Boot, ensuring high performance and reliability. Implement microservices architecture and containerization using Kubernetes and Helm. Utilize Azure and AWS services to enhance the functionality and scalability of applications. Work with SQL and NoSQL databases for data storage and retrieval.
Cloud Architecture: Leverage Azure and cloud architecture principles to deploy, manage, and optimize cloud resources, services, and applications.

You will bring
Java: Proficiency in Java for both front-end and back-end development.
Spring Boot: Strong knowledge and experience in Spring Boot for back-end development.
Kubernetes: Expertise in Kubernetes for container orchestration.
Containers: Experience with containerization technologies.
Microservices: Proficiency in microservices architecture.
Helm: Knowledge of Helm for managing Kubernetes applications.
Azure/AWS Services: Familiarity with cloud services offered by Azure and AWS.
SQL/NoSQL DBs: Working knowledge of both SQL and NoSQL databases.

Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience in back-end development with proficiency in a wide range of technical skills.
Strong knowledge of Azure services and cloud architecture.
Proficiency in Java and Spring Boot.
Experience with Docker containers, Kubernetes, Helm, and microservices.
Knowledge of Azure/AWS services and working with SQL/NoSQL databases.

Location – Bangalore/Noida/Gurgaon/Chennai/Kolkata/Pune
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
DevOps Engineer : Bangalore

Job Description
DevOps Engineer_Qilin Lab, Bangalore, India

Role
We are seeking an experienced DevOps Engineer to deliver insights from massive-scale data in real time. Specifically, we're searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for every user.

Responsibilities of this role:
Work with DevOps to run the production environment by monitoring availability and taking a holistic view of system health
Build software and systems to manage our Data Platform infrastructure
Improve reliability, quality, and time-to-market of our Global Data Platform
Measure and optimize system performance and innovate for continual improvement
Provide operational support and engineering for a distributed platform at scale
Define, publish and defend service-level objectives (SLOs)
Partner with data engineers to improve services through rigorous testing and release procedures
Participate in system design, platform management and capacity planning
Create sustainable systems and services through automation and automated run-books
Proactive approach to identifying problems and seeking areas for improvement
Mentor the team in infrastructure best practices

Requirements:
Bachelor's degree in Computer Science or an IT-related field, or equivalent practical experience with a proven track record.
The following hands-on working knowledge and experience is required:
Kubernetes, EC2, RDS, ELK Stack
Cloud platforms (AWS, Azure, GCP), preferably AWS; building & operating clusters
Related technologies such as containers, Helm, Kustomize, ArgoCD
Ability to program (structured and OOP) using at least one high-level language such as Python, Java, Go, etc.
Agile methodologies (Scrum, TDD, BDD, etc.)
Continuous Integration and Continuous Delivery tools (GitOps)
Terraform, Unix/Linux environments
Experience with several of the following tools/technologies is desirable:
Big Data platforms (e.g. Apache Hadoop and Apache Spark)
Streaming technologies (Kafka, Kinesis, etc.)
ElasticSearch, Service Mesh, Orchestration technologies (e.g., Argo)
Knowledge of the following is a plus:
Security (OWASP, SIEM, etc.), infrastructure testing (Chaos, Load, Security), GitHub, microservices architectures.

Notice period: Immediate to 15 days
Experience: 3 to 5 years
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: On Site
Job Type: Payroll

Must Have Skills
Python - 3 Years - Intermediate
DevOps - 3 Years - Intermediate
AWS - 2 Years - Intermediate
Agile Methodology - 3 Years - Intermediate
Kubernetes - 3 Years - Intermediate
ElasticSearch - 3 Years - Intermediate
(ref:hirist.tech)
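Since the role includes defining and defending SLOs, here is a hedged sketch of a 30-day error-budget check against Prometheus. The metric names and Prometheus endpoint are assumptions; substitute whatever your services actually expose.

```python
import requests

PROM_URL = "http://prometheus.example.internal:9090"   # assumption: your Prometheus endpoint
SLO_TARGET = 0.999                                      # 99.9% availability objective
# Metric names below are assumptions; substitute whatever your services expose.
ERROR_RATIO_QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[30d])) '
    "/ sum(rate(http_requests_total[30d]))"
)

def error_budget_remaining() -> float:
    """Return the fraction of the 30-day error budget still unspent."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": ERROR_RATIO_QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    error_ratio = float(result[0]["value"][1]) if result else 0.0
    budget = 1.0 - SLO_TARGET
    return max(0.0, (budget - error_ratio) / budget)

if __name__ == "__main__":
    print(f"Error budget remaining: {error_budget_remaining():.1%}")
```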
Posted 3 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
JD - Director of DevOps and Cloud Operations

About Us
Infra360 is an emerging global leader in cloud consulting that specializes in innovative cloud-native solutions and exceptional customer service. We partner with clients to modernize and optimize their cloud, ensuring resilience, scalability, cost efficiency and innovation. Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability. We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what's possible in cloud technology and infrastructure.

Role Summary
The Director of DevOps and Cloud Operations will lead and scale Infra360's technology team, driving growth, operational excellence, and client success. The role involves strategic leadership, project management, and delivering innovative solutions in cloud, DevOps, SRE, and security. The ideal candidate will foster a culture of collaboration and innovation while ensuring high-quality service delivery and identifying opportunities to expand client engagements.

Key Responsibilities

Leadership & People Management:
Lead, mentor, and grow a team of engineers, scaling the team from 10 to 50.
Foster a culture of innovation, collaboration, ownership, and excellence.
Oversee talent acquisition, retention, and professional development within the team.
Time Management: Prioritize tasks effectively to balance strategic initiatives, team management, and client interactions.
Accountability: Take ownership of deliverables and decisions, ensuring alignment with company goals and values.
Pressure Handling: Maintain composure under pressure and manage competing priorities effectively.

Technology Operations

Requirement Gathering & Statement of Work (SOW) Creation:
Client Needs Analysis: As and when required, conduct detailed requirement-gathering sessions with clients to understand their objectives, pain points, and technical needs.
Audit Facilitation: Coordinate with the tech team to perform cloud audits, identifying areas for cost optimization, security improvements, and enhanced reliability.
SOW Creation: As and when required, draft and finalize comprehensive Statements of Work (SOW) that clearly outline deliverables, timelines, and expectations. Should be able to participate actively in client discovery calls.

Client & Resource Onboarding:
SOW Understanding: Thoroughly review and understand the SOW, including scope, deliverables, timelines, milestones, and SLAs, to own the whole process.
Resource Allocation & Onboarding: Identify and onboard the right resources for the project, ensuring team members are briefed on client requirements, project scope, and deliverables.
Stakeholder Alignment: Ensure alignment with clients and internal teams on all aspects of the SOW to avoid scope creep and ensure clear expectations.
Onboarding Process: Develop and execute a structured client onboarding process, ensuring a smooth transition and setup of services.
Access & Tools Setup: Facilitate timely access to client environments, tools, and necessary documentation for the team.
Documentation: Provide regular documentation on service usage, reporting, and escalation processes.

Project & Operations Management:
Project Monitoring: Weekly sprint planning with clients and daily stand-up calls with project teams to ensure timely delivery, quality, and efficiency of team members.
Work Review & Oversight: Regularly review team members' work and technical approaches to ensure alignment with best practices.
Quality Assurance: Implement processes to maintain high-quality standards across all deliverables.
Delivery Excellence: Ensure timely and successful delivery of projects, meeting client expectations and SLAs, ensuring progress according to the SOW and achieving milestones.

Client Engagement & Stakeholder Management:
Monthly SOW progress & achievements review to obtain sign-off through feedback integration.
Regular Client Meetings: Schedule and conduct weekly/bi-weekly meetings with clients to discuss project progress, address concerns, and gather feedback.
Client Rapport Building: Establish and maintain strong relationships with clients through proactive engagement and communication. Act as a subject matter expert to clients, helping them achieve their cloud and infrastructure goals.

Technical Content & Marketing Support:
Case Study Development: Provide technical insights and content for creating impactful case studies that highlight successful client engagements and solutions.
Architecture Diagrams: Design and deliver detailed architecture diagrams to visually represent technical solutions for marketing and sales materials.
Collaboration with Marketing: As and when required, work with the marketing team to ensure technical accuracy and relevance in promotional content, showcasing the company's expertise.

Strategic Planning & Upselling:
Account Growth Strategy: Develop and execute strategies to expand service offerings within existing client accounts.
Client Needs Assessment: Regularly engage with clients to identify evolving needs and opportunities for additional services in cloud, DevOps, SRE, and security.
Service Expansion: Identify and introduce premium services, add-ons, or long-term engagements that enhance client outcomes.
Cross-Selling Opportunities: Collaborate with internal teams to bundle services and present holistic solutions.

Process Optimization & Innovation:
Process Standardization: Identify areas for improvement and implement standardized processes across projects to enhance efficiency and consistency.
Automation: Leverage automation tools and frameworks to streamline repetitive tasks and improve operational workflows.
Continuous Improvement: Foster a culture of continuous improvement by encouraging feedback, conducting regular process reviews, and implementing best practices.
Innovation Initiatives: Drive innovation by introducing new tools, technologies, and methodologies that align with business goals and client needs.
Metrics & KPIs: Define and track key performance indicators (KPIs) to measure process effectiveness and drive data-driven decisions.

Technical Expertise
Technical skills of the ideal candidate:
Deep knowledge of Infrastructure, Cloud, DevOps, SRE, Database Management, Observability, and Cybersecurity services.
Solid 10+ years of experience as an SRE and DevOps engineer with a proven track record of handling large-scale production environments.
Strong experience with databases (PostgreSQL, MongoDB, ElasticSearch, Kafka).
Hands-on experience with ELK or other logging and observability tools.
Hands-on experience with Prometheus, Grafana & Alertmanager and on-call processes like PagerDuty.
Strong skills in K8s, Terraform, Helm, ArgoCD, AWS/GCP/Azure, etc.
Good with Python/Go scripting and automation.
Strong fundamentals in DNS, networking, and Linux.
Experience with APM tools like New Relic, Datadog, and OpenTelemetry.
Good experience with incident response, incident management, and writing detailed RCAs.
Experience with Git and coding best practices.

Solutioning & Architecture: Proven ability to design, implement, and optimize end-to-end cloud solutions, following well-architected frameworks and best practices.
Leadership & Team Management: Demonstrated success in scaling teams, fostering a collaborative and innovative work culture, and mentoring talent to achieve excellence.
Problem-Solving & Innovation: Strong analytical skills to understand complex client needs and deliver creative, scalable, and impactful solutions.
Project & Stakeholder Management: Expertise in project planning, execution, and stakeholder management, ensuring alignment with business objectives and client expectations.
Effective Communication: Exceptional verbal and written communication skills to engage with clients, teams, and stakeholders effectively.
Documentation & Organization: Ability to maintain well-organized, structured documentation and adhere to standardized folder structures.
Attention to Detail & Follow-Through: Consistently capture key points, action items, and follow-ups during meetings and ensure timely execution.
Time Management & Prioritization: Strong time management skills, with the ability to balance multiple priorities, meet deadlines, and optimize productivity.
Task Tracking & Accountability: Maintain a personal task tracker to manage work priorities, monitor progress, and ensure accountability.
Results-Driven & Growth Mindset: A proactive, results-oriented approach with a focus on continuous learning and improvement.

Qualifications
Experience: 12+ years in technology operations, with at least 5 years in a leadership role, managing teams and delivering complex solutions.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
(ref:hirist.tech)
Posted 4 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Engineering Specialist at BT, you will have the opportunity to be part of a team that is shaping the future of communication services and defining how people interact with these services. Your role will involve fulfilling various requirements in Voice platforms, ensuring timely delivery and integration with other platform components.

Your responsibilities will include deploying infrastructure, networking, and software packages, as well as automating deployments. You will implement up-to-date security practices and manage issue diagnosis and resolution across infrastructure, software, and networking areas. Collaboration with development, design, ops, and test teams will be essential to ensure the reliable delivery of services.

To excel in this role, you should possess in-depth knowledge of Linux, server management, and issue diagnosis, along with hands-on experience. Proficiency in TCP/IP, HTTP, SIP, DNS, and Linux tooling for debugging is required. Additionally, you should be comfortable with Bash/Python scripting, have a strong understanding of Git, and have experience in automation through tools like Ansible and Terraform. Your expertise should also include a solid background in cloud technologies, preferably Azure, and familiarity with container technologies such as Docker and Kubernetes and GitOps tooling like FluxCD/ArgoCD. Exposure to CI/CD frameworks, observability tooling, RDBMS, NoSQL databases, service discovery, message queues, and Agile methodologies will be beneficial.

At BT, we value inclusivity, safety, integrity, and customer-centricity. Our leadership standards emphasize building trust, owning outcomes, delivering value to customers, and demonstrating a growth mindset. We are committed to building diverse, future-ready teams where individuals can thrive and contribute positively.

BT, as part of BT Group, plays a vital role in connecting people, businesses, and public services. We embrace diversity and inclusion in everything we do, reflecting our core values of being Personal, Simple, and Brilliant. Join us in making a difference through digital transformation, and be part of a team that empowers lives and businesses through innovative communication solutions.
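As a small illustration of the Linux/network debugging skills listed here (DNS, TCP/IP, SIP, HTTP), the sketch below resolves a host and attempts a TCP connection, reporting where the path fails. The hostnames and ports are placeholders.

```python
import socket

# Hypothetical endpoints for illustration; a SIP or HTTP service would be checked the same way.
CHECKS = [("sip.example.internal", 5060), ("www.example.com", 443)]

def check(host: str, port: int) -> str:
    """Resolve a hostname and attempt a TCP connection, reporting the first failure."""
    try:
        addrs = {a[4][0] for a in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
    except socket.gaierror as exc:
        return f"{host}:{port} DNS resolution failed ({exc})"
    try:
        with socket.create_connection((host, port), timeout=3):
            return f"{host}:{port} reachable via {sorted(addrs)}"
    except OSError as exc:
        return f"{host}:{port} resolved to {sorted(addrs)} but TCP connect failed ({exc})"

if __name__ == "__main__":
    for host, port in CHECKS:
        print(check(host, port))
```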
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Should be able to write Bash scripts for monitoring existing running infrastructure and report out.
Should be able to extend existing IaC code in Pulumi TypeScript.
Ability to debug and fix Kubernetes deployment failures, network connectivity, ingress, volume issues, etc. with kubectl.
Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
Knowledge of using monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and of creating dashboards and alerts when and where needed.
Knowledge of AWS VPC, subnetting, ALB/NLB, egress/ingress.
Knowledge of performing disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager).
Set up sensible permission defaults for seamless access management for cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
Understanding of best practices for security, access management, hybrid cloud, etc.
Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
Templatise infra IaC creation with Pulumi and Terraform, using advanced techniques for modularisation.
Extend existing Helm charts for repetitive tasks and orchestration, and write Terraform/Pulumi creation code.
Automate complicated manual infrastructure setup with Ansible, Chef, etc.
Certifications:
▪ AWS Certified Advanced Networking - Specialty
▪ AWS Certified DevOps Engineer - Professional (DOP-C02)
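To illustrate the kubectl-based debugging described above, here is a minimal Python sketch (the posting asks for Bash, so treat this as an equivalent illustration only) that lists pods whose Ready condition is false and dumps their recent events. The namespace is an assumption.

```python
import json
import subprocess

NAMESPACE = "default"  # assumption: point this at the namespace you are debugging

def kubectl_json(*args: str) -> dict:
    """Run a kubectl command with JSON output and parse it (assumes kubectl is configured)."""
    out = subprocess.run(["kubectl", *args, "-o", "json"], check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def unhealthy_pods(namespace: str) -> list[str]:
    """Return pods whose Ready condition is not True (CrashLoopBackOff, pending volumes, etc.)."""
    pods = kubectl_json("get", "pods", "-n", namespace)
    bad = []
    for pod in pods["items"]:
        conditions = {c["type"]: c["status"] for c in pod["status"].get("conditions", [])}
        if conditions.get("Ready") != "True":
            bad.append(pod["metadata"]["name"])
    return bad

if __name__ == "__main__":
    for name in unhealthy_pods(NAMESPACE):
        print(f"--- {name} is not Ready; recent events:")
        subprocess.run(
            ["kubectl", "get", "events", "-n", NAMESPACE,
             "--field-selector", f"involvedObject.name={name}"],
            check=False,
        )
```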
Posted 4 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: GCP Developer with Java
Corporate Title: Associate
Location: Pune, India

Role Description
The Engineer designs and implements technical solutions and configures applications in different environments in response to business problems. The Engineer is required to ensure environment stability and expeditious, timely resolution of production issues, ensuring minimal downtime and continuity of services. Further, the Engineer investigates, proposes and implements various solutions, standardizing where possible, to ensure stability and reliability of the application platforms.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for 35 yrs. and above

Your Key Responsibilities
Use industry best coding practices and develop the functional requirements.
Adhere to application and data architectural standards.
Complete the delivery commitments within stipulated timelines with the highest quality.
Ensure production stability by maintaining close to 100% application availability.
Work with engineers to prioritize, troubleshoot and resolve reported bugs / issues / CRs (change requests) on applications.
Conduct demonstrations of developed functionality to users.

Your Skills And Experience
Extensive demonstrated hands-on experience using Java, J2EE and the Spring framework, and experience designing, developing, and maintaining complex applications.
5 to 8 years of proven experience with Java, with a focus on multi-threading, collections, concurrency & optimization.
Strong knowledge of software design patterns, including SOLID principles.
Experience with Agile development methodologies and DevOps practices.
Hands-on knowledge of Spring Core, MVC, JPA, Security and transactions.
Extensive Web Service (REST) and JUnit experience.
Exposure to cloud platforms like GCP is good to have.
Exposure to IaC using Terraform, Helm charts for continuous deployment, and GitHub workflows for SRE operations is good to have.
Exposure to frameworks like MapStruct and FreeMarker is good to have.
Practical experience with build tools (preferably Gradle), source code control (preferably GitHub), continuous integration (GitHub workflows) and cloud/Docker-based application deployment.
Sound knowledge of JSON.

How We'll Support You
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 4 weeks ago
9.0 - 14.0 years
17 - 22 Lacs
Bengaluru
Work from Office
What it takes:
Excellent grasp of the principles of Customer Communications Management.
Mandatory experience in administration and management of Exstream platforms, covering Communication Server, Content Author & Empower, and OTDS.
Working experience in design and development of Exstream templates.
Good to have: working experience with the cloud-native version of Exstream or OT Notifications.
Good to have: working experience in management of Exstream applications in cloud technologies like AWS, GCP or Azure with containerized deployments.
Proficient in Exstream administration, maintenance activities, and patching & upgrades of Exstream components.
Participate in the day-to-day administration of the systems, including Incident & Problem Management and Change & Release Management.
Proficient knowledge of: OS (Windows/Linux) - OS fundamentals, troubleshooting fundamentals, logs; DB (MS SQL, Oracle) - writing basic queries; API basics.
Experience delivering service within an ITIL-based service delivery model.
Programming/scripting is helpful (e.g., SQL, .sh/.bat, Java, JavaScript).
Familiarity with configuration and management of web/application servers (IIS, Apache, Tomcat, JBoss, etc.).
Familiarity with Docker, Kubernetes, and Helm.
Strong analytical skills combined with the ability to work in a fast-paced environment with geographically distributed teams.
Relevant Experience: 9+ years

You are great at:
Represent OpenText in a professional manner to customers, partners and other OpenText personnel at all times.
Act as the technical SPOC for assigned customer engagements.
Take complete ownership of technical delivery, including providing SME guidance to the AMS team.
Conduct architecture, performance and capacity reviews and come up with recommendations for improvement.
Lead & drive all major activities/milestones to successful completion within the agreed timelines.
Improve team & delivery efficiency by showcasing process improvements and identifying automation opportunities wherever needed.
Strong hands-on management of Exstream applications for global customers using the ITIL framework.
Support and report to the engagement Service Delivery Manager while assigned to active customer engagements.
Regularly communicate status to the engagement project manager and proactively identify issues and preventive/remedial measures.
Establish relationships with client technical counterparts.
Participate in weekly and monthly client service delivery meetings, including escalation calls.
Work in conjunction with colleagues from different teams of OpenText, including product support, engineering, product management & Cloud Ops teams (DB, storage, network, etc.).
Prepare, maintain, and submit activity/progress reports and time recording/management reports in accordance with published procedures.
Keep delivery managers informed of activities and alert them of any issues promptly.
Provide inputs as part of engagement closure on project learnings and suggest improvements.
Provide knowledge transfer to team members, train staff personnel, provide on-the-job guidance and mentoring, and conduct training sessions for customer personnel when authorized by management.
Adhere to processes and methodologies to perform the required function.
Report deviations from defined processes to the engagement project manager and recommend associated improvements.
Posted 4 weeks ago
4.0 - 7.0 years
11 - 16 Lacs
Bengaluru
Work from Office
What the role offers:
The main responsibility is to act as a shield for the rest of the R&D team, stepping in first (not as frontline support, but first within R&D) and coordinating teams, leading investigation, and communicating with all stakeholders, internal and external.
Participate in calls with support and customers to investigate and communicate on current fix plans and technical issues.
Manage a dynamic backlog and help prioritize work for SCRUM teams, working with PM.
Qualify enhancement requests and help prioritize with Product Management based on your experience and feedback.
Act as a single point of contact for customers/support and collaborate with different SCRUM teams to find the adequate resources.
Collaborate with a worldwide team to hand over work and get help from many experts.
Debug issues and provide fixes/patches to customers.
Work with a team of engineers; guide and mentor them.

What you need to succeed:
Strong analytical, problem-solving and troubleshooting skills.
Using software applications design tools and languages.
Excellent time management and self-organization/autonomy.
Experience in debugging Java/J2EE products built on Kubernetes/Helm platforms.
Excellent written and verbal communication skills; mastery of English.
Ability to effectively communicate product architectures and design proposals and negotiate options at management levels.
Experience in working with version control and build tools like Git, Maven and Jenkins.
Experience in SaaS environments.
Experience with AWS - an advantage.
Experience with Kubernetes, Helm.
Good networking skills; familiarity with network protocols such as TCP/IP, HTTP, SSH, and SNMP is a plus.

Must-have skills:
Basic knowledge of cloud computing and the SaaS model.
Understanding of working in public cloud technologies (AWS, Azure or GCP is preferred).
Familiar with Agile framework/SCRUM development methodologies.
Excellent knowledge of Java and Python with scripting.
Customer oriented with an instinct to help resolve issues.
Posted 4 weeks ago
9.0 - 12.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Your Impact:
The ESM R&D team is seeking an experienced Python Developer to join our global R&D team and deliver innovative enterprise software solutions by working in a fast-paced, challenging and enriching environment. This is a high-growth business, and our solutions are used by enterprise-class, highly demanding customers across the globe. We are using a microservices-based architecture composed of multiple services running on Kubernetes using Docker containers. As a lead software systems engineer, you will design and develop new product capabilities by working with the System Architect and a team of engineers and other architects. You will contribute as a team member, take responsibility for your own work commitments and take part in project functional problem-solving. You will make decisions based on established practices. You will work under general guidance with progress reviewed on a regular basis. You will also be involved in handling customer incidents (CPE), understanding customer use cases, designing & implementing, and troubleshooting and debugging software programs.

What the role offers:
Produce high-quality code according to design specifications.
Software design/coding for a functional requirement; ensure quality and adherence to company standards.
Utilize analytical skills to troubleshoot and fix complex code defects.
Work across teams and functional roles to ensure interoperability among other products, including training and consultation.
Provide status updates to stakeholders and escalate issues when necessary.
Participate in the software development process from design to release in an Agile development framework.
Design enhancements, updates, and programming changes for portions and subsystems of the software.
Analyze the design and determine coding, programming, and integration activities required based on general objectives and knowledge of the overall architecture of the product or solution.
Current Product Engineering (CPE) based on customer-submitted incidents.
Experience in troubleshooting and providing solutions for customer issues in a complex environment.
Excellent team player with a focus on collaboration activities.
Ability to take up other duties as assigned.
Provide guidance and mentoring to less-experienced team members.

What you need to succeed:
Bachelor's or Master's engineering degree in Computer Science, Information Systems, or equivalent from premier institutes.
9-12 years of overall software development experience, with at least 2+ recent years of experience in developing Python applications in a large-scale environment.
Fundamentally good programming and debugging skills.
Working knowledge of Python and core Java programming.
Working knowledge of Docker/container technologies, Kubernetes, Helm.
Knowledge of XML and JSON and processing them programmatically.
User- or administration-level knowledge of the Linux operating system.
Database user-level knowledge, preferably PostgreSQL, Vertica and Oracle DB; should be capable of writing and debugging SQL queries.
Exposure to cloud technologies usage and deployments would be good (AWS, GCP, Azure, etc.).
Working experience in an Agile environment or Scaled Agile (SAFe).
Strong knowledge of object-oriented design and data structures.
Ability to work independently in a cross-functional, distributed team culture with a focus on teamwork.
Experience of technically mentoring and guiding junior engineers.
Strong communication, analytical and problem-solving skills.
Understanding of CI/CD and build tools like Git, Maven, Gradle, Jenkins.
Knowledge and experience in the IT Operations Management domain.
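As a small illustration of the "process JSON programmatically and work with PostgreSQL" requirements above, here is a hedged sketch that loads a JSON file of records into a hypothetical events table using psycopg2; the DSN, file name and table schema are assumptions.

```python
import json
import psycopg2

DSN = "dbname=ops user=ops_user host=localhost"  # assumption: replace with real connection details

def load_events(path: str) -> int:
    """Read a JSON array of event records and upsert them into a hypothetical events table."""
    with open(path, encoding="utf-8") as fh:
        events = json.load(fh)
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        for ev in events:
            cur.execute(
                """
                INSERT INTO events (id, source, payload)
                VALUES (%s, %s, %s)
                ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload
                """,
                (ev["id"], ev.get("source", "unknown"), json.dumps(ev)),
            )
    return len(events)

if __name__ == "__main__":
    print(f"Loaded {load_events('events.json')} events")
```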
Posted 4 weeks ago
5.0 - 10.0 years
4 - 6 Lacs
Pune
Work from Office
About The Role
Project Role: Technology OpS Support Practitioner
Project Role Description: Own the integrity and governance of systems, including best practices for delivering services. Develop, deploy and support infrastructures, applications and technology initiatives from an architectural and operational perspective in conjunction with existing standards and methods of delivery.
Must have skills: Microsoft Azure DevOps
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Job Title: Cloud Compute & Containers Technology Specialist

Role Description:
This role requires deep technical expertise in cloud infrastructure (e.g., Azure-, AWS- or GCP-based), on-prem server management, storage, container platforms, and orchestration tools. The candidate must align technology solutions with business objectives and collaborate with cross-functional teams to design, implement, and optimize cloud-based solutions.

Key Responsibilities:
Deep technical understanding of cloud compute services, including virtual machines, storage, serverless computing and containers on cloud platforms such as Azure, AWS, Google Cloud, or hybrid environments.
Expertise in cloud-native architecture, microservices, and event-driven design patterns.
Hands-on experience with container platforms such as Docker, Kubernetes, and Red Hat OpenShift.
Hands-on experience with Windows/Linux server OS administration, configuration management and image development.
Advanced knowledge of container orchestration (e.g., Helm), networking, security, and monitoring.
Proficiency in DevOps practices, IaC, and scripting with tools like Terraform, Bash, and PowerShell, as well as a strong understanding of CI/CD pipelines and automation.
Ensure compliance with cloud security best practices, including IAM, encryption, and auditing.
Strong mindset for cloud resource efficiency and management, including tagging and optimization, to improve performance and scalability.

Required Skills and Experience:
Bachelor's degree in Computer Science, Information Technology, or related field.
Minimum 5+ years of experience in cloud computing, containerization, and DevOps practices.
Proven track record of delivering large-scale cloud transformation projects for enterprise clients.
Strong experience with hybrid or multi-cloud environments, and container platforms (Kubernetes, Docker).
Azure / AWS / GCP certifications, and Certified Kubernetes Administrator (CKA) or CKAD.
Strong communication (in English), problem-solving, and learning agility skills.

Qualification: 15 years full time education
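For the tagging and cost-optimization mindset mentioned above, a common first step is listing resources that carry no tags at all. The sketch below uses the Azure SDK (azure-identity and azure-mgmt-resource) to do that; the subscription ID is a placeholder and credentials are assumed to come from the environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # assumption: your subscription id

def untagged_resources():
    """List resources with no tags at all; untagged resources defeat cost attribution."""
    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    return [(r.name, r.type) for r in client.resources.list() if not r.tags]

if __name__ == "__main__":
    for name, rtype in untagged_resources():
        print(f"untagged: {name} ({rtype})")
```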
Posted 4 weeks ago
8.0 - 13.0 years
27 - 32 Lacs
Bengaluru
Work from Office
Your Impact:
We are part of the OpenText Cybersecurity Enterprise division, specializing in the security domain. Our product helps security operations teams to efficiently and effectively preempt and respond to threats that matter, with proactive threat hunting, real-time threat detection, and response automation using AI/ML.

What the role offers:
Develops product architectures and methodologies for software applications design and development across multiple platforms and organizations within the Global Business Unit.
Identifies and evaluates new technologies, innovations, and outsourced development partner relationships for alignment with the technology roadmap and business value.
Reviews and evaluates designs and project activities for compliance with development guidelines and standards; provides tangible feedback to improve product quality and mitigate failure risk.
Leverages recognized domain expertise, business acumen, and experience to influence decisions of executive business leadership, outsourced development partners, and industry standards groups.
Provides guidance and mentoring to less-experienced staff members to set an example of software applications design and development innovation and excellence.

What you need to succeed:
8+ years of technical experience with complex technology projects within large, distributed organizations on public cloud platforms like Azure (preferred) or AWS.
Hands-on experience with C#, Windows and SQL (Oracle, MSSQL, MySQL).
Hands-on development of applications on cloud technologies like Azure (preferred), GCP or AWS.
Knowledge and understanding of Docker, Kubernetes, Helm, microservices.
Knowledge and understanding of REST-like APIs.
Knowledge and understanding of GitLab.
Experience in the overall architecture of software applications (multi-platform) for products and solutions.
Excellent written and verbal communication skills.
Ability to effectively communicate product architectures and design proposals and negotiate options at business unit and executive levels.
Posted 4 weeks ago
12.0 - 17.0 years
11 - 16 Lacs
Mumbai
Work from Office
The role of Technical Architect is to design, implement, and maintain robust and scalable applications. This framework requires ensuring compliance with regulatory requirements while optimizing operational efficiency. The Technical Architect plays a pivotal role in safeguarding BNPP Group against fraud, money laundering, and other financial crimes by leveraging advanced technologies and innovative solutions. This position is critical in driving the technical vision and strategy of the core banking team, thereby contributing to the overall risk management and integrity of the organization. The position requires working in a globally distributed setup.
Responsibilities
Direct Responsibilities
Lead the design of enterprise-level software solutions leveraging technology frameworks
Collaborate with cross-functional teams to gather requirements and create architectural blueprints for complex systems
Design and implement scalable, resilient, and secure distributed architectures that are cost-effective
Provide technical guidance and mentorship to development teams on best practices, design patterns, and coding standards
Contributing Responsibilities
Perform architectural reviews, code reviews, and technology assessments to ensure adherence to BNPP CIB defined architectural principles and standards
Stay abreast of industry trends, emerging technologies, and best practices in architecture
Lead automation and guide teams to align with the shift-left and shift-right strategy by encouraging an automation-first mindset and reducing recursive manual effort
Take ownership of technical feasibility studies, demos, and proposal development, and represent the team in ITSVC (architectural committee)
Technical & Behavioral Competencies
Strong knowledge of RDBMS / SQL with Oracle, SQL Server or Postgres
Troubleshooting and performance tuning using profiling tools, e.g. Dynatrace, JProfiler
Hands-on experience with containerization and orchestration tools such as Docker, Kubernetes, and Helm
Experience in building CI/CD pipelines, Bitbucket, Git, Jenkins, SonarQube, infrastructure as code (IaC), and serverless computing
Excellent communication and interpersonal skills, with the ability to articulate complex technical concepts to diverse audiences
Ability and willingness to learn and work on diverse technologies (languages, frameworks, and tools)
Specific Qualifications (if required)
Project management skills in core banking systems
Good hands-on knowledge of MS Office (Excel, PowerPoint, MS Project)
Hands-on experience with project management tools (Clarity / JIRA / ServiceNow)
Good to have exposure to Oracle, Autosys / Unix / Linux
Certifications such as AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect, or Google Professional Cloud Architect
Skills Referential
Behavioural Skills:
Ability to collaborate / Teamwork
Client focused
Ability to deliver / Results driven
Attention to detail / rigor
Transversal Skills:
Analytical Ability
Ability to develop and adapt a process
Ability to develop others & improve their skills
Education Level: Bachelor Degree or equivalent
Experience Level: At least 12 years
Posted 4 weeks ago
3.0 - 8.0 years
1 - 5 Lacs
Chennai
Work from Office
Within BNP Paribas, FRB is responsible for activities at the Group level. FRB OPS is part of BNP Paribas Group IT and is responsible for supporting the IT production environments and activities for FRB. FRB OPS activities are critical and highly visible: around 250 applications and 4,000 servers are under FRB OPS responsibility. FRB OPS has a roadmap to ensure production security and stability, reduce time-to-market by using the most advanced technologies, and integrate functional application support to provide full application production and support services. FRB OPS is therefore hiring experienced IT professionals who are open-minded, willing to learn, and ready to be part of the transformation roadmap.
Responsibilities
Direct Responsibilities
Incident management: analysis, communication, coordination, correction, PIRs
Problem creation and follow-up
Change management: review and validation, technical implementation, coordination and communication
Monitor and supervise application and infrastructure components
Obsolescence and vulnerability management
Capacity planning management
DRP technical management and coordination
Decommission management
Maintain up-to-date referential and documentation (e.g., operating procedures)
Morning/evening checks (a minimal automation sketch follows the skills referential below)
Definition of the technical solution
Schedule definition with stakeholders and deadline commitment
Production and infrastructure resource provisioning on all environments
Technical integration of applications using existing products and services
Technical user story implementation, independently or by coordinating the right players (BP21 and other ITGP teams)
Implement monitoring and supervision
Follow FinOps and SRE guidelines
Ensure the upskilling of the team
Application deployment automation: identify the best deployment strategy with the dev team, technical implementation and tests
Operating task and process automation
Depending on the toolchain group roadmap, migrate to up-to-date Continuous Deployment tools
Identify and share automation use cases with the community to make reuse easier
Contributing Responsibilities
Promote standard architectures, solutions and products
Feed the product backlog with Production requirements
Coordinate and challenge stakeholders to meet deadlines
Identify best practices for Production and share them within teams
Support development teams to secure and speed up project delivery
Technical & Behavioral Competencies
Linux and Windows
Middleware (WebSphere, CFT, MQSeries, Kafka)
Kubernetes and associated tools (Docker, Helm, ArgoCD)
CI/CD (Ansible, Jenkins, Git, Release, ...)
Monitoring (Dynatrace, Splunk, LogDNA, Sysdig)
Scripting (Ansible, Shell, SQL)
Scheduling management (Autosys)
Tracking / ticketing tools (ServiceNow, JIRA)
Network protocols
Database management
DevOps
Agile
Knowledge of ITIL
Experience handling Incident / Problem / Change
General IT infrastructure knowledge
Knowledge of banking applications is preferable
Good written and spoken English
Able to measure and identify areas for improving quality and overall delivery
Able to communicate efficiently
Good team player
Resource should be comfortable working shifts (morning/evening/general), on-call, weekends and holidays.
Specific Qualifications (if required)
Skills Referential
Behavioural Skills:
Ability to collaborate / Teamwork
Decision Making
Personal Impact / Ability to influence
Attention to detail / rigor
Transversal Skills:
Ability to understand, explain and support change
Analytical Ability
Ability to develop and adapt a process
Ability to develop and leverage networks
Ability to anticipate business / strategic evolution
Education Level: Bachelor Degree or equivalent
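As a hedged sketch of the morning/evening check automation this posting mentions, the snippet below polls a few application health endpoints and prints their status; the service names and URLs are hypothetical placeholders, not FRB OPS systems.

```python
# Minimal sketch of an automated "morning check"; endpoint names and URLs are hypothetical.
import urllib.error
import urllib.request

CHECKS = {
    "payments-api": "https://payments.internal.example/health",
    "batch-scheduler": "https://scheduler-gw.internal.example/health",
}


def run_checks(timeout: float = 5.0) -> dict:
    """Return a per-service status string, suitable for a ticket note or chat post."""
    results = {}
    for name, url in CHECKS.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[name] = "OK" if resp.status == 200 else f"HTTP {resp.status}"
        except (urllib.error.URLError, OSError) as exc:
            results[name] = f"FAILED ({exc})"
    return results


if __name__ == "__main__":
    for service, status in run_checks().items():
        print(f"{service:20s} {status}")
```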
Posted 4 weeks ago
7.0 - 12.0 years
12 - 16 Lacs
Mumbai
Work from Office
The squad of IT APS Customers, CS&E Robotics, Data and AI Platform Support is looking for an Ops Engineer with a proven track record of deploying Python applications and/or data science models. You should be specialized in, or have a strong interest in, Kubernetes, Ansible, Terraform, Linux system administration and Python deployment. As an Ops Engineer within the Tribe APS, you will have the opportunity to work at the centre of the bank's initiatives in generative AI, smart automation and traditional machine learning. You will maintain the workbench environment of the data scientists as well as monitor and deploy their models and applications.
Responsibilities
As part of your responsibilities, you will have to work on the following:
- Evaluate sizing and infrastructure requirements for new use cases.
- Set up self-service deployment pipelines for AI applications.
- Ensure reproducibility of deployments in both Non-Production and Production environments.
- Make sure that all applications are properly monitored and that alerting is in place (a minimal health-endpoint sketch follows the skills referential below).
- Evolve in an environment where innovation and lean processes are praised, straightforward communication is encouraged, and peers understand the meaning of teaming up.
- Work with a team of colleagues who are ready to collaborate and to share their experience.
Technical & Behavioral Competencies
Mandatory:
Knowledge of the Python ecosystem
Experience with HTTP REST APIs, with a focus on Django
Experience with Git (version control system), e.g. GitLab, GitLab CI
Experience in DevOps / Ops
Linux operating system experience
Experience in containerization (Docker, Podman)
LLM operations
Cloud experience (e.g. IBM Cloud / Azure)
Preferable:
Kubernetes / Helm
Familiar with code quality gating (Sonar, Nexus, Fortify)
Ansible
Domino Data Lab, Jupyter
Artifactory
Kafka
SQL, Postgres
Terraform
Dynatrace
Business Experience:
Knowledge of the financial services industry.
Specific Qualifications (if required)
Agile environment; follows the Customer processes for projects, incident and change management.
Autonomous as well as a team worker, analytically minded, meets commitments, able to work in a dynamic and multicultural environment, flexible, customer-oriented, and risk-aware.
Motivated, proactive self-starter, process-oriented with high attention to detail.
Good communication skills; good analytical and synthesis skills.
Autonomy, commitment and perseverance.
Flexibility (in peak periods extra effort may be required).
Open-minded and flexible in self-learning new technologies/tools.
Customer-minded and able to translate technical issues into non-technical explanations.
Always conscious of continuity of services.
Very good team spirit; shares knowledge and experience with other members of the team and works in collaboration with the team.
Client-oriented, analytical, shows initiative and is able to work independently.
Flexible and ready to provide support outside of business hours (on-call).
Able to take additional responsibility.
Able to work from the Mumbai base location (or whichever is your base location) during the hybrid model.
Skills Referential
Behavioural Skills:
Ability to collaborate / Teamwork
Ability to deliver / Results driven
Ability to share / pass on knowledge
Client focused
Transversal Skills:
Ability to understand, explain and support change
Ability to develop and adapt a process
Ability to develop others & improve their skills
Analytical Ability
Ability to inspire others & generate people's commitment
Education Level: Bachelor Degree or equivalent
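A minimal sketch of the kind of monitored Django REST endpoint this role supports, assuming a standard Django project; the URL path and the database-connectivity check are illustrative choices for the example, not a prescribed implementation.

```python
# Minimal sketch of a health/readiness endpoint for a Django-served model API.
# The URL path and the checked dependency ("default" database) are example assumptions.
from django.db import connections
from django.db.utils import OperationalError
from django.http import JsonResponse
from django.urls import path


def health(request):
    """Liveness/readiness probe: reports DB connectivity so monitoring can alert on it."""
    status = {"app": "ok"}
    try:
        connections["default"].cursor()  # cheap connectivity check
        status["database"] = "ok"
    except OperationalError:
        status["database"] = "unreachable"
    code = 200 if status["database"] == "ok" else 503
    return JsonResponse(status, status=code)


urlpatterns = [
    path("health/", health),
]
```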
Posted 4 weeks ago
5.0 years
30 - 60 Lacs
Kolkata, West Bengal, India
Remote
Experience : 5.00 + years Salary : INR 3000000-6000000 / year (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB Netskope is Looking for: About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high volume activity events at near real-time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more. What’s In It For You As a member of this team you will work in an innovative, fast-paced environment with other experts to build Cloud-Native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (open tracing), persistent messaging queues, SQL/NO-SQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and using innovative and cutting edge solutions to achieve these goals, we would like to speak with you. What you will be doing Architect and implement critical software infrastructure for distributed large-scale multi-cloud environments. Review architectures and designs across the organization to help guide other engineers to build scalable cloud services. Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions. Be a catalyst for improving engineering processes and ownership. Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations. 
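By way of illustration only, here is a minimal Python sketch of the observability pattern named in this posting (Prometheus metrics exposed by a small event-processing service); the metric names, port, and event shape are assumptions made for the example, not Netskope's implementation.

```python
# Minimal sketch of an observable event-processing loop using the prometheus_client library.
# Metric names, the port, and the placeholder event are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

EVENTS_INGESTED = Counter("events_ingested_total", "Activity events ingested")
PROCESS_LATENCY = Histogram("event_process_seconds", "Per-event processing latency")


@PROCESS_LATENCY.time()
def process_event(event: dict) -> None:
    # Placeholder for real analysis (policy evaluation, risk scoring, etc.).
    time.sleep(random.uniform(0.001, 0.01))
    EVENTS_INGESTED.inc()


if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for a Prometheus scrape
    while True:
        process_event({"user": "example", "action": "upload"})
```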
Required Skills And Experience 5 to 15 years of experience in the field of software development Excellent programming experience with Go, C/C++, Java, Python Experience building and delivering cloud microservices at scale Expert understanding of distributed systems, data structures, and algorithms A skilled problem solver well-versed in considering and making technical tradeoffs A strong communicator who can quickly pick up new concepts and domains Bonus points for Golang knowledge Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus Production experience with Cloud-native concepts and technologies related to CI/CD, orchestration (e.g. Helm charts), observability (e.g. Prometheus, Opentracing), distributed databases, messaging (REST, gRPC) is a bonus Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 weeks ago
8.0 years
30 - 50 Lacs
Kolkata, West Bengal, India
Remote
Experience : 8.00 + years
Salary : INR 3000000-5000000 / year (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang
Netskope is Looking for:
About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing the data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.
What’s In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.
What You Will Be Doing
Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole
Contributing to building a flexible and scalable automation solution
Working closely with the development and design team to help create an amazing user experience
Helping to create and implement quality processes and requirements
Working closely with the team to replicate customer environments
Automating complex test suites (a pytest sketch follows the skills section below)
Developing test libraries and coordinating their adoption
Identifying and communicating risks about our releases
Owning and making quality decisions for the solution
Owning the release and being a customer advocate
Required Skills And Experience 8+ years of experience in the field of SDET and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative and working solutions in a collaborative environment Strong Java and/or Python programming skills. (Go a plus) Knowledge of Jenkins, Hudson, or any other CI systems. Experience testing distributed systems A proponent of Strong Quality Engineering methodology. Strong knowledge of linux systems, Docker, k8s Experience building automation frameworks Experience with Databases, SQL and NoSQL (MongoDB or Cassandra) a plus Knowledge of network security, authentication and authorization. Comfortable with ambiguity and taking the initiative regarding issues and decisions Proven ability to apply data structures and algorithms to practical problems. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
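A hedged pytest sketch of the API test automation described in this posting; the base URL, endpoints, credentials and response fields are hypothetical stand-ins for the real service under test.

```python
# Minimal API automation sketch using pytest + requests; everything service-specific here
# (base URL, endpoints, token, response shape) is a hypothetical placeholder.
import pytest
import requests

BASE_URL = "https://api.example.internal"


@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Authorization": "Bearer <token>"})  # placeholder credential
    return s


def test_policy_list_returns_expected_shape(session):
    resp = session.get(f"{BASE_URL}/v1/policies", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert isinstance(body.get("policies"), list)


@pytest.mark.parametrize("bad_id", ["", "not-a-uuid", "0" * 64])
def test_policy_lookup_rejects_malformed_ids(session, bad_id):
    resp = session.get(f"{BASE_URL}/v1/policies/{bad_id}", timeout=10)
    assert resp.status_code in (400, 404)
```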
Posted 4 weeks ago
8.0 years
30 - 50 Lacs
Cuttack, Odisha, India
Remote
Experience : 8.00 + years
Salary : INR 3000000-5000000 / year (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang
Netskope is Looking for:
About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing the data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.
What’s In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.
What You Will Be Doing
Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole
Contributing to building a flexible and scalable automation solution
Working closely with the development and design team to help create an amazing user experience
Helping to create and implement quality processes and requirements
Working closely with the team to replicate customer environments
Automating complex test suites
Developing test libraries and coordinating their adoption
Identifying and communicating risks about our releases
Owning and making quality decisions for the solution
Owning the release and being a customer advocate
Required Skills And Experience 8+ years of experience in the field of SDET and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative and working solutions in a collaborative environment Strong Java and/or Python programming skills. (Go a plus) Knowledge of Jenkins, Hudson, or any other CI systems. Experience testing distributed systems A proponent of Strong Quality Engineering methodology. Strong knowledge of linux systems, Docker, k8s Experience building automation frameworks Experience with Databases, SQL and NoSQL (MongoDB or Cassandra) a plus Knowledge of network security, authentication and authorization. Comfortable with ambiguity and taking the initiative regarding issues and decisions Proven ability to apply data structures and algorithms to practical problems. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 weeks ago