7.0 years
0 Lacs
India
Remote
Role: Principal Kubernetes Infrastructure Engineer
Duration: 6 months
Work Mode: Remote
Budget: 1.5 L PM

Scope: We're looking for a Rancher Kubernetes expert to lead the design, automation, and reliability of our on-prem and hybrid container platform. Sitting at the intersection of the Platform Engineering and Infrastructure Reliability teams, this role owns the lifecycle of Rancher-managed clusters, from bare-metal provisioning and performance tuning to observability, security, and automated operations. You'll apply SRE principles to ensure high availability, scalability, and resilience across environments supporting mission-critical workloads.

Core Responsibilities:

Platform & Infrastructure Engineering
- Design, deploy, and maintain Rancher-managed Kubernetes clusters (RKE2/K3s) at enterprise scale.
- Architect highly available clusters integrated with on-prem infrastructure: UCS, VxLAN, storage, DNS, and load balancers.
- Lead Rancher Fleet implementations for GitOps-driven cluster and workload management.

Performance Engineering & Optimization
- Tune clusters for high-performance workloads on bare-metal hardware, optimizing CPU, memory, and I/O paths.
- Align cluster scheduling and resource profiles with physical infrastructure topologies (NUMA, NICs, etc.).
- Optimize CNI, kubelet, and scheduler settings for low-latency, high-throughput applications.

Security & Compliance
- Implement security-first Kubernetes patterns: RBAC, Pod Security Standards, network policies, and image validation.
- Drive left-shifted security using Terraform, Helm, and CI/CD pipelines; align to PCI, FIPS, and CIS benchmarks.
- Lead infrastructure risk reviews and implement guardrails for regulated environments.

Automation & Tooling
- Build and maintain IaC stacks using Terraform, Helm, and Argo CD.
- Develop platform automation and observability tooling using Python or Go.
- Ensure declarative management of infrastructure and applications through GitOps pipelines.

SRE & Observability
- Apply SRE best practices for platform availability, capacity, latency, and incident response.
- Operate and tune Prometheus, Grafana, and ELK/EFK stacks for complete platform observability.
- Drive actionable alerting, automated recovery mechanisms, and clear operational documentation.
- Lead postmortems and drive systemic improvements to reduce MTTR and prevent recurrence.

Required Skills
· 7+ years in infrastructure, platform, or SRE roles
· Deep hands-on experience with Rancher (RKE2/K3s) in production environments
· Proficient with Terraform, Helm, Argo CD, Python, and/or Go
· Demonstrated performance tuning in bare-metal Kubernetes environments (UCS, VxLAN, MetalLB)
· Expert in Linux systems (systemd, networking, kernel tuning), Kubernetes internals, and container runtimes
· Real-world application of SRE principles in high-stakes, always-on environments
· Strong background operating Prometheus, Grafana, and Elasticsearch/Fluentd/Kibana (ELK/EFK) stacks

Preferred Qualifications
· Experience integrating Kubernetes with OpenStack and Magnum
· Knowledge of Rancher add-ons: Fleet, Longhorn, CIS Scanning
· Familiarity with compliance-driven infrastructure (PCI, FedRAMP, SOC2)
· Certifications: CKA, CKS, or Rancher Kubernetes Administrator
· Strategic thinker with strong technical judgment and execution ability
· Calm and clear communicator, especially during incidents or reviews
· Mentorship-oriented; supports team learning and cross-functional collaboration
· Self-motivated, detail-oriented, and thrives in a fast-moving, ownership-driven culture
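To give a flavor of the "platform automation and observability tooling using Python" this posting describes, here is a minimal sketch using the official kubernetes Python client to report node readiness across a cluster. It assumes kubeconfig access; the script itself is illustrative, not part of the posting.

```python
# Minimal node-readiness report for a Kubernetes cluster (illustrative sketch).
# Assumes a reachable cluster and a local kubeconfig.
from kubernetes import client, config

def node_readiness_report() -> list[str]:
    config.load_kube_config()  # use config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    not_ready = []
    for node in v1.list_node().items:
        ready = any(c.type == "Ready" and c.status == "True"
                    for c in node.status.conditions)
        print(f"{node.metadata.name}: {'Ready' if ready else 'NotReady'}")
        if not ready:
            not_ready.append(node.metadata.name)
    return not_ready

if __name__ == "__main__":
    bad = node_readiness_report()
    if bad:
        raise SystemExit(f"NotReady nodes: {', '.join(bad)}")
```

A check like this would typically feed an alerting pipeline (Prometheus pushgateway, pager webhook) rather than print to stdout.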
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
C1X AdTech Private Limited is a global technology company. Our mission is to empower enterprise clients with the smartest marketing platform, enabling seamless integration with our personalization engines and delivering cross-channel marketing capabilities. We are dedicated to enhancing customer engagement and experiences while focusing on increasing Lifetime Value (LTV) through consistent messaging across all channels. We are a world-class engineering team spanning front end (UI), back end (API/Java), and Big Data engineering, delivering compelling products.

As a DevOps Engineer, you will manage our infrastructure, support development pipelines, and ensure system reliability. You will also automate deployment processes, maintain server environments, monitor system performance, and support engineering operations across the development lifecycle.

Objectives:
- Design and manage scalable, cloud-native infrastructure using GCP services, Kubernetes, and Argo CD for high-availability applications.
- Implement and monitor observability tools (Elasticsearch, Logstash, Kibana) to provide full system visibility and support performance tuning.
- Enable real-time data streaming and processing pipelines using Apache Kafka and GCP DataProc.
- Automate CI/CD pipelines using GitHub Actions and Argo CD for faster, secure, and auditable releases across dev and production environments.

Responsibilities:
- Build, manage, and monitor Kubernetes clusters and containerized workloads using GKE and Argo CD.
- Design and maintain CI/CD pipelines using GitHub Actions integrated with GitOps practices and Argo CD.
- Configure and maintain real-time data pipelines using Apache Kafka and GCP DataProc.
- Manage logging and observability infrastructure using Elasticsearch, Logstash, and Kibana (ELK stack).
- Set up and secure GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM.
- Implement caching and session stores using Redis to optimize performance and scalability.
- Monitor system health, availability, and performance with tools like Prometheus, Grafana, and ELK.
- Collaborate with development and QA teams to streamline deployment processes and improve environment stability.
- Automate infrastructure provisioning and configuration using Bash, Python, or Terraform.
- Maintain backup, failover, and recovery strategies for production environments.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 4–8 years of experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering.
- Strong experience with Google Cloud Platform (GCP) services including GKE, IAM, VPC, Artifact Registry, and DataProc.
- Hands-on with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows.
- Proficiency with Apache Kafka for real-time data streaming.
- Experience managing the ELK Stack (Elasticsearch, Logstash, Kibana) in production.
- Working knowledge of Redis and its role in distributed caching and session management.
- Skilled in scripting/automation using Bash, Python, Terraform, etc.
- Solid understanding of containerization, infrastructure-as-code, and system monitoring.
- Familiarity with cloud security, IAM policies, and audit/compliance best practices.
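As an illustration of the real-time Kafka pipelines this posting centers on, here is a minimal producer sketch in Python using the confluent-kafka client. The broker address, topic name, and event shape are placeholders, not details from the posting.

```python
# Minimal Kafka producer (broker and topic are placeholders).
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once the broker acks (or rejects) the message.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

event = {"user_id": "u-123", "action": "page_view"}
producer.produce("user-events",
                 value=json.dumps(event).encode("utf-8"),
                 callback=on_delivery)
producer.flush(10)  # block up to 10s until outstanding messages are delivered
```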
Posted 1 week ago
11.0 years
0 Lacs
Hyderābād
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Data Analytics Strategy platform and decision tool team is responsible for the data strategy across CSWT and for developing the platforms that support it. The Data Science platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*
We're seeking a highly skilled AI/ML Platform Engineer to architect and build a modern, scalable, and secure Data Science and Analytical Platform. This pivotal role will drive end-to-end (E2E) model lifecycle management, establish robust platform governance, and create the foundational infrastructure for developing, deploying, and managing Machine Learning models across both on-premise and hybrid cloud environments.

Responsibilities*
- Lead the architecture and design for building scalable, resilient, and secure distributed applications, ensuring compliance with organizational technology guidelines, security standards, and industry best practices such as 12-factor principles and well-architected framework guidelines.
- Actively contribute to hands-on coding, building core components, APIs and microservices while ensuring high code quality, maintainability, and performance.
- Ensure adherence to engineering excellence standards and compliance with key organizational metrics such as code quality, test coverage and defect rates.
- Integrate secure development practices, including data encryption, secure authentication, and vulnerability management, into the application lifecycle.
- Align development practices with CI/CD best practices to enable efficient build and deployment of the application on target platforms such as VMs and/or container orchestration platforms like Kubernetes or OpenShift.
- Collaborate with stakeholders to align technical solutions with business requirements, driving informed decision-making and effective communication across teams.
- Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes.
- Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams.
- Propose and build a variety of efficient data pipelines to support ML model building and deployment.
- Propose and build automated deployment pipelines to enable a self-service continuous deployment process for the Data Science teams.
- Analyze, understand, execute and resolve issues in user scripts, models, and code.
- Perform release and upgrade activities as required.
- Be well versed in open-source technology and aware of emerging third-party technologies and tools in the AI/ML space.
- Ability to firefight, propose fixes, and guide the team through day-to-day production issues.
- Ability to train partner Data Science teams on frameworks and the platform.
- Flexible with time and shifts to support project requirements; this role does not involve night shifts and carries no L1 or L2 (first line of support) responsibility.

Requirements*

Education*
Graduation / Post Graduation: BE/B.Tech/MCA/MTech
Certifications (if any): FullStack, Bigdata

Experience Range*
11+ Years

Foundational Skills*
- Microservices & API Development: Strong proficiency in Python, building performant microservices and REST APIs using frameworks like FastAPI and Flask.
- API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar, e.g., Kong, Envoy) for managing and securing API traffic, including JWT/OAuth2-based authentication.
- Observability & Monitoring: Proven ability to monitor, log, and troubleshoot model APIs and platform services using tools such as Prometheus, Grafana, or the ELK/EFK stack.
- Policy & Governance: Proficiency with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing governance policies.
- MLOps Expertise: Solid understanding of MLOps capabilities, including ML model versioning, registry, and lifecycle automation using tools like MLflow, Kubeflow, or custom metadata solutions.
- Multi-Tenancy: Experience designing and implementing multi-tenant architectures for shared model and data infrastructure.
- Containerization & Orchestration: Strong knowledge of Docker and Kubernetes for containerization and orchestration.
- CI/CD & GitOps: Familiarity with CI/CD tools and GitOps practices for automated deployments and infrastructure management.
- Hybrid Cloud Deployments: Understanding of hybrid deployment strategies across on-premise virtual machines and public cloud platforms (AWS, Azure, GCP).
- Data Science Workbench Understanding: Basic understanding of the requirements for data science workloads (distributed training frameworks like Apache Spark and Dash, and IDEs like Jupyter notebooks and VS Code).

Desired Skills*
- Security Architecture: Understanding of zero-trust security architecture and secure API design patterns.
- Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server.
- Vector Databases: Familiarity with vector databases (e.g., Redis, Qdrant) and embedding stores.
- Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata.
- Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements.
- Utilizes multiple architectural components (across data, application, business) in the design and development of client requirements.
- Performs Continuous Integration and Continuous Delivery (CI-CD) activities.
- Contributes to story refinement and definition of requirements.
- Participates in estimating the work necessary to realize a story/requirement through the delivery lifecycle.
- Extensive hands-on experience supporting platforms that let modelers and analysts go through complete model lifecycle management (data munging, model develop/train, governance, deployment).
- Experience with model deployment, scoring and monitoring for batch and real-time workloads across a variety of technologies and platforms.
- Experience with Hadoop clusters and integration, including ETL, streaming and API styles of integration.
- Experience in automation for deployment using Ansible Playbooks and scripting.
- Experience with developing and building RESTful API services in an efficient and scalable manner.
- Design, build and deploy streaming and batch data pipelines capable of processing and storing large datasets quickly and reliably using Kafka, Spark and YARN for large volumes of data (TBs).
- Experience designing and building full-stack solutions utilizing distributed computing or multi-node architectures for large datasets (terabyte to petabyte scale).
- Experience with processing and deployment technologies such as YARN, Kubernetes/containers and serverless compute for model development and training.
- Hands-on experience working in a cloud platform (AWS/Azure/GCP) to support Data Science workloads.
- Effective communication, strong stakeholder engagement skills, and proven ability in leading and mentoring a team of software engineers in a dynamic environment.

Work Timings*
11:30 AM to 8:30 PM IST

Job Location*
Hyderabad
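The foundational skills above call for performant model-serving microservices in Python with FastAPI. Purely as an illustrative sketch (the model is stubbed and the endpoint shape is an assumption, not the bank's actual service):

```python
# Minimal model-serving API with FastAPI (the "model" is a stub).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-api")

class ScoreRequest(BaseModel):
    features: list[float]

class ScoreResponse(BaseModel):
    score: float
    model_version: str

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder inference: in practice, load a real model from a registry (e.g., MLflow).
    value = sum(req.features) / max(len(req.features), 1)
    return ScoreResponse(score=value, model_version="stub-0.1")

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```

In a platform like the one described, this service would sit behind an API gateway (APISIX, Kong) enforcing JWT/OAuth2 and be scraped by Prometheus for latency and error-rate metrics.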
Posted 1 week ago
3.0 years
5 Lacs
Delhi
On-site
#hiring We are hiring for the profile of Kubernetes Developer / Administrator / DevOps Engineer.

Job Description: Kubernetes Developer / Administrator / DevOps Engineer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.Tech/B.E./MCA/MSc/MS
Salary: Up to 70k per month (final offer depends on the interview and experience)
Notice Period: Immediate joiners to candidates with up to 20 days' notice
Candidates from Delhi/NCR will be preferred.

Job Description: We are looking for a skilled Kubernetes Developer, Administrator, and DevOps Engineer who can effectively manage and deploy our development images into Kubernetes environments. The ideal candidate should be highly proficient in Kubernetes, CI/CD pipelines, and containerization.

Qualifications:
- Minimum 3 years of experience working with Kubernetes in production environments.

Key Responsibilities:
- Design, deploy, and manage Kubernetes clusters for development, testing, and production environments.
- Build and maintain CI/CD pipelines for automated deployment of applications on Kubernetes.
- Manage container orchestration using Kubernetes, including scaling, upgrades, and troubleshooting.
- Work closely with developers to containerize applications and ensure smooth deployment to Kubernetes.
- Monitor and optimize the performance, security, and reliability of Kubernetes clusters.
- Implement and manage Helm charts, Docker images, and Kubernetes manifests.

Mandatory Skills:
- Kubernetes Expertise: In-depth knowledge of Kubernetes, including deploying, managing, and troubleshooting clusters and workloads.
- CI/CD Tools: Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, or similar.
- Containerization: Strong experience with Docker for creating, managing, and deploying containerized applications.
- Infrastructure as Code (IaC): Familiarity with Terraform, Ansible, or similar tools for managing infrastructure.
- Networking and Security: Understanding of Kubernetes networking, service meshes, and security best practices.
- Scripting Skills: Proficiency in scripting languages like Bash, Python, or similar for automation tasks.

Nice to Have:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of monitoring and logging tools such as Prometheus, Grafana, and the ELK stack.
- Familiarity with GitOps practices using Argo CD or Flux.

Job Types: Full-time, Contractual / Temporary
Pay: From ₹500,000.00 per year
Work Location: In person
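As a small example of the scripted cluster management this role involves, here is a Python sketch using the kubernetes client to scale a deployment; the deployment name and namespace are placeholders.

```python
# Scale a Kubernetes deployment programmatically (names are placeholders).
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    # Patch only the scale subresource rather than the whole deployment spec.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )
    print(f"scaled {namespace}/{name} to {replicas} replicas")

if __name__ == "__main__":
    scale_deployment("web-frontend", "staging", 5)
```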
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
Join our team focused on Google Cloud Data Messaging Services, leveraging technologies like Pub/Sub and Kafka to build scalable, decoupled, and resilient cloud-native applications. This position involves close collaboration with development teams, as well as product vendors, to implement and support the suite of Data Messaging Services offered within GCP and Confluent Kafka.

GCP Data Messaging Services provide powerful capabilities for handling streaming data and asynchronous communication. Key benefits include:
- Enabling real-time data processing and event-driven architectures
- Decoupling applications for improved resilience and scalability
- Leveraging managed services like Cloud Pub/Sub and integrating with Kafka environments (Apache Kafka, Confluent Cloud)
- Providing highly scalable and available infrastructure for data streams
- Enhancing automation for messaging setup and management
- Supporting Infrastructure as Code practices for messaging components

The Data Messaging Services Specialist plays a crucial role as the corporation migrates and onboards applications that rely on robust data streaming and asynchronous communication onto GCP Pub/Sub and Confluent Kafka. This position requires staying abreast of the continual evolution of cloud data technologies and understanding how GCP messaging services like Pub/Sub, alongside Kafka, integrate with other native services like Cloud Run, Dataflow, etc., within the new Ford Standard app hosting environment to meet customer needs. This is an exciting opportunity to work on highly visible data streaming technologies that are becoming industry standards for real-time data processing.

Skills and experience:
- Highly motivated individual with strong technical skills and an understanding of emerging data streaming technologies (including Google Pub/Sub, Kafka, Tekton, and Terraform).
- Experience with Apache Kafka or Confluent Cloud Kafka, including concepts like brokers, topics, partitions, producers, consumers, and consumer groups.
- Working experience in CI/CD pipelines, including building continuous integration and deployment pipelines using Tekton or similar technologies for applications interacting with Pub/Sub or Kafka.
- Understanding of GitOps and other DevOps processes and principles as applied to managing messaging infrastructure and application deployments.
- Understanding of Google Identity and Access Management (IAM) concepts and various authentication/authorization options for securing access to Pub/Sub and Kafka.
- Knowledge of any programming language (e.g., Java, Python, Go) commonly used for developing messaging producers/consumers.
- Experience with public cloud platforms (preferably GCP), with a focus on data messaging services.
- Understanding of agile methodologies and concepts, or experience working in an agile environment.

Responsibilities:
- Develop a solid understanding of Google Cloud Pub/Sub and Kafka (Apache Kafka and/or Confluent Cloud).
- Gain experience in using Git/GitHub and CI/CD pipelines for deploying messaging-related clusters and infrastructure.
- Collaborate with Business IT and business owners to prioritize improvement efforts related to data messaging patterns and infrastructure.
- Work with team members to establish best practices for designing, implementing, and operating scalable and reliable data messaging solutions.
- Identify opportunities for adopting new data streaming technologies and patterns to solve existing needs and anticipate future challenges.
- Create and maintain Terraform modules and documentation for provisioning and managing Pub/Sub topics/subscriptions, Kafka clusters, and related networking configurations, often with a paired partner.
- Develop automated processes to simplify the experience for application teams adopting Pub/Sub and Kafka client libraries and deployment patterns.
- Improve continuous integration tooling by automating manual processes within the delivery pipeline for messaging applications and enhancing quality gates based on past learnings.
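For flavor, here is a minimal Cloud Pub/Sub publisher in Python using the google-cloud-pubsub client library. The project ID, topic name, and message payload are placeholders, not details from the posting.

```python
# Minimal Pub/Sub publisher (project and topic are placeholders).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "vehicle-telemetry")

# publish() returns a future; result() blocks until the server acknowledges
# the message and returns the server-assigned message ID.
future = publisher.publish(
    topic_path,
    data=b'{"vin": "TEST123", "speed_kph": 88}',
    source="demo",  # attributes are arbitrary string key/value pairs
)
print(f"published message id: {future.result()}")
```

A subscriber would typically use pubsub_v1.SubscriberClient().subscribe(...) with a callback and explicit message.ack(), decoupling producers from consumers exactly as the posting describes.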
Posted 1 week ago
0 years
4 - 7 Lacs
Chennai
On-site
Location: Chennai, Tamil Nadu, India
Job ID: R-232129
Date posted: 22/07/2025

Job Title: Senior Consultant - DevOps
Career Level: D2

Introduction to role:
Are you ready to make a difference in the world of scientific research? As a Senior Consultant - DevOps Engineer within the ELN Product Team, you'll be at the forefront of developing, validating, integrating, and maintaining Electronic Laboratory Notebooks (ELNs). Your mission will be to design and implement seamless integrations between various applications and the ELN, ensuring efficient data flow and communication. Join us in transforming our ability to develop life-changing medicines!

Accountabilities:
- Ensure the smooth operation and optimization of ELN systems within our scientific research environment.
- Collaborate with scientific teams to enhance ELN configurations, fostering collaboration and data sharing.
- Set up monitoring tools for proactive issue resolution, minimizing disruptions in research workflows.
- Implement ELN-specific security practices to protect sensitive data and ensure regulatory compliance.
- Work closely with researchers to understand ELN requirements and optimize integrations with lab tools.
- Create and maintain comprehensive documentation for ELN configurations and integrations.
- Embrace a pragmatic, hands-on mentality while identifying knowledge gaps and raising them.

Essential Skills/Experience:
- Experience with process tools like Git, JIRA, Confluence and CI/CD tools.
- Experience with Kubernetes, GitOps, Infrastructure as Code, monitoring/observability concepts and related tools.
- Experience in building applications incorporating cloud (AWS), APIs, microservices, containerisation and serverless architectures.
- Experience of delivering and supporting software in a DevOps environment.
- Exposure to analytics tools like Power BI (Business Intelligence).
- Excellent problem solving and adaptability.
- Willing to work in a cross-cultural environment across multiple time zones.
- Ability to work effectively independently or as part of a team to achieve objectives.
- Eager to learn and develop new tech skills, as required.
- Good written and verbal skills, fluent English.
- Advanced experience with ELN platforms and their technical integration into biopharmaceutical laboratory workflows.
- Expertise in handling scientific data formats, laboratory instrument integrations, and industry-specific technical compliance standards.
- Familiarity with advanced GxP regulations and practices.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we leverage technology to impact patients and ultimately save lives. Our global organization is driven by purpose, pushing the boundaries of science to discover and develop life-changing medicines. We take pride in working close to the cause, unlocking potential to make a massive difference in the world. With cutting-edge science combined with leading digital technology platforms, we empower our business to perform at its peak. Ready to join us on this exciting journey? Apply now and be part of a team that dares to innovate!
Date Posted: 23-Jul-2025
Closing Date:

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.

IBM Intelligent Automation, powered by AI, addresses challenges by helping People become more productive, Businesses more scalable, and Systems more resilient. We combine human skills with automation and AI to enhance team productivity and improve decision making. We help companies digitize, intelligently automate, and connect their business processes and systems end-to-end to improve business outcomes at scale. We ensure that all of the applications and systems businesses rely on are always on and perform cost-effectively to deliver the best possible user experience. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your Role And Responsibilities
This is a role for a Senior Product Architect for IBM Concert, responsible for leading the product engineering team in designing scalable, secure, and resilient solutions, and for driving architectural strategy, technical excellence, and innovation across the Concert product organisation.

- Architectural Leadership: Work collaboratively within a team responsible for shaping the architecture and technical trajectory of our software networking and edge computing portfolio. Write product requirements for new integrations, enhance existing integrations, and identify ecosystem products to interact with.
- Serve as an end-to-end SME: Demonstrate skills and expertise across all functional and non-functional aspects. Gain insights into customer adoption of product capabilities and rise above silos of client-side, server-side, or module-specific knowledge.
- Be the agent of change and innovation: Challenge the status quo and provide thought leadership and operational drive.
- Real-world Product Building: Leverage practical experience in building products to contribute valuable insights to the software engineering practices within the team.
- Contribute to core development/delivery: Be thorough with the technology domain and the associated programming languages, frameworks, patterns and tools. Own coding and testing of some of the key or complex modules and components.
- Domain Expertise: Possess a deep understanding of software networking and edge computing, utilizing this knowledge to develop informed opinions that play a pivotal role in shaping product direction and strategy. Conduct market research and analysis to identify market trends.
- Provide expert inputs and insights: On product architecture, design decisions, and technology choices. Effectively collaborate with stakeholders such as Product/Offering Management, Sales and Marketing, and Support.
- Collaboration and Adaptability: Collaborate seamlessly with diverse teams and contribute to other IBM product initiatives. A crucial aspect is the ability to extend expertise beyond your domain to promote a holistic understanding of interconnected domains.
- Enable tech sales and support teams by building specialized demos, assisting with RFIs, and fielding scanner-specific questions.
- Engage prospects, customers, and internal stakeholders for discovery sessions and roadmap reviews.
- Flexible Mindset: Demonstrate flexibility in mindset to navigate and resolve points of intersection with other system components. Adaptability is key to finding mutually beneficial solutions.
- Effective Communication: Utilize strong communication skills to articulate complex technical concepts, fostering collaboration and understanding across interdisciplinary teams. Provide regular updates to the leadership team on product development progress and market signals.
- Drive eminence and respect: Be an active evangelist and thought leader inside and outside the organization.

Preferred Education
Bachelor's Degree

Required Technical And Professional Expertise
- 15+ years of experience in building and architecting enterprise-grade software products, preferably in the automation, observability, or risk & resilience domain.
- Deep domain expertise in Application Risk and Resilience.
- Expertise in Go and Python application development, with exposure to additional languages and development/test environments.
- Solid understanding of modern application architecture including microservices, containers (Docker), orchestration (Kubernetes), and RESTful APIs.
- Proficiency in cloud-native development, with mastery of IBM Cloud and AWS platforms.
- Strong command of software architecture patterns, design principles, and best practices.
- Development experience with PostgreSQL and cloud-native databases.
- Proven technical leadership experience in driving design and architectural decisions within cross-functional agile teams.
- Hands-on experience with DevOps/DevSecOps practices, CI/CD pipelines, and GitOps workflows.
- Excellent problem-solving and debugging skills to resolve complex technical issues effectively.

Preferred Technical And Professional Experience
- Brings a solid background in Automation and Cloud technologies.
- Demonstrates a growth mindset, is open to feedback, and continuously seeks to learn and evolve.
- Approaches challenges with ownership and resilience, focusing on solutions rather than assigning blame.
- Can effectively balance people, product, and process goals, with a collaborative and open-minded leadership style, which is critical for success in this role.
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Factspan Overview:
Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build analytics centers of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations and implement new processes that help them succeed. With offices in Seattle, Washington and Bengaluru, India, we use a global delivery model to service our customers. Our customers include industry leaders from the Retail, Financial Services, Hospitality, and Technology sectors.

Responsibilities:
We are seeking a highly skilled uDeploy Developer to manage and enhance deployment pipelines for enterprise applications. This role involves maintaining legacy IBM UrbanCode Deploy (uDeploy) systems, integrating with CI tools like Jenkins, collaborating with application and support teams, and supporting the phased transition to modern GitOps-based delivery using ArgoCD and GitLab. The candidate will be part of a larger DevOps transformation program closely aligned with SRE operations.

Key Responsibilities:
- Develop and implement machine learning models and algorithms to extract insights from large datasets.
- Own, manage, and support end-to-end uDeploy pipelines for critical enterprise applications across multiple environments (DEV, QA, UAT, PROD).
- Build reusable templates and promote consistent deployment standards.
- Work with app teams to configure uDeploy components: processes, applications, environments, component versions.
- Create and manage pre/post-deployment scripts, environment variable mappings, and rollback plans.
- Integrate Jenkins CI jobs with uDeploy processes using REST APIs or plugins (a sketch follows below).
- Collaborate with DevOps engineers and architects to transition uDeploy-based pipelines to GitOps-native deployments using ArgoCD.
- Troubleshoot deployment failures and support L2/L3 incident resolution.
- Maintain deployment documentation, audit trails, and release notes for compliance and traceability.
- Work with Retail-specific tools and frameworks to enable seamless holiday/seasonal deployment readiness.

Required Skills & Tools:
- Deployment Tools: IBM UrbanCode Deploy (uDeploy), UrbanCode CLI, Process Designer
- CI Tools: Jenkins, GitLab CI
- Scripting: Groovy, Shell, Python
- Version Control: Git, GitLab
- Containerization (Plus): Docker, Helm, Kubernetes (basic awareness helpful)
- Infrastructure (Bonus): GCP / AWS experience in the context of deployments
- Monitoring/Logging (Bonus): Splunk, AppDynamics (integration-level exposure preferred)

Good to Have:
- Experience in migrating uDeploy pipelines to ArgoCD or similar tools.
- Exposure to Retail-domain CI/CD patterns and high-availability deployment scheduling.
- Familiarity with ITIL-based change management processes.
- Understanding of DevOps-SRE collaboration models.

If you are passionate about leveraging technology to drive business innovation, possess excellent problem-solving skills, and thrive in a dynamic environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of data analytics and making a meaningful impact in the industry.

Why Should You Apply?
- Grow with Us: Be part of a hyper-growth startup with great opportunities to Learn & Innovate.
- People: Join hands with the talented, warm, collaborative team and highly accomplished leadership.
- Buoyant Culture: Embark on an exciting journey with a team that innovates solutions, tackles challenges head-on and crafts a vibrant work environment.
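On triggering uDeploy processes from Jenkins over REST, here is a Python sketch only: the endpoint path and payload shape below follow common UrbanCode Deploy REST/CLI usage but are assumptions, so verify them against your uDeploy version's documentation before relying on them.

```python
# Trigger a uDeploy application process over REST (sketch; the endpoint and
# payload are assumptions based on typical UrbanCode Deploy usage -- verify locally).
import requests

UDEPLOY_URL = "https://udeploy.example.com"   # placeholder server
AUTH = ("jenkins-bot", "api-token")            # placeholder credentials

payload = {
    "application": "retail-checkout",
    "applicationProcess": "deploy",
    "environment": "QA",
    "versions": [{"version": "1.42.0", "component": "checkout-service"}],
}

resp = requests.put(
    f"{UDEPLOY_URL}/cli/applicationProcessRequest/request",
    json=payload, auth=AUTH, timeout=30,
)
resp.raise_for_status()
print("request id:", resp.json().get("requestId"))
```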
Posted 1 week ago
0.0 years
0 Lacs
Delhi, Delhi
Remote
ABOUT TIDE
At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money.

ABOUT THE TEAM:
Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business areas (e.g. Payments Services, Business Services). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. It's an exceptional opportunity to make a real difference by taking ownership of engineering practices in a rapidly expanding company! We work in small autonomous teams, grouped under common domains owning the full lifecycle of some microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework.

ABOUT THE ROLE:
- Contribute to our event-driven Microservice Architecture (currently 200+ services owned by 40+ teams). You will define and maintain the services your team owns (you design it, you build it, you run it, you scale it globally).
- Use Java 17, Spring Boot and JOOQ to build your services.
- Expose and consume RESTful APIs. We value good API design and we treat our APIs as Products (in the world of Open Banking, often they are going to be public!).
- Use SNS + SQS and Kafka to send events.
- Utilise PostgreSQL via Aurora as your primary datastore (we are heavy AWS users).
- Deploy your services to Production as often as you need to (usually multiple times per day!). This is enabled by our CI/CD pipelines powered by GitHub with GitHub Actions, and solid JUnit/Pact testing (new joiners are encouraged to have something deployed to production in their first 2 weeks).
- Experience modern GitOps using ArgoCD. Our Cloud team uses Docker, Terraform, EKS/Kubernetes to run the platform.
- Have DataDog as your best friend to monitor your services and investigate issues.
- Collaborate closely with Product Owners to understand our Users' needs, Business opportunities and Regulatory requirements and translate them into well-engineered solutions.

WHAT WE ARE LOOKING FOR:
- Some experience building server-side applications and detailed knowledge of the relevant programming languages for your stack. You don't need to know Java, but bear in mind that most of our services are written in Java, so you need to be willing to learn it when you have to change something there!
- Sound knowledge of a backend framework (e.g. Spring/Spring Boot) that you've used to write microservices that expose and consume RESTful APIs.
- Experience engineering scalable and reliable solutions in a cloud-native environment (the most important thing for us is understanding the fundamentals of CI/CD, practical Agile so to speak).
- A mindset of delivering secure, well-tested and well-documented software that integrates with various third-party providers and partners (we do that a lot in the fintech industry).

OUR TECH STACK:
- Java 17, Spring Boot and JOOQ to build the RESTful APIs of our microservices
- Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable
- Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users)
- Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform
- DataDog, ElasticSearch/Fluentd/Kibana and Rollbar to keep it running
- GitHub with GitHub Actions for Sonarcloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines

WHAT YOU WILL GET IN RETURN:
- Competitive salary
- Self & Family Health Insurance
- Term & Life Insurance
- OPD Benefits
- Mental wellbeing through Plumm
- Learning & Development Budget
- WFH Setup allowance
- 25 Annual leaves
- Family & Friendly Leaves

TIDEAN WAYS OF WORKING:
At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. #LI-NN1

TIDE IS A PLACE FOR EVERYONE
At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
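Tide's services are written in Java/Spring Boot; purely for illustration (and to stay consistent with the other sketches in this roundup), here is the SNS-to-SQS event flow the stack describes, sketched in Python with boto3. The topic ARN, queue URL, and event shape are placeholders.

```python
# Publish an event to SNS and drain a subscribed SQS queue (placeholders throughout).
import json
import boto3

sns = boto3.client("sns", region_name="eu-west-1")
sqs = boto3.client("sqs", region_name="eu-west-1")

TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:member-events"  # placeholder
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/member-events-consumer"  # placeholder

# Producer side: publish a domain event to the topic.
sns.publish(TopicArn=TOPIC_ARN,
            Message=json.dumps({"type": "MemberOnboarded", "memberId": "m-42"}))

# Consumer side: long-poll the subscribed queue and delete processed messages.
resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                           MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

The SNS fan-out into per-consumer SQS queues is what gives the decoupling the posting highlights: producers never know who consumes their events.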
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Category: Software Development / Engineering
Main location: India, Tamil Nadu, Chennai
Position ID: J0325-0698
Employment Type: Full Time

Position Description:
Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: SSO Engineer
Position: Lead Analyst
Experience: 8 - 10 Years
Category: Software Development / Engineering
Shift: General (5 days work from office)
Main location: India, Tamil Nadu, Chennai / Karnataka, Bangalore
Position ID: J0325-0698
Employment Type: Full Time
Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 8 years of relevant experience.

Role Description:
Works independently under limited supervision and applies knowledge of subject matter in Applications Development. Possesses sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization and to develop simple application solutions. A second-level professional with direct impact on results and outcomes.

Your future duties and responsibilities:
- Working experience in the Information Security Technology field.
- Working knowledge of IAM technologies (mainly SAML, OAuth, OIDC, RADIUS, Kerberos, etc.).
- Working experience with application migration to modern authentication protocols.
- Understanding of, and ability to execute on, the full technical stack.
- Experience with industry-standard IT infrastructure (web, middleware, operating systems).
- Strong technical problem-solving skills with the ability to detect underlying patterns, identify root causes and design optimal process solutions.
- Understanding of the technical implementation of legacy applications.
- Work across functions to improve IAM solutions to enhance compliance requirements and best practices.
- Participate in meetings to understand and recommend the best technical, most cost-effective solutions.
- Update the knowledge base and documentation to stay relevant with the latest versions, configuration changes and development initiatives.
- Assist with the preparation of business-required/requested reports.

Required qualifications to be successful in this role:

Must-Have Skills:
- 8+ years of work experience in Identity and Access Management, preferably using the Forgerock / Ping identity and access management solutions.
- Expertise in integrating web and mobile applications for single sign-on.
- Experience with technologies such as SAML 2.0, OAuth 2.0, OpenID Connect, and Role-Based Access Control (RBAC).
- Experience with LDAP.
- Experience with Java, JavaScript, Node.js, Quarkus, web servers, application servers, and directory servers.

Good-to-Have Skills:
- Basic knowledge of CI/CD, GitOps, Git/GitLab, Docker and Kubernetes technologies.
- Basic knowledge of AWS or another cloud.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation.
Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Skills: English, Java, JavaScript, Node.js

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because...

You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.

Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.

You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.

Come join our team, one of the largest IT and business consulting services firms in the world.
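Given the posting's focus on OAuth 2.0/OIDC integration, here is a generic client-credentials token request in Python. The token URL, client ID, and secret are placeholders; this is the standard OAuth 2.0 flow rather than any Forgerock- or Ping-specific API.

```python
# Standard OAuth 2.0 client-credentials grant (endpoint and credentials are placeholders).
import requests

TOKEN_URL = "https://sso.example.com/oauth2/token"  # placeholder IdP token endpoint

resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "api.read"},
    auth=("my-client-id", "my-client-secret"),  # HTTP Basic client authentication
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Call a protected API with the bearer token.
api = requests.get("https://api.example.com/v1/resource",
                   headers={"Authorization": f"Bearer {token}"}, timeout=10)
print(api.status_code)
```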
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS CloudFormation
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, manage project timelines, and contribute to the overall success of application development initiatives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with business goals.

CORE COMPETENCIES
- Cloud Platforms: AWS (Landing Zone, Control Tower, Organizations, Backup, EKS)
- DevOps Tooling: GitLab/GitLab Runners, Jenkins, Docker, Kubernetes, Terraform Enterprise (TFE), Rancher, Gerrit, Bamboo, Rally, UrbanCode Deploy
- IaC & Automation: Terraform, AWS Account Factory for Terraform (AFT), Ansible, Puppet, Ant, Maven, Groovy, Python, Bash
- Governance & Security: IAM, SCPs, AWS Backup Policies, Sentinel Policies, OPA Policies
- CI/CD & GitOps: GitLab Pipelines, Jenkins Pipelines, AWS CodePipeline, Helm, FluxCD, Kustomization, Nexus, Artifactory, SonarQube
- Monitoring & Logging: CloudWatch
- Leadership: Team Management, Stakeholder Engagement, Agile Delivery, Mentoring, Platform Strategy

Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS CloudFormation.
- Strong understanding of infrastructure as code principles.
- Experience with cloud architecture and deployment strategies.
- Familiarity with DevOps practices and tools.
- Ability to troubleshoot and optimize cloud-based applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS CloudFormation.
- This position is based in Mumbai.
- A 15 years full time education is required.
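To illustrate the CloudFormation focus, here is a minimal Python/boto3 sketch that creates a stack and waits for completion. The inline template and stack name are placeholders, not artifacts from the posting.

```python
# Create a CloudFormation stack and wait for it to finish (placeholders throughout).
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cf = boto3.client("cloudformation", region_name="ap-south-1")
cf.create_stack(
    StackName="demo-stack",
    TemplateBody=TEMPLATE,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required only when the template creates IAM resources
)
# Block until CloudFormation reports CREATE_COMPLETE (or raise on failure/rollback).
cf.get_waiter("stack_create_complete").wait(StackName="demo-stack")
print("stack created")
```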
Posted 1 week ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: DevOps with GitHub Actions

Job Description:
- Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
- Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
- Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
- Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python.
- AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
- Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
- Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
- Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
- Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.

Required Skills & Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
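Since the role covers incident management for GitLab pipelines, here is a small triage sketch using the python-gitlab library to list recent failed pipelines and their failing jobs. The GitLab URL, token, and project path are placeholders.

```python
# List recent failed pipelines for a project (URL, token, and project are placeholders).
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="glpat-...")
project = gl.projects.get("platform/payments-service")

for pipeline in project.pipelines.list(status="failed", per_page=5):
    print(f"#{pipeline.id} on {pipeline.ref} failed at {pipeline.created_at}: {pipeline.web_url}")
    # Drill into the failing jobs for root-cause hints.
    for job in pipeline.jobs.list(all=True):
        if job.status == "failed":
            print(f"  failed job: {job.name} (stage: {job.stage})")
```

A script like this is a starting point for root cause analysis; in practice you would also pull job logs and correlate with deployment events.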
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview

Job Title: Senior Cloud Engineer
Location: Pune, India
Corporate Title: VP

Role Description
Technology underpins Deutsche Bank's entire business and is changing and shaping the way we engage, interact and transact with all our stakeholders, both internally and externally. Our Technology, Data and Innovation (TDI) strategy is focused on strengthening engineering expertise, introducing an agile delivery model, and modernising the bank's IT infrastructure with long-term investments and taking advantage of cloud computing. But this is only the foundation. We continue to invest in and build a team of visionary tech talent who will ensure we thrive in this period of unprecedented change for the industry. It means hiring the right people and giving them the training, freedom and opportunity they need to do pioneering work.

We are seeking a Senior Engineer to work within our Google Cloud adoption programme, with experience of re-platforming and re-architecting solutions onto cloud. You will work closely with global architecture, platform engineering, infrastructure, and application teams to define and execute scalable and compliant infrastructure strategies across multiple cloud environments. You will be a hands-on technical lead within our delivery pods, providing technical direction and oversight of the solutions. With responsibility for engineering delivery, you will consistently review designs and quality and drive re-use, while playing a pivotal role in improving our GCP engineering capability. You will make strategic design decisions and define engineering approaches that can be disruptive, with the goals of simplifying architecture, reducing technical debt and increasing flow by taking advantage of the platform features and engineering benefits of Google Cloud.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those aged 35 and above

Your Key Responsibilities:
- Define and build application architectures for re-platform or re-architect strategies, and implement blueprints and patterns for common application architectures.
- Collaborate across TDI areas such as Cloud Platform, Security, Data, and Risk & Compliance to create optimum solutions for the business, increasing re-use, creating best practice and sharing knowledge.
- Drive optimisations in the cloud SDLC process to provide productivity improvements, including tools and techniques.
- Enable the adoption of practices such as SRE and DevSecOps to minimise toil and manual tasks and increase automation and stability.
- Define and implement Terraform modules, CI/CD pipelines, and governance frameworks supporting self-service infrastructure provisioning.
- Collaborate with enterprise security, risk, and audit teams to enforce cloud compliance, controls, and policy-as-code (OPA, Sentinel, Conftest).
- Partner with senior stakeholders across technology and business domains to enable multi-cloud delivery platforms with reusable infrastructure blueprints.
- Mentor and lead a team of cloud engineers, fostering a culture of innovation, automation, and reliability.
- Actively contribute to the TDI-wide cloud governance board and cloud community of practice.

Your Skills and Experience:
- You will be a hands-on engineer, focused on building working examples and reference implementations in code.
- You have experience in implementing applications onto cloud platforms (Azure, AWS or GCP) and using their major components (software-defined networks, IAM, compute, storage, etc.) to define cloud-native application architectures such as microservices, service mesh or data streaming applications.
- You adopt automation-first approaches to testing, deployment, security and compliance of solutions through Infrastructure as Code and automated policy enforcement.
- You enjoy supporting our community of engineers and creating opportunities for progression, promoting continuous learning and skills development.
- Proven experience leading Terraform-based infrastructure provisioning at scale.
- Expertise in at least one major public cloud (GCP preferred; AWS/Azure acceptable).
- Strong understanding of DevSecOps, container orchestration (Kubernetes), and GitOps principles.
- Experience with tools such as GitHub Actions, Jenkins, ArgoCD, Vault, and Terraform Enterprise/Cloud.
- Strong knowledge of cloud networking, IAM, workload identity federation, and encryption standards.

How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
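The responsibilities above emphasise policy-as-code with OPA, Sentinel, or Conftest. Those tools use Rego or Sentinel policy languages; as a language-neutral illustration of the same idea, here is a Python sketch that scans terraform show -json plan output and flags a sample violation. The specific rule and attribute are illustrative assumptions, not a bank standard.

```python
# Conftest-style policy check over a Terraform plan JSON (illustrative rule).
# Produce the input with: terraform plan -out=tfplan && terraform show -json tfplan > plan.json
import json
import sys

def violations(plan: dict) -> list[str]:
    found = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        # Example rule: GCS buckets must enforce uniform bucket-level access.
        if rc.get("type") == "google_storage_bucket" and not after.get("uniform_bucket_level_access"):
            found.append(f"{rc['address']}: uniform_bucket_level_access must be true")
    return found

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        plan = json.load(f)
    found = violations(plan)
    for v in found:
        print("DENY:", v)
    sys.exit(1 if found else 0)
```

Wired into a CI pipeline, a non-zero exit blocks the apply stage, which is the enforcement pattern OPA/Sentinel formalise.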
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Architect at FICO, you will play a crucial role in architecting, designing, implementing, and managing cloud infrastructure solutions using tools like ArgoCD, Crossplane, GitHub, Terraform, and Kubernetes. You will lead initiatives to enhance our cloud and GitOps best practices, mentor junior team members, collaborate with cross-functional teams, and ensure that our cloud environments are scalable, secure, and cost-effective.

Your responsibilities will include designing, deploying, and managing scalable cloud solutions on public cloud platforms such as AWS, Azure, or Google Cloud; developing deployment strategies; using Infrastructure as Code tools like Terraform and Crossplane; collaborating with various teams; providing mentorship; evaluating and recommending new tools and technologies; implementing security best practices; and ensuring compliance with industry standards. A sketch of a typical GitOps health check follows this posting.

To be successful in this role, you should have proven experience as a senior-level engineer/architect in a cloud-native environment; extensive experience with ArgoCD and Crossplane; proficiency in GitHub workflows; experience with Infrastructure as Code tools; leadership experience; proficiency in scripting languages and automation tools; expert knowledge of containerization and orchestration tools like Docker and Kubernetes; knowledge of network concepts and their implementation on AWS; experience with observability, monitoring and logging tools; knowledge of security principles and frameworks; and familiarity with security-related certifications. Your unique strengths, leadership skills, and ability to drive and motivate a team will be essential in fulfilling the responsibilities of this role.

At FICO, you will be part of an inclusive culture that values diversity, collaboration, and innovation. You will have the opportunity to make an impact, develop professionally, and participate in valuable learning experiences. FICO offers competitive compensation, benefits, and rewards programs to encourage you to bring your best every day. You will work in an engaging, people-first environment that promotes work/life balance, employee resource groups, and social events to foster interaction and camaraderie.

Join FICO and be part of a leading organization in Big Data analytics, making a real difference in the business world by helping businesses use data to improve their decision-making. FICO's solutions are used by top lenders and financial institutions worldwide, and demand for them is growing rapidly. As part of the FICO team, you will have the support and freedom to develop your skills, grow your career, and contribute to changing the way businesses operate globally. Explore how you can fulfill your potential by joining FICO at www.fico.com/Careers.
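As an illustration of the GitOps practices this role centres on, here is a minimal Python sketch that polls the ArgoCD REST API for applications that are out of sync or unhealthy. The endpoint and token environment variable names are assumptions for the example.

```python
"""Illustrative GitOps health check against the ArgoCD REST API.

ARGOCD_URL / ARGOCD_TOKEN are hypothetical environment variables; any
deployed ArgoCD exposes /api/v1/applications behind a bearer token.
"""
import os

import requests

ARGOCD_URL = os.environ["ARGOCD_URL"]      # e.g. https://argocd.example.com
ARGOCD_TOKEN = os.environ["ARGOCD_TOKEN"]  # ArgoCD API bearer token


def unhealthy_apps() -> list[tuple[str, str, str]]:
    """List applications whose sync or health status is not nominal."""
    resp = requests.get(
        f"{ARGOCD_URL}/api/v1/applications",
        headers={"Authorization": f"Bearer {ARGOCD_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    bad = []
    for app in resp.json().get("items", []):
        sync = app["status"]["sync"]["status"]      # Synced / OutOfSync
        health = app["status"]["health"]["status"]  # Healthy / Degraded / ...
        if sync != "Synced" or health != "Healthy":
            bad.append((app["metadata"]["name"], sync, health))
    return bad


if __name__ == "__main__":
    for name, sync, health in unhealthy_apps():
        print(f"{name}: sync={sync} health={health}")
```

A check like this typically runs on a schedule or as a post-deployment gate, alerting when the cluster has drifted from what Git declares.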
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Apply Before: 05-08-2025
Job Title: DevOps Engineer
Job Location: Pune
Employment Type: Full-Time
Experience Required: 5 Years
Salary: 20 LPA to 25 LPA Max
Notice Period: Immediate Joiners

We are looking for a DevOps Engineer with strong Kubernetes and cloud infrastructure experience to join our Pune-based team. The ideal candidate will play a key role in managing CI/CD pipelines, infrastructure automation, cloud resource optimization, and ensuring the high availability and reliability of production systems.

Required Skills
Certified Kubernetes Administrator (CKA) — mandatory
Very good knowledge and operational experience with containerization and cluster management: infrastructure setup and production environment maintenance (Kubernetes, vCluster, Docker, Helm)
Very good knowledge and experience with high-availability requirements (RTO and RPO) on cloud (AWS preferred, with VPC, Subnets, ELB, Secrets Manager, EBS snapshots, EC2 security groups, ECS, CloudWatch and SQS) — see the snapshot sketch after this posting
Very good knowledge and experience administering Linux clients and servers
Experience working with data storage, backup and disaster recovery using DynamoDB, RDS PostgreSQL and S3
Good experience and confidence with code versioning (GitLab preferred)
Experience in automation with programming and IaC scripts (Python / Terraform)
Experience with SSO setup and user management with Keycloak and/or Okta SSO
Experience in service mesh monitoring setup with Istio, Kiali, Grafana, Loki and Prometheus
Experience with GitOps setup and management for ArgoCD / FluxCD
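For the RTO/RPO item above, here is a minimal boto3 sketch of the kind of scheduled EBS snapshot automation such a role involves. The `backup=daily` tag convention is hypothetical; it assumes AWS credentials are already configured.

```python
"""Illustrative RPO helper: snapshot every EBS volume carrying a
hypothetical `backup=daily` tag. Assumes boto3 and configured AWS creds.
"""
import boto3

ec2 = boto3.client("ec2")


def snapshot_tagged_volumes() -> list[str]:
    """Create a snapshot for each volume tagged backup=daily."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:backup", "Values": ["daily"]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"scheduled snapshot of {vol['VolumeId']}",
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids


if __name__ == "__main__":
    print("Created snapshots:", snapshot_tagged_volumes())
```

In practice this would run from an EventBridge-scheduled Lambda or a cron job, with snapshot retention handled by a lifecycle policy rather than by the script itself.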
Posted 1 week ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position: DevOps with GitHub Actions

Job Description
DevOps & Agile Practices: Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python.
AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability (a triage sketch follows this posting).
Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous-improvement practices.
Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.

Required Skills & Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
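As a sketch of the incident-management duties around GitLab pipelines, here is a small Python helper that lists recent failed pipelines through the GitLab REST API. The host, project ID, and token environment variables are assumptions for the example.

```python
"""Illustrative incident-triage helper: list recent failed GitLab
pipelines for a project via the v4 REST API. Environment variable names
are hypothetical.
"""
import os

import requests

GITLAB_URL = os.environ.get("GITLAB_URL", "https://gitlab.com")
PROJECT_ID = os.environ["GITLAB_PROJECT_ID"]  # numeric or URL-encoded path
TOKEN = os.environ["GITLAB_TOKEN"]            # personal/project access token


def failed_pipelines(per_page: int = 10) -> list[dict]:
    """Return the most recent failed pipelines for the project."""
    resp = requests.get(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/pipelines",
        params={"status": "failed", "per_page": per_page},
        headers={"PRIVATE-TOKEN": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    for p in failed_pipelines():
        print(f"pipeline {p['id']} on {p['ref']}: {p['web_url']}")
```

Extending this with per-job log retrieval is a common next step when automating root-cause analysis.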
Posted 1 week ago
2.0 years
0 Lacs
Delhi, India
On-site
About AlphaSense
The world’s most sophisticated companies rely on AlphaSense to remove uncertainty from decision-making. With market intelligence and search built on proven AI, AlphaSense delivers insights that matter from content you can trust. Our universe of public and private content includes equity research, company filings, event transcripts, expert calls, news, trade journals, and clients’ own research content. The acquisition of Tegus by AlphaSense in 2024 advances our shared mission to empower professionals to make smarter decisions through AI-driven market intelligence. Together, AlphaSense and Tegus will accelerate growth, innovation, and content expansion, with complementary product and content capabilities that enable users to unearth even more comprehensive insights from thousands of content sets. Our platform is trusted by over 6,000 enterprise customers, including a majority of the S&P 500. Founded in 2011, AlphaSense is headquartered in New York City with more than 2,000 employees across the globe and offices in the U.S., U.K., Finland, India, Singapore, Canada, and Ireland. Come join us!

About The Role
You will join our team of world-class experts who are developing the AlphaSense platform. The team is right at the very core of what we do and is responsible for implementing cutting-edge technology for scalable, distributed processing of millions of documents. We are seeking a highly skilled Software Engineer II to join our dynamic team responsible for building and maintaining data ingestion systems at scale. As a key member of our team, you will play a crucial role in designing, implementing, and optimizing robust solutions for ingesting millions of documents per month, including multimedia content such as audio and video from the public web. You’ll play a key role in integrating cutting-edge AI models, enabling intelligent suggestions and content synchronization. You are a good fit if you’re a proactive problem-solver with a “go-getter” attitude, startup experience, and a readiness to learn whatever comes your way!

Responsibilities
Design, develop, and maintain high-performance, scalable applications using Python (a minimal ingestion sketch follows this posting).
Solve complex technical challenges with innovative solutions that enhance product features and operational efficiencies.
Collaborate across teams to integrate applications, optimize system performance, and streamline data flows.
Take full ownership of projects from inception to deployment, delivering high-quality solutions that improve user experience.
Lead or support data ingestion processes, ensuring seamless data flow and management.
Continuously learn and adapt to new tools, frameworks, and technologies as they arise, embracing a growth mindset.
Mentor and guide junior developers, fostering a collaborative, innovative culture.

Requirements
2+ years of professional Python development experience, with a strong understanding of Python frameworks (Django, Flask, FastAPI, etc.).
Proven success working in a startup environment, demonstrating adaptability and flexibility in fast-changing conditions.
Proactive problem-solver with a keen eye for tackling challenging technical issues.
A willingness to learn and adapt to new technologies and challenges as they arise.
Strong team player with a go-getter attitude, comfortable working both independently and within cross-functional teams.

Nice To Have
Experience with media processing and live-streaming techniques is a major plus.
Familiarity with Crossplane and/or ArgoCD for GitOps-based infrastructure management.
Experience working with Docker and Kubernetes.

AlphaSense is an equal-opportunity employer. We are committed to a work environment that supports, inspires, and respects all individuals. All employees share in the responsibility for fulfilling AlphaSense’s commitment to equal employment opportunity. AlphaSense does not discriminate against any employee or applicant on the basis of race, color, sex (including pregnancy), national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any other non-merit factor. This policy applies to every aspect of employment at AlphaSense, including recruitment, hiring, training, advancement, and termination. In addition, it is the policy of AlphaSense to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.

Recruiting Scams and Fraud
We at AlphaSense have been made aware of fraudulent job postings and individuals impersonating AlphaSense recruiters. These scams may involve fake job offers, requests for sensitive personal information, or demands for payment. Please note: AlphaSense never asks candidates to pay for job applications, equipment, or training. All official communications will come from an @alpha-sense.com email address. If you’re unsure about a job posting or recruiter, verify it on our Careers page. If you believe you’ve been targeted by a scam or have any doubts regarding the authenticity of any job listing purportedly from or on behalf of AlphaSense, please contact us. Your security and trust matter to us.
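To illustrate the kind of Python ingestion service this role describes, here is a minimal FastAPI sketch. The endpoint shape and the queue stub are hypothetical, not AlphaSense’s actual API.

```python
"""Illustrative document-ingestion endpoint: accept a document, validate
it, and hand it to a (stubbed) processing pipeline. Run with
`uvicorn ingest:app`. All names are invented for the example.
"""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Document(BaseModel):
    source: str   # e.g. "equity-research", "transcript"
    url: str
    content: str


async def enqueue_for_processing(doc: Document) -> str:
    # Stub: a real system would publish to a queue (Kafka, SQS, ...).
    return f"{doc.source}:{abs(hash(doc.url))}"


@app.post("/ingest")
async def ingest(doc: Document):
    """Validate an incoming document and queue it for processing."""
    if not doc.content.strip():
        raise HTTPException(status_code=422, detail="empty document")
    job_id = await enqueue_for_processing(doc)
    return {"job_id": job_id, "status": "queued"}
```

At the scale the posting describes (millions of documents per month), the interesting work lives behind the stub: batching, deduplication, and back-pressure between the API layer and the pipeline.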
Posted 1 week ago
3.0 years
0 Lacs
Delhi, Delhi
On-site
#hiring We are hiring for the profile of Kubernetes Developer / Administrator / DevOps Engineer.

Job Description: Kubernetes Developer / Administrator / DevOps Engineer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.Tech / B.E. / MCA / MSc / MS
Salary: Up to 70k (final offer depends on the interview and experience)
Notice Period: Immediate joiners up to 20 days
Candidates from Delhi/NCR will be preferred.

Job Description: We are looking for a skilled Kubernetes Developer, Administrator, and DevOps Engineer who can effectively manage and deploy our development images into Kubernetes environments. The ideal candidate should be highly proficient in Kubernetes, CI/CD pipelines, and containerization.

Qualifications: Minimum 3 years of experience working with Kubernetes in production environments.

Key Responsibilities:
Design, deploy, and manage Kubernetes clusters for development, testing, and production environments.
Build and maintain CI/CD pipelines for automated deployment of applications on Kubernetes.
Manage container orchestration using Kubernetes, including scaling, upgrades, and troubleshooting (a small scaling sketch follows this posting).
Work closely with developers to containerize applications and ensure smooth deployment to Kubernetes.
Monitor and optimize the performance, security, and reliability of Kubernetes clusters.
Implement and manage Helm charts, Docker images, and Kubernetes manifests.

Mandatory Skills:
Kubernetes Expertise: In-depth knowledge of Kubernetes, including deploying, managing, and troubleshooting clusters and workloads.
CI/CD Tools: Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, or similar.
Containerization: Strong experience with Docker for creating, managing, and deploying containerized applications.
Infrastructure as Code (IaC): Familiarity with Terraform, Ansible, or similar tools for managing infrastructure.
Networking and Security: Understanding of Kubernetes networking, service meshes, and security best practices.
Scripting Skills: Proficiency in scripting languages like Bash, Python, or similar for automation tasks.

Nice to Have:
Experience with cloud platforms like AWS, GCP, or Azure.
Knowledge of monitoring and logging tools such as Prometheus, Grafana, and the ELK stack.
Familiarity with GitOps practices using Argo CD or Flux.

Job Types: Full-time, Contractual / Temporary
Pay: From ₹500,000.00 per year
Work Location: In person
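As a small example of routine cluster management with the official Kubernetes Python client, here is a hedged sketch that scales a deployment. The deployment and namespace names are hypothetical; it assumes a local kubeconfig.

```python
"""Illustrative scaling task using the official Kubernetes Python client
(`pip install kubernetes`). Names are invented for the example.
"""
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a deployment's replica count."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    scale_deployment("web-frontend", "staging", replicas=3)
```

The same patch-based pattern extends to rolling out image updates or tuning resource requests from automation rather than by hand.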
Posted 1 week ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About AlphaSense
The world’s most sophisticated companies rely on AlphaSense to remove uncertainty from decision-making. With market intelligence and search built on proven AI, AlphaSense delivers insights that matter from content you can trust. Our universe of public and private content includes equity research, company filings, event transcripts, expert calls, news, trade journals, and clients’ own research content. The acquisition of Tegus by AlphaSense in 2024 advances our shared mission to empower professionals to make smarter decisions through AI-driven market intelligence. Together, AlphaSense and Tegus will accelerate growth, innovation, and content expansion, with complementary product and content capabilities that enable users to unearth even more comprehensive insights from thousands of content sets. Our platform is trusted by over 6,000 enterprise customers, including a majority of the S&P 500. Founded in 2011, AlphaSense is headquartered in New York City with more than 2,000 employees across the globe and offices in the U.S., U.K., Finland, India, Singapore, Canada, and Ireland. Come join us!

About The Role
You will join our team of world-class experts who are developing the AlphaSense platform. The team is right at the very core of what we do and is responsible for implementing cutting-edge technology for scalable, distributed processing of millions of documents. We are seeking a highly skilled Software Engineer II to join our dynamic team responsible for building and maintaining data ingestion systems at scale. As a key member of our team, you will play a crucial role in designing, implementing, and optimizing robust solutions for ingesting millions of documents per month, including multimedia content such as audio and video from the public web. You’ll play a key role in integrating cutting-edge AI models, enabling intelligent suggestions and content synchronization. You are a good fit if you’re a proactive problem-solver with a “go-getter” attitude, startup experience, and a readiness to learn whatever comes your way!

Responsibilities
Design, develop, and maintain high-performance, scalable applications using Python.
Solve complex technical challenges with innovative solutions that enhance product features and operational efficiencies.
Collaborate across teams to integrate applications, optimize system performance, and streamline data flows.
Take full ownership of projects from inception to deployment, delivering high-quality solutions that improve user experience.
Lead or support data ingestion processes, ensuring seamless data flow and management.
Continuously learn and adapt to new tools, frameworks, and technologies as they arise, embracing a growth mindset.
Mentor and guide junior developers, fostering a collaborative, innovative culture.

Requirements
2+ years of professional Python development experience, with a strong understanding of Python frameworks (Django, Flask, FastAPI, etc.).
Proven success working in a startup environment, demonstrating adaptability and flexibility in fast-changing conditions.
Proactive problem-solver with a keen eye for tackling challenging technical issues.
A willingness to learn and adapt to new technologies and challenges as they arise.
Strong team player with a go-getter attitude, comfortable working both independently and within cross-functional teams.

Nice To Have
Experience with media processing and live-streaming techniques is a major plus.
Familiarity with Crossplane and/or ArgoCD for GitOps-based infrastructure management.
Experience working with Docker and Kubernetes.

AlphaSense is an equal-opportunity employer. We are committed to a work environment that supports, inspires, and respects all individuals. All employees share in the responsibility for fulfilling AlphaSense’s commitment to equal employment opportunity. AlphaSense does not discriminate against any employee or applicant on the basis of race, color, sex (including pregnancy), national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any other non-merit factor. This policy applies to every aspect of employment at AlphaSense, including recruitment, hiring, training, advancement, and termination. In addition, it is the policy of AlphaSense to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.

Recruiting Scams and Fraud
We at AlphaSense have been made aware of fraudulent job postings and individuals impersonating AlphaSense recruiters. These scams may involve fake job offers, requests for sensitive personal information, or demands for payment. Please note: AlphaSense never asks candidates to pay for job applications, equipment, or training. All official communications will come from an @alpha-sense.com email address. If you’re unsure about a job posting or recruiter, verify it on our Careers page. If you believe you’ve been targeted by a scam or have any doubts regarding the authenticity of any job listing purportedly from or on behalf of AlphaSense, please contact us. Your security and trust matter to us.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job location: Gurugram (Hybrid)

About the role: Sun King is looking for a self-driven infrastructure engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.

What you would be expected to do:
Work with engineering, automation, and data teams on various infrastructure requirements.
Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
Manage AWS services for multiple teams.
Manage custom data store deployments such as sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
Deploy and manage Kubernetes resources.
Deploy and manage custom metrics exporters, trace data, and custom application metrics, and design dashboards that query metrics from multiple resources, as an end-to-end observability stack solution (an exporter sketch follows this posting).
Set up incident response services and design effective processes.
Deploy and manage critical platform services like OPA and Keycloak for IAM.
Advocate best practices for high availability and scalability when designing AWS infrastructure and observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.

You might be a strong candidate if you have/are:
Hands-on experience with Docker or any other container runtime and Linux, with the ability to perform basic administrative tasks.
Experience working with web servers (nginx, Apache) and cloud providers (preferably AWS).
Hands-on scripting and automation experience (Python, Bash) and experience debugging and troubleshooting Linux environments and cloud-native deployments.
Experience building CI/CD pipelines, with familiarity with monitoring and alerting systems (Grafana, Prometheus, and exporters).
Knowledge of web architecture, distributed systems, and single points of failure.
Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.

Good to have:
Experience with backend development, setting up databases, and performance tuning using parameter groups.
Working experience in Kubernetes cluster administration and Kubernetes deployments.
Experience working alongside SecOps engineers.
Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
Setup and usage of OpenTelemetry, central logging, and monitoring systems.
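To make the “custom metrics exporters” item concrete, here is a minimal prometheus_client exporter sketch. The metric name and the queue-depth source are invented for illustration.

```python
"""Illustrative Prometheus exporter (`pip install prometheus_client`).

Exposes a single gauge at :8000/metrics; the metric and its data source
are hypothetical.
"""
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("ingest_queue_depth", "Documents waiting to be processed")


def read_queue_depth() -> int:
    # Stub: a real exporter would query the actual queue or datastore.
    return random.randint(0, 100)


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        queue_depth.set(read_queue_depth())
        time.sleep(15)
```

Paired with a Prometheus scrape config and a Grafana panel, an exporter like this is the smallest building block of the end-to-end observability stack the posting describes.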
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities
We are currently modernizing how we operate Elastic, migrating the hosting platform from Windows servers to Kubernetes, and in the process we are rebuilding all our automation around it. We are looking for a senior and technically very strong engineer to help us with that. You will join a high-performing team distributed across our three development centres — Copenhagen, Gurugram and Amsterdam. Along with the team, you will be responsible for:
Providing Elasticsearch as a service to the bank, for both observability and search purposes, with a high degree of self-service and automation.
Ensuring stable data collection pipelines end to end.
Maintaining and improving our new Kubernetes-based Elastic platform, utilizing Elastic ECK, Helm, and custom Kubernetes CRDs and operators.
Constantly improving performance, stability and availability.
Building a great user experience for developers and operators.
Using your expertise to help users solve their data and monitoring problems.

Your Profile
You are a team player, positive in nature and passionate about software engineering. You have an open mind and like to learn and grow yourself as well as helping others do the same. We expect you to be able to quickly take a leading technical role in the team, setting a high quality bar and mentoring other team members. We are looking for a minimum of 6+ years’ experience in a development role.

Strong experience with the Elastic Stack. This includes:
Operating and performance-tuning clusters and data collection, managing data, scaling, and patching.
Using the APIs of Elasticsearch to automate tasks, access data and build integrations (a small automation sketch follows this posting).
Exploring and visualizing data in Kibana.

Strong experience with Kubernetes:
Good understanding of Kubernetes concepts.
Deploying and operating applications in Kubernetes.
Using Helm charts for templating.

Experience with programming languages, tools and practices:
Developing applications, APIs or system integrations in languages such as C#, Python or Go. We mainly use C#.
Experience with automation using scripting languages such as PowerShell.
Experience with DevOps practices and tools, such as Azure DevOps.
Experience with Git.

Additional, desirable skills:
Experience with one or more of the major cloud providers. We use Azure.
Experience with GitOps using Flux or ArgoCD.
Experience with Windows-based environments.
Experience with widely used infrastructure technology such as DNS, PKI and certificates, and authentication with OAuth and OIDC.

Personal attributes:
BA/BS/BE degree or equivalent experience.
Strong communication and presentation skills.
Highly motivated and driven team player; comfortable working independently as required.
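As an example of automating tasks through the Elasticsearch APIs, here is a short Python sketch using the official elasticsearch client to report cluster health and per-index status. The endpoint and API key environment variables are assumptions (the team’s own automation is mainly C#; Python is used here purely for illustration).

```python
"""Illustrative Elastic health report (`pip install elasticsearch`).

ES_URL / ES_API_KEY are hypothetical environment variables.
"""
import os

from elasticsearch import Elasticsearch

es = Elasticsearch(
    os.environ.get("ES_URL", "https://localhost:9200"),
    api_key=os.environ["ES_API_KEY"],
)


def report() -> None:
    """Print overall cluster status and a one-line summary per index."""
    health = es.cluster.health()
    print(f"cluster status={health['status']} "
          f"unassigned_shards={health['unassigned_shards']}")
    # The cat API with format=json is convenient for automation.
    for idx in es.cat.indices(format="json"):
        print(idx["index"], idx["health"], idx["docs.count"])


if __name__ == "__main__":
    report()
```

The same client pattern extends naturally to ILM policy management, snapshot scheduling, and the self-service tooling the role calls for.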
Posted 1 week ago
0.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India | Job ID 761869
Join our Team

About this opportunity:
We are seeking a Senior OpenShift Engineer to lead the migration, modernization, and management of enterprise container platforms using Red Hat OpenShift. This role involves migrating legacy applications to OpenShift, optimizing workloads, and ensuring high availability across hybrid and multi-cloud environments. The ideal candidate will be skilled in container orchestration, DevOps automation, and cloud-native transformations.

What you will do:
Lead migration projects to move workloads from legacy platforms (on-prem running on KVM/VMware/OpenStack, on-prem Kubernetes, OpenShift 3.x) to OpenShift 4.x (a pre-migration check sketch follows this posting).
Assess and optimize monolithic applications for containerization and microservices architecture.
Develop strategies for stateful and stateless application migrations with minimal downtime.
Work with developers and architects to refactor or replatform applications for cloud-native environments.
Implement migration automation using Ansible, Helm, or OpenShift GitOps (ArgoCD/FluxCD).
Design, deploy, and manage scalable, highly available OpenShift clusters across on-prem and cloud.
Implement multi-cluster, hybrid-cloud, and multi-cloud OpenShift architectures.
Define resource quotas, auto-scaling policies, and workload optimizations for performance tuning.
Oversee OpenShift upgrades, patching, and lifecycle management.

The skills you bring:
Deep hands-on experience with Red Hat OpenShift (OCP 4.x+), Kubernetes, and Docker.
Strong knowledge of application migration strategies (lift & shift, replatforming, refactoring).
Proficiency in cloud-native application development and microservices.
Expertise in cloud platforms (AWS, Azure, GCP) with OpenShift deployments.
Advanced scripting and automation using Bash, Python, Ansible, or Terraform.
Experience with GitOps methodologies (ArgoCD, FluxCD) and Infrastructure as Code (IaC).

Certifications (preferred but not mandatory):
Red Hat Certified Specialist in OpenShift Administration (EX280)
Certified Kubernetes Administrator (CKA)
AWS/Azure/GCP Kubernetes/OpenShift-related certifications

Strong problem-solving skills with a strategic mindset for complex migrations.
Experience in leading technical projects and mentoring engineers.
Excellent communication and documentation skills.

Why join Ericsson?
At Ericsson, you’ll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what’s possible, and to build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
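As a sketch of the pre-migration validation such projects typically start with, here is a small Python wrapper around the `oc` CLI that reports the cluster version and any degraded cluster operators. It assumes `oc` is installed and logged in to the target cluster.

```python
"""Illustrative OpenShift pre-migration check via the `oc` CLI.

Shells out to `oc get ... -o json`; assumes an authenticated session.
"""
import json
import subprocess


def oc_json(*args: str) -> dict:
    """Run `oc <args> -o json` and parse the output."""
    out = subprocess.run(
        ["oc", *args, "-o", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


def degraded_operators() -> list[str]:
    """Names of cluster operators reporting Degraded=True."""
    bad = []
    for co in oc_json("get", "clusteroperators")["items"]:
        for cond in co["status"]["conditions"]:
            if cond["type"] == "Degraded" and cond["status"] == "True":
                bad.append(co["metadata"]["name"])
    return bad


if __name__ == "__main__":
    version = oc_json("get", "clusterversion", "version")
    print("cluster version:", version["status"]["desired"]["version"])
    print("degraded operators:", degraded_operators() or "none")
```

Running a check like this before upgrades or workload moves gives a clean baseline, so any post-migration degradation can be attributed confidently.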
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are seeking a Senior Azure DevOps & Infrastructure Engineer with deep expertise in CI/CD pipeline management, infrastructure automation (IaC), API lifecycle management, and Azure cloud governance. The ideal candidate will drive DevOps best practices across development, QA, and operations teams, ensuring secure, scalable, and high-performing cloud application environments. This role requires hands-on experience in automating deployments, mentoring junior engineers, and optimizing cloud costs and performance in Azure.

Must-Have Skills (Mandatory)
5–7 years of experience in DevOps or cloud engineering roles
Strong proficiency with Azure DevOps for CI/CD orchestration
Hands-on expertise in Terraform and ARM templates for IaC
Experience with containerization (Docker) and orchestration (AKS)
API management and security using Azure API Management, OAuth2, JWT
Scripting skills in PowerShell, Bash, Python, or Go
Monitoring and observability tools (Azure Monitor, Log Analytics, Grafana, ELK)
DevSecOps best practices (automated scans, secrets management, key rotation — see the sketch after this posting)
Git-based workflows and version control using GitHub/GitLab/Bitbucket
Experience across Windows and Linux environments

Good-to-Have Skills (Optional)
Working knowledge of other cloud platforms (AWS, GCP)
Familiarity with configuration management tools like Ansible, Chef, or Puppet
Experience with GitOps and branching strategies
Exposure to microservices architectures and secure web APIs
Knowledge of compliance standards and cost governance tools in Azure

Qualifications & Experience
Bachelor’s degree in Computer Science, Information Technology, or a related field
Preferred certifications:
Microsoft Certified: DevOps Engineer Expert
Azure Solutions Architect Expert
Terraform Associate
Certified Kubernetes Administrator (CKA)
Strong leadership qualities with a passion for automation and continuous improvement
Excellent collaboration and communication skills, especially across cross-functional teams
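To illustrate the key-rotation item above, here is a minimal sketch using azure-identity and azure-keyvault-secrets to write a new secret version into Azure Key Vault. The vault URL environment variable and secret name are hypothetical.

```python
"""Illustrative key-rotation helper for Azure Key Vault
(`pip install azure-identity azure-keyvault-secrets`).

KEY_VAULT_URL and the secret name are assumptions for the example.
"""
import os
import secrets

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = os.environ["KEY_VAULT_URL"]  # e.g. https://myvault.vault.azure.net


def rotate_secret(name: str) -> str:
    """Generate a new credential value and store it as a new version."""
    client = SecretClient(vault_url=VAULT_URL,
                          credential=DefaultAzureCredential())
    new_value = secrets.token_urlsafe(32)  # freshly generated credential
    client.set_secret(name, new_value)     # Key Vault versions old values
    return client.get_secret(name).properties.version


if __name__ == "__main__":
    print("new secret version:", rotate_secret("api-client-secret"))
```

In a DevSecOps pipeline this would run on a schedule, with the consuming service re-reading the secret (or being restarted) after rotation so nothing ever references a stale credential.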
Posted 1 week ago