5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking a skilled and experienced DevOps Lead to join our team. The ideal candidate will have a solid background in building and deploying pipelines with Jenkins and GitHub Actions, familiarity with messaging systems such as ArtemisMQ, and extensive expertise in 3-tier and microservices architecture, including Spring Cloud Services (SCS). Proficiency in Azure cloud services and deployment models is essential. Your responsibilities will include designing, implementing, and maintaining CI/CD pipelines using Jenkins and GitHub Actions for Java applications; ensuring secure and efficient build and deployment processes; collaborating with development and operations teams to integrate security practices into the DevOps workflow; and managing and optimizing messaging systems, specifically ArtemisMQ. You will also architect and implement solutions based on 3-tier and microservices architecture, use Azure cloud services for application deployment and management, monitor and troubleshoot system performance and security issues, and stay current with industry trends in DevSecOps and cloud technologies. Additionally, mentoring and guiding team members on DevSecOps practices and tools will be part of your role. As a DevOps Lead, you will be expected to take ownership of parts of proposal documents, provide input to solution design based on your expertise, plan configuration activities, conduct solution and product demonstrations, and actively lead small projects. You will also contribute to unit-level and organizational initiatives aimed at delivering high-quality, value-adding solutions to customers.
In terms of technical requirements, you should have proven experience as a DevSecOps Lead or in a similar role, strong proficiency in Jenkins and GitHub Actions for building and deploying Java applications, the ability to execute CI/CD pipeline migrations from Jenkins to GitHub Actions for Azure deployments, familiarity with messaging systems such as ArtemisMQ, and extensive knowledge of 3-tier and microservices architecture, including Spring Cloud Services (SCS). Familiarity with infrastructure-as-code tools like Terraform or Ansible, knowledge of containerization and orchestration tools like Docker and Kubernetes, proficiency in Azure cloud services and AI services deployment, a strong understanding of security best practices in DevOps, and excellent problem-solving skills are also prerequisites. Effective communication, leadership skills, the ability to work in a fast-paced collaborative environment, and knowledge of tools like GitOps, Podman, ArgoCD, Helm, Nexus, GitHub Container Registry, Grafana, and Prometheus are desired as well. Furthermore, you should be able to develop value-creating strategies, have good knowledge of software configuration management systems, stay updated on the latest technologies and industry trends, exhibit logical thinking and problem-solving skills, understand financial processes for various project types and pricing models, identify improvement areas in current processes and suggest technological solutions, and have client-interfacing skills. Project and team management capabilities, along with domain knowledge in one or two industries, are also beneficial. Preferred skills include expertise in Azure DevOps within the Cloud Platform technology domain.
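The Jenkins-to-GitHub Actions migration work described above typically lands in a workflow file. A minimal sketch of what one might look like for a Java application deployed to Azure; the action versions, app name, and secret name are illustrative, not taken from the posting:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Build with Maven
        run: mvn -B package
      - name: Log in to Azure          # service-principal JSON stored as a repo secret
        uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Deploy to Azure Web App  # app name is a placeholder
        uses: azure/webapps-deploy@v3
        with:
          app-name: my-java-app
          package: target/*.jar
```

In a real migration the Jenkinsfile's stages map roughly one-to-one onto jobs and steps like these, with credentials moved from the Jenkins credential store into repository secrets.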
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
Pune, Maharashtra
On-site
The Data Solutions Technology team at Citigroup is dedicated to providing competitive advantage by delivering high-quality, innovative, and cost-effective reference data technology solutions to meet business, client, regulatory, and stakeholder needs. Olympus is a cutting-edge Data Fabric designed to streamline data sources across ICG and facilitate various analytics, reporting, and data science solutions that are Accurate, Reliable, Relevant, and Scalable. As the Apps Support Group Manager, you will lead a team of professionals in managing complex and critical areas within the technology function. Your role will involve understanding how application support integrates to achieve overall objectives, managing vendor relationships, improving service levels for end users, guiding development teams on stability improvements, implementing frameworks for capacity management, and driving cost reductions through performance tuning and user training. Key Responsibilities: - Demonstrate a deep understanding of application support within the technology function - Manage vendor relationships and offshore managed services - Enhance operational efficiencies and incident/problem management practices - Provide guidance on application stability and supportability improvements - Define capacity management frameworks and on-boarding guidelines - Coach team members to maximize their potential and work effectively in a team environment - Drive cost reductions through Root Cause Analysis, Knowledge Management, and Performance Tuning - Participate in business review meetings and ensure adherence to support process standards - Manage a team, including people, budget, planning, and performance evaluation - Perform other duties and functions as assigned Qualifications: - 15+ years in Application Production Support with 5+ years in a strategic role - Deep expertise in Big Data platforms such as Hadoop, Spark, Kafka, and Hive - Proven track record in driving stability, resiliency, and automation
initiatives - Post-Graduation in a relevant field preferred - Experience in senior stakeholder management and project management - Capacity Planning/Forecasting exposure a plus - Strong communication, organization, and planning skills Education: - Bachelor's/University degree; Master's degree preferred If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
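Capacity planning and forecasting of the kind this role touches often starts from a simple linear trend over recent utilisation samples. A toy sketch with made-up numbers (nothing here is from the posting):

```python
def project_capacity(samples, periods_ahead):
    """Project future utilisation with a simple least-squares linear trend.

    samples: utilisation percentages observed at equal intervals.
    Returns the projected value `periods_ahead` intervals past the
    last sample, clamped to [0, 100].
    """
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den if den else 0.0
    projected = samples[-1] + slope * periods_ahead
    return max(0.0, min(100.0, projected))

# Disk utilisation growing ~2 points per week: where are we in 6 weeks?
history = [60, 62, 64, 66, 68]
print(project_capacity(history, 6))  # 80.0
```

Real capacity frameworks layer seasonality and headroom policies on top, but a trend projection like this is often the first signal that a platform needs on-boarding limits or more hardware.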
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Azure DevOps Engineer, you will be a valuable member of our technology team, bringing your expertise in system administration, DevOps methodologies, and IT infrastructure management to ensure automation, scalability, and operational excellence are at the forefront of our operations. Your primary responsibilities will include managing and maintaining our enterprise IT infrastructure, which encompasses servers, networks, and cloud environments. You will design and implement DevOps pipelines for continuous integration and deployment (CI/CD), automate system tasks and workflows using scripting and configuration management tools, and monitor system performance to troubleshoot issues and ensure high availability and reliability. Collaboration with development, QA, and operations teams will be essential to streamline deployment and release processes, while also maintaining system security, compliance, and backup strategies. Documenting system configurations, operational procedures, and incident resolutions will also be part of your duties. To excel in this role, you should possess a Bachelor's degree in Information Technology, Computer Science, or a related field, along with 3 to 7 years of experience in DevOps, IT operations, or system administration. Your proficiency should include Linux/Windows server administration, CI/CD tools such as Jenkins, GitLab CI, and Azure DevOps, Infrastructure as Code tools like Terraform and Ansible, familiarity with cloud platforms like AWS, Azure, and GCP, and experience with monitoring tools like Prometheus, Grafana, and Nagios. A strong understanding of networking, security, and virtualization technologies is crucial, along with excellent problem-solving and communication skills. Preferred qualifications for this role include certifications in AWS, Azure, or DevOps tools, experience with containerization using Docker and Kubernetes, and familiarity with ITIL practices and incident management systems.
In your role as a Software Engineer, you will apply scientific methods to analyze and solve software engineering problems, developing and maintaining software solutions and applications. Your responsibilities will involve the development and application of software engineering practice and knowledge, requiring original thought and judgment, along with the ability to supervise other software engineers. Building your skills and expertise within the software engineering discipline is essential, as is collaborating as a team player with other software engineers and stakeholders to meet project goals and standards.
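Infrastructure as Code with Terraform, listed among the proficiencies above, describes resources declaratively rather than through manual changes. An illustrative Azure fragment; the provider version, resource names, and location are placeholders, not from the posting:

```hcl
# Illustrative only: names, location, and sizes are placeholders.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "ops" {
  name     = "rg-devops-demo"
  location = "Central India"
}

resource "azurerm_storage_account" "state" {
  name                     = "devopsdemostate01" # must be globally unique
  resource_group_name      = azurerm_resource_group.ops.name
  location                 = azurerm_resource_group.ops.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

Running `terraform plan` against a file like this shows the drift between declared and actual state before anything is changed, which is the core of the automation workflow the role describes.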
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Managed Services Provider (MSP), we are looking for an experienced TechOps Lead to take charge of our cloud infrastructure operations team. Your primary responsibility will be ensuring the seamless delivery of high-quality, secure, and scalable managed services across multiple customer environments, predominantly on AWS and Azure. In this pivotal role, you will serve as the main point of contact for customers, offering strategic technical direction, overseeing day-to-day operations, and empowering a team of cloud engineers to address complex technical challenges. Conducting regular governance meetings with customers, you will provide insights and maintain strong, trust-based relationships. As our clients explore AI workloads and modern platforms, you will lead the team in rapidly adopting and integrating new technologies to keep us ahead of evolving industry trends. Your key responsibilities will include: - Acting as the primary technical and operational contact for customer accounts - Leading governance meetings with customers to review SLAs, KPIs, incident metrics, and improvement initiatives - Guiding the team in diagnosing and resolving complex technical problems in AWS, Azure, and hybrid environments - Ensuring adherence to best practices in cloud operations, infrastructure-as-code, security, cost optimization, monitoring, and compliance - Staying updated on emerging cloud, AI, and automation technologies to enhance our service offerings - Overseeing incident, change, and problem management activities to ensure SLA compliance - Identifying trends from incidents and metrics and driving proactive improvements - Establishing runbooks, standard operating procedures, and automation to reduce toil and improve consistency To be successful in this role, you should possess: - 12+ years of overall experience with at least 5 years managing or delivering cloud infrastructure services on Azure and/or AWS - Strong hands-on skills in Terraform, DevOps tools, monitoring, 
logging, alerting, and exposure to AI workloads - Solid understanding of networking, security, IAM, and cost optimization in cloud environments - Experience leading technical teams in a managed services or consulting environment - Ability to quickly learn new technologies and guide the team in adopting them to solve customer problems Nice-to-have skills include exposure to container platforms, multi-cloud cost management tools, AI/ML Ops services, security frameworks, and relevant certifications such as AWS Solutions Architect, Azure Administrator, or Terraform Associate.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
You lead the way. We've got your back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact. Every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together. About Enterprise Architecture: Enterprise Architecture is an organization within the Chief Technology Office at American Express and is a key enabler of the company's technology strategy. The four pillars of Enterprise Architecture include: - Architecture as Code: This pillar owns and operates foundational technologies leveraged by engineering teams across the enterprise. - Architecture as Design: This pillar includes the solution and technical design for transformation programs and business critical projects requiring architectural guidance and support. - Governance: Responsible for defining technical standards and developing innovative tools that automate controls to ensure compliance. - Colleague Enablement: Focused on colleague development, recognition, training, and enterprise outreach. What you will be working on: We are looking for a Senior Engineer to join our Enterprise Architecture team. 
In this role, you will be designing and implementing highly scalable real-time systems following best practices and using cutting-edge technology. This role is best suited for experienced engineers with a broad skillset who are open, curious, and willing to learn. Qualifications: What you will bring: - Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience. - 10+ years of progressive experience demonstrating strong architecture, programming, and engineering skills. - Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Kotlin, and Go. - Ability to lead, partner, and collaborate cross-functionally across engineering organizations. - Experience in building real-time, large-scale, high-volume, distributed data pipelines on top of data buses (Kafka). - Hands-on experience with large-scale distributed NoSQL databases like Elasticsearch. - Knowledge and/or experience with containerized environments, Kubernetes, and Docker. - Knowledge and/or experience with public cloud platforms like AWS and GCP. - Experience in implementing and maintaining highly scalable microservices in REST and gRPC. - Experience in working with infrastructure layers like service mesh, Istio, and Envoy. - Appetite for trying new things and building rapid POCs. Preferred Qualifications: - Knowledge of Observability concepts like Tracing, Metrics, Monitoring, and Logging. - Knowledge of Prometheus. - Knowledge of OpenTelemetry / OpenTracing. - Knowledge of observability tools like Jaeger, Kibana, Grafana, etc. - Open-source community involvement. We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: - Competitive base salaries. - Bonus incentives. - Support for financial well-being and retirement.
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). - Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. - Generous paid parental leave policies (depending on your location). - Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). - Free and confidential counseling support through our Healthy Minds program. - Career development and training opportunities. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
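Real-time pipelines on a data bus such as Kafka, as described above, commonly aggregate events into fixed time windows. A broker-free sketch of a tumbling-window count, with the Kafka consumer replaced by an in-memory list of (timestamp, key) events so it stays self-contained:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per key in fixed, non-overlapping time windows.

    events: iterable of (epoch_seconds, key) pairs, standing in for
    records consumed from a Kafka topic.
    Returns {window_start: {key: count}}.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

stream = [(100, "login"), (105, "login"), (130, "click"), (160, "login")]
print(tumbling_window_counts(stream, 60))
# {60: {'login': 2}, 120: {'click': 1, 'login': 1}}
```

In a production pipeline the same windowing logic runs inside a stream processor, with watermarks handling late events; the alignment arithmetic is unchanged.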
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a DevOps engineer at C1X AdTech Private Limited, a global technology company, your primary responsibility will be to manage the infrastructure, support development pipelines, and ensure system reliability. You will play a crucial role in automating deployment processes, maintaining server environments, monitoring system performance, and supporting engineering operations throughout the development lifecycle. Our objective is to design and manage scalable, cloud-native infrastructure using GCP services, Kubernetes, and Argo CD for high-availability applications. Additionally, you will implement and monitor observability tools such as Elasticsearch, Logstash, and Kibana to ensure full system visibility and support performance tuning. Enabling real-time data streaming and processing pipelines using Apache Kafka and GCP DataProc will be a key aspect of your role. You will also be responsible for automating CI/CD pipelines using GitHub Actions and Argo CD to facilitate faster, secure, and auditable releases across development and production environments. Your responsibilities will include building, managing, and monitoring Kubernetes clusters and containerized workloads using GKE and Argo CD, designing and maintaining CI/CD pipelines using GitHub Actions integrated with GitOps practices, configuring and maintaining real-time data pipelines using Apache Kafka and GCP DataProc, managing logging and observability infrastructure using Elasticsearch, Logstash, and Kibana (ELK stack), setting up and securing GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM, implementing caching and session stores using Redis for performance optimization, and monitoring system health, availability, and performance with tools like Prometheus, Grafana, and ELK. 
Collaboration with development and QA teams to streamline deployment processes and ensure environment stability, as well as automating infrastructure provisioning and configuration using Bash, Python, or Terraform, will be essential aspects of your role. You will also be responsible for maintaining backup, failover, and recovery strategies for production environments. To qualify for this position, you should hold a Bachelor's degree in Computer Science, Engineering, or a related technical field with 4 to 8 years of experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering. Strong experience with Google Cloud Platform (GCP) services including GKE, IAM, VPC, Artifact Registry, and DataProc is required. Hands-on experience with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows, proficiency with Apache Kafka for real-time data streaming, experience managing the ELK Stack (Elasticsearch, Logstash, Kibana) in production, working knowledge of Redis for distributed caching and session management, scripting/automation skills using Bash, Python, Terraform, etc., a solid understanding of containerization, infrastructure-as-code, and system monitoring, and familiarity with cloud security, IAM policies, and audit/compliance best practices are also essential qualifications for this role.
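Redis-based session stores like the one mentioned above rely on per-key TTLs. A minimal in-process sketch of that expiry behaviour; a real deployment would use a Redis client rather than this stand-in class:

```python
import time

class TTLSessionStore:
    """In-memory stand-in for a Redis session store with per-key TTLs."""

    def __init__(self, clock=time.monotonic):
        self._data = {}   # key -> (value, expiry_deadline)
        self._clock = clock

    def set(self, key, value, ttl_secs):
        self._data[key] = (value, self._clock() + ttl_secs)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if self._clock() >= deadline:
            del self._data[key]  # lazy expiry on access, as Redis does
            return None
        return value

# Fake clock makes expiry deterministic for the example.
now = [0.0]
store = TTLSessionStore(clock=lambda: now[0])
store.set("session:abc", {"user": 42}, ttl_secs=30)
print(store.get("session:abc"))  # {'user': 42}
now[0] = 31.0
print(store.get("session:abc"))  # None
```

With real Redis the same behaviour is a `SET key value EX 30` followed by `GET`, and expiry is enforced server-side across all application replicas.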
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Transformation Engineering professional at Talworx, you will be expected to meet the following requirements: A Bachelor's degree in Computer Science, Information Systems, or a related field is preferred. You should have at least 5 years of experience in application development, deployment, and support. Your expertise should encompass a wide range of technologies including Java, JEE, JSP, Spring, Spring Boot (Microservices), Spring JPA, REST, JSON, JUnit, React, Python, JavaScript, HTML, and XML. Additionally, you should have a minimum of 3 years of experience in a Platform/Application Engineering role supporting on-premises and Cloud-based deployments, with a preference for Azure. While not mandatory, the following skills would be beneficial for the role: - At least 3 years of experience in Platform/Application Administration. - Proficiency in software deployments on Linux and Windows systems. - Familiarity with Spark, Docker, Containers, Kubernetes, Microservices, Data Analytics, Visualization Tools, and Git. - Hands-on experience in building and supporting modern AI technologies such as Azure OpenAI and LLM infrastructure/applications. - Experience in deploying and maintaining applications and infrastructure through configuration management software like Ansible and Terraform, following Infrastructure as Code (IaC) best practices. - Strong scripting skills in languages like Bash and Python. - Proficiency in using GitHub to manage application and infrastructure deployment lifecycles within a structured CI/CD environment. - Familiarity with working in a structured ITSM change management environment. - Knowledge of configuring monitoring solutions and creating dashboards using tools like Splunk, Wily, Prometheus, Grafana, Dynatrace, and Azure Monitor. If you are passionate about driving transformation through engineering and possess the required qualifications and skills, we encourage you to apply and be a part of our dynamic team at Talworx.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Punjab
On-site
The key responsibilities for the Monitoring Platform Integrator role include designing, building, deploying, and configuring the new monitoring infrastructure to enhance efficiency and effectiveness. You will collaborate with tech leads of system migrations to ensure proper monitoring of new platforms, establish alerting rules, and define escalation paths. It is essential to monitor the monitoring system itself and set up redundant escalation paths to detect failures. Developing and maintaining any required code base for solutions and customer-specific configurations is also part of the role. As a Monitoring Platform Integrator, you will focus on configuring the platform as automatically as possible, using technologies like service discovery, Ansible, and Git to minimize manual configuration. Additionally, you will assist tech leads and system owners in setting up Grafana and other dashboarding tools. Working closely with NOC teams and system owners, you will gather monitoring and alerting requirements to ensure smooth system transitions. You will also play a key role in transitioning custom monitoring scripts from Nagios to either the Prometheus or Icinga 2 platform and in integrating existing monitoring systems into the new design. Qualifications for this position include a basic degree or diploma in IT and certifications from Microsoft, Enterprise Linux, Cloud Foundations, AWS Cloud Practitioner, or similar DevOps-centered training. The ideal candidate should have over 5 years of experience in a systems admin role, focusing on implementing, developing, and maintaining enterprise-level platforms, preferably in the media industry. Proficiency in areas such as Docker and Kubernetes management, Red Hat/Oracle Linux/CentOS administration, AWS Cloud toolsets, and monitoring technologies like Prometheus, Grafana, and Nagios is crucial.
Experience in logging technologies such as Kibana, Elasticsearch, and CloudWatch, as well as orchestration management tools like Ansible, Terraform, or Puppet, is highly desirable. Strong skills in Python development, JSON, API integration, and NetBox are essential for this role. Knowledge of Go and Alerta.io may also be advantageous for the Monitoring Platform Integrator position.
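Alerting rules of the kind this role defines, including watching the monitoring system itself, are typically written as Prometheus rule files. An illustrative example; the thresholds, label values, and group name are placeholders, not from the posting:

```yaml
groups:
  - name: host-health
    rules:
      - alert: HostHighCpu
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
      - alert: MonitoringSelfCheck      # watch the watcher: fire if Prometheus stops scraping itself
        expr: up{job="prometheus"} == 0
        for: 5m
        labels:
          severity: critical
          escalation: redundant-path    # routed through a secondary channel in Alertmanager
```

The `escalation` label is the hook for the redundant escalation paths mentioned above: Alertmanager routing can match on it and notify through an independent channel.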
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a highly skilled SAP Concur consultant with a strong background in the implementation and global rollout of the SAP Concur - Request, Travel, and Expense module. The ideal candidate will have a minimum of two full lifecycle implementation experiences, as well as experience in rollout and support projects, including integration with SAP. As an Architect, you will be responsible for recommending SAP Concur solutions based on business requirements. Candidates must be SAP Concur certified and possess expertise in the SAP Concur Request, Travel, and Expense module, along with its integration with SAP. You should have at least 4 years of relevant work experience, including functional support for SAP Concur. Key Responsibilities: - Manage stakeholders and provide timely and accurate communication to address their issues and keep them informed on IT-related matters. - Prepare blueprint, gap, configuration, and other relevant documents. - Configure the SAP Concur application based on the blueprint. - Conduct unit and integration testing on SAP Concur and SAP systems. - Demonstrate proficiency in managing and delivering multiple projects involving cross-functional teams concurrently. If you meet the qualifications mentioned above and are looking to take on a challenging role in a dynamic environment, we would love to hear from you.
Posted 1 week ago
5.0 - 10.0 years
27 - 40 Lacs
Noida, Pune, Bengaluru
Work from Office
Description: We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams. Requirements: 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on analytics engines like Big Data and PySpark. Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments. Job Responsibilities: Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies. Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis. Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance. Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services.
Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure. What We Offer: Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill training. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can enjoy coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
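The ETL/ELT pipelines described above share an extract-transform-load shape regardless of engine. A stdlib-only sketch of that shape; in production the transform would run in PySpark and the load target would be a warehouse, not SQLite, and all the data here is invented:

```python
import csv
import io
import sqlite3

RAW = """order_id,amount,country
1,120.50,IN
2,80.00,US
3,99.50,IN
"""

def extract(text):
    """Parse raw CSV rows (stand-in for reading from object storage)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Aggregate revenue per country, mirroring a groupBy/sum in PySpark."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + float(r["amount"])
    return sorted(totals.items())

def load(pairs, conn):
    """Write aggregates to a target table (SQLite as a stand-in)."""
    conn.execute("CREATE TABLE revenue (country TEXT, total REAL)")
    conn.executemany("INSERT INTO revenue VALUES (?, ?)", pairs)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT * FROM revenue ORDER BY country").fetchall())
# [('IN', 220.0), ('US', 80.0)]
```

Scaling this up is mostly a matter of swapping each stage for its distributed equivalent; the stage boundaries, and where you monitor them, stay the same.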
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Noida
Work from Office
Senior Full Stack Engineer We are seeking a Senior Full Stack Engineer to design, build and scale a portfolio of cloud-native products including real-time speech-assessment tools, GenAI content services, and analytics dashboards used by customers worldwide. You will own end-to-end delivery across React/Next.js front-ends, Node/Python micro-services, and a MongoDB-centric data layer, all orchestrated in containers on Kubernetes, while championing multi-tenant SaaS best practices and modern MLOps. Role: Product & Architecture • Design multi-tenant SaaS services with isolated data planes, usage metering, and scalable tenancy patterns. • Lead MERN-driven feature work: SSR/ISR dashboards in Next.js, REST/GraphQL APIs in Node.js or FastAPI, and event-driven pipelines for AI services. • Build and integrate AI/ML & GenAI modules (speech scoring, LLM-based content generation, predictive analytics) into customer-facing workflows. DevOps & Scale • Containerise services with Docker, automate deployment via Helm/Kubernetes, and implement blue-green or canary roll-outs in CI/CD. • Establish observability for latency, throughput, model inference time, and cost-per-tenant across micro-services and ML workloads. Leadership & Collaboration • Conduct architecture reviews, mentor engineers, and promote a culture that pairs AI-generated code with rigorous human code review. • Partner with Product and Data teams to align technical designs with measurable business KPIs for AI-driven products. 
Required Skills & Experience • Front-End React 18, Next.js 14, TypeScript, modern CSS/Tailwind • Back-End Node 20 (Express/Nest) and Python 3.11 (FastAPI) • Databases MongoDB Atlas, aggregation pipelines, TTL/compound indexes • AI / GenAI Practical ML model integration, REST/streaming inference, prompt engineering, model fine-tuning workflows • Containerisation & Cloud Docker, Kubernetes, Helm, Terraform; production experience on AWS/GCP/Azure • SaaS at Scale Multi-tenant data isolation, per-tenant metering & rate-limits, SLA design • CI/CD & Quality GitHub Actions/GitLab CI, unit + integration testing (Jest, Pytest), E2E testing (Playwright/Cypress) Preferred Candidate Profile • Production experience with speech analytics or audio ML pipelines. • Familiarity with LLMOps (vector DBs, retrieval-augmented generation). • Terraform-driven multi-cloud deployments or FinOps optimization. • OSS contributions in MERN, Kubernetes, or AI libraries. Tech Stack & Tooling - React 18 • Next.js 14 • Node 20 • FastAPI • MongoDB Atlas • Redis • Docker • Kubernetes • Helm • Terraform • GitHub Actions • Prometheus + Grafana • OpenTelemetry • Python/Rust micro-services for ML inference
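Per-tenant metering and rate-limits, part of the SaaS-at-scale skills listed above, are commonly implemented as a token bucket per tenant. A minimal in-memory sketch; a production multi-tenant service would typically keep the buckets in Redis so that all replicas share state:

```python
import time

class TenantRateLimiter:
    """Token bucket per tenant: `rate` tokens/sec up to `burst` capacity."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.buckets = {}  # tenant -> (tokens, last_refill_time)

    def allow(self, tenant):
        now = self.clock()
        tokens, last = self.buckets.get(tenant, (self.burst, now))
        # Refill proportionally to elapsed time, capped at burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[tenant] = (tokens - 1.0, now)
            return True
        self.buckets[tenant] = (tokens, now)
        return False

# Deterministic clock for the example: burst of 2, refilling 1 token/sec.
now = [0.0]
rl = TenantRateLimiter(rate=1.0, burst=2, clock=lambda: now[0])
print([rl.allow("acme") for _ in range(3)])  # [True, True, False]
now[0] = 1.0
print(rl.allow("acme"))                      # True
```

Keyed per tenant like this, the limiter doubles as a metering point: counting allowed and rejected calls per key gives the usage data that billing and SLA dashboards consume.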
Posted 1 week ago
12.0 - 17.0 years
12 - 17 Lacs
Pune
Work from Office
BMC is looking for a talented DevOps Engineer to join our family, someone who is just as passionate about solving issues with distributed systems as about automating, coding, and collaborating to tackle problems. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Monitor and manage infrastructure via IaC, ensuring optimal performance, security, and scalability. Define and develop, test, release, update, and support processes for DevOps operations. Troubleshoot and resolve issues related to deployment and operations. Select and validate tools and technologies that best fit business needs. Lead platform upgrades, migrations, and maintenance independently. Stay abreast of emerging technologies and industry trends, and utilize them to enhance the software infrastructure. Design, develop, test, and maintain CI/CD pipelines, with the ability to maintain continuous integration, delivery, and deployment (CI/CD) processes for a complex set of software requirements and products spread across multiple platforms. As every BMC employee, you will be given the opportunity to learn, be included in global projects, challenge yourself, and be the innovator when it comes to solving everyday problems. To ensure you're set up for success, you will bring the following skillset & experience: You can embrace, live and breathe our BMC values every day! You have 12+ years of experience working with various DevOps concepts and tools like Terraform, Ansible, Packer, AWS, Jenkins, and Git. Familiarity with IBM z/OS-based infrastructure is required, particularly in hybrid enterprise environments. Strong knowledge of shell scripting and other programming languages such as Python, C, Groovy, and Java. Hands-on experience with containers & container orchestration tools like Docker, Podman, AWS ECS, Kubernetes, and Docker Swarm, and infrastructure monitoring tools like Prometheus and Grafana.
Experience with designing, building, and maintaining cloud-native applications across major cloud platforms such as AWS, Azure or GCP is a strong plus. Knowledge of Data Protection, Privacy and Security domain. Understanding of agile methodologies and principles. Knowledge of databases and SQL. Excellent communication and collaboration skills, as well as the ability to work effectively in cross-functional teams including nearshore and offshore. Whilst these are nice to have, our team can help you develop in the following skills: Good to have Mainframe Storage management skills, Tapes, Catalogs etc. Good to have Mainframe knowledge (Z/OS JCL).
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Participate in all aspects of SaaS product development, from requirements analysis to product release and sustaining.
- Drive the adoption of the DevOps process and tools across the organization.
- Learn and implement cutting-edge technologies and tools to build best-in-class enterprise SaaS solutions.
- Deliver high-quality enterprise SaaS offerings on schedule.
- Develop the continuous delivery pipeline.
- Initiate projects and ideas to improve the team's results.
- On-board and mentor new employees.
To ensure you're set up for success, you will bring the following skillset & experience:
- You can embrace, live, and breathe our BMC values every day!
- You have at least 7 years of experience in a DevOps/SRE role.
- You have experience as a Tech Lead.
- You have implemented CI/CD pipelines with best practices.
- You have experience in Kubernetes.
- You have knowledge of AWS/Azure cloud implementation.
- You have worked with Git repositories and JIRA.
- You are passionate about quality and demonstrate creativity and innovation in enhancing the product.
- You are a problem-solver with good analytical skills.
- You are a team player with effective communication skills.
Whilst these are nice to have, our team can help you develop the following skills:
- SRE practices
- GitHub/Spinnaker/Jenkins/Maven/JIRA etc.
- Automation playbooks (Ansible)
- Infrastructure as Code (IaC) using Terraform/CloudFormation templates/ARM templates
- Scripting in Bash/Python/Go
- Microservices, database, and API implementation
- Monitoring tools such as Prometheus, Jaeger, Grafana, AppDynamics, Datadog, Nagios, etc.
- Agile/Scrum process
Posted 1 week ago
7.0 - 12.0 years
0 - 2 Lacs
Pune, Chennai, Bengaluru
Hybrid
Key Responsibilities:
- 6+ years of previous hands-on experience as a performance tester in the telecom domain.
- Strong experience in performance test planning, test estimation, script development, test execution, and test results analysis.
- Executing and analyzing performance tests routinely and recording a history of the results.
- Programming skills in languages such as Bash, Perl, Java, Python, Groovy, Justle, and Node.js, plus experience with the ELK stack, Grafana, Jenkins, and Linux.
- Expertise in configuring performance counters using the PerfMon tool to identify CPU, memory, disk I/O, and network bottlenecks.
- Experience with database testing and tuning.
- Understanding of networking concepts at all layers.
- Experience with the New Relic monitoring tool.
- Experience with log monitoring tools.
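The PerfMon-style counter analysis mentioned above can be sketched as a simple threshold check. This is an illustrative example, not part of the posting; the counter names, thresholds, and sample values are all invented for demonstration.

```python
"""Illustrative bottleneck check over performance-counter samples, in the
spirit of the PerfMon analysis described in the posting. Thresholds and
sample data are hypothetical."""

# Hypothetical alert thresholds per counter.
THRESHOLDS = {"cpu_pct": 85, "mem_pct": 90, "disk_queue": 2, "net_util_pct": 70}

def bottlenecks(sample):
    """Return the names of counters in `sample` that exceed their threshold."""
    return [name for name, limit in THRESHOLDS.items() if sample.get(name, 0) > limit]

if __name__ == "__main__":
    sample = {"cpu_pct": 92, "mem_pct": 71, "disk_queue": 5, "net_util_pct": 40}
    print(bottlenecks(sample))  # cpu_pct and disk_queue exceed their limits
```

In practice the samples would come from PerfMon logs or an agent; the point is that a bottleneck report is just counters compared against agreed limits.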
Posted 1 week ago
8.0 - 13.0 years
12 - 22 Lacs
Kochi, Bengaluru
Work from Office
Job Title: Senior Performance Tester
Experience: 8 years
Primary Tool: Apache JMeter
Job Summary: We are looking for a Senior Performance Tester with deep expertise in Apache JMeter and experience in planning, designing, executing, and analysing performance test engagements. The candidate should be capable of owning the performance test lifecycle, mentoring junior members, and actively contributing to system performance improvements.
Key Responsibilities:
Automation: Test automation using Playwright with JavaScript.
Performance Strategy & Planning:
- Collaborate with business and technical teams to gather Non-Functional Requirements (NFRs).
- Define the performance test strategy, including test objectives, approach, tools, and SLAs.
- Identify critical business flows and create workload models based on production data.
Test Design & Scripting:
- Develop scalable and modular JMeter test scripts using advanced components and plugins.
- Parameterize and correlate scripts for realistic user behavior simulation.
- Prepare test data, environment configuration, and a system readiness checklist.
Test Execution & Monitoring:
- Execute load, stress, spike, endurance, and scalability tests.
- Monitor application and infrastructure performance using AppDynamics, Dynatrace, Grafana, Kibana, InfluxDB, or New Relic.
- Capture and analyze metrics such as CPU, memory, heap, GC, DB performance, and thread utilization.
Analysis & Tuning Support:
- Identify performance bottlenecks and provide tuning suggestions (code, DB, infra).
- Work closely with developers, DBAs, and DevOps to validate and retest fixes.
- Perform root cause analysis of latency, errors, and failures under load.
Reporting:
- Generate detailed test execution reports including average/max response time, percentiles, throughput, error rate, and server utilization.
- Document findings, anomalies, RCA, and improvement recommendations.
- Present performance test results to project stakeholders with clarity.
Process Improvements & Best Practices:
- Contribute to continuous improvement of performance engineering practices.
- Implement version control, scripting standards, and reusable components.
- Participate in CI/CD integration using Jenkins, GitLab CI, or Azure DevOps.
Required Skills (Area: Skills):
- Tool Expertise: Apache JMeter (core tool), BlazeMeter, Taurus, Playwright
- Monitoring & Analysis: AppDynamics, Dynatrace, Grafana, Prometheus, Splunk
- Protocols: HTTP/S, JDBC, REST, SOAP, WebSockets
- CI/CD: Jenkins, GitLab, Azure DevOps
- Scripting (nice to have): Shell, Python, or Groovy
- Databases: Basic SQL for DB performance validation
- OS/Infrastructure: Linux command line, basic container/cloud exposure (AWS, Docker)
Performance Testing Strategy, Sample Overview (Phase: Activities):
- Requirement Gathering: Understand SLAs, user volumes, peak usage patterns
- Test Planning: Define scope, tools, entry/exit criteria, KPIs
- Test Design: Script creation, workload modeling, test data prep
- Execution: Load/stress/spike/soak tests with real-like traffic
- Monitoring: Track infra & app metrics via APM tools
- Analysis: Identify bottlenecks, suggest tuning, retest
- Reporting: Summary with graphs, metrics, bottleneck summary
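The reporting duties described above (average/max response time, percentiles, throughput, error rate) reduce to straightforward arithmetic over raw samples. A minimal sketch, assuming samples like those exported from a JMeter JTL file; the data and the nearest-rank percentile choice are illustrative, not from the posting:

```python
"""Illustrative computation of common performance-report metrics from raw
(elapsed_ms, success) samples. Sample data is invented for demonstration."""

def percentile(sorted_ms, p):
    """Nearest-rank percentile over an ascending list of latencies (ms)."""
    idx = max(0, int(round(p / 100 * len(sorted_ms))) - 1)
    return sorted_ms[idx]

def summarize(samples, duration_s):
    """samples: list of (elapsed_ms, success) tuples; duration_s: test window."""
    lat = sorted(ms for ms, _ in samples)
    errors = sum(1 for _, ok in samples if not ok)
    return {
        "avg_ms": round(sum(lat) / len(lat), 1),
        "max_ms": lat[-1],
        "p90_ms": percentile(lat, 90),
        "p95_ms": percentile(lat, 95),
        "throughput_rps": round(len(samples) / duration_s, 2),
        "error_rate_pct": round(100 * errors / len(samples), 2),
    }

if __name__ == "__main__":
    # Hypothetical 10-sample run over a 5-second window.
    samples = [(120, True), (135, True), (150, True), (160, True), (180, True),
               (200, True), (220, True), (250, True), (400, True), (900, False)]
    print(summarize(samples, duration_s=5))
```

Real reports would pull these numbers from the JMeter Aggregate Report or an APM tool; the sketch only shows what the metrics mean.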
Posted 1 week ago
6.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Senior SRE - SaaS
Our SRE role spans software, systems, and operations engineering. If your passion is building stable, scalable systems for a growing set of innovative products, as well as helping to reduce the friction of deploying these products for our engineering team, Pattern is the place for you. Come help us build a best-in-class platform for amazing growth.
Key Responsibilities
Infrastructure and Automation
- Design, build, and manage scalable and reliable infrastructure in AWS (Postgres, Redis, Docker, queues, Kinesis Streams, S3, etc.).
- Develop Python or shell scripts for automation, reducing operational toil.
- Implement and maintain CI/CD pipelines for efficient build and deployment processes using GitHub Actions.
Monitoring and Incident Response
- Establish robust monitoring and alerting systems using observability methods, logs, and APM tools.
- Participate in on-call rotations to respond to incidents, troubleshoot problems, and ensure system reliability.
- Perform root cause analysis on production issues and implement preventative measures to mitigate future incidents.
Cloud Administration
- Manage AWS resources, including Lambda functions, SQS, SNS, IAM, RDS, etc.
- Perform Snowflake administration and set up backup policies for various databases.
Reliability Engineering
- Define Service Level Indicators (SLIs) and measure Service Level Objectives (SLOs) to maintain high system reliability.
- Utilise Infrastructure as Code (IaC) tools like Terraform for managing and provisioning infrastructure.
Collaboration and Empowerment
- Collaborate with development teams to design scalable and reliable systems, and empower them to deliver value quickly and accurately.
- Document system architectures, procedures, runbooks, and best practices.
- Assist developers in creating automation scripts and workflows to streamline operational tasks and deployments.
Innovative Infrastructure Solutions
- Spearhead the exploration of innovative infrastructure solutions and technologies aligned with industry best practices.
- Embrace a research-based approach to continuously improve system reliability, scalability, and performance.
- Encourage a culture of experimentation to test and implement novel ideas for system optimization.
Required Qualifications:
- Bachelor's degree in a technical field or relevant work experience.
- 6+ years of experience in engineering, development, or DevOps/SRE fields.
- 3+ years of experience deploying and managing systems using Amazon Web Services.
- 3+ years of experience on a Software as a Service (SaaS) application.
- Proven "doer" attitude with the ability to self-start and take a project to completion; demonstrated project ownership.
- Familiarity with container orchestration tools like Kubernetes, Fargate, etc.
- Familiarity with Infrastructure as Code tooling like Terraform, CloudFormation, Ansible, Puppet.
- Experience with CI/CD automated deployments using tools like GitHub Actions, Jenkins, CircleCI.
- Experience with observability tools like Datadog, New Relic, Dynatrace, Grafana, Prometheus, etc.
- Experience with Linux server management, bash scripting, SSH keys, SSL/TLS certificates, MFA, cron, and log files.
- Deep understanding of AWS networking (VPCs, subnets, security groups, route tables, internet gateways, NAT gateways, NACLs), IAM policies, DNS, Route53, and domain management.
- Strong problem-solving and troubleshooting skills.
- Attention to detail: thoroughness in accomplishing tasks, ensuring accuracy and quality in all aspects of work.
- Excellent communication and collaboration abilities.
- Desire to help take Pattern to the next level through exploration and innovation.
Preferred Qualifications:
- Experience deploying applications on ECS/Fargate with ELB/ALB and Auto Scaling Groups.
- Experience deploying serverless applications with Lambda, API Gateway, Cognito, CloudFront.
- Experience deploying applications built using JavaScript, Ruby, Go, or Python.
- Experience with Infrastructure as Code (IaC) using Terraform.
- Experience with database administration for Snowflake and Postgres.
- AWS certification would be a plus.
- A focus on adopting security best practices while building great tools.
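The SLI/SLO work named in the responsibilities above has a small arithmetic core: an availability SLI is the fraction of successful requests, and an SLO defines an error budget that those failures consume. A minimal sketch, with invented request counts and a 99.9% SLO chosen only for illustration:

```python
"""Minimal sketch of SLI/SLO accounting. The posting names the practice,
not this code; request counts and the 99.9% target are hypothetical."""

def availability_sli(total_requests, failed_requests):
    """SLI: fraction of requests that succeeded."""
    return (total_requests - failed_requests) / total_requests

def error_budget_consumed(sli, slo=0.999):
    """Fraction of the error budget used. A 99.9% SLO allows a 0.1% error rate."""
    allowed = 1 - slo      # tolerable failure fraction
    actual = 1 - sli       # observed failure fraction
    return actual / allowed

if __name__ == "__main__":
    sli = availability_sli(total_requests=1_000_000, failed_requests=400)
    print(f"SLI = {sli:.4%}, error budget consumed = {error_budget_consumed(sli):.0%}")
```

In production these counts would come from an observability stack (e.g. Prometheus counters); the sketch only shows the relationship between SLI, SLO, and budget.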
Posted 1 week ago
10.0 - 13.0 years
20 - 25 Lacs
Mumbai
Work from Office
We are seeking an experienced and forward-thinking DevOps Architect to lead our infrastructure, deployment, and developer productivity initiatives. The ideal candidate will bring deep expertise in Kubernetes, cloud-native architecture, CI/CD, GitOps, and DevSecOps, and will be responsible for enabling scalable and secure delivery pipelines across cloud and on-premise environments. You will also play a strategic role in improving developer experience, implementing DevOps governance, and establishing robust observability frameworks.
Here's what you will get to explore:
CI/CD & Continuous Deployment
- Architect, implement, and maintain scalable CI/CD pipelines using GitLab CI/CD (or equivalent).
- Drive a continuous deployment culture with reliable, automated build and release workflows.
- Enable progressive delivery strategies such as blue-green, canary, and feature-flag-based deployments.
- Integrate testing, quality gates, and approval workflows within the CI/CD pipeline.
Containerization & Orchestration
- Design and implement containerized solutions using Docker.
- Manage and scale microservices and applications on Kubernetes across cloud and on-prem clusters.
- Build and maintain Helm charts and reusable K8s deployment templates.
- Ensure high availability, fault tolerance, and performance of containerized workloads.
Developer Experience & GitOps
- Lead the GitOps strategy using tools like ArgoCD or Flux to manage infrastructure and app deployment via Git.
- Enhance developer productivity with platform features such as self-service deployments, shared pipelines, and local dev tooling.
- Champion internal developer platforms to accelerate feedback cycles and reduce onboarding time.
DevSecOps & Governance
- Implement DevSecOps practices: static code analysis, image scanning, secrets management, and compliance checks.
- Define and enforce DevOps governance standards, including branching strategies, naming conventions, and release processes.
- Enable policy-as-code and secrets automation to reduce manual risk.
Hybrid Deployments
- Design consistent and repeatable deployments across cloud (AWS/GCP/Azure) and on-prem environments.
- Utilize infrastructure-as-code tools like Terraform, AWS CDK, or Pulumi to standardize infrastructure provisioning.
- Work with SRE and Cloud teams to maintain environment parity and release consistency.
- Experience building or contributing to internal developer platforms.
- Familiarity with service mesh (Istio, Linkerd), multi-cloud, or hybrid cloud architectures.
Monitoring, Observability & Incident Management
- Define and implement a robust monitoring and logging strategy across environments.
- Standardize use of tools like Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Set up automated alerts and dashboards to support SLOs and proactive issue resolution.
Collaboration & Leadership
- Collaborate with Engineering, QA, Product, and Cloud teams to align DevOps efforts with business goals.
- Mentor DevOps engineers and developers in modern DevOps, security, and automation practices.
- Participate in architecture reviews, production readiness assessments, and postmortems.
We can see the next Entrepreneur at Seclore if you have:
- A technical degree (Engineering, MCA).
- 10+ years in DevOps, SRE, or platform engineering, with at least 2 years in a lead or architect role.
- Must-have hands-on experience with Docker and Kubernetes in production-grade environments.
- Strong expertise with GitLab CI/CD or similar pipeline tools.
- A proven track record of implementing continuous deployment workflows at scale.
- Production experience with GitOps tools like ArgoCD or Flux.
- A deep understanding of DevSecOps, including security automation in the CI/CD lifecycle.
- Solid knowledge of infrastructure-as-code tools like Terraform or AWS CDK.
- Experience with both cloud (AWS, Azure, or GCP) and on-prem infrastructure.
Why do we call Seclorites Entrepreneurs, not Employees?
- We value and support those who take the initiative and calculate risks.
- We have the attitude of a problem solver and an aptitude that is tech agnostic.
- You get to work with the smartest minds in the business.
- We are thriving, not living. At Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive.
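The canary strategy named in the posting boils down to a traffic-shift loop with a rollback guard. The sketch below is illustrative only: the step schedule, error threshold, and error-rate source are hypothetical, not Seclore's actual tooling (which in practice would be something like ArgoCD with a rollout controller).

```python
"""Illustrative canary-rollout loop, sketching the progressive-delivery
strategy the posting names. Steps, thresholds, and metrics are invented."""

CANARY_STEPS = [5, 25, 50, 100]   # percent of traffic sent to the new version
MAX_ERROR_RATE = 0.01             # abort if the canary exceeds 1% errors

def run_canary(observe_error_rate):
    """observe_error_rate(pct) -> error rate seen while pct% of traffic hits
    the canary. Returns ('promoted', 100) or ('rolled_back', failing_pct)."""
    for pct in CANARY_STEPS:
        if observe_error_rate(pct) > MAX_ERROR_RATE:
            return ("rolled_back", pct)
    return ("promoted", 100)

if __name__ == "__main__":
    healthy = lambda pct: 0.002                        # steady low error rate
    broken = lambda pct: 0.002 if pct < 50 else 0.08   # degrades under load
    print(run_canary(healthy))
    print(run_canary(broken))
```

Real controllers add bake time per step and richer analysis, but the promote-or-rollback decision at each traffic step is the essence of the technique.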
Posted 1 week ago
8.0 - 13.0 years
0 - 3 Lacs
Bengaluru
Hybrid
Role Brief: Infrastructure Application L3 Support Specialist (Morgan Stanley)
Location: Bangalore/Mumbai
Overview: This is a senior (Level 3) support and Site Reliability Engineering (SRE) position in the Enterprise Computing Data Services organization. The team is responsible for managing, supporting, and troubleshooting in-house middleware applications and infrastructure tools such as Apache ZooKeeper, API Proxy, Ansible Automation Platform, and Terraform. The role involves production support, escalation handling, automation, and close collaboration with engineering teams to ensure service stability and reduce manual effort.
Key Responsibilities:
- Provide L3 support for middleware and infrastructure platforms.
- Troubleshoot and resolve incidents, manage changes, and handle escalations.
- Automate operational processes using scripting (Shell, Python) and automation tools (Ansible, Terraform).
- Monitor systems using Splunk, Grafana, Prometheus, or Loki.
- Collaborate with engineering teams on problem resolution and process automation.
- Work with Veritas Cluster Server, load balancers, and VMware.
- Adhere to ITIL processes and best practices.
- Be available for weekend and on-call support.
Required Skills:
- 8+ years of IT experience.
- Advanced Linux/Unix support.
- Strong shell scripting and Python programming.
- Experience with Splunk or Grafana/Prometheus/Loki.
- Automation experience with Ansible and/or Terraform (preferably both).
- Knowledge of Veritas Cluster Server, load balancers, and VMware.
- Familiarity with ITIL.
- Strong communication, coordination, and organizational skills.
- Willingness to work weekends/on-call.
Desired Skills:
- Experience in application support, code release, and liaison with development teams is highly desired.
- Knowledge of Docker and Kubernetes/OpenShift is highly desired.
- Experience with a development toolchain such as Git, Bitbucket, and CI/CD tools is preferred.
- Experience with Agile methodologies is preferred.
- Good knowledge of JVMs and their garbage collection mechanisms is preferred.
- Experience with relational databases and web servers/application servers is preferred.
Posted 1 week ago
5.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Job Description: As a Senior Cloud Engineer at NCSi, you will play a critical role in designing, implementing, and managing cloud infrastructure that meets our clients' needs. You will work closely with cross-functional teams to architect solutions, optimize existing systems, and ensure security and compliance across cloud environments. This position requires strong technical skills, a deep understanding of cloud services, and an ability to mentor junior engineers.
Responsibilities:
- Design and implement scalable cloud solutions using AWS, Azure, or Google Cloud platforms.
- Manage cloud infrastructure with a focus on security, compliance, and cost optimization.
- Collaborate with development and operations teams to streamline CI/CD pipelines for cloud-based applications.
- Troubleshoot and resolve cloud service issues and performance bottlenecks.
- Develop and maintain documentation for cloud architectures, procedures, and best practices.
- Mentor junior engineers and provide technical guidance on cloud technologies and services.
- Stay up to date with the latest cloud technologies and industry trends, and recommend improvements for existing infrastructure.
Skills and Tools Required:
- Strong experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Proficiency in cloud infrastructure management tools like Terraform, CloudFormation, or Azure Resource Manager.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of programming/scripting languages such as Python, Go, or Bash for automation purposes.
- Experience with monitoring and logging tools like Prometheus, Grafana, or the ELK Stack.
- Understanding of security best practices for cloud deployments, including IAM, VPC configurations, and data encryption.
- Strong problem-solving skills, attention to detail, and ability to work in a collaborative team environment.
- Excellent communication skills, both verbal and written, to convey complex technical concepts to non-technical stakeholders.
Preferred Qualifications:
- Cloud certifications from AWS, Azure, or Google Cloud (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
- Experience with Agile methodologies and DevOps practices.
- Familiarity with database technologies, both SQL and NoSQL, as well as serverless architectures.
Posted 1 week ago
7.0 - 10.0 years
15 - 30 Lacs
Gurugram, Delhi / NCR
Work from Office
Work Environment: This role involves rotational shifts on a weekly basis. Shift allowances will be provided as per company policy. Employees will also have the flexibility to work from home during night shifts to support convenience and continuity.
Job Responsibilities:
- System Monitoring and Incident Management: Monitor the health and performance of critical systems, applications, and services. Respond to incidents, troubleshoot issues, and ensure timely resolution to minimize downtime and service disruptions.
- Automation and Scripting: Develop and maintain automation scripts and tools to streamline operational tasks, deployment processes, and infrastructure management.
- Infrastructure Management: Manage and scale the underlying infrastructure, including servers, cloud services, and network components. Implement best practices for configuration management, monitoring, and disaster recovery.
- Release Management: Collaborate with development teams to ensure smooth and reliable software releases. Participate in the design and implementation of deployment strategies.
- Performance Optimization: Identify performance bottlenecks and optimize the system to improve reliability and response times.
- Capacity Planning: Analyze system capacity and plan for future growth to meet increasing demands.
- Security and Compliance: Implement security best practices and ensure compliance with relevant industry standards and regulations.
- Collaboration and Documentation: Work closely with cross-functional teams, including developers, product managers, and operations, to ensure efficient communication and knowledge sharing. Document processes, procedures, and troubleshooting guides.
- On-Call Support: Participate in an on-call rotation to handle urgent issues and incidents outside regular business hours.
Qualifications:
- Experience with Cloud Technologies: Proficiency in working with one or more cloud platforms such as AWS, Google Cloud Platform, or Microsoft Azure.
- Programming and Scripting Skills: Strong knowledge of at least one programming language (e.g., Python, Java) and experience with shell scripting.
- System Administration: Hands-on Linux/Unix experience; administration and networking concepts are good to have.
- Monitoring and Logging: Experience with monitoring tools such as Prometheus, Grafana, and Nagios, and log management solutions like the ELK stack.
- Infrastructure as Code (IaC): Knowledge of IaC tools like Terraform or CloudFormation.
- Automation and Configuration Management: Experience with tools like Ansible, Chef, or Puppet for automating infrastructure management.
- Version Control: Familiarity with version control systems like Git.
- Problem-Solving Skills: Ability to analyze and troubleshoot complex technical issues and work with other teams to streamline processes.
- Communication Skills: Strong verbal and written communication skills to collaborate effectively with team members and stakeholders.
- KPI/Metrics: Understanding of key SRE metrics such as availability, SLA/SLO, MTTA, and MTTR.
Hands-on individuals with a BCA/MCA or B.Tech background are welcome.
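The MTTA and MTTR metrics named in the qualifications are simple means over incident timestamps: time from alert to acknowledgement, and from alert to resolution. A minimal sketch; the incident records below are invented for demonstration.

```python
"""Sketch of the MTTA/MTTR computation referenced above (illustrative;
the incident records are made up). MTTA = mean time to acknowledge,
MTTR = mean time to resolve, both measured from the alert."""

from datetime import datetime

# Each incident: (alerted, acknowledged, resolved) timestamps.
INCIDENTS = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 5), datetime(2024, 1, 1, 11, 0)),
    (datetime(2024, 1, 2, 22, 0), datetime(2024, 1, 2, 22, 15), datetime(2024, 1, 2, 23, 30)),
]

def mean_minutes(deltas):
    """Average a list of timedeltas, in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

def mtta(incidents):
    return mean_minutes([ack - alert for alert, ack, _ in incidents])

def mttr(incidents):
    return mean_minutes([res - alert for alert, _, res in incidents])

if __name__ == "__main__":
    print(f"MTTA = {mtta(INCIDENTS):.1f} min, MTTR = {mttr(INCIDENTS):.1f} min")
```

In practice these timestamps come from the paging and ticketing tools; the value of tracking both is that MTTA isolates response delay from repair effort.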
Posted 1 week ago
4.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Job Summary: We are seeking a skilled and proactive Linux System Administrator with strong experience in Kubernetes (K8s) to join our IT infrastructure team. The ideal candidate will be responsible for managing Linux-based systems and ensuring the stability, performance, and security of our containerized applications.
Key Responsibilities:
- Administer, configure, and maintain Linux systems (preferably Rocky Linux, RHEL, or Ubuntu).
- Deploy, manage, and troubleshoot Kubernetes clusters (on-prem and/or cloud).
- Automate system tasks and deployments using tools like Ansible, Python, or shell scripting.
- Monitor and improve system performance and reliability.
- Implement security measures, patch management, and compliance practices.
- Collaborate with DevOps and development teams to support CI/CD pipelines.
- Manage container orchestration using Kubernetes, Helm, and related tools.
- Maintain system documentation and standard operating procedures.
Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
- 5+ years of experience as a Linux administrator.
- 3+ years of hands-on experience with Kubernetes in production environments.
- Strong knowledge of containers (Docker) and container orchestration.
- Experience with Linux shell scripting and automation tools.
- Familiarity with monitoring tools (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with hypervisor tools (VMware, OpenStack).
Preferred Qualifications:
- Certifications such as CKA (Certified Kubernetes Administrator) and RHCE.
- Exposure to Infrastructure as Code (IaC) tools like Ansible.
- Understanding of networking concepts related to Kubernetes (Ingress, Services, DNS).
Soft Skills:
- Strong troubleshooting and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work independently and in a team-oriented environment.
Posted 1 week ago
7.0 - 12.0 years
15 - 20 Lacs
Chennai
Work from Office
Broad Role Description: This role is part of an AI/HPC engineering team that specializes in platform standardization initiatives, innovation, testing, and optimization of different AI technologies. The role requires installation, administration, troubleshooting, and analytical skills across technology stacks covering Linux, Kubernetes, SLURM, NVIDIA BCM, open-source infrastructure tools, Ansible, and scripting.
The candidate should be a B.E/B.Tech with over 7 years of experience in the IT infrastructure industry, including 7 to 8 years in HPC and/or AI technology, with strong knowledge of scripting and Linux and at least 2 years in Kubernetes.
Skills required:
- Managing, installing, configuring, deploying, troubleshooting, and administering open-source HPC software such as BCM, SLURM, Ansible, and ELK.
- Good experience in Linux OS with scripting.
- Knowledge of BCM, NVIDIA GPUs, and CUDA is preferred.
- Experience with Ansible playbooks and managing HPC environments.
- Exposure to Python scripting.
- Knowledge of at least one of the LLM/Generative AI and GPU offerings provided on public clouds such as AWS, Azure, or Google Cloud.
DevOps tools:
- Experience in deploying and managing tools like Jenkins, Git, SonarQube, Bugzilla, and Harbor Registry.
Good to know:
- Networking: VLAN, VXLAN, InfiniBand, IP subnetting, routing, firewalls.
- Storage: DDN, parallel file systems, object storage, and NFS.
- Infrastructure: HP/Dell rack servers, GPU.
- Management/monitoring tools: Zabbix, Prometheus, Grafana, and ServiceNow.
Posted 1 week ago
4.0 - 8.0 years
5 - 8 Lacs
Chennai
Work from Office
The L2 Engineer is responsible for handling customer escalations, monitoring, troubleshooting, and resolving any issues that affect the availability and quality of content delivered through TATA's global network. This role requires investigative and troubleshooting skills to identify, isolate, and resolve routine issues. The engineer will collaborate cross-functionally with multiple technical teams to ensure that problems are resolved quickly and efficiently. An ideal candidate will be team-oriented and possess strong technical, communication, and organizational skills in a fast-paced and dynamic environment. Hands-on experience in a service-oriented organization, particularly within a Linux operations center, is preferred. Most importantly, the right individual will have a proven track record of being creative and flexible, demonstrating a strong work ethic, and enjoying the challenge of solving technical issues, as well as possessing solid knowledge of both Windows and Linux systems.
Responsibilities:
- Demonstrating exceptional leadership qualities as a Shift Lead.
- Utilizing NOC-related tools and monitoring applications.
- Communicating with customers regarding escalations and coordinating with internal groups to report or resolve system-related or network issues.
- Tracking and documenting daily work tasks and issues, and sharing this information with the rest of the team.
- Installing and applying patch upgrades on servers.
- Reviewing tickets to ensure that quality resolutions are provided to customers on time.
- Providing customer support and monitoring a network environment, which includes routing equipment, UNIX-based operating systems, and proprietary software.
Experience:
- Proficient in advanced Linux and Windows operating systems. Red Hat, Windows, or Cloud certifications are a plus.
- Strong understanding of TCP/IP and various Internet protocols, along with tools such as ping, traceroute, nslookup, dig, and netstat.
- Familiarity with package management tools like Yum and RPM, as well as configuration management tools such as Puppet.
- Experience with VMware, KVM, and cloud platforms is beneficial.
- Knowledge of server monitoring tools, including Nagios and Grafana.
- Proficient in remote access software, including SSH, Rsync, Rclone, FTP, and Telnet.
- Familiarity with media and video technologies is a plus, and experience with Content Delivery Networks (CDN) is appreciated.
- Knowledge of technologies such as Wowza, Varnish, Nginx, Samba, NFS, and NAS storage solutions.
- Exceptional interpersonal and communication skills, along with strong initiative and leadership capabilities.
- Effective analytical, planning, organizational, and documentation skills.
Posted 1 week ago
8.0 - 10.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Role Purpose: To ensure success as a technical architect, you should have extensive knowledge of enterprise networking systems, advanced problem-solving skills, and the ability to project manage. A top-class technical architect can design and implement a system of any size to perfectly meet the needs of the client.
Responsibilities:
- Meeting with the IT manager to determine the company's current and future needs.
- Determining whether the current system can be upgraded or whether a new system needs to be installed.
- Providing the company with design ideas and schematics.
- Project managing the design and implementation of the system.
- Meeting with the software developers to discuss the system software needs.
- Troubleshooting system issues as they arise.
- Overseeing all the moving parts of the system integration.
- Measuring the performance of the upgraded or newly installed system.
- Training staff on system procedures.
- Providing the company with post-installation feedback.
Technical Architect Requirements:
- Bachelor's degree in information technology or computer science.
- Previous work experience as a technical architect.
- Managerial experience.
- In-depth knowledge of enterprise systems, networking modules, and software integration.
- Knowledge of computer hardware and networking systems.
- Familiarity with programming languages, operating systems, and office software.
- Advanced project management skills.
- Excellent communication skills.
- Ability to see big-picture designs from basic specifications.
- Ability to problem-solve complex IT issues.
Mandatory Skills: AIOps, Grafana, Observability.
Experience: 8-10 years.
Posted 1 week ago
5.0 - 9.0 years
17 - 20 Lacs
Bengaluru
Work from Office
We are seeking a skilled and motivated DevOps Architect to join our dynamic IT team. The ideal candidate will be responsible for collaborating with software developers, system administrators, and other team members to streamline our development and deployment processes. You will play a key role in automating and optimizing our infrastructure and software delivery pipelines, ensuring reliability, scalability, and efficiency. Key Responsibilities Infrastructure Automation: Design, implement, and maintain infrastructure as code (IaC) using tools likeTerraform, Ansible, or similar. Automate the provisioning, configuration, and management of servers, databases, and networking components. Continuous Integration and Continuous Deployment (CI/CD): Develop and enhance CI/CD pipelines for smooth software delivery. Integrate code repositories, build tools, testing frameworks, and deployment mechanisms to achieve automated and reliable releases. Containerization and Orchestration: Utilize Docker and Kubernetes to containerize applications and manage their orchestration. Implement and optimize Kubernetes clusters for scalability, high availability, and performance. Monitoring and Logging: Implement monitoring solutions to track system performance, availability, and security. Set up log management tools to gather, store, and analyze logs for troubleshooting and insights. Security and Compliance: Collaborate with security teams to implement best practices in securing infrastructure and applications. Ensure compliance with industry standards and regulations. Environment Management: Maintain development, testing, and production environments, ensuring consistency across different stages of the software development lifecycle. Collaboration: Work closely with cross-functional teams to understand their needs and provide technical solutions. Collaborate with software developers to optimize code for deployment and troubleshoot issues. 
Scripting and Automation: Develop scripts and tools to automate routine tasks and processes. Improve efficiency by eliminating manual intervention wherever possible.

Backup and Recovery: Design and implement backup and disaster recovery strategies to ensure data integrity and system availability.

Technical Documentation: Create and maintain technical documentation, including system diagrams, configurations, and procedures.

Education and Certification

Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred).

Knowledge and Skills

Experience: 10+ years.
Proven experience as a DevOps Engineer or in a similar role.
Strong proficiency in cloud platforms, preferably GCP.
Expertise in infrastructure as code (IaC) tools such as Terraform or Ansible.
Hands-on experience with containerization using Docker and orchestration with Kubernetes.
Proficiency in scripting languages such as Python, Bash, or PowerShell.
Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or Travis CI.
Experience with version control systems such as Git.
Solid understanding of networking, security, and system administration concepts.
Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
Strong problem-solving skills and the ability to troubleshoot complex issues.
Excellent communication and collaboration skills.
Relevant certifications (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) are a plus.
Posted 1 week ago