
1154 Prometheus Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Systems Developer Testing Admin for the SAS Cloud Delivery Services team in the Cloud and Information Services (CIS) division, you will pursue technical excellence and opportunities to expand test automation, ensuring timely, high-quality delivery of SAS Cloud service environments. Every day will be challenging but rewarding, with opportunities to learn something new. You will empower the team to improve quality, efficiency, and test automation; a keen eye for continuous improvement, and a passion to drive it in all aspects of the job, is required.

Individuals who excel in this role understand software development and how to read and write code or scripts; are familiar with deployment methodologies and best practices such as CI/CD and Waterfall; have worked in complex infrastructure environments with a variety of services operating both independently and together; and understand the interdependencies between multiple systems and software within a computing infrastructure. They also have knowledge of security, performance, server design, cross-platform architectures, SAS products, SAS solutions, storage, networking, and enterprise hardware.

In this role, you will develop and maintain high-quality, scalable test automation frameworks for SAS Cloud environments, ensuring comprehensive test coverage that reflects real-world customer access and usage. You will leverage automation tools such as Playwright and Selenium to streamline the testing process, improve efficiency, and deliver exceptional quality. You will protect users from escaped defects, participate in lessons-learned reviews, and value the customer experience. You will support deployment administrators' test execution, including test setup, test reviews, and participation in the build test support channel.
Collaboration with build deployment administrators, other test teams, the technical communication team, and delivery managers is crucial to ensure base build functionality requirements are satisfied for environment delivery. Clear, efficient communication with internal and external stakeholders, highlighting any risks or blockers along with recommendations to remediate or work around identified issues, is key. Other responsibilities include creating test cases for the base functionality of SAS Cloud Hosted and Remote Managed Service environments; contributing to test cases, automation, test tools, and framework code; working flexible business hours as required by global customers and business needs; staying current on SAS offerings, technologies, and industry trends; and participating in a 24x7x365 on-call rotation.

Required qualifications for this position include being curious, passionate, authentic, and accountable; 7+ years of experience as a Development Engineer in Test or an equivalent position; a bachelor's degree in computer science or a related quantitative field; strong knowledge of programming languages, operating systems, development tools, procedures, and methodologies applicable to test automation (such as Python, Java, JUnit, Playwright, and Selenium); hands-on experience developing and maintaining test automation frameworks; experience with manual test case and test scenario development and execution; fluency with REST interfaces and OpenAPI specs; experience with Agile software development principles and practices; experience with TestRail, Jira, and Confluence; experience with SQL programming or SQL databases; experience with CI/CD pipelines and associated tools such as Jenkins, Git, Gerrit, and Gradle; and hands-on experience with progressive test development techniques such as the Page Object Model (POM) and test parallelization.
Technologies you will work with on day one include: SAS 9.4 and Viya product suites; Azure, AWS, and VMware-based cloud computing environments (compute, storage, networking, etc.); external and integrated databases (Oracle, Postgres, MSSQL); containerized software applications and orchestration (Docker, Kubernetes); API-driven tools and software; ITIL management software (ServiceNow, CMDB, Jira, Git); middleware and web-based applications (tcserver, apache, JBoss, etc.); and commercial, open-source, and custom monitoring/alerting tools (Zabbix, Prometheus, Grafana). Preferences for this position include SAS 9.4 and Viya awareness; testing experience in Microsoft Azure Cloud; know-how in authentication techniques, their integration, and debugging; working knowledge of Python and Ansible for automation; ITIL v3/v4 Foundation Certification; awareness and usage of ITSM technologies (JIRA, ServiceNow); ability to travel up to 10% of the time; and multilingual proficiency (English is required).
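The Page Object Model the posting asks for can be illustrated with a minimal sketch. This is an assumption-laden example rather than code from the posting: a stub driver stands in for a real Selenium WebDriver so it runs without a browser, and every class, locator, and URL name is invented.

```python
# Minimal Page Object Model (POM) sketch. A stub driver stands in for a
# real Selenium WebDriver so the example runs without a browser; all
# names and locators below are illustrative.

class StubDriver:
    """Pretends to be a WebDriver: records navigation and typed values."""
    def __init__(self):
        self.url = None
        self.fields = {}

    def get(self, url):
        self.url = url

    def type_into(self, locator, text):
        self.fields[locator] = text


class LoginPage:
    """Page object: locators and page actions live here, not in tests."""
    URL = "https://example.test/login"
    USERNAME = "input#username"
    PASSWORD = "input#password"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        return self


driver = StubDriver()
LoginPage(driver).open().login("qa-user", "secret")
print(driver.url)                       # https://example.test/login
print(driver.fields["input#username"])  # qa-user
```

The payoff of the pattern is that when a locator changes, only the page object is edited, not every test that logs in; test parallelization then only requires each worker to get its own driver instance.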

Posted 3 days ago

Apply

4.0 - 8.0 years

9 - 14 Lacs

Chennai

Work from Office

Job Description

Immediate: A deep understanding of observability, preferably Dynatrace (or other tools, if well versed). Provisioning and setting up metrics in an observability tool (Dynatrace, Prometheus, Thanos, or Grafana), including alerts and silences. Development work (not just support and running scripts, but actual development) on Chef (basic syntax, recipes, cookbooks), Ansible (basic syntax, tasks, playbooks), or Terraform (basic syntax), and GitLab CI/CD (configuration, pipelines, jobs). Proficiency in scripting (Python, PowerShell, Bash, etc.); this becomes the enabler for automation. Proposes ideas and solutions within the Infrastructure Department to reduce the workload through automation. Cloud resource provisioning and configuration through CLI/API, especially Azure and GCP (AWS experience is also acceptable). Troubleshooting with an SRE approach and mindset. Provides emergency response, either by being on-call or by reacting to symptoms according to monitoring, with escalation when needed. Improves documentation all around, whether application documentation or runbooks, explaining the why and not stopping at the what. Root cause analysis and corrective actions. Strong concepts around scale and redundancy for design, troubleshooting, and implementation.

Mid Term: Kubernetes basic understanding, CLI, and service re-provisioning. Operating system (Linux) configuration, package management, startup, and troubleshooting. System architecture and design: plan, design, and execute solutions to reach specific goals agreed within the team.

Long Term: Block and object storage configuration. Networking: VPCs, proxies, and CDNs.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
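The metric-and-alert provisioning this posting describes often ends in a script that evaluates a Prometheus query result and decides whether to alert. A small sketch of that last step: the payload below mimics the JSON shape returned by Prometheus's HTTP API (GET /api/v1/query); in practice you would fetch it over HTTP, and the instance names and threshold here are invented.

```python
# Evaluate a Prometheus instant-query result and pick out instances that
# breach a threshold. The payload is a canned example shaped like a
# Prometheus /api/v1/query "vector" response; values arrive as strings.
import json

payload = json.loads("""
{"status": "success",
 "data": {"resultType": "vector",
          "result": [
            {"metric": {"instance": "web-1"}, "value": [1700000000, "0.93"]},
            {"metric": {"instance": "web-2"}, "value": [1700000000, "0.41"]}]}}
""")

def instances_over(payload, threshold):
    """Return the instances whose sampled value exceeds the threshold."""
    hits = []
    for sample in payload["data"]["result"]:
        ts, value = sample["value"]   # [unix_timestamp, "value-as-string"]
        if float(value) > threshold:
            hits.append(sample["metric"]["instance"])
    return hits

print(instances_over(payload, 0.9))  # ['web-1']
```

A script like this is the glue between "provisioning metrics" and "alerts and silences": the same check can run from cron or a CI job and page only when the query crosses the line.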

Posted 4 days ago

Apply

7.0 - 12.0 years

40 - 70 Lacs

Bengaluru

Work from Office

Hiring Now: Technical Program Manager & Agile Coach / Scrum Master. Location: Bangalore, India. Company: HiLabs, innovating healthcare data using cutting-edge AI & SaaS. Role: Lead global scrum teams focused on software & data engineering. Responsibilities: Drive cross-functional program execution & Agile coaching. Agile expertise: Scrum, SAFe, sprint planning, backlog refinement. Tools: Proficient in JIRA for workflow, backlog, and metrics tracking. Program management: Define scope, timelines, risk & dependency management. Agile improvement: Conduct training, workshops, & retrospectives to boost team maturity. Collaboration: Work with Product Managers, Engineers & stakeholders for smooth delivery. Experience required: 5+ years Scrum/Agile, JIRA, customer-facing role. Certifications: Certified Scrum Master (CSM) mandatory; SAFe preferred. Educational background: Engineering degree required; Masters/MBA a plus. Domain knowledge: Experience or interest in the US healthcare domain is advantageous. Perks: Competitive salary, stock options, professional development, and great work culture. Interested? Apply now to join HiLabs and be part of transforming healthcare data! #Hiring #AgileCoach #ScrumMaster #TechnicalProgramManager #BangaloreJobs #HealthcareTech #HiLabs

Join HiLabs as a Scrum Master shaping healthcare AI solutions. Lead agile teams to improve healthcare data quality and reduce costs. Work with cutting-edge explainable AI and healthcare ontologies. Collaborate with healthcare experts, data scientists, and engineers. Help develop and implement HiLabs' core platform, MCheck. Be part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.
Job Title: Technical Project Manager (TPM) - Scrum Master & Agile Coach. Job Location: Bangalore, India. Job summary: We are seeking a highly skilled Technical Project Manager (TPM) with strong hands-on experience in full-stack development and cloud infrastructure to lead the successful planning, execution, and delivery of technical projects. The ideal candidate will have a strong background in React, Java, Spring Boot, Python, and AWS, and will work closely with cross-functional teams including developers, QA, DevOps, and product stakeholders. As a TPM, you will play a critical role in bridging technical and business objectives, ensuring timelines, quality, and scalability across complex software projects. Responsibilities: Own and drive the end-to-end lifecycle of technical projects, from initiation to deployment and post-launch support. Collaborate with development teams and stakeholders to define project scope, goals, deliverables, and timelines. Act as a hands-on contributor when needed, with the ability to guide and review code and architecture decisions. Coordinate cross-functional teams across front-end (React), back-end (Java/Spring Boot, Python), and AWS cloud infrastructure. Manage risk, change, and issue resolution in a fast-paced agile environment. Ensure projects follow best practices around version control, CI/CD, testing, deployment, and monitoring. Deliver detailed status updates, sprint reports, and retrospectives to leadership and stakeholders. Desired Profile: Strong hands-on expertise in React, Java & Spring Boot, and Python. Extensive experience with AWS services such as EC2, S3, Lambda, CloudWatch, and others. Proven ability to lead agile/Scrum teams with a solid understanding of the software development lifecycle (SDLC). Excellent communication, organizational, and interpersonal skills to collaborate effectively with diverse teams and stakeholders. Preferred Qualifications: Experience designing and managing microservices architectures.
Familiarity with messaging systems like Kafka or equivalent platforms. Knowledge of CI/CD pipelines, deployment strategies, and application monitoring tools such as Prometheus, Grafana, and CloudWatch. Practical experience with containerization tools like Docker and orchestration platforms such as Kubernetes.

Posted 4 days ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Pune

Hybrid

We are hiring a "GCP DevOps" engineer for one of our IT Services & Consulting MNC clients. Experience: 6+ years. Mode: Permanent. Location: Pune. Skills: Cloud Infrastructure & Automation: Design and implement scalable, cloud-native solutions on Google Cloud Platform (GCP). Develop and manage Infrastructure as Code (IaC) using tools like Terraform, Ansible, or CloudFormation. Automate CI/CD pipelines using Jenkins, GitLab CI, or Travis CI. Containerization & Orchestration: Implement Docker containers and manage orchestration with Kubernetes. Monitoring & Logging: Use tools like Prometheus, Grafana, and the ELK stack for system monitoring and logging. Security & Optimization: Strengthen cloud security practices and optimize services across GCP, AWS, and Azure.

Posted 4 days ago

Apply

6.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

6 to 10 years of experience as a Machine Learning Researcher or Data Scientist. Graduate in Engineering/Technology along with good business skills. Good applied statistics skills, such as distributions, statistical testing, regression, etc. Excellent understanding of machine learning techniques and algorithms, including knowledge about LLMs. Experience with NLP. Good scripting and programming skills in Python. Basic understanding of NoSQL databases, such as MongoDB, Cassandra. Nice to have: Exposure to the financial research domain. Experience with JIRA, Confluence. Understanding of scrum and Agile methodologies. Experience with data visualization tools, such as Grafana, GGplot, etc. Soft skills: Oral and written communication skills. Good problem solving and negotiation skills. Passion, curiosity and attention to detail.
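The "applied statistics … regression" requirement above can be made concrete with a minimal ordinary-least-squares fit for y = a + b*x in pure Python (standard library only); the sample data is made up for illustration.

```python
# Ordinary least squares for a simple linear model y = a + b*x.
# slope b = cov(x, y) / var(x); intercept a = mean_y - b * mean_x.
def ols_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x with noise
a, b = ols_fit(xs, ys)
print(round(a, 2), round(b, 2))   # 0.05 1.99
```

In practice one would reach for numpy or statsmodels, but the closed-form version is the piece of statistics the interview for such a role typically probes.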

Posted 4 days ago

Apply

6.0 - 11.0 years

9 - 13 Lacs

Bengaluru

Work from Office

We are looking for a Senior Cloud Operations DBA to manage, optimize, and ensure the reliability of cloud-based databases in a 24/7 production environment. The ideal candidate will have strong experience in AWS RDS, PostgreSQL, MySQL, and NoSQL databases, with a focus on performance tuning, high availability, backup strategies, and disaster recovery. Key Responsibilities Manage, monitor, and maintain cloud-based databases for high availability, security, and performance. Analyze and optimize database queries, indexing, and configuration for better efficiency. Implement and maintain robust backup and disaster recovery strategies. Automate repetitive database operations using scripting languages (Python, Shell, SQL). Ensure database compliance with ISO 27001, SOC 2, and other security/audit requirements. Troubleshoot and resolve issues, collaborating with CloudOps, DevOps, and Engineering teams. Plan and execute database upgrades, schema migrations, and replication strategies. Set up and manage proactive monitoring using Grafana, Prometheus, CloudWatch, and Splunk. Requirements 6+ years of experience in database administration, with a strong cloud-based background. Hands-on experience with AWS RDS (PostgreSQL, MySQL, DynamoDB). Proficient in SQL performance tuning, indexing, and debugging. Experience with Infrastructure as Code (IaC) tools like Terraform for database management. Strong scripting skills in Python, Bash, or PowerShell. Willingness to work APAC and EMEA shifts with on-call rotation. Expertise in high availability, clustering, and replication technologies. Understanding of cloud networking, IAM roles, and security best practices. Excellent troubleshooting and problem-solving abilities in cloud environments. Preferred Skills Experience with NoSQL databases (MongoDB, DynamoDB, Cassandra). Exposure to Kubernetes, containerization, and serverless architecture. Experience integrating databases with CI/CD pipelines and DevOps workflows. 
Knowledge of observability tools for database monitoring and proactive alerting. Cloud cost optimization and advanced performance tuning skills. Familiarity with incident response and resolution best practices.
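One concrete slice of the query-tuning and indexing work this DBA posting describes can be demonstrated with SQLite's EXPLAIN QUERY PLAN (the stdlib sqlite3 module, so the demo is self-contained; the schema and data are invented). Adding an index turns a full table scan into an index search:

```python
# Show the query plan for the same query before and after adding an
# index on the filtered column. Plan wording varies slightly between
# SQLite versions, hence the "e.g." in the comments.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[0][-1])  # e.g. 'SCAN orders'
print(after[0][-1])   # e.g. 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```

The same before/after discipline applies to the PostgreSQL and MySQL work in the posting, via EXPLAIN and EXPLAIN ANALYZE.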

Posted 4 days ago

Apply

12.0 - 15.0 years

7 - 11 Lacs

Bengaluru

Work from Office

This role combines leadership in managing cloud infrastructure with customer-focused incident response in a SaaS environment. The ideal candidate has a strong background in AWS cloud platforms, containerized workloads, and leading customer support teams. You'll also act as the primary escalation point for infrastructure and application performance issues. Cloud Operations: Ensure 99.9%+ uptime for AWS-hosted SaaS platforms. Manage and maintain cloud infrastructure, including incident response and disaster recovery planning. Collaborate with DevOps, Engineering, IT, and Security teams to deploy, monitor, and optimize services. Proactively resolve issues related to infrastructure and application scalability and reliability. Establish strong operational practices: incident management, root cause analysis, and preventive action planning. Technical Support: Lead a support operations team focused on infrastructure and application-related technical issues. Act as the point of escalation for complex, high-priority customer incidents. Ensure SLAs and KPIs are met or exceeded. Continuously improve support processes: ticket handling, escalation paths, and customer responsiveness. Work closely with Customer Success and Professional Services for a unified customer experience. Leadership and Strategy: Manage, mentor, and grow a team of support engineers and cloud operations specialists. Continuously assess and improve tooling, operational processes, and technologies. Provide regular operations updates to senior leadership, highlighting KPIs and key trends. Translate business and customer needs into operational improvements. Qualifications (Required): Bachelor's degree in Computer Science, IT, or a related field, or equivalent experience. 12+ years of relevant experience, including 3+ years in a managerial role. Expertise in AWS and SaaS architecture. Hands-on experience with monitoring tools (Datadog, Prometheus, Grafana, etc.)
and incident management systems (ServiceNow, Zendesk, PagerDuty, Opsgenie). Proficient in SQL and experience with databases. Strong understanding of DevOps, CI/CD, and infrastructure-as-code (Terraform, Ansible). Proven track record of achieving high uptime, SLA adherence, and customer satisfaction. Experience managing 24x7 cloud operations in remote or hybrid environments. Strong problem-solving skills and ability to thrive in high-pressure situations. Excellent communication skills across technical and non-technical stakeholders. Willingness to work in APAC and EMEA time zones. Preferred Certifications AWS Professional Certifications Linux System Administration Certifications ITIL Certifications Kubernetes Administrator Certifications What We Offer Comprehensive health and wellness plans Paid time off and company holidays Shift allowances Flexible and remote-friendly work options
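The "99.9%+ uptime" target in this posting implies a concrete downtime budget, which is worth being able to compute on the spot. A quick error-budget calculation (pure arithmetic, no external services assumed):

```python
# Convert an availability SLA into minutes of allowed downtime.
# 99.9% over a 30-day month leaves 0.1% of 43,200 minutes = 43.2 minutes.
def downtime_budget_minutes(sla_percent, days):
    """Minutes of allowed downtime for a given SLA over a period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(downtime_budget_minutes(99.9, 30), 1))   # 43.2  per 30-day month
print(round(downtime_budget_minutes(99.9, 365), 1))  # 525.6 per year
```

Framed as an error budget, that 43 minutes a month is what incident management, change windows, and deploy risk all have to fit inside.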

Posted 4 days ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

As a Cloud Operations Engineer, you will play a crucial role in managing and supporting the infrastructure required to host our products on the AWS cloud. Your ability to resolve product-related issues, automate tasks, and optimize performance will ensure seamless and efficient hosting operations. Product Issue Resolution: Resolve technical challenges identified via monitoring tools or reported by customers. Address product hosting, infrastructure, networking, and security issues. Collaborate with development teams and clients to maintain high availability and performance. Collaboration: Work cross-functionally with DevOps, product, and client teams. Participate in incident, change, and problem management using ITIL practices. Infrastructure Setup & Automation: Set up and maintain infrastructure components on multi-vendor cloud platforms. Use GIT, Jenkins, Terraform, and scripting to automate deployments and configurations. Cloud Optimization: Monitor and enhance system health using tools like DataDog, NewRelic, Splunk, and Prometheus. Troubleshoot and resolve performance issues to ensure uptime and scalability. Task Automation: Use scripting and automation tools to handle backups, scaling, monitoring, and routine maintenance tasks. Documentation & Reporting: Maintain accurate records of infrastructure, configs, best practices. Share regular reports on performance, issues, and improvements with stakeholders. Qualifications, Strengths, and Skills Experience: 6 to 10 years in cloud operations or related roles. Linux Expertise: Strong command-line and Linux admin skills. Experience with JVMs, heap dumps, patching, performance tuning, and installations/upgrades. Cloud Support Ops: Hands-on experience in technical customer support, issue resolution, and Root Cause Analysis (RCA) documentation. SaaS & AWS Experience: Deep knowledge of AWS services (EC2, ECS, EKS, IAM, S3, CloudWatch, RDS, etc.) and hosting enterprise-grade SaaS applications. 
Troubleshooting: Ability to quickly identify and resolve infrastructure or hosting issues. Cloud Monitoring: Experience with monitoring tools like Datadog, Splunk, Prometheus, and Grafana. Infrastructure Management: Familiarity with CI/CD tools (Jenkins), Infrastructure-as-Code (Terraform), and scripting (Bash, Python, etc.). Kubernetes: Hands-on experience with Kubernetes and containerization tools like Docker is highly desirable.
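The task-automation duties above (backups, scaling, routine maintenance) almost always need retry logic around flaky cloud calls. A minimal exponential-backoff helper, as a sketch: the flaky task and delay numbers are invented, and sleeping is stubbed out so the example runs instantly.

```python
# Retry a callable with exponential backoff plus a little jitter.
# The sleep function is injectable so tests (and this demo) can run
# without actually waiting.
import random

def retry_with_backoff(task, attempts=5, base_delay=1.0, sleep=lambda s: None):
    """Run task(); on failure wait base_delay * 2**n (plus jitter) and retry."""
    for n in range(attempts):
        try:
            return task()
        except Exception:
            if n == attempts - 1:
                raise                       # out of attempts: propagate
            sleep(base_delay * 2 ** n + random.uniform(0, 0.1))

calls = {"n": 0}
def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:                      # fail twice, then succeed
        raise RuntimeError("transient error")
    return "backup complete"

print(retry_with_backoff(flaky_task))  # backup complete
print(calls["n"])                      # 3
```

In production the sleep argument would default to time.sleep, and the except clause would be narrowed to the transient error types of the API being called.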

Posted 4 days ago

Apply

1.0 - 6.0 years

8 - 13 Lacs

Pune

Work from Office

Cloud Observability Administrator

Pune, India | Enterprise IT - 22685

Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS is looking for a Cloud Observability Administrator to join our team in Pune. As a Cloud Observability Administrator, you will work on configuration of various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and utilize past experience to design a flexible and scalable solution, and collaborate with other team members (involved in the requirements gathering, testing, roll-out, and operations phases) to ensure seamless transitions.

What You'll Do: Deploy, manage, and operate scalable, highly available, and fault-tolerant Splunk architecture. Onboard various kinds of log sources, such as Windows/Linux/firewalls/network devices, into Splunk. Develop alerts, dashboards, and reports in Splunk. Write complex SPL queries. Manage and administer a distributed Splunk architecture. Very good knowledge of the configuration files used in Splunk for data ingestion and field extraction. Perform regular upgrades of Splunk and relevant apps/add-ons. Possess a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc. Automate manual tasks using Shell/PowerShell scripting; knowledge of Python scripting is a plus. Good knowledge of Linux commands to manage administration of servers.
What You'll Bring: 1+ years of experience in Splunk development & administration. Bachelor's degree in CS, EE, or a related discipline. Strong analytic, problem-solving, and programming ability. 1-1.5 years of relevant consulting-industry experience working on medium-large scale technology solution delivery engagements. Strong verbal, written, and team presentation communication skills, with the ability to articulate results and issues to internal and client teams. Proven ability to work creatively and analytically in a problem-solving environment. Ability to work within a virtual global team environment and contribute to the overall timely delivery of multiple projects. Knowledge of observability tools such as Cribl, Datadog, and PagerDuty is a plus. Knowledge of AWS Prometheus and Grafana is a plus. Knowledge of APM concepts is a plus. Knowledge of Linux/Python scripting is a plus. Splunk Certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.
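The Splunk field-extraction work this posting emphasizes is, at its core, a named-capture regex applied to raw events. The same idea in Python's stdlib re module, as an illustration only: the log format, field names, and pattern below are invented, not Splunk configuration.

```python
# Splunk-style field extraction sketch: pull named fields out of a raw
# event line with a named-capture regex (what a transforms.conf/props.conf
# extraction does under the hood). Event format is invented.
import re

EVENT = '10.0.0.7 - - [12/Mar/2024:10:15:32 +0000] "GET /health HTTP/1.1" 200 512'

PATTERN = re.compile(
    r'(?P<src_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) [^"]+" (?P<status>\d+) (?P<bytes>\d+)'
)

fields = PATTERN.match(EVENT).groupdict()
print(fields["src_ip"], fields["method"], fields["status"])  # 10.0.0.7 GET 200
```

Prototyping extractions this way before committing them to Splunk configuration files makes it cheap to verify that every field lands where the dashboards and alerts expect it.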

Posted 4 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are looking for an experienced Senior DevOps Engineer to join our innovative and fast-paced team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD pipelines, and automation. This role offers the opportunity to work with cutting-edge tools and technologies such as AWS, Docker, Kubernetes, Terraform, and Jenkins, while driving the operational efficiency of our development processes in a collaborative environment. Key Responsibilities: Infrastructure as Code: Design, implement, and manage scalable, secure infrastructure using tools like Terraform, Ansible, and CloudFormation. Cloud Management: Deploy and manage applications on AWS, leveraging cloud-native services for performance, cost efficiency, and reliability. CI/CD Pipelines: Develop, maintain, and optimize CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to ensure smooth software delivery. Containerization & Orchestration: Build and manage containerized environments using Docker and orchestrate deployments with Kubernetes. Monitoring & Logging: Implement monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack) to ensure system reliability and quick troubleshooting. Automation: Develop scripts and tools to automate routine operational tasks, focusing on efficiency and scalability. Security & Compliance: Ensure infrastructure and applications meet security best practices and compliance standards. Collaboration: Work closely with development teams to align infrastructure and deployment strategies with business needs. Incident Management: Troubleshoot production issues, participate in on-call rotations, and ensure high availability of systems. Documentation: Maintain clear and comprehensive documentation for infrastructure, processes, and configurations. Required Qualifications: Extensive experience in DevOps or Site Reliability Engineering (SRE) roles. Strong expertise with AWS or other major cloud platforms. 
Proficiency in building and managing CI/CD pipelines. Hands-on experience with Docker and Kubernetes. In-depth knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible. Familiarity with monitoring tools such as Prometheus, Grafana, or New Relic. Strong scripting skills in Python, Bash, or similar languages. Understanding of network protocols, security best practices, and system architecture. Experience in scaling infrastructure to support high-traffic, mission-critical applications. Preferred Skills: Knowledge of multi-cloud environments and hybrid cloud setups. Experience with service mesh technologies (e.g., Istio, Consul). Familiarity with database management in cloud environments. Strong problem-solving skills and a proactive mindset. Ability to mentor junior team members and lead by example. Experience working in Agile/Scrum environments. Skills: AWS, CI/CD pipelines, Jenkins, Git, ELK, Docker, Kubernetes, Terraform

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Noida, Bengaluru

Work from Office

Description: We’re looking for a seasoned DevOps Engineer who thrives in a fast-paced, highly collaborative environment and can help us scale and optimize our infrastructure. You’ll play a critical role in building robust CI/CD pipelines, automating cloud operations, and ensuring high availability across our services. If you’re AWS certified, hands-on with Chef, and passionate about modern DevOps practices, we want to hear from you. Requirements: • 8–12 years of hands-on DevOps/Infrastructure Engineering experience. • Proven expertise in Chef for configuration management. • AWS Certified Solutions Architect – Associate or Professional (Required). • Strong scripting skills in Python, Shell, YAML, or Unix scripting. • In-depth experience with Terraform for infrastructure as code (IAC). • Docker and Kubernetes production-grade implementation experience. • Deep understanding of CI/CD processes in microservices environments. • Solid knowledge of monitoring and logging frameworks (e.g., ELK, Prometheus). Job Responsibilities: • Design and implement scalable CI/CD pipelines using modern DevOps tools and microservices architecture. • Automate infrastructure provisioning and configuration using Terraform, Chef, and CloudFormation (if applicable). • Work closely with development teams to streamline build, test, and deployment processes. • Manage and monitor infrastructure using tools like ELK Stack, Prometheus, Grafana, and New Relic. • Maintain and scale Docker/Kubernetes environments for high-availability applications. • Support cloud-native architectures with Lambda, Step Functions, and DynamoDB. • Ensure secure, compliant, and efficient cloud operations within AWS. What We Offer: Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. 
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!

Posted 4 days ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at . Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development), non-functional testing (performance, stress, soak), solution and security testing, and much more. Be part of the vision to ship top-class EDR solutions for on-prem, cloud, or hybrid customers. Job Description About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is needed. Execute, monitor, and debug automation runs. Author automation code to improve coverage across the board. Willing to explore and increase understanding of on-prem infrastructure. About you: 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality, and nitty-gritty. Familiarity with Unix/Linux (preferably Red Hat variants).
Proficiency in Python, PyTest, Behave, Robot Framework, Selenium, and Bash/shell scripting. CI/CD pipeline integration (Jenkins, GitLab CI, GitHub Actions). Tools: JMeter, Gatling, Locust, or custom scripts to simulate high-volume telemetry data. Ability to work with security engineers, DevOps, and developers to define test criteria. Creating clear test plans, bug reports, and reproducibility steps. The following are good-to-have: Familiarity with parsing/ingesting data formats like JSON, Syslog, etc. Familiarity with virtualization technologies (e.g., Vagrant, VirtualBox). Familiarity with cloud environments like AWS/GCP. Understanding of container technologies (Docker, Docker Compose, etc.). Ability to design/test search queries, dashboards, and alerting (OpenSearch Dashboards/Kibana). Experience validating cluster health, scalability, and performance under load. Experience with on-prem environments (networking, firewalls, hardware constraints). Experience with tools like Prometheus, Grafana, or ELK/OpenSearch for monitoring pipelines.
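One of the listed tasks is using custom scripts to simulate high-volume telemetry. A small Python sketch of a seeded event generator follows; the field names and event types are invented for illustration, not a real EDR schema:

```python
import json
import random

def telemetry_events(n: int, seed: int = 42):
    """Yield n synthetic EDR-style telemetry events as JSON strings."""
    rng = random.Random(seed)  # seeded, so every run produces identical data
    hosts = [f"host-{i:03d}" for i in range(10)]
    event_types = ["process_start", "file_write", "network_connect"]
    for i in range(n):
        yield json.dumps({
            "id": i,
            "host": rng.choice(hosts),
            "type": rng.choice(event_types),
            "severity": rng.randint(1, 10),
        })

events = [json.loads(e) for e in telemetry_events(1000)]
print(len(events), {e["type"] for e in events} <= set(["process_start", "file_write", "network_connect"]))  # 1000 True
```

Seeding the generator keeps load runs reproducible, so automated checks can assert on exactly what was sent.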

Posted 4 days ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development), non-functional testing (performance, stress, soak), solution and security testing, and much more. Be part of the vision to ship top-class EDR solutions for on-prem, cloud, or hybrid customers. About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is needed. Execute, monitor, and debug automation runs. Author automation code to improve coverage across the board. Willing to explore and increase understanding of on-prem infrastructure. About you: 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality, and nitty-gritty. Familiarity with Unix/Linux (preferably Red Hat variants). Proficiency in Python, PyTest, Behave, Robot Framework, Selenium, and Bash/shell scripting. CI/CD pipeline integration (Jenkins, GitLab CI, GitHub Actions). Tools: JMeter, Gatling, Locust, or custom scripts to simulate high-volume telemetry data.
Ability to work with security engineers, DevOps, and developers to define test criteria. Creating clear test plans, bug reports, and reproducibility steps. The following are good-to-have: Familiarity with parsing/ingesting data formats like JSON, Syslog, etc. Familiarity with virtualization technologies (e.g., Vagrant, VirtualBox). Familiarity with cloud environments like AWS/GCP. Understanding of container technologies (Docker, Docker Compose, etc.). Ability to design/test search queries, dashboards, and alerting (OpenSearch Dashboards/Kibana). Experience validating cluster health, scalability, and performance under load. Experience with on-prem environments (networking, firewalls, hardware constraints). Experience with tools like Prometheus, Grafana, or ELK/OpenSearch for monitoring pipelines. Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees. Retirement Plans; Medical, Dental and Vision Coverage; Paid Time Off; Paid Parental Leave; Support for Community Involvement

Posted 4 days ago

Apply

9.0 - 14.0 years

6 - 9 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate to work on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions. About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Mentor and lead junior engineers and drive the team to meet and beat expected outcomes. Continuously look for optimization in automation cycles and come up with solutions for gap areas. Work closely with developers/architects and set the quality bar for the team. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is a must. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Automate using popular frameworks suitable for backend code, APIs, and frontend. Hands-on experience with automation programming languages (Python, Go, Java, etc.) is a must. Execute, monitor, and debug automation runs. Willing to explore and increase understanding of cloud/on-prem infrastructure. About you: 9-15 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality. Solid fundamentals in any programming language (preferably Python or Go). Sound knowledge of popular automation frameworks such as Selenium, Playwright, Postman, PyTest, etc.
Hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must. RESTful API testing using tools such as Postman or similar is a must. Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is an added advantage. Exposure to Kubernetes, Docker, Helm, and GitOps is a must. Strong foundational knowledge of working on Linux-based systems. Hands-on experience with non-functional testing, such as performance and load, is desirable. Some proficiency with Prometheus, Grafana, service metrics, and analysis is highly desirable. Understanding of cyber security concepts would be helpful. Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees. Retirement Plans; Medical, Dental and Vision Coverage; Paid Time Off; Paid Parental Leave; Support for Community Involvement
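For the non-functional (performance and load) work mentioned above, latency percentiles are the usual currency of reports from tools like JMeter or Locust. A tiny nearest-rank percentile helper in Python, with made-up sample numbers:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, a common convention in load-test reports."""
    if not samples or not 0 < p <= 100:
        raise ValueError("need samples and 0 < p <= 100")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Response times in milliseconds from a hypothetical load run.
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 500]
print(percentile(latencies_ms, 50), percentile(latencies_ms, 95))  # 14 500
```

The spread between the median (14 ms) and p95 (500 ms) is exactly the kind of tail-latency signal that averages hide, which is why percentiles dominate performance criteria.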

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

maharashtra

On-site

As a Node.js Engineer based in Bangalore or Mumbai, India, with at least 7 years of experience, you will be responsible for developing and implementing complex Node.js applications for web and mobile platforms. Your primary focus will be on building back-end applications, designing scalable APIs, and integrating third-party APIs. You must have proficiency in Node.js, JavaScript, and PostgreSQL, along with experience in monitoring tools like Grafana, and in Docker and PM2 for deployment and process management. Familiarity with Azure cloud services, Kafka, Redis, and basic Linux commands is essential. Your objectives in this role will include developing high-performing applications using Node.js, collaborating with front-end developers, and ensuring the application's performance and scalability. Strong problem-solving skills, excellent communication abilities, and a bachelor's degree in Software Engineering or Computer Science are required. You should also have experience with front-end technologies, database technologies, and web development frameworks. Preferred qualifications include relevant Node.js certifications, experience with cloud-based infrastructure, familiarity with front-end development frameworks, and knowledge of test-driven development. Day to day, you will design and implement server-side applications, write maintainable code, build APIs, and integrate them with third-party services to meet client requirements. Overall, you will play a crucial part in creating fast, robust, scalable, and high-performance web applications on Node.js frameworks that power web and mobile platforms.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

As an Observability SME, you play a crucial role that demands a combination of technical expertise, leadership skills, and a deep understanding of observability practices. Your primary responsibility is to lead a team of skilled engineers in the development and maintenance of cutting-edge observability solutions. These solutions enable comprehensive monitoring and analysis of systems and applications. Your role involves working closely with application teams, enhancing the observability platform (Dynatrace SaaS) by integrating metrics, errors, logs, and traces from various sources to derive predictive intelligence from the data. Collaboration is key in your role. You will collaborate with cross-functional teams to understand their value chain and to evaluate and adopt emerging technologies and best practices that enhance system observability. Defining and implementing comprehensive monitoring and alerting strategies for complex distributed systems is a crucial aspect of your role. You will work with tools teams and application teams to establish and enforce best practices for logging, tracing, and monitoring. Selecting and implementing observability tools and technologies that align with the organization's goals and requirements is another significant responsibility. Staying updated with industry trends and advancements in observability ensures that our systems leverage the latest innovations. Identifying and addressing performance bottlenecks and inefficiencies in collaboration with development and operations teams is essential for enhancing system reliability and responsiveness. In incident response and troubleshooting, you will collaborate with incident response teams to diagnose and resolve production issues related to observability. Developing and maintaining incident response playbooks streamlines troubleshooting processes. As a mentor to your team, fostering a culture of continuous learning is crucial.
Providing mentorship and training to enhance team members' technical skills and knowledge contributes to the team's overall growth. To excel in this role, you should have a strong background in distributed systems and cloud technologies, and proficiency in tools like Dynatrace, Prometheus, Grafana, the ELK stack, or similar. In-depth knowledge of distributed systems, microservices architecture, and cloud platforms is essential. Exceptional communication skills are crucial for effectively conveying complex technical concepts to both technical and non-technical stakeholders. Expertise in scripting and programming languages (e.g., Python, Go, Java) is required. Experience with containerization and orchestration technologies (Docker, Kubernetes) is considered a plus. Proficiency in monitoring tools, incident management, and other relevant technologies is expected. Strong communication skills to collaborate with diverse teams and convey technical information to non-technical stakeholders are essential. A problem-solving mindset with the ability to make sound decisions under pressure is highly valued in this role.
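Alerting strategies of the kind described above usually require a sustain window so that a single noisy sample does not page anyone. A minimal Python sketch of that idea follows; the threshold, window, and samples are illustrative, and in a real deployment this logic would live in the monitoring tool itself (for example, the `for:` clause of a Prometheus alerting rule):

```python
def should_alert(samples: list[float], threshold: float, sustained: int) -> bool:
    """Fire only when the last `sustained` samples all exceed `threshold`,
    mimicking a sustain window to avoid flapping on transient spikes."""
    if len(samples) < sustained:
        return False  # not enough history to judge yet
    return all(v > threshold for v in samples[-sustained:])

# Hypothetical CPU-utilization samples; one cold start, then sustained load.
cpu = [0.42, 0.95, 0.91, 0.97, 0.96]
print(should_alert(cpu, threshold=0.9, sustained=3))  # True: last 3 all above 0.9
print(should_alert(cpu, threshold=0.9, sustained=5))  # False: window includes 0.42
```

The trade-off is detection latency versus noise: a longer window pages later but far less often, which is the judgment call an alerting strategy has to make per service.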

Posted 4 days ago

Apply

5.0 - 12.0 years

0 Lacs

noida, uttar pradesh

On-site

ExertSense Solutions is a team of software professionals with over a decade of IT experience, dedicated to providing robust, scalable, reliable, secured, and affordable technology solutions globally. We offer services such as Enterprise Software Development, Mobile App Development, System and Server Migration, Dedicated Software Development teams, and IT Consultancy. This is a full-time hybrid role for a Generative Artificial Intelligence Architect at HCL Tech located in Noida, with flexibility for some remote work. As the Gen AI Architect, you will be responsible for implementing AI solutions, developing algorithms, analyzing data, and creating machine learning models to drive business outcomes. You should have a minimum of 12 years of experience in IT. Experience in cloud and AI architecture design for at least 5 years and software development for 3 years is mandatory for this role. Proficiency in Python, Java (and/or Golang), and Spring is required, along with expertise in AWS, Azure, Google Cloud, Kubernetes, and containerization. You should have AI expertise in machine learning model development, GenAI models (e.g., GPT, BERT, DALL-E, GEMINI), and NLP techniques. Additionally, experience with Hadoop and Spark, knowledge of AI ethics and governance, and familiarity with Agile and Scrum project management methodologies are essential. Desired skills and experience include API development and integration, knowledge of data storage solutions (SQL, NoSQL), AI model optimization and scaling, model evaluation and validation, monitoring and logging (Prometheus, ELK Stack), GenAI model utilization and MLOps, proficiency in Data Engineering for AI, including preprocessing, feature engineering, and pipeline creation, expertise in AI model fine-tuning, evaluation, and bias mitigation, and understanding of serverless computing, distributed systems, deep learning frameworks (TensorFlow, PyTorch), and emerging technology trends.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Systems Engineer specializing in Data DevOps/MLOps, you will play a crucial role in our team by leveraging your expertise in data engineering, automation for data pipelines, and operationalizing machine learning models. This position requires a collaborative professional who can design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment. You will be responsible for building and maintaining infrastructure for data processing and model training using cloud-native tools and services. Your role will involve automating processes for data validation, transformation, and workflow orchestration, ensuring seamless integration of ML models into production. You will work closely with data scientists, software engineers, and product teams to optimize performance and reliability of model serving and monitoring solutions. Managing data versioning, lineage tracking, and reproducibility for ML experiments will be part of your responsibilities. You will also identify opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience. Implementing security measures to safeguard data integrity and ensure regulatory compliance will be crucial, along with diagnosing and resolving issues throughout the data and ML pipeline lifecycle. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, along with 4+ years of experience in Data DevOps, MLOps, or similar roles. Proficiency in cloud platforms like Azure, AWS, or GCP is required, as well as competency in using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Expertise in containerization and orchestration technologies like Docker and Kubernetes is essential, along with a background in data processing frameworks such as Apache Spark or Databricks. 
Skills in Python programming, including proficiency in data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch, are necessary. Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions, as well as understanding version control tools like Git and MLOps platforms such as MLflow or Kubeflow, will be valuable. Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana), strong problem-solving skills, and the ability to contribute independently and within a team are also required. Excellent communication skills and attention to documentation are essential for success in this role. Nice-to-have qualifications include knowledge of DataOps practices and tools like Airflow or dbt, an understanding of data governance concepts and platforms like Collibra, and a background in Big Data technologies like Hadoop or Hive. Qualifications in cloud platforms or data engineering would be an added advantage.
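The automated data-validation responsibility described above can be pictured with a toy row-level check. In practice a pipeline would more likely use a dedicated library (such as Great Expectations or pandera), so treat this stdlib-only version, with an invented schema, as a sketch of the idea:

```python
def validate_rows(rows: list[dict], schema: dict[str, type]):
    """Split rows into (valid, errors) against a {column: type} schema,
    the kind of lightweight gate a pipeline runs before model training."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        problems = [f"{col}: expected {t.__name__}"
                    for col, t in schema.items()
                    if not isinstance(row.get(col), t)]
        if problems:
            errors.append({"row": i, "problems": problems})
        else:
            valid.append(row)
    return valid, errors

schema = {"user_id": int, "amount": float}          # illustrative schema
rows = [{"user_id": 1, "amount": 9.5},
        {"user_id": "2", "amount": 3.0}]            # second row is malformed
valid, errors = validate_rows(rows, schema)
print(len(valid), errors)  # 1 [{'row': 1, 'problems': ['user_id: expected int']}]
```

Returning structured errors rather than raising on the first bad row lets the pipeline quarantine malformed records and keep the run going, which matters at data-platform scale.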

Posted 4 days ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As a Java Developer specializing in observability and telemetry, your role will involve designing, developing, and maintaining Java-based microservices and applications. Your focus will be on implementing best practices for instrumenting, collecting, analyzing, and visualizing telemetry data to monitor and troubleshoot system behavior and performance. Collaboration with cross-functional teams will be key as you integrate observability solutions into the software development lifecycle, including CI/CD pipelines and automated testing frameworks. By driving improvements in system reliability, scalability, and performance through data-driven insights and continuous feedback loops, you will play a crucial role in ensuring our systems remain innovative. Staying up-to-date with emerging technologies and industry trends in observability, telemetry, and distributed systems is essential to keep our systems at the forefront of innovation. Moreover, mentoring junior developers and providing technical guidance and expertise in observability and telemetry practices will be part of your responsibilities. To excel in this role, you should have a Bachelor's or master's degree in computer science, engineering, or a related field, along with over 10 years of professional experience in software development with a strong emphasis on Java programming. Expertise in observability and telemetry tools such as Prometheus, Grafana, Jaeger, ELK stack (Elasticsearch, Logstash, Kibana), and distributed tracing is required. A solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies (AWS, Azure, GCP) is essential. You should also demonstrate proficiency in designing and implementing scalable, high-performance, and fault-tolerant systems, coupled with strong analytical and problem-solving skills to troubleshoot complex issues effectively. 
Excellent communication and collaboration skills are paramount for success in this role, enabling you to work efficiently in a fast-paced, agile environment. Experience with Agile methodologies and DevOps practices would be advantageous in fulfilling your responsibilities effectively.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Support Engineer at ServerGuy, located in Noida, you will be a crucial part of our team providing enterprise-level support to our expanding client base. Your role will involve delivering exceptional customer support through email/helpdesk and live chat channels. You will also be responsible for server monitoring, incident response, deployment, automation, troubleshooting services, and migrating applications to our platform. Additionally, you will play a key role in building, operating, and supporting AWS Cloud environments while ensuring the development of documentation such as Playbooks and providing maintenance and upgrade reports. To excel in this role, you should have prior experience as a Hosting Support Specialist or in a similar position. Proficiency in Linux-based servers, virtualization, and cPanel/WHM Control Panel is essential. An aptitude for understanding new technologies quickly, coupled with experience in the WordPress/Magento stack (PHP, MySQL), will be beneficial. Moreover, having expertise in writing Bash, Python, and Ansible scripts, familiarity with AWS, and knowledge of security and PenTesting will be advantageous. Working at ServerGuy offers you the opportunity to engage with cutting-edge technologies, access medical insurance, gain unparalleled learning experiences, and be a part of a rapidly growing international organization where your contributions directly impact our success.
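Server monitoring of the kind described often starts with scripts as small as a disk-usage check. A hedged Python sketch follows; the warning threshold and message format are arbitrary choices for illustration, not ServerGuy tooling:

```python
import shutil

def disk_usage_report(path: str = "/", warn_pct: float = 80.0):
    """Return (used_pct, message), flagging filesystems above warn_pct."""
    total, used, free = shutil.disk_usage(path)
    used_pct = used / total * 100
    status = "WARN" if used_pct >= warn_pct else "OK"
    return used_pct, f"{status}: {path} at {used_pct:.1f}% used, {free // 2**30} GiB free"

pct, message = disk_usage_report("/")
print(message)
```

A cron job running a script like this (or its Bash equivalent) covers the basic incident-response loop on a hosting fleet before a full monitoring stack is justified.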

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Full-Stack Developer with 5+ years of experience in the MERN stack, you will be responsible for proficiently handling backend development tasks using Node.js, Express.js, and AWS Lambda. Your strong hands-on experience with MongoDB, AWS Neptune, Redis, and other databases will be essential for the successful execution of projects. Additionally, your expertise in front-end development using React.js, HTML, CSS, and JavaScript (ES6+) will play a crucial role in delivering high-quality user interfaces. Your familiarity with AWS services such as Lambda, API Gateway, S3, CloudFront, IAM, and DynamoDB will be advantageous for integrating and deploying applications effectively. Experience with DevOps tools like GitHub Actions, Jenkins, and AWS CodePipeline will be required to streamline the development process. Proficiency in Git-based workflows and hands-on experience with Agile methodologies and tools like JIRA will be necessary for collaborative and efficient project management. In terms of technical skills development, you should possess expertise in React.js with Redux, Context API, or Recoil, along with HTML5, CSS3, JavaScript (ES6+), and TypeScript. Knowledge of Material UI, Tailwind CSS, Bootstrap, and performance optimization techniques will be crucial for creating responsive and visually appealing web applications. Your proficiency in Node.js & Express.js, AWS Lambda, RESTful APIs & GraphQL, and authentication & authorization mechanisms like JWT, OAuth, and AWS Cognito will be key for building robust server-side applications. Moreover, your familiarity with Microservices, event-driven architecture, MongoDB & Mongoose, AWS Neptune, Redis, and AWS S3 for object storage will be essential for developing scalable and efficient applications. Understanding Cloud & DevOps concepts such as AWS services, Infrastructure as Code (IaC), CI/CD Pipelines, and Monitoring & Logging tools will be necessary for deploying and maintaining applications in a cloud environment. 
Your soft skills, including strong problem-solving abilities, excellent communication skills, attention to detail, and the ability to mentor junior developers, will be crucial for collaborating with cross-functional teams and providing technical guidance. Your adaptability to learn and work with new technologies in a fast-paced environment will be essential for staying updated and delivering innovative solutions effectively.

Posted 4 days ago

Apply

10.0 - 14.0 years

0 Lacs

hyderabad, telangana

On-site

At Franklin Templeton, the primary focus is on delivering better client outcomes through close collaboration with clients, understanding their strategic needs, and providing innovative solutions. With over 9,500 employees in 34 countries, we are dedicated to servicing investment solutions for clients in more than 160 countries. Our success over the past 70 years is attributed to the talent, skills, and dedication of our people. We are currently seeking qualified candidates to join our team. The Cloud Engineering team at FTT organization is responsible for offering Public Cloud platforms (AWS and Azure) to enable Franklin Templeton developers to accelerate time-to-market, drive innovation, and simplify integrations. We prioritize a high-quality engineering culture to design platforms with a customer-centric approach, scalability for large enterprises, and resilience to support market innovation. As a Senior Engineer at Franklin Templeton Technology, you will play a crucial role in developing and implementing the multi-cloud strategy. We are looking for individuals who are passionate about cloud technologies and leveraging them to address complex business challenges for our customers and Application Development teams. If you thrive in a collaborative environment, possess strong technical skills, and enjoy tackling significant challenges, we believe you will find our team rewarding to work with. 
Key Responsibilities of a Senior Engineer include:
- Designing cloud-based solutions aligned with business requirements, collaborating with application teams for cloud service deployment
- Providing expert guidance on Cloud design decisions, standards, and operational practices
- Engaging in the Cloud Center of Excellence to establish and enforce best practices in cloud platform engineering, operations, application development, and governance
- Selecting and implementing new cloud services and tools to progress the cloud roadmap
- Maintaining blueprints and reference implementations of cloud products
- Collaborating with Information Security teams to incorporate secure app patterns into Cloud platforms
- Offering guidance on cloud platforms to teams dealing with high application complexity, escalating risks and issues when necessary
- Facilitating discussions across key stakeholders to address challenges

Qualifications and Experience:
- Bachelor's degree in computer science or related field
- 10+ years of expertise in leading Cloud platforms such as AWS and Azure
- Proficiency in delivering large-scale distributed enterprise platforms focusing on performance, scale, security, reliability, and cost optimization
- Experience in DevOps and GitOps models with technologies like Terraform
- Familiarity with on-premises Private Cloud and Public Cloud platforms, including Azure and AWS
- Proficiency in native CSP orchestration stacks and container-native technologies like Kubernetes
- Experience with cloud-native logging, monitoring, and operations tools such as Datadog and Prometheus
- Expertise in areas like Cloud IAM, network and security design, cloud-native Kubernetes services, and configuration management and automation tools

This role is at the Individual Contributor level with work shift timings from 2:00 PM to 11:00 PM IST.

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

We are searching for a Senior Infrastructure Software Engineer to become a part of our Cloud Engineering Services team. This opportunity presents an excellent platform to contribute towards the design, development, and enhancement of large-scale infrastructure for various foundational cloud services. If you boast a profound understanding of cloud infrastructure and distributed systems and excel in a stimulating, innovative atmosphere, this position might be the perfect fit for you! In this role, you will collaborate with product engineering teams to gain an in-depth comprehension of their infrastructure use cases. You will effectively communicate design trade-offs and build scalable systems to cater to their individual requirements. Additionally, you will craft advanced tooling to automate the construction and deployment of microservices and infrastructure components, thereby enhancing efficiency and productivity. Proactively identifying bottlenecks in the daily utilization of core infrastructure and implementing robust solutions to address them will also be a key responsibility. Automation will play a pivotal role in reducing manual labor and augmenting operational efficiency. Moreover, you will be responsible for monitoring the infrastructure to promptly alert on significant events, ensuring optimal system performance and reliability levels. The ideal candidate for this role will possess a Bachelor's, Master's, or Ph.D. degree in Computer Science or a related field, or possess equivalent experience. A minimum of 6 years of hands-on experience in designing and constructing infrastructure to support large-scale, fault-tolerant distributed services is required. Proficiency in cloud infrastructure platforms like AWS, Azure, or Google Cloud is essential, along with a strong command of infrastructure as code (IaC) and configuration management tools such as Terraform. 
Expertise in the administration, operation, and configuration of Kubernetes and Envoy is highly desirable. Demonstrated experience with Continuous Integration/Continuous Delivery (CI/CD) tools like GitLab and the GitOps model is a must. Proficiency in monitoring tools such as Prometheus, Grafana, CloudWatch, and Thanos is also expected. A solid background in one or more general-purpose programming languages like Go and Python will be advantageous. To distinguish yourself in this role, you can establish guidelines and standards for the design, development, lifecycle, and management of HTTP APIs and gRPC services. Strong knowledge of API specifications such as OpenAPI, Swagger, Protocol Buffers, JSON Schema, AsyncAPI, and GraphQL schemas will be beneficial. Experience in API Management solutions, data interchange formats, and delivering scalable APIs will set you apart from the competition.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

As a Software Engineer at UBS, you will have the opportunity to design and build next-generation developer platforms on the cloud using a variety of technologies. Your role will involve iteratively refining user requirements and writing code to deliver sustainable solutions. You will be part of the DevCloud team, providing developers with the tooling and compute resources they need to engineer solutions effectively.

Key Responsibilities:
- Build solutions that give developers the right tooling and compute resources.
- Design, write, and deliver sustainable solutions using modern programming languages such as TypeScript and Go.
- Provide technology solutions and automation to solve business application problems and enhance our digital leadership in financial services.
- Define application and platform delivery requirements for the entire IT organization.
- Conduct code reviews, test your code as needed, and participate in application architecture, design, and other phases of the SDLC.
- Implement proper operational controls and procedures to facilitate the transition from testing to production.

Your Expertise:
- Strong programming experience in Go and TypeScript.
- Proficiency in front-end technologies such as React, API building, and server-side work.
- Experience with Linux containers, Kubernetes, TCP/IP, and networking concepts.
- Knowledge of Azure, AWS, and/or Google Cloud.
- Understanding of microservice architecture and experience building RESTful services.
- A bias toward automation and hands-on experience with Terraform.
- Familiarity with metrics, alerting, and modern monitoring tools such as InfluxDB, Prometheus, Datadog, and Grafana.
- Knowledge of Continuous Integration and Continuous Deployment, with experience building pipelines in GitLab or ADO.

About UBS: UBS is the world's largest and the only truly global wealth manager, operating through four business divisions in over 50 countries. We offer flexible working arrangements and embrace a purpose-led culture that fosters collaboration and agile ways of working to meet business needs. Join #teamUBS to make a meaningful impact and grow professionally within a diverse and inclusive environment. Please note that as part of the application process, you may be required to complete one or more assessments to showcase your skills and expertise.
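As a rough illustration of the RESTful-service work the expertise list above mentions, here is a toy request dispatcher. This is a sketch only — the routes and payloads are hypothetical, and a real service at this level would be built in Go or TypeScript with a proper framework rather than a lookup table.

```python
import json

def handle(method: str, path: str, routes: dict) -> tuple[int, str]:
    """Dispatch a (method, path) pair to a handler; return (status, JSON body)."""
    handler = routes.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler())

# Hypothetical route table with a single health-check endpoint.
routes = {("GET", "/health"): lambda: {"status": "ok"}}
print(handle("GET", "/health", routes))   # (200, '{"status": "ok"}')
print(handle("GET", "/missing", routes))
```

The same shape — a table from (method, path) to handlers returning serializable payloads — underlies most REST frameworks, which add routing patterns, middleware, and content negotiation on top.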

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

The ideal candidate for this role will have 3 to 6 years of experience and be located in Kandanchavadi, OMR, Chennai (onsite). You should be available to work between 2pm and 11pm IST and be able to join immediately or within 15 days.

In this role, you will develop dashboards from various sources of system-related data, particularly monitoring systems. You should be proficient in basic monitoring operations and capable of scripting the small automations this domain requires. Your responsibilities will include developing dashboards using tools such as Power BI, Grafana, and Perses, and you should have a good understanding of monitoring systems such as LogicMonitor, Prometheus, or other open-source monitoring systems. Correlating data streams and creating dashboards that enhance business efficiency is a key aspect of this role.

You must possess a strong working knowledge of Power BI and Grafana, with familiarity with other open-source dashboard solutions being an added advantage. You should also have a good understanding of Prometheus, an open-source monitoring tool, as well as experience managing Docker containers and Linux. While experience in DevOps is beneficial, it is not mandatory. A solid grasp of infrastructure concepts, good communication skills, and the ability to work with stakeholders across different geographies are essential, along with a willingness to learn and adapt to new requirements.
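The "correlate data streams" responsibility above can be sketched in a few lines: join two metric streams on their shared timestamps so a dashboard can plot them side by side. The sample data is invented for illustration; real streams would come from Prometheus, LogicMonitor, or similar sources.

```python
def correlate(cpu: dict[int, float], latency: dict[int, float]) -> list[tuple[int, float, float]]:
    """Join two metric streams on shared timestamps, sorted by time."""
    shared = sorted(cpu.keys() & latency.keys())
    return [(t, cpu[t], latency[t]) for t in shared]

# Hypothetical samples: unix-timestamp -> value.
cpu_pct = {1000: 42.0, 1060: 88.5, 1120: 91.0}
p99_ms = {1000: 120.0, 1120: 480.0, 1180: 150.0}
print(correlate(cpu_pct, p99_ms))  # [(1000, 42.0, 120.0), (1120, 91.0, 480.0)]
```

In practice the streams rarely share exact timestamps, so dashboard tools resample or window the data first; the inner join here is the simplest case.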

Posted 4 days ago

Apply