Jobs
Interviews

479 OpenTelemetry Jobs - Page 13

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

An extraordinarily talented group of individuals works together every day to drive TNS' success, from both professional and personal perspectives. Come join the excellence!

Overview
TNS is looking for an Observability Engineer to support the design, implementation, and evolution of our observability stack. This role is critical in ensuring the reliability, performance, and scalability of our systems by providing deep visibility into infrastructure and application behavior. You will collaborate with cross-functional teams to define observability standards and drive adoption of best practices across the organization.

Responsibilities
- Lead the design, implementation, and continuous improvement of the observability stack, including monitoring, logging, and tracing systems.
- Define and enforce observability standards and best practices across engineering teams to ensure consistent instrumentation and visibility.
- Build scalable monitoring solutions that provide real-time insights into system health, performance, and availability.
- Develop and maintain dashboards, alerts, and automated responses to proactively detect and resolve issues before they impact users.
- Collaborate with development, infrastructure, and SRE teams to integrate observability into CI/CD pipelines and production workflows.
- Conduct root cause analysis and post-incident reviews to identify observability gaps and drive improvements.
- Evaluate and implement tools such as Splunk, Splunk Observability Cloud, and Netreo to support monitoring and alerting needs.
- Champion a culture of data-driven decision-making by enabling teams to access and interpret observability data effectively.
- Automate observability pipelines and alerting mechanisms.

Qualifications
- 5+ years of experience in Site Reliability Engineering, DevOps, or Observability roles.
- 3+ years of experience in SRE/DevOps.
- Demonstrated success in deploying and managing monitoring tools and observability solutions at scale.
- Hands-on experience with monitoring and observability platforms such as Splunk, Splunk Observability Cloud (O11y), Grafana, Prometheus, and Datadog.
- Proven ability to design and implement SLOs/SLIs, dashboards, and alerting strategies that align with business and operational goals.
- Familiarity with incident response, alert tuning, and postmortem analysis.
- Strong scripting or programming skills (e.g., Python, Go, Bash).
- Excellent communication and collaboration skills, with a focus on knowledge sharing and mentorship.

Desired
- Strong understanding of distributed tracing tools such as OpenTelemetry, Jaeger, or Zipkin.
- Experience integrating observability into CI/CD pipelines and Kubernetes environments.
- Contributions to open-source observability tools or frameworks.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP) and container orchestration (Kubernetes).

If you are passionate about technology and love personal growth and opportunity, come see what TNS is all about! TNS is an equal opportunity employer. TNS evaluates qualified applicants without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity or expression, protected veteran status, disability/handicap status or any other legally protected characteristic.
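The SLO/SLI work this posting describes rests on a simple calculation: an SLO target implies an error budget, and alerting is driven by how much of that budget is left. A minimal sketch in Python, assuming a plain availability SLI (good requests / total requests); the function name and numbers are illustrative, not from the posting:

```python
def error_budget_remaining(slo_target: float, good: int, total: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo_target: e.g. 0.999 for a 99.9% availability SLO.
    good/total: good and total request counts over the SLO window.
    """
    if total == 0:
        return 1.0  # no traffic yet, so the full budget remains
    allowed_failures = (1.0 - slo_target) * total  # budget, in requests
    actual_failures = total - good
    if allowed_failures == 0:
        return 0.0 if actual_failures else 1.0
    return max(0.0, 1.0 - actual_failures / allowed_failures)

# 1,000,000 requests at a 99.9% SLO allow 1,000 failures;
# 250 failures spent leaves about 75% of the budget.
print(error_budget_remaining(0.999, 999_750, 1_000_000))  # ~0.75
```

A burn-rate alert would then fire when this fraction drops faster than the window allows, which is one common way to align alerting with business goals rather than raw thresholds.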

Posted 1 month ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Additional Information:
McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment.
McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Position Summary: Director, Data Engineering
The Data Engineering sub-family is responsible for designing, building, and maintaining scalable data & analytics capabilities that enable data-driven decision-making across the organization. This job family ensures the availability, quality, and accessibility of data & analytics to support business objectives and innovation, and oversees the development and delivery of data products, ensuring alignment with business objectives and technical standards.

Who we’re looking for:

Primary Responsibilities:
- Strategically oversee data product development, ensuring alignment with business objectives and technical roadmaps.
- Drive innovation in data engineering practices to deliver scalable and high-impact data solutions.
- Lead and mentor multiple teams, fostering collaboration and knowledge sharing across stakeholders.
- Ensure data governance and compliance with industry standards and regulations.
- Evaluate emerging technologies and integrate them into the data engineering strategy to enhance capabilities.
- Provide input to help shape the functional strategy and develop operational plans to execute it within a sub-function.
- Lead a large team of engineers, analysts, QA, and scrum masters.

Primary Audience: Internal
Nature of Interaction: Engages with individuals who may hold differing perspectives but share a common interest, primarily motivated by a shared goal of finding a resolution.
Level of Skill: Changes significantly by enhancing entire existing processes, systems, and/or products.
Complexity: Innovation is multi-dimensional; solutions to problems and issues have a direct impact on all dimensions (operational, financial, human capital).
Knowledge: Requires mastery of a specific professional discipline, combining deep knowledge of theory and organizational practice.

Skills & Experience Required:
- 12+ years of experience in data platforms, cloud infrastructure, or platform engineering.
- Deep expertise in cloud platforms (GCP, AWS, Azure) and cloud-native technologies (Kubernetes, Terraform, Serverless).
- Strong background in distributed systems, high-performance computing, and data infrastructure.
- Hands-on experience with big data frameworks (Apache Spark, Flink, Beam) and modern data lakehouses.
- Proficiency in programming languages such as SQL, Python, Go, Java, or Scala, with expertise in API development and microservices.
- Experience building and scaling data platforms, pipelines, and real-time streaming architectures.
- Strong understanding of Infrastructure as Code (IaC) tools (Terraform) and CI/CD pipelines (GitHub, Jenkins).
- Expertise in containerization, orchestration, and automation for platform reliability.
- Experience implementing monitoring, logging, and observability (Prometheus, Grafana, OpenTelemetry).
- Strong understanding of data security, compliance, and governance best practices.
- Experience with identity and access management (IAM), encryption, and data masking techniques.
- Knowledge of GDPR, CCPA, and industry-specific data regulations.
- Proven ability to architect, optimize, and troubleshoot large-scale distributed systems.
- Ability to drive innovation, evaluate emerging technologies, and lead proof-of-concepts (PoCs).
- Excellent system design and scalability expertise, ensuring product resilience.
- Strong ability to influence and collaborate with business, product, platform, data science, DevOps, and leadership teams.
- Experience mentoring senior engineers and technical teams, fostering a culture of innovation and excellence.
- Effective communicator with the ability to simplify complex technical concepts for business stakeholders.

Work location: Hyderabad, India
Work hours:
Work pattern: Full-time role.
Work mode: Hybrid.
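The real-time streaming architectures this role calls for (Spark, Flink, Beam) revolve around concepts such as windowed aggregation. A framework-free sketch in Python of a tumbling-window count, the kind of operation those engines perform at scale; the event shape and window size are illustrative assumptions, not from the posting:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp_secs, key) events into fixed, non-overlapping
    windows and count occurrences of each key per window."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one window, keyed by its start time.
        window_start = (ts // window_secs) * window_secs
        counts[window_start][key] += 1
    return {w: dict(kv) for w, kv in sorted(counts.items())}

events = [(5, "order"), (42, "order"), (61, "refund"), (75, "order")]
print(tumbling_window_counts(events))
# {0: {'order': 2}, 60: {'refund': 1, 'order': 1}}
```

A production engine adds watermarks, late-data handling, and distributed state on top, but the windowing logic itself is this simple bucketing step.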

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
As the Engineer, Central Platform Development, you will play a critical role in making the internal state of the bank's application and infrastructure services visible to stakeholders for troubleshooting, performance analysis, capacity planning, and reporting through the Central Monitoring and Observability Platform. You will contribute to developing the bank’s central monitoring and observability platform and tooling to enable product owners, developers, and operators to efficiently trace performance problems to their source and map their application performance to business objectives. You will contribute to the backend cloud services development of the central observability platform for application and infrastructure teams, including the Platform, Database, Integration, Reliability, and Cloud Operations teams, as well as product owner and developer/engineering teams.

Our ideal candidate should have:
- A Bachelor's degree in Computer Science or Information Systems, or equivalent applicable experience.
- A minimum of 8+ years of overall IT experience, of which 3+ years is in the software development domain and its principles, including design patterns, code structure, programming languages, continuous integration (Git/SVN), continuous deployment (Azure Pipelines), and deployment orchestration (Chef, Puppet, or equivalent).
- Demonstrated ability using and administering (core to advanced knowledge) two or more of the following technologies:
  - AWS EC2 / EKS / AKS deployments
  - Confluent and/or Apache Kafka administration
  - ADO / DevOps tools
  - Unix / Windows administration
  - OpenTelemetry metrics, logs, and tracing
  - Prometheus / Alertmanager
  - Synthetic monitoring libraries
  - APM tools such as Elastic APM or others
- Experience with shell scripting, Java, Python, or Ruby.
- Experience with web technologies (Apache, HTML, JavaScript, HTTP, XML).
- Experience with network protocols and certificate management.
- Intermediate understanding of IT & network infrastructure.
- Intermediate troubleshooting knowledge.
- Experience with Agile and Lean methodologies is a big plus for delivering in a fast-paced environment.
- Excellent written, verbal, and presentation skills.
- ITOM/ITSM integration experience.
- ServiceNow ITOM (Event Management & Operational Intelligence) experience.
- Strong people management experience.

Nice to have: AIOps (Artificial Intelligence for IT Operations) strategy, practice, implementation, or in-depth awareness.

Key Responsibilities

Strategy
Awareness and understanding of the TTO’25 business strategy and model appropriate to the role. Support the enablement of the Central Monitoring & Observability strategy, goals and objectives by developing prioritized features aligned to the Catalyst and Tech Simplification programmes.

Business
The Monitoring & Observability Platform team is a global team ensuring the design, development, delivery & support of the bank’s central monitoring and observability services for all TTO teams (technology domains). The ideal candidate will possess a deep understanding of one or more of the platform technologies (Elastic Observability, Grafana Observability or ITRS Geneos) and their other required capabilities, such as Kafka messaging and database management, enabling the design, development, implementation, and management of the central solution, integrating advanced technological tools and techniques, and overseeing large-scale enterprise-level implementations.
Processes
As the Engineer, Central Platform Development, you will play a crucial role in ensuring the stability, reliability, and performance of our applications and platform, thereby enabling our organization to deliver exceptional services to our internal stakeholders by adhering to the Enterprise SDLC (eSDLC) framework and guidelines.

People & Talent
Actively engage in stakeholder conversations, providing timely, clear and actionable feedback to deliver solutions within the timeline.

Risk Management
The ability to interpret the Group’s technical and security (ICS) control requirements, identify potential risks and key issues based on this information, and put in place appropriate controls and measures to mitigate or minimize risk to the central monitoring & observability platform delivery.

Governance
Awareness and understanding of the eSDLC framework in which TTO software delivery operates, and the requirements and expectations relevant to the role. Responsible for adhering to the central monitoring and observability platform delivery governance, based on oversight and controls of the eSDLC framework.

Regulatory & Business Conduct
Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
Key Stakeholders
- TTO CIO development teams
- TTO Product Owners
- TTO SRE / PSS
- TTO Cloud Engineering
- ET Foundation Service Owners

Other Responsibilities
- Embed Here for good and the Group’s brand and values in the Observability Platform Team.
- Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures; multiple functions (double hats).
- Participate in solution architecture / design consulting, platform management, and capacity planning activities.
- Create sustainable solutions and services through automation and service uplifts within monitoring and observability disciplines.
- Provide Level 2 / Level 3 support for delivered solutions as a daily task: solving incidents and problems and applying changes according to the bank’s defined processes.

Skills and Experience
Agile Delivery; Application Delivery Process; Software Engineering; Software Product Technical Knowledge; Software Quality Assurance; Cloud Computing; Cloud Resource Management

Qualifications
EDUCATION: Degree
TRAINING: Agile Delivery, DevOps
CERTIFICATIONS: Any monitoring or observability product certifications, such as Elasticsearch, Grafana or ITRS Geneos, and any of the following platform certifications:
- Certified Kubernetes Administrator (CKA)
- Kubernetes and Cloud Native Associate (KCNA)
- Certified Administrator for Apache Kafka
- Red Hat Certified Specialist in Event-Driven Development with Kafka
- AWS Certified SysOps Administrator - Associate
LANGUAGES: English

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you.
You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
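The Prometheus / Alertmanager work this posting lists centres on threshold alerting, and a key idea there is that an alert should not fire on a single bad sample. A hedged, stdlib-only Python sketch of the "pending until the condition has held for a while" behaviour that a Prometheus rule expresses with its `for:` clause; the sample values and threshold are illustrative:

```python
def evaluate_alert(samples, threshold, for_samples):
    """Return per-sample alert states ('ok', 'pending', 'firing').

    The alert fires only after the value has exceeded the threshold
    for `for_samples` consecutive evaluations, mimicking a Prometheus
    rule with a `for:` duration."""
    states, streak = [], 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak == 0:
            states.append("ok")
        elif streak < for_samples:
            states.append("pending")
        else:
            states.append("firing")
    return states

cpu = [0.2, 0.95, 0.97, 0.99, 0.4]
print(evaluate_alert(cpu, threshold=0.9, for_samples=3))
# ['ok', 'pending', 'pending', 'firing', 'ok']
```

This hold-down window is the standard way to trade alert latency for a lower false-positive rate when tuning alerts.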

Posted 1 month ago

Apply

8.0 years

5 - 8 Lacs

Chennai

On-site

Job ID: 31702
Location: Chennai, IN
Area of interest: Technology
Job type: Regular Employee
Work style: Office Working
Opening date: 13 Jun 2025

Job Summary
As the Engineer, Central Platform Development, you will play a critical role in making the internal state of the bank's application and infrastructure services visible to stakeholders for troubleshooting, performance analysis, capacity planning, and reporting through the Central Monitoring and Observability Platform. You will contribute to developing the bank’s central monitoring and observability platform and tooling to enable product owners, developers, and operators to efficiently trace performance problems to their source and map their application performance to business objectives. You will contribute to the backend cloud services development of the central observability platform for application and infrastructure teams, including the Platform, Database, Integration, Reliability, and Cloud Operations teams, as well as product owner and developer/engineering teams.

Our ideal candidate should have:
- A Bachelor's degree in Computer Science or Information Systems, or equivalent applicable experience.
- A minimum of 8+ years of overall IT experience, of which 3+ years is in the software development domain and its principles, including design patterns, code structure, programming languages, continuous integration (Git/SVN), continuous deployment (Azure Pipelines), and deployment orchestration (Chef, Puppet, or equivalent).
- Demonstrated ability using and administering (core to advanced knowledge) two or more of the following technologies:
  - AWS EC2 / EKS / AKS deployments
  - Confluent and/or Apache Kafka administration
  - ADO / DevOps tools
  - Unix / Windows administration
  - OpenTelemetry metrics, logs, and tracing
  - Prometheus / Alertmanager
  - Synthetic monitoring libraries
  - APM tools such as Elastic APM or others
- Experience with shell scripting, Java, Python, or Ruby.
- Experience with web technologies (Apache, HTML, JavaScript, HTTP, XML).
- Experience with network protocols and certificate management.
- Intermediate understanding of IT & network infrastructure.
- Intermediate troubleshooting knowledge.
- Experience with Agile and Lean methodologies is a big plus for delivering in a fast-paced environment.
- Excellent written, verbal, and presentation skills.
- ITOM/ITSM integration experience.
- ServiceNow ITOM (Event Management & Operational Intelligence) experience.
- Strong people management experience.

Nice to have: AIOps (Artificial Intelligence for IT Operations) strategy, practice, implementation, or in-depth awareness.

Key Responsibilities

Strategy
Awareness and understanding of the TTO’25 business strategy and model appropriate to the role. Support the enablement of the Central Monitoring & Observability strategy, goals and objectives by developing prioritized features aligned to the Catalyst and Tech Simplification programmes.

Business
The Monitoring & Observability Platform team is a global team ensuring the design, development, delivery & support of the bank’s central monitoring and observability services for all TTO teams (technology domains). The ideal candidate will possess a deep understanding of one or more of the platform technologies (Elastic Observability, Grafana Observability or ITRS Geneos) and their other required capabilities, such as Kafka messaging and database management, enabling the design, development, implementation, and management of the central solution, integrating advanced technological tools and techniques, and overseeing large-scale enterprise-level implementations.
Processes
As the Engineer, Central Platform Development, you will play a crucial role in ensuring the stability, reliability, and performance of our applications and platform, thereby enabling our organization to deliver exceptional services to our internal stakeholders by adhering to the Enterprise SDLC (eSDLC) framework and guidelines.

People & Talent
Actively engage in stakeholder conversations, providing timely, clear and actionable feedback to deliver solutions within the timeline.

Risk Management
The ability to interpret the Group’s technical and security (ICS) control requirements, identify potential risks and key issues based on this information, and put in place appropriate controls and measures to mitigate or minimize risk to the central monitoring & observability platform delivery.

Governance
Awareness and understanding of the eSDLC framework in which TTO software delivery operates, and the requirements and expectations relevant to the role. Responsible for adhering to the central monitoring and observability platform delivery governance, based on oversight and controls of the eSDLC framework.

Regulatory & Business Conduct
Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
Key Stakeholders
- TTO CIO development teams
- TTO Product Owners
- TTO SRE / PSS
- TTO Cloud Engineering
- ET Foundation Service Owners

Other Responsibilities
- Embed Here for good and the Group’s brand and values in the Observability Platform Team.
- Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures; multiple functions (double hats).
- Participate in solution architecture / design consulting, platform management, and capacity planning activities.
- Create sustainable solutions and services through automation and service uplifts within monitoring and observability disciplines.
- Provide Level 2 / Level 3 support for delivered solutions as a daily task: solving incidents and problems and applying changes according to the bank’s defined processes.

Skills and Experience
Agile Delivery; Application Delivery Process; Software Engineering; Software Product Technical Knowledge; Software Quality Assurance; Cloud Computing; Cloud Resource Management

Qualifications
EDUCATION: Degree
TRAINING: Agile Delivery, DevOps
CERTIFICATIONS: Any monitoring or observability product certifications, such as Elasticsearch, Grafana or ITRS Geneos, and any of the following platform certifications:
- Certified Kubernetes Administrator (CKA)
- Kubernetes and Cloud Native Associate (KCNA)
- Certified Administrator for Apache Kafka
- Red Hat Certified Specialist in Event-Driven Development with Kafka
- AWS Certified SysOps Administrator - Associate
LANGUAGES: English

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you.
You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.

www.sc.com/careers

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

Remote

Apply at https://www.gravityer.com/jobs/full-time/lead-devops-engineer

The Lead DevOps Engineer will assume a pivotal role in propelling the growth and prosperity of our organization. We are seeking a skilled and proactive DevOps Engineer to join our team. In this role, you will develop and maintain GCP infrastructure, automate deployment and scaling using Kubernetes, and collaborate with the software development team. This position offers an exciting opportunity to monitor system performance, implement Infrastructure as Code practices, ensure high levels of performance and security, and operate effectively in an Agile, start-up environment.

Responsibilities
- Design and maintain highly available, fault-tolerant systems on GCP using SRE best practices.
- Implement SLIs/SLOs, monitor error budgets, and lead post-incident reviews with RCA documentation.
- Automate infrastructure provisioning (Terraform/Deployment Manager) and CI/CD workflows.
- Operate and optimize Kubernetes (GKE) clusters, including autoscaling, resource tuning, and HPA policies.
- Integrate observability across microservices using Prometheus, Grafana, Stackdriver, and OpenTelemetry.
- Manage and fine-tune databases (MySQL/Postgres/BigQuery/Firestore) for performance and cost.
- Improve API reliability and performance through Apigee (proxy tuning, quota/policy handling, caching).
- Drive container best practices, including image optimization, vulnerability scanning, and registry hygiene.
- Participate in on-call rotations, capacity planning, and infrastructure cost reviews.

Qualifications
- Minimum 8 years of total experience, with at least 3 years in SRE, DevOps, or Platform Engineering roles.
- Strong expertise in GCP services (GKE, IAM, Cloud Run, Cloud Functions, Pub/Sub, VPC, Monitoring).
- Advanced Kubernetes knowledge: pod orchestration, secrets management, liveness/readiness probes.
- Experience writing automation tools/scripts in Python, Bash, or Go.
- Solid understanding of incident response frameworks and runbook development.
- CI/CD expertise with GitHub Actions, Cloud Build, or similar tools.

Skills: MySQL, Go, Kubernetes, Postgres, GCP, Ansible, Grafana, Terraform, monitoring tools, OpenTelemetry, Prometheus, CI/CD, Apigee, databases, Bash, scripting languages, Stackdriver, Firestore, BigQuery, DevOps, cloud, senior reliability engineer, Python, Docker
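The Apigee quota/policy handling this role mentions is, at its core, rate limiting. A minimal stdlib-only Python sketch of the token-bucket algorithm that commonly backs API quotas; the class, parameters, and fake clock are illustrative assumptions, not Apigee's API:

```python
import time

class TokenBucket:
    """Simple token-bucket limiter: `rate` tokens refill per second,
    up to `capacity`; each allowed request spends one token."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens, self.last = capacity, now()

    def allow(self) -> bool:
        # Refill based on elapsed time, then try to spend one token.
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock: capacity 2, refill 1 token/sec.
clock = iter([0.0, 0.0, 0.0, 0.0, 1.5])
bucket = TokenBucket(rate=1.0, capacity=2.0, now=lambda: next(clock))
print([bucket.allow() for _ in range(4)])  # [True, True, False, True]
```

Injecting the clock keeps the limiter testable; a gateway like Apigee applies the same idea per API key or proxy, distributed across nodes.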

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurgaon Rural, Haryana, India

On-site

🚨 We're Hiring – Senior Consultant (Java | Apache Camel | Kafka) 🚨
📍 Location: Gurgaon (On-site at Customer Location)
Experience: 5+ Years
Notice period: Immediate / 1 Month

We are looking for highly skilled Java Spring Boot Developers to join our team for an exciting client project. You’ll build event-driven applications using Java, Apache Camel, and Kafka, and deploy on Red Hat OpenShift. If you're passionate about scalable systems, real-time data, and modern enterprise integrations, we want to hear from you!

🔧 Key Responsibilities:
- Develop and deploy Spring Boot applications on OpenShift
- Design real-time data pipelines using Apache Kafka
- Build integration flows with Apache Camel & Enterprise Integration Patterns (EIPs)
- Work with HTTP, JMS, SQL/NoSQL, and stream processing tools (e.g., Flink)
- Integrate observability (Prometheus, Grafana, ELK, OpenTelemetry)
- Collaborate with AI/ML teams on AI-driven applications
- Ensure high code quality with CI/CD, testing, and performance tuning

✅ Requirements:
- 5+ years of Java development experience
- Strong in Spring Boot and Apache Kafka (producers, consumers, schema registry)
- Hands-on with Apache Camel and integration patterns
- Understanding of various protocols (HTTP, JMS, etc.)
- Experience with SQL/NoSQL databases
- Familiar with OpenShift/Kubernetes
- Exposure to observability tools and AI concepts is a plus

📩 Interested? Apply directly here or share your resume at zalak.bhavsar@reve.cloud or WhatsApp 8788887473
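The Apache Camel work this posting describes is built on Enterprise Integration Patterns such as the content-based router. A framework-free Python sketch of that pattern, routing each message to a destination by a field in its body; the message shape and route names are illustrative, and this is not Camel's actual API (Camel expresses it with choice()/when() clauses in a route builder):

```python
def content_based_router(message, routes, default="dead-letter"):
    """Route a message by its 'type' field: the content-based router
    pattern, with unroutable messages sent to a dead-letter channel."""
    return routes.get(message.get("type"), default)

routes = {"order": "orders-queue", "refund": "refunds-queue"}
msgs = [{"type": "order", "id": 1}, {"type": "refund", "id": 2}, {"type": "ping"}]
print([content_based_router(m, routes) for m in msgs])
# ['orders-queue', 'refunds-queue', 'dead-letter']
```

Keeping routing decisions in data (the `routes` table) rather than nested conditionals is what makes such flows easy to extend, which is the same property Camel's DSL provides at integration scale.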

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

JD: Job Title: Sr. Consultant (Java, Camel, Kafka)
Experience: 5+ years
Location: Gurgaon

Job Summary: We are seeking a highly skilled and motivated Java Spring Boot developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications.

Key Responsibilities:
Design, develop, and deploy Java Spring Boot applications on Red Hat OpenShift.
Build robust data pipelines with Apache Kafka for high-throughput ingestion and real-time processing.
Implement transformation and routing logic using Apache Camel and Enterprise Integration Patterns (EIPs).
Develop components that interface with various protocols, including HTTP, JMS, and database systems (SQL/NoSQL).
Utilize Apache Flink or similar tools for complex event and stream processing where necessary.
Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting.
Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications.
Write unit and integration tests, participate in code reviews, and support CI/CD practices.
Troubleshoot and optimize application performance and data flows in production environments.

Required Skills & Qualifications:
5+ years of hands-on experience in Java development with strong proficiency in Spring Boot.
Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus).
Proficiency in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.).
Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms.
In-depth understanding of database concepts, both relational and NoSQL.
Experience with stream-processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming.
Familiarity with OpenShift or similar container platforms (Kubernetes, Docker).
Knowledge of observability tools and techniques: logging, metrics, tracing.
Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.).
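The "complex event and stream processing" requirement with Flink or Kafka Streams usually starts with windowing: grouping events into fixed time buckets before aggregating. A stdlib-only Python sketch of a tumbling-window count (illustrative only; a real Flink job also handles event time, watermarks, and late data):

```python
from collections import defaultdict

def tumbling_counts(events, window_s):
    """Count (timestamp, key) events per fixed, non-overlapping window."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts // window_s * window_s  # bucket the timestamp
        counts[(window_start, key)] += 1
    return dict(counts)

# Events at t=1, 4, 6 with 5-second windows fall into [0,5) and [5,10).
per_window = tumbling_counts([(1, "a"), (4, "a"), (6, "a")], 5)
```

Tumbling windows partition time exactly once per event; sliding or session windows are the other common variants.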

Posted 1 month ago

Apply

0.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Department Name: Development
Work Experience: 2 - 6 years
Date Opened: 13/06/2025
Industry: IT Services
Job Type: Full time
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560100

Job Description
We are looking for an experienced Full Stack Developer to join our team. The ideal candidate should have strong expertise in both frontend and backend development, especially using React and Python, with experience integrating AI/ML models, working with telemetry tools, and building scalable web applications. This role demands strong problem-solving skills, a collaborative mindset, and a keen interest in innovative technologies like simulation modelling and AI observability.

Responsibilities
Develop and maintain scalable full-stack web applications using React (frontend) and Python (backend).
Design, build, and integrate RESTful APIs and microservices.
Collaborate with AI/ML engineers to integrate machine learning models into production environments.
Work with telemetry and observability tools to monitor system performance and user behavior.
Contribute to simulation modelling for testing, optimization, or ML model evaluation.
Participate in code reviews, unit testing, and CI/CD pipelines.
Coordinate with cross-functional teams to ensure successful delivery of AI-driven features.
Write clean, maintainable, and well-documented code.
Ensure security, scalability, and performance of systems across the stack.

Requirements
Proficiency in React.js, JavaScript, and/or TypeScript.
Experience with Redux or Recoil for state management.
Strong understanding of responsive UI design using HTML5, CSS3, and SCSS.
Experience with frontend frameworks like Bootstrap or Tailwind CSS.
Strong hands-on experience with Python.
Familiarity with backend frameworks like Django, FastAPI, or Flask.
Good knowledge of REST API design and development.
Working knowledge of machine learning models.
Familiarity with ML libraries such as scikit-learn, TensorFlow, or PyTorch.
Experience integrating AI/ML models into web applications (preferred).
Exposure to monitoring and logging tools like Prometheus, Grafana, OpenTelemetry, or Sentry.
Understanding of observability concepts, including metrics, logs, and traces.
Basic knowledge of simulation tools like SimPy, AnyLogic, or custom simulation logic in Python.
Experience using simulation modelling for testing or optimization.
Good understanding of Docker, Git, and DevOps workflows.
Experience working with databases like MongoDB or MySQL.
Strong communication, analytical, and problem-solving skills.
Ability to work effectively in agile and cross-functional team environments.

Experience and Qualification
3–6 years of relevant experience in full-stack or ML-integrated application development.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Strong ownership mindset and ability to deliver independently.
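The observability requirements above (metrics, logs, traces) often come down to summarizing latency samples into percentiles such as p95 for dashboards. A small nearest-rank percentile sketch in plain Python (systems like Prometheus estimate this from histogram buckets instead; this shows only the underlying idea):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p% * n) in sorted order."""
    if not samples:
        raise ValueError("need at least one sample")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# With latencies 1..100 ms, the p95 is the 95th smallest value.
p95 = percentile(list(range(1, 101)), 95)
```

The nearest-rank definition is exact but requires keeping all samples; streaming systems trade that precision for bounded memory.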

Posted 1 month ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Work Location: PAN India
Duration: 12 months (extendable)
Shift: rotational shifts including night shifts and weekend availability
Years of Experience: 8+

🔧 Job Summary
We are seeking an experienced and versatile Site Reliability Engineer (SRE) / Observability Engineer to join our project delivery team. The ideal candidate will bring a deep understanding of modern cloud infrastructure, monitoring tools, and automation practices to ensure system uptime, scalability, and performance across a distributed environment.

🎯 Key Responsibilities
Site Reliability Engineering
Design, build, and maintain scalable, reliable infrastructure.
Automate provisioning/configuration using tools like Terraform, Ansible, Chef, or Puppet.
Develop automation tools/scripts in Python, Go, Java, or Bash.
Administer and optimize Linux/Unix systems and network components (TCP/IP, DNS, load balancers).
Deploy and manage infrastructure on AWS or Kubernetes platforms.
Build and maintain CI/CD pipelines (e.g., Jenkins, ArgoCD).
Monitor production systems with tools such as Prometheus, Grafana, Nagios, Datadog.
Conduct postmortems and define SLAs/SLOs to ensure high system reliability.
Plan and implement capacity management, failover systems, and auto-scaling mechanisms.

Observability Engineering
Instrument services for metrics/logs/traces using OpenTelemetry, Prometheus, Jaeger, etc.
Manage observability stacks (e.g., Grafana, ELK Stack, Splunk, Datadog, Honeycomb).
Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation tools.
Build actionable alerts and dashboards to reduce alert fatigue and increase insight.
Advocate for observability best practices with developers and define performance KPIs.

✅ Required Skills & Qualifications
Proven experience as an SRE or Observability Engineer in production environments.
Strong Linux/Unix and cloud infrastructure skills (especially AWS, Kubernetes).
Proficiency in scripting and automation (Python, Go, Bash, Java).
Expertise in observability, monitoring, and alerting systems.
Experience in Infrastructure as Code (IaC) and modern CI/CD practices.
Strong troubleshooting skills and the ability to respond to live production issues.
Comfortable with rotational shifts, including nights and weekends.

🔍 Mandatory Technical Skills
Ansible
AWS Automation Services
AWS CloudFormation
AWS CodePipeline
AWS CodeDeploy
AWS DevOps Services
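"Build actionable alerts ... to reduce alert fatigue" is commonly implemented as multiwindow burn-rate alerting: page only when both a short and a long lookback window are burning error budget well above the sustainable rate, which filters out short blips. A hedged Python sketch (the 14.4x threshold is the example value popularized by the Google SRE Workbook, not a universal constant):

```python
def burn_rate(error_rate: float, slo: float) -> float:
    """How many times faster than budgeted the error budget is burning."""
    budget = 1.0 - slo
    return error_rate / budget if budget > 0 else float("inf")

def should_page(short_window_rate, long_window_rate, slo, threshold=14.4):
    """Page only if BOTH windows burn hot, suppressing transient spikes."""
    return (burn_rate(short_window_rate, slo) >= threshold
            and burn_rate(long_window_rate, slo) >= threshold)

# At a 99.9% SLO (0.1% budget), a sustained 2% error rate burns ~20x budget.
page = should_page(0.02, 0.02, 0.999)
```

Monitoring stacks such as Prometheus express the same logic as alert rules over rate() queries rather than in application code.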

Posted 1 month ago

Apply

7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. You will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
Act as a primary escalation point for DevOps- and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
7+ years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications

Good to have
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
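The "monitor and analyze logs using AWS CloudWatch, Azure Monitor" responsibility is, at its core, filtering and counting structured log lines. A minimal stdlib sketch of the kind of error-rate check a CloudWatch metric filter or an Azure Monitor query performs (the log format and field positions here are hypothetical):

```python
def error_fraction(log_lines, level="ERROR"):
    """Fraction of log lines at a given severity, e.g. to gate an alert.

    Assumes a "timestamp LEVEL message" line format, which is made up
    for this sketch.
    """
    if not log_lines:
        return 0.0
    hits = sum(1 for line in log_lines if line.split(" ", 2)[1] == level)
    return hits / len(log_lines)

logs = [
    "2025-06-13T10:00:00Z INFO request served",
    "2025-06-13T10:00:01Z ERROR upstream timeout",
    "2025-06-13T10:00:02Z INFO request served",
    "2025-06-13T10:00:03Z ERROR upstream timeout",
]
fraction = error_fraction(logs)  # half the lines are errors
```

Managed services do this server-side over log streams; the point is only that "log monitoring" is parse, filter, count, then compare against a threshold.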

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Posted 1 month ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

Remote


Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Posted 1 month ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

eTip
eTip is the leading digital tipping platform for the hospitality and service industry, empowering businesses with tools to attract, retain, and motivate their hardworking staff. Trusted by thousands of leading hotels, restaurants, and management companies, eTip stands out for its commitment to customer security, product customization, dedication to customer service, and its numerous partnerships, including with Visa.

Your Calling
As a Senior DevOps Engineer, you will own and drive the DevOps strategy for our cloud-native tech stack built on top of AWS, Kubernetes, and Karpenter. You’ll design, implement, and optimize scalable, secure, and highly available infrastructure that processes millions of dollars while fostering a culture of automation, observability, and CI/CD excellence.

What You’ll Do
Infrastructure & Cloud Leadership
- Architect, deploy, and manage AWS cloud infrastructure (EKS, EC2, VPC, IAM, RDS, S3, Lambda, etc.).
- Lead Kubernetes (EKS) cluster design, scaling, and optimization using Karpenter for cost-efficient autoscaling.
- Optimize cloud costs while ensuring performance and reliability.
CI/CD & Automation
- Develop and maintain GitHub Actions CI/CD pipeline workflows for backend, web frontend, and Android/iOS.
Observability & Reliability
- Develop and maintain logging (Loki), monitoring (Prometheus, Grafana), and alerting to ensure system health.
Security & Compliance
- Harden Kubernetes clusters (RBAC, network policies, OPA/Gatekeeper).
- Ensure compliance with SOC 2, ISO 27001, or other security frameworks.
Application Development
- When infrastructure work is light, develop application features on the backend or frontend, depending on where your strengths and interests fit.

Skills You Bring
- 5+ years of DevOps/SRE experience at SaaS companies.
- Deep expertise in AWS and Kubernetes.
- Proficiency in Karpenter, Helm, and other Kubernetes operators.
- Strong development skills (Kotlin, Python, Go, or Bash).
- Experience with observability tools (Prometheus, Grafana, OpenTelemetry).
- Security-first mindset with knowledge of networking and cost optimization.

Why You’ll Love Working Here
- Own and shape DevOps for a cutting-edge cloud-native stack.
- Work alongside very passionate and talented engineers.
- Work on a very high-impact product that processes millions of dollars.
- Remote-first, flexible work environment.
- Growth opportunities in a small, collaborative, high-impact team.
- Participate in yearly off-sites that take place all around the world.

Eager to build the future of tipping with us? 💪 Apply today! 🚀

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
We are looking for a talented Software Engineer (Go) to join our dynamic team. In this role, you will play a crucial part in developing high-performance back-end services that support financial service applications. Your focus will be on collaborating with cross-functional teams to create innovative solutions for complex problems in the asset management space. This position offers the flexibility of hybrid working, allowing you to balance your work and personal life effectively. We are particularly seeking candidates who are proficient in integrating AI tools into their daily development cycle to improve productivity, code quality, and problem-solving.

Key Responsibilities
- Design and develop highly scalable and reliable services in the Go language.
- Collaborate with cross-functional teams to design, develop, and test software solutions.
- Integrate and implement Kafka with Go services.
- Leverage the corporate AI assistant and other strategic coding tools to enhance development workflows.
- Actively use AI tools to support code generation, debugging, documentation, and testing.
- Ensure that all microservices are highly available and fault tolerant.
- Troubleshoot and debug issues as they arise.
- Keep up to date with emerging trends, AI-assisted development practices, and best practices in back-end development.
- Participate in code reviews and contribute to a positive team culture.
- Ensure all code written has the appropriate level of unit test coverage.

Requirements & Qualifications (Go Developer)
- Bachelor's degree in computer science, Software Engineering, or a related field.
- Proven experience as a Go Developer or in a similar back-end development role.
- Strong proficiency in the Go programming language and its standard library.
- Experience building scalable, high-performance backend services and APIs.
- Familiarity with RESTful and gRPC API design and implementation.
- Understanding of concurrency patterns and goroutine-based architecture in Go.
- Knowledge of modern Go development tools such as go mod, go test, and golangci-lint.
- Experience working with databases (SQL and NoSQL), e.g., PostgreSQL, MySQL, MongoDB.
- Hands-on experience with version control systems such as Git.
- Demonstrated ability to leverage AI tools (e.g., GitHub Copilot, ChatGPT, AI-powered testing/linting tools) to boost development productivity and code quality.
- Excellent problem-solving skills and keen attention to detail.
- Ability to work independently and collaboratively in a fast-paced environment.
- Strong verbal and written communication skills.
- Familiarity with cloud platforms such as AWS, GCP, or Azure, and infrastructure tools like Docker and Kubernetes.
- Experience with CI/CD pipelines and tools like GitHub Actions, CircleCI, or Jenkins.
- Knowledge of observability practices and tools such as Prometheus, Grafana, and OpenTelemetry.
- Understanding of security best practices in backend development.
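One of the requirements above, concurrency patterns and goroutine-based architecture, can be illustrated with a minimal, stdlib-only worker-pool sketch. All names here are illustrative and not taken from the posting:

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool fans jobs out to nWorkers goroutines and collects the results.
// Result order is not deterministic; callers should not rely on it.
func workerPool(jobs []int, nWorkers int, work func(int) int) []int {
	in := make(chan int)
	out := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < nWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- work(n)
			}
		}()
	}

	// Feed the jobs, then close the input channel so workers terminate.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()

	// Close the output channel once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	squares := workerPool([]int{1, 2, 3, 4}, 2, func(n int) int { return n * n })
	sum := 0
	for _, s := range squares {
		sum += s
	}
	// The sum of squares is deterministic even though result order is not.
	fmt.Println(sum)
}
```

Closing `in` after all sends and closing `out` only after `wg.Wait()` is what lets both `range` loops terminate cleanly; this shutdown choreography is the part interviewers for roles like this typically probe.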

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Site Reliability Engineer I

Job Summary
Site Reliability Engineers (SREs) work at the intersection of software engineering and systems administration: they can both write code and manage the infrastructure the code runs on. This is a very wide skill set, but the end goal of an SRE is always the same: to ensure that all SLAs are met, but not exceeded, so as to balance performance and reliability against operational costs. As a Site Reliability Engineer I, you will be learning our systems, improving your craft as an engineer, and taking on tasks that improve the overall reliability of the VP platform.

Key Responsibilities
- Design, implement, and maintain robust monitoring and alerting systems.
- Lead observability initiatives by improving metrics, logging, and tracing across services and infrastructure.
- Collaborate with development and infrastructure teams to instrument applications and ensure visibility into system health and performance.
- Write Python scripts and tools for automation, infrastructure management, and incident response.
- Participate in and improve the incident management and on-call process, driving down Mean Time to Resolution (MTTR).
- Conduct root cause analysis and postmortems following incidents, and champion efforts to prevent recurrence.
- Optimize systems for scalability, performance, and cost-efficiency in cloud and containerized environments.
- Advocate and implement SRE best practices, including SLOs/SLIs, capacity planning, and reliability reviews.

Required Skills & Qualifications
- 3+ years of experience in a Site Reliability Engineer or similar role.
- Proficiency in Python for automation and tooling.
- Hands-on experience with monitoring and observability tools such as Prometheus, Grafana, Datadog, New Relic, OpenTelemetry, etc.
- Experience with log aggregation and analysis tools like the ELK Stack (Elasticsearch, Logstash, Kibana) or Fluentd.
- Good understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes).
- Familiarity with infrastructure-as-code (Terraform, Ansible, or similar).
- Strong debugging and incident response skills.
- Knowledge of CI/CD pipelines and release engineering practices.
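The SLO/SLI practice mentioned in the posting above comes down to simple arithmetic: an availability SLO leaves an error budget, and the observed error rate consumes it. A hedged sketch of that calculation follows; the function and parameter names are illustrative and not drawn from any specific tool:

```go
package main

import "fmt"

// errorBudget returns the error budget implied by an availability SLO
// (e.g. an SLO of 0.999 leaves a budget of 0.001, meaning 0.1% of
// requests may fail) and the fraction of that budget consumed by the
// observed error rate over the same window.
func errorBudget(slo, observedErrorRate float64) (budget, consumed float64) {
	budget = 1 - slo
	if budget <= 0 {
		return 0, 0 // a 100% SLO leaves no budget to spend
	}
	consumed = observedErrorRate / budget
	return budget, consumed
}

func main() {
	// Example: 99.9% availability SLO with 0.05% of requests failing
	// consumes about half the budget.
	budget, consumed := errorBudget(0.999, 0.0005)
	fmt.Printf("budget=%.4f consumed=%.0f%%\n", budget, consumed*100)
}
```

Alerting on the rate at which this ratio grows (burn-rate alerting), rather than on raw error counts, is the usual way teams turn SLOs into pages.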

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderābād

On-site

Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary:

Job Summary:
Qualcomm is seeking a seasoned Staff Engineer, DevOps to join our central software engineering team. In this role, you will lead the design, development, and deployment of scalable cloud-native and hybrid infrastructure solutions, modernize legacy systems, and drive DevOps best practices across products. This is a hands-on architectural role ideal for someone who thrives in a fast-paced, innovation-driven environment and is passionate about building resilient, secure, and efficient platforms.

Key Responsibilities:
- Architect and implement enterprise-grade AWS cloud solutions for Qualcomm’s software platforms.
- Design and implement CI/CD pipelines using Jenkins, GitHub Actions, and Terraform to enable rapid and reliable software delivery.
- Develop reusable Terraform modules and automation scripts to support scalable infrastructure provisioning.
- Drive observability initiatives using Prometheus, Grafana, Fluentd, OpenTelemetry, and Splunk to ensure system reliability and performance.
- Collaborate with software development teams to embed DevOps practices into the SDLC and ensure seamless deployment and operations.
- Provide mentorship and technical leadership to junior engineers and cross-functional teams.
- Manage hybrid environments, including on-prem infrastructure and Kubernetes workloads supporting both Linux and Windows.
- Lead incident response, root cause analysis, and continuous improvement of SLIs for mission-critical systems.
- Drive toil reduction and automation using scripting or programming languages such as PowerShell, Bash, Python, or Go.
- Independently drive and implement DevOps/cloud initiatives in collaboration with key stakeholders.
- Understand software development designs and compilation/deployment flows for .NET, Angular, and Java-based applications to align infrastructure and CI/CD strategies with application architecture.

Required Qualifications:
- 10+ years of experience in IT or software development, with at least 5 years in cloud architecture and DevOps roles.
- Strong foundational knowledge of infrastructure components such as networking, servers, operating systems, DNS, Active Directory, and LDAP.
- Deep expertise in AWS services including EKS, RDS, MSK, CloudFront, S3, and OpenSearch.
- Hands-on experience with Kubernetes, Docker, containerd, and microservices orchestration.
- Proficiency in Infrastructure as Code using Terraform and configuration management tools like Ansible and Chef.
- Experience with observability tools and telemetry pipelines (Grafana, Prometheus, Fluentd, OpenTelemetry, Splunk).
- Experience with agent-based monitoring tools such as SCOM and Datadog.
- Solid scripting skills in Python, Bash, and PowerShell.
- Familiarity with enterprise-grade web services (IIS, Apache, Nginx) and load balancing solutions.
- Excellent communication and leadership skills with experience mentoring and collaborating across teams.

Preferred Qualifications:
- Experience with API gateway solutions for API security and management.
- Knowledge of RDBMS, preferably MSSQL/PostgreSQL.
- Proficiency in SRE principles including SLIs, SLOs, SLAs, error budgets, chaos engineering, and toil reduction.
- Experience in core software development (e.g., Java, .NET).
- Exposure to Azure cloud and hybrid cloud strategies.
- Bachelor’s degree in Computer Science or a related field.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience. OR
- Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience. OR
- PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience.
2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.

If you would like more information about this role, please contact Qualcomm Careers.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world, by helping teams get data business ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves. With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team. About the Role Are you ready to shape the future of reliability at scale? At Matillion, we’re looking for a Principal Engineer - Reliability to lead our cloud architecture and observability strategy across mission-critical systems. This high-impact role puts you at the heart of our cloud-native engineering team, designing resilient distributed systems that power data workloads across the globe. You’ll work cross-functionally with engineering, product, and leadership, helping to scale our platform as we continue our journey of global growth. We value in-person collaboration here at Matillion, therefore this role will work from our central Hyderabad office. 
What you'll be doing Leading the design and architecture of scalable, cloud-native systems that prioritise reliability and performance Owning observability and infrastructure strategy to ensure global uptime and rapid incident response Driving automation, sustainable incident practices, and blameless postmortems across teams Collaborating with engineering and product to shape scalable solutions from ideation to delivery Coaching and mentoring engineers, fostering a culture of technical excellence and innovation What we are looking for Deep expertise in Kubernetes and modern tooling like Linkerd, ArgoCD, or Traefik Pro-level programming skills (Go, Java or Python preferred) and familiarity with the broader ecosystem Proven experience building large-scale distributed systems in public cloud (AWS or Azure) Hands-on knowledge of observability tools like Prometheus, Grafana, OpenTelemetry, or Datadog Experience with messaging systems (e.g., Kafka) and secrets management (Vault, AWS Secrets Manager) A collaborative leader with strong communication skills and a passion for scalability, availability, and innovation Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners. 
We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits:

- Company Equity
- 27 days paid time off
- 12 days of Company Holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development

More about Matillion

Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases, from insights and operational analytics to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We're honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado. We are keen to hear from prospective Matillioners, so even if you don't feel you match all the criteria, please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com. Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.

Posted 1 month ago

Apply

15.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

At BairesDev®, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley. Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide. When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.

DevOps Engineer - AWS at BairesDev

We are looking for a DevOps Engineer with expertise in infrastructure as code using TypeScript with AWS CDK, and experience in deploying and managing cloud-native applications. This role focuses on automating cloud infrastructure, supporting CI/CD workflows, and maintaining observability and data integrity in production systems. The ideal candidate will bring solid skills in DevOps practices, IaC, and experience working with systems built on Amazon Web Services. You will work cross-functionally with developers, data teams, and SREs to support deployments, maintain CI/CD pipelines in Jenkins, and monitor environments using observability tools like Splunk and OpenTelemetry.

What You'll Do:

- Implement infrastructure as code using AWS CDK with TypeScript.
- Deploy and manage cloud-native applications on AWS using Lambda, ECS Tasks, and S3.
- Support and maintain CI/CD pipelines using Jenkins.
- Collaborate with development teams to automate deployment processes.
- Perform database operations and basic SQL tasks with Amazon Aurora and PostgreSQL.
- Monitor production systems using observability tools.
- Work with cross-functional teams to ensure system reliability and performance.

What we are looking for:

- 3+ years of experience in DevOps engineering roles.
- Experience with AWS CDK using TypeScript.
- Knowledge of AWS Cloud services, particularly Lambda, ECS Tasks, and S3.
- Familiarity with database tasks and basic SQL; experience with Amazon Aurora and PostgreSQL.
- Experience with automation and CI/CD using Jenkins.
- Understanding of infrastructure as code principles.
- Experience working with production systems.
- Good communication skills and ability to work in cross-functional teams.
- Advanced level of English.

How we make your work (and your life) easier:

- 100% remote work (from anywhere).
- Excellent compensation in USD or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.

Apply now and become part of a global team where your unique talents can truly thrive!
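The "infrastructure as code principles" this posting asks for come down to declaring desired state and letting an engine diff it against what actually exists. A real example would use aws-cdk-lib and a CDK/npm toolchain, so here instead is a dependency-free toy sketch of that diffing step (the resource names are hypothetical, invented for the example):

```python
# Toy illustration of the desired-vs-actual diff at the heart of
# infrastructure-as-code engines (CDK/CloudFormation, Terraform).
# A real engine would read "actual" state from cloud APIs.

def plan(desired: dict, actual: dict) -> dict:
    """Compute create/update/delete actions to converge actual onto desired."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

desired_state = {"bucket/logs": {"versioned": True}, "lambda/etl": {"memory": 512}}
actual_state = {"bucket/logs": {"versioned": False}, "queue/old": {"fifo": False}}

print(plan(desired_state, actual_state))
# {'create': ['lambda/etl'], 'delete': ['queue/old'], 'update': ['bucket/logs']}
```

Running a diff like this before applying anything is exactly what `cdk diff` and `terraform plan` expose, and is why IaC deployments are reviewable and repeatable in a way imperative scripts are not.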

Posted 1 month ago

Apply

15.0 years

0 Lacs

Agra, Uttar Pradesh, India

Remote

At BairesDev®, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley. Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide. When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.

DevOps Engineer - AWS at BairesDev

We are looking for a DevOps Engineer with expertise in infrastructure as code using TypeScript with AWS CDK, and experience in deploying and managing cloud-native applications. This role focuses on automating cloud infrastructure, supporting CI/CD workflows, and maintaining observability and data integrity in production systems. The ideal candidate will bring solid skills in DevOps practices, IaC, and experience working with systems built on Amazon Web Services. You will work cross-functionally with developers, data teams, and SREs to support deployments, maintain CI/CD pipelines in Jenkins, and monitor environments using observability tools like Splunk and OpenTelemetry.

What You'll Do:

- Implement infrastructure as code using AWS CDK with TypeScript.
- Deploy and manage cloud-native applications on AWS using Lambda, ECS Tasks, and S3.
- Support and maintain CI/CD pipelines using Jenkins.
- Collaborate with development teams to automate deployment processes.
- Perform database operations and basic SQL tasks with Amazon Aurora and PostgreSQL.
- Monitor production systems using observability tools.
- Work with cross-functional teams to ensure system reliability and performance.

What we are looking for:

- 3+ years of experience in DevOps engineering roles.
- Experience with AWS CDK using TypeScript.
- Knowledge of AWS Cloud services, particularly Lambda, ECS Tasks, and S3.
- Familiarity with database tasks and basic SQL; experience with Amazon Aurora and PostgreSQL.
- Experience with automation and CI/CD using Jenkins.
- Understanding of infrastructure as code principles.
- Experience working with production systems.
- Good communication skills and ability to work in cross-functional teams.
- Advanced level of English.

How we make your work (and your life) easier:

- 100% remote work (from anywhere).
- Excellent compensation in USD or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.

Apply now and become part of a global team where your unique talents can truly thrive!

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

On-site

This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI Storage in high performance environments." – Marc Hamilton, VP, Solutions Architecture & Engineering | NVIDIA

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.

As a Lead/Sr. Lead Software Engineer - L4, you'll be the final escalation point for the most complex and critical issues affecting enterprise and hyperscale environments. This hands-on role is ideal for a deep technical expert who thrives under pressure and has a passion for solving distributed system challenges at scale.
You'll collaborate with Engineering, Product Management, and Field teams to drive root cause resolutions, define architectural best practices, and continuously improve product resiliency. Leveraging AI tools and automation, you'll reduce time-to-resolution, streamline diagnostics, and elevate the support experience for strategic customers.

Key Responsibilities

Technical Expertise & Escalation Leadership

- Own critical customer case escalations end-to-end, including deep root cause analysis and mitigation strategies.
- Act as the highest technical escalation point for Infinia support incidents, especially in production-impacting scenarios.
- Lead war rooms, live incident bridges, and cross-functional response efforts with Engineering, QA, and Field teams.
- Utilize AI-powered debugging, log analysis, and system pattern recognition tools to accelerate resolution.

Product Knowledge & Value Creation

- Become a subject-matter expert on Infinia internals: metadata handling, storage fabric interfaces, performance tuning, AI integration, etc.
- Reproduce complex customer issues and propose product improvements or workarounds.
- Author and maintain detailed runbooks, performance tuning guides, and RCA documentation.
- Feed real-world support insights back into the development cycle to improve reliability and diagnostics.

Customer Engagement & Business Enablement

- Partner with Field CTOs, Solutions Architects, and Sales Engineers to ensure customer success.
- Translate technical issues into executive-ready summaries and business impact statements.
- Participate in post-mortems and executive briefings for strategic accounts.
- Drive adoption of observability, automation, and self-healing support mechanisms using AI/ML tools.

Required Qualifications

- 8+ years in enterprise storage, distributed systems, or cloud infrastructure support/engineering.
- Deep understanding of file systems (POSIX, NFS, S3), storage performance, and Linux kernel internals.
- Proven debugging skills at system/protocol/app levels (e.g., strace, tcpdump, perf).
- Hands-on experience with AI/ML data pipelines, container orchestration (Kubernetes), and GPU-based architectures.
- Exposure to RDMA, NVMe-oF, or high-performance networking stacks.
- Exceptional communication and executive reporting skills.
- Experience using AI tools (e.g., log pattern analysis, LLM-based summarization, automated RCA tooling) to accelerate diagnostics and reduce MTTR.

Preferred Qualifications

- Experience with DDN, VAST, Weka, or similar scale-out file systems.
- Strong scripting/coding ability in Python, Bash, or Go.
- Familiarity with observability platforms: Prometheus, Grafana, ELK, OpenTelemetry.
- Knowledge of replication, consistency models, and data integrity mechanisms.
- Exposure to Sovereign AI, LLM model training environments, or autonomous system data architectures.

This position requires participation in an on-call rotation to provide after-hours support as needed.
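The "log pattern analysis" this role mentions usually starts with log templating: masking the variable tokens in each line so that millions of raw lines collapse into a handful of recurring templates, making rare or bursting patterns stand out during triage. A minimal stdlib-only sketch (log lines and device names here are invented for the example; production tools such as Drain-style parsers are far more sophisticated):

```python
# Minimal sketch of log templating, a first step in log pattern analysis.
# Masking variable tokens (hex addresses, then numbers) groups many raw
# lines into a few recurring templates for frequency-based triage.
import re
from collections import Counter

def template(line: str) -> str:
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # mask hex addresses first
    line = re.sub(r"\d+", "<NUM>", line)             # then remaining numbers
    return line

logs = [
    "io error on nvme3 at 0x1f2a, retry 1",
    "io error on nvme7 at 0x09bc, retry 2",
    "client 10.0.0.4 connected",
]
counts = Counter(template(l) for l in logs)
print(counts.most_common(1))
```

Two superficially different I/O error lines reduce to one template with a count of 2, which is the signal an on-call engineer (or an automated RCA pipeline) would rank and investigate first.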

Posted 1 month ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Company Overview

At Zuora, we do Modern Business. We're helping people subscribe to new ways of doing business that are better for people, companies and ultimately the planet. It's an approach resulting from the shift to the Subscription Economy that puts customers first by building recurring relationships instead of one-time product sales and focuses on sustainable growth. Through our leading expertise and multi-product suite, we are transforming all industries and working with the world's most innovative companies to monetize new business models, nurture subscriber relationships and optimize their digital experiences.

The Team & Role

Join Zuora's high-impact Operations team, where you'll be instrumental in maintaining the reliability, scalability, and performance of our SaaS platform. This role involves proactive service monitoring, incident response, infrastructure service management, and ownership of internal and external shared services to ensure optimal system availability and performance. You will work alongside a team of skilled engineers dedicated to operational excellence through automation, observability, and continuous improvement. In this cross-functional role, you'll collaborate daily with Product Engineering & Management, Customer Support, Deal Desk, Global Services, and Sales teams to ensure a seamless and customer-centric service delivery model. As a core member of the team, you'll have the opportunity to design and implement operational best practices, contribute to service provisioning strategies, and drive innovations that enhance the overall platform experience. If you're driven by solving complex problems in a fast-paced environment and are passionate about operational resilience and service reliability, we'd love to hear from you.
Our Tech Stack: Linux Administration, Python, Docker, Kubernetes, MySQL, Kafka, ActiveMQ, Tomcat App & Web, Oracle, Load Balancers, REDIS Cache, Debezium, AWS, WAF, LBs, Jenkins, GitOps, Terraform, Ansible, Puppet, Prometheus, Grafana, OpenTelemetry

In this role you'll get to

- Architect and implement intelligent automation workflows for infrastructure lifecycle management, including self-healing systems, automated incident remediation, and configuration anomaly detection using Infrastructure as Code (IaC) and AI-driven tooling.
- Leverage predictive monitoring and anomaly detection techniques powered by AI/ML to proactively assess system health, optimize performance, and preempt service degradation or outages.
- Lead complex incident response efforts, applying deep root cause analysis (RCA) and postmortem practices to drive long-term stability, while integrating automated detection and remediation capabilities.
- Partner with development and platform engineering teams to build resilient CI/CD pipelines, enforce infrastructure standards, and embed observability and reliability into application deployments.
- Identify and eliminate reliability bottlenecks through automated performance tuning, dynamic scaling policies, and advanced telemetry instrumentation.
- Maintain and continuously evolve operational runbooks by incorporating machine learning insights, updating playbooks with AI-suggested resolutions, and identifying automation opportunities for manual steps.
- Stay abreast of emerging trends in AI for IT operations (AIOps), distributed systems, and cloud-native technologies to influence strategic reliability engineering decisions and tool adoption.

Who we're looking for

- Hands-on experience with Linux server administration and Python programming.
- Deep experience with containerization and orchestration using Docker and Kubernetes, managing highly available services at scale.
- Working with messaging systems like Kafka and ActiveMQ, databases like MySQL and Oracle, and caching solutions like REDIS.
- Understands and applies AI/ML techniques in operations, including anomaly detection, predictive monitoring, and self-healing systems.
- Has a solid track record in incident management, root cause analysis, and building systems that prevent recurrence through automation.
- Is proficient in developing and maintaining CI/CD pipelines with a strong emphasis on observability, performance, and reliability.
- Monitoring and observability using Prometheus, Grafana, and OpenTelemetry, with a focus on real-time anomaly detection and proactive alerting.
- Is comfortable writing and maintaining runbooks and enjoys enhancing them with automation and machine learning insights.
- Keeps up-to-date with industry trends such as AIOps, distributed systems, SRE best practices, and emerging cloud technologies.
- Brings a collaborative mindset, working cross-functionally with engineering, product, and operations teams to align system design with business objectives.
- 1+ years of experience working in a SaaS environment.

Nice To Have

- Red Hat Certified System Administrator (RHCSA) – Red Hat
- AWS Certification
- Certified Associate in Python Programming (PCAP) – Python Institute
- Docker Certified Associate (DCA) or Certified Kubernetes Administrator (CKA)
- Good knowledge of Jenkins
- Advanced certifications in SRE or related fields

#ZEOLife at Zuora

As an industry pioneer, our work is constantly evolving and challenging us in new ways that require us to think differently, iterate often and learn constantly—it's exciting. Our people, whom we refer to as "ZEOs", are empowered to take on a mindset of ownership and make a bigger impact here. Our teams collaborate deeply, exchange different ideas openly and together we're making what's next possible for our customers, community and the world.
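The anomaly detection this role calls for can be sketched in its simplest form as a z-score check against a trailing window of a metric. This is a hedged illustration only (the metric values are invented, and production AIOps stacks use far richer models than a single z-score), but it shows the shape of the technique:

```python
# Toy z-score anomaly check: flag a sample that deviates from a
# trailing window of a metric by more than `threshold` standard
# deviations. Values below are invented for the example.
from statistics import mean, stdev

def is_anomaly(window: list[float], sample: float, threshold: float = 3.0) -> bool:
    if len(window) < 2:
        return False  # not enough history to estimate spread
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

latency_ms = [102, 99, 101, 100, 98, 103, 100, 101]
print(is_anomaly(latency_ms, 180))  # a 180 ms spike against a ~100 ms baseline
```

In practice the same idea runs continuously over Prometheus time series, and the alert fires a runbook or an automated remediation rather than a print statement.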
As part of our commitment to building an inclusive, high-performance culture where ZEOs feel inspired, connected and valued, we support ZEOs with:

- Competitive compensation, corporate bonus program, performance rewards and retirement programs
- Medical insurance
- Generous, flexible time off
- Paid holidays, "wellness" days and company wide end of year break
- 6 months fully paid parental leave
- Learning & Development stipend
- Opportunities to volunteer and give back, including charitable donation match
- Free resources and support for your mental wellbeing

Specific benefits offerings may vary by country and can be viewed in more detail during your interview process.

Location & Work Arrangements

Organizations and teams at Zuora are empowered to design efficient and flexible ways of working, being intentional about scheduling, communication, and collaboration strategies that help us achieve our best results. In our dynamic, globally distributed company, this means balancing flexibility and responsibility — flexibility to live our lives to the fullest, and responsibility to each other, to our customers, and to our shareholders. For most roles, we offer the flexibility to work both remotely and at Zuora offices.

Our Commitment to an Inclusive Workplace

Think, be and do you! At Zuora, different perspectives, experiences and contributions matter. Everyone counts. Zuora is proud to be an Equal Opportunity Employer committed to creating an inclusive environment for all. Zuora does not discriminate on the basis of, and considers individuals seeking employment with Zuora without regard to, race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics.
We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us by sending an email to assistance(at)zuora.com.

Posted 1 month ago

Apply