
1325 Datadog Jobs - Page 16

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

Linkedin logo

About The Job
Job Description: We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. The role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (i.e., a code change is required, or a behaviour is being seen for the first time).

Roles And Responsibilities: Respond to customer inquiries and provide in-depth technical support via multiple communication channels. Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems. Create and maintain public documentation, internal knowledge base articles, and FAQs. Monitor and meet SLAs. Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points. Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of cloud (AWS, Azure, GCP, etc.), virtual, and bare-metal environments. The candidate will work the EMEA time zone (2 PM to 10 PM shift).

Requirements - Must Have Skills: Education: B.Tech in Computer Engineering, Information Technology, or a related field. GraphDB experience is a must. 5+ years of experience in a technical support role on a data-based software product, at least at L3 level. Linux Expertise: 4+ years with an in-depth understanding of Linux, including filesystem, process management, memory management, networking, and security. Graph Databases: 3+ years of experience with Neo4j or similar graph database systems. SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging. Data Streaming & Processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark. Scripting & Automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution. Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies is essential. Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring. Networking & Load Balancing: Proficient in TCP/IP, load-balancing strategies, and troubleshooting network-related issues. Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.

Nice To Have Skills: Familiarity with Data Science or ML will be an edge. Experience with LDAP, SSO, and OAuth authentication. Strong understanding of database internals and system architecture. Cloud certification (at least DevOps Engineer level). (ref:hirist.tech)
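Purely as an illustration (not part of the posting): a minimal sketch of the kind of graph-database triage query such a role might run, assuming the official neo4j Python driver; the connection details and Cypher query are hypothetical.

from neo4j import GraphDatabase

# Hypothetical connection details for a support reproduction environment.
URI = "bolt://localhost:7687"
AUTH = ("neo4j", "password")

driver = GraphDatabase.driver(URI, auth=AUTH)
with driver.session() as session:
    # Aggregate node counts per label -- a common first step when a customer
    # reports unexpected data volumes or slow traversals.
    result = session.run(
        "MATCH (n) RETURN labels(n) AS labels, count(n) AS cnt ORDER BY cnt DESC"
    )
    for record in result:
        print(record["labels"], record["cnt"])
driver.close()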

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Technical Skills And Experience
Proficient in Java 17 or other JVM languages. At least 4 years of software development experience. At least 1-2 years of Kubernetes and AWS experience, with a deep understanding of Docker, Kubernetes, Minikube, and AWS. Experience building high-volume, high-performance, stable, and scalable systems that have been shipped to customers. Good working understanding of asynchronous messaging frameworks like Kafka. Great understanding of distributed systems challenges, microservice-based architectures, and asynchronous communication. Experience with and a good understanding of implementing alerting, metrics, and logging using tools like Prometheus, CloudWatch, Datadog, Splunk or Kibana. Practical knowledge of the contract-first development model and the ability to design API contracts before starting development. Practical knowledge of persistence and caching solutions such as MySQL, PostgreSQL, Redis, Elasticsearch, and Caffeine. Good understanding of database modelling and of fine-tuning database queries for optimal performance. Good understanding of asynchronous, non-blocking, functional/reactive styles of programming. Hands-on experience with frameworks such as Spring WebFlux or Vert.x.

Desired Payment Domain Skills
Experience building and operating a subscription-based service. Experience or familiarity with PSP integrations (Stripe, Adyen, Braintree, dLocal, etc.). Experience or familiarity with eWallet integrations (Apple Pay, Google Pay, PayPal). Working understanding of the steps executed during a typical web purchase flow.

Soft Skills
Ability to clearly and effectively communicate ideas both verbally and in writing in a global team setting. Willingness to proactively collaborate within your team and within the commerce org, reaching out across teams where necessary to Product, Finance and other stakeholders. Strong sense of ownership, both for the quality of the software and for the project outcomes and impact on the business. Openness to new ideas and the ability to pick them up and put them into practice quickly. (ref:hirist.tech)
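For illustration only (not part of the posting): the role is JVM-centric, but this is a minimal consumer sketch of the asynchronous messaging pattern it describes, written in Python with kafka-python for brevity; the topic, group, and broker names are hypothetical.

import json
from kafka import KafkaConsumer

# Hypothetical topic and broker addresses.
consumer = KafkaConsumer(
    "payment-events",
    bootstrap_servers=["localhost:9092"],
    group_id="subscription-billing",
    auto_offset_reset="earliest",
    enable_auto_commit=False,  # commit only after successful processing
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Process the payment event idempotently, then commit the offset so a crash
    # between processing and commit results in a retry rather than data loss.
    print(message.topic, message.partition, message.offset, event.get("type"))
    consumer.commit()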

Posted 1 week ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

We are seeking a Senior Full Stack Engineer with deep expertise in modern JavaScript ecosystems and cloud architecture. You'll be working on complex application modernization initiatives, focusing on transforming legacy systems into scalable, cloud-native applications.

Core Technical Stack
Frontend: React.js (with Hooks, Context API), Next.js 14+, Redux/RTK, TypeScript, Tailwind CSS, Material-UI/Chakra UI
Backend: Node.js, NestJS, Express.js, GraphQL (Apollo Server), WebSocket
Cloud & Infrastructure - AWS Services: ECS, Lambda, API Gateway, S3, CloudFront, RDS, DynamoDB, SQS/SNS, ElastiCache
Infrastructure as Code: Terraform, CloudFormation
Containerization: Docker, Kubernetes, ECS
Databases & Caching: MongoDB, PostgreSQL, Redis, Elasticsearch
Authentication & Security: OAuth2.0/OIDC, JWT, AWS Cognito, SAML2.0
Testing & Quality: Jest, React Testing Library, Cypress
CI/CD & Monitoring: GitHub Actions, Jenkins, AWS CloudWatch, DataDog

Key Technical Responsibilities
System Architecture & Development (70%): Design and implement microservices architectures using Node.js/NestJS, focusing on scalability and performance. Build reusable component libraries and establish frontend architecture patterns using React.js and Next.js. Implement real-time features using WebSocket/Socket.io for live data updates and notifications. Design and optimize database schemas, write complex queries, and implement caching strategies (see the caching sketch below). Develop CI/CD pipelines with automated testing, deployment, and monitoring. Create and maintain infrastructure as code. Implement security best practices and compliance requirements (SOC2, GDPR).

Examples Of Current Projects
Modernizing a monolithic PHP application into microservices using NestJS and React. Implementing event-driven architecture using AWS EventBridge and SQS. Building a real-time analytics dashboard using WebSocket and time-series databases. Optimizing application performance through caching strategies and CDN implementation. Developing custom hooks and components for shared functionality across applications.

Technical Leadership (30%): Conduct code reviews and provide technical mentorship. Contribute to technical decision-making and architecture discussions. Document technical designs and maintain development standards. Collaborate with product teams to define technical requirements. Guide junior developers through complex technical challenges.

Required Technical Experience
Expert-level proficiency in JavaScript/TypeScript and full-stack development. Deep understanding of React.js internals, hooks, and performance optimization. Extensive experience with Node.js backend development and microservices. Strong background in cloud architecture and AWS services. Hands-on experience with container orchestration and infrastructure automation. Proven track record of implementing authentication and authorization systems. Experience with monitoring, logging, and observability tools.

Preferred Qualifications
Technical Expertise: Advanced degree in Computer Science, Engineering, or a related field. Experience with cloud-native development and distributed systems patterns. Proficiency in additional programming languages (Rust, Go, Python). Deep understanding of browser internals and web performance optimization. Experience with streaming data processing and real-time analytics.
Architecture & System Design: Experience designing event-driven architectures at scale. Knowledge of DDD (Domain-Driven Design) principles. Background in implementing CQRS and Event Sourcing patterns. Experience with high-throughput, low-latency systems. Understanding of distributed caching strategies and implementation.
Cloud & DevOps: AWS Professional certifications (Solutions Architect, DevOps). Experience with multi-region deployments and disaster recovery. Knowledge of service mesh implementations (Istio, Linkerd). Familiarity with GitOps practices and tools (ArgoCD, Flux). Experience with chaos engineering practices.
Security & Compliance: Understanding of OWASP security principles. Experience with PCI-DSS compliance requirements. Knowledge of cryptography and secure communication protocols. Background in implementing Zero Trust architectures. Experience with security automation and DevSecOps practices.
Development & Testing: Experience with TDD/BDD methodologies. Knowledge of performance testing tools (k6, JMeter). Background in implementing continuous testing strategies. Experience with contract testing (Pact, Spring Cloud Contract). Familiarity with mutation testing concepts.

About Us
TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design thinking and bespoke development solutions. With over 15 years of proven expertise, we have partnered with Fortune 500 companies and leading global brands to drive digital innovation and deliver excellence. At TechAhead, we are committed to continuous learning, growth and crafting tailored solutions that meet the unique needs of our clients. Join us to shape the future of digital innovation worldwide and drive impactful results with cutting-edge AI tools and strategies! (ref:hirist.tech)
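The caching sketch referenced above (not from the posting): a cache-aside read of the kind the caching-strategy work describes, shown in Python with redis-py rather than the listing's Node.js stack; the key scheme and TTL are hypothetical.

import json
import redis

# Hypothetical Redis endpoint; in the stack above this would typically be ElastiCache.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_dashboard(dashboard_id, load_from_db):
    """Cache-aside read: try Redis first, fall back to the database, then populate the cache."""
    key = f"dashboard:{dashboard_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    data = load_from_db(dashboard_id)          # expensive source-of-truth query
    cache.setex(key, 300, json.dumps(data))    # 5-minute TTL bounds staleness
    return data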

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Software Development Engineers are entry/mid-level professionals who design, build, develop and deploy software applications, tools, platforms and services that enable current and future needs of our customers.

About The Role
Ability to work in a fast-paced Agile environment with daily stand-ups, sprint planning, retrospectives, and sprint demos. Learns and adapts; bounces back from setbacks. Scripting for infrastructure continuous build and delivery automation. Ensure consistency with cloud architectural guiding principles for assigned projects.

About You
Experience with Microsoft Azure and Amazon cloud providers, focusing on Amazon and/or Azure services, tools, and processes. Familiarity with cloud monitoring tools (Amazon and/or Azure Monitor, DataDog). Understanding of SaaS, PaaS, and IaaS solutions. Proficiency in programming languages such as C# and Python. Experience with scripting languages (Bash, PowerShell). Knowledge of distributed data stores (Amazon and/or Azure Storage, CosmosDB, Amazon and/or Azure SQL, Redis cache). Container technology expertise (Docker, AWS ECS, Amazon and/or Azure Kubernetes Service). Experience with test automation frameworks (C# unit testing, Postman API testing).

What’s in it For You?
Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more about how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Hyderabad, Ahmedabad

Hybrid

Naukri logo

Hi Aspirant,
Greetings from TechBlocks (IT software, global digital product development) - Hyderabad!

About us: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities.

Job Title: Senior DevOps Site Reliability Engineer (SRE)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering, operations, and DevOps skills to come up with efficient ways of managing and operating applications. The role will require a high level of responsibility and accountability to deliver technical solutions.

Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services (a worked error-budget example follows this listing).

Experience Required: 6-10 years of SRE or infrastructure engineering experience in cloud-native environments.

Mandatory:
Cloud: GCP (GKE, Load Balancing, VPN, IAM)
Observability: Prometheus, Grafana, ELK, Datadog
Containers & Orchestration: Kubernetes, Docker
Incident Management: On-call, RCA, SLIs/SLOs
IaC: Terraform, Helm
Incident Tools: PagerDuty, OpsGenie

Nice to Have: GCP Monitoring, SkyWalking, Service Mesh, API Gateway, GCP Spanner

Scope: Drive operational excellence and platform resilience. Reduce MTTR, increase service availability. Own incident and RCA processes.

Roles and Responsibilities: Define and measure Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and manage error budgets across services. Lead incident management for critical production issues; drive Root Cause Analysis (RCA) and postmortems. Create and maintain runbooks and standard operating procedures for high-availability services. Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption. Coordinate cross-functional war-room sessions during major incidents and maintain response logs. Develop and improve automated system recovery, alert suppression, and escalation logic. Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture. Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems. Analyze performance metrics and conduct regular reliability reviews with engineering leads. Participate in capacity planning, failover testing, and resilience architecture reviews.

If you are interested, please share your updated resume with me at kranthikt@tblocks.com.

Warm Regards,
Kranthi Kumar
kranthikt@tblocks.com
Contact: 8522804902
Senior Talent Acquisition Specialist
Toronto | Ahmedabad | Hyderabad | Pune
www.tblocks.com
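The worked error-budget example referenced above (not part of the posting): a few lines of Python arithmetic showing how an availability SLI is compared against a 99.9% SLO; the target and request counts are hypothetical.

# Hypothetical SLO bookkeeping for an availability SLI (good requests / total requests).
SLO_TARGET = 0.999           # 99.9% availability objective
total_requests = 12_500_000  # requests observed in the 30-day window (hypothetical)
failed_requests = 8_200      # requests that violated the SLI (hypothetical)

sli = (total_requests - failed_requests) / total_requests
error_budget = 1.0 - SLO_TARGET                  # fraction of requests allowed to fail
budget_spent = failed_requests / total_requests  # fraction that actually failed

print(f"SLI: {sli:.5f}")
print(f"Error budget consumed: {budget_spent / error_budget:.1%}")
# If consumption trends toward 100% before the window ends, reliability work
# takes priority over feature releases.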

Posted 1 week ago

Apply

8.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

Linkedin logo

We are seeking a Senior Full Stack Engineer with strong problem-solving skills and end-to-end development experience to join a high-performing engineering squad. This is a senior-level role that requires a self-driven engineer capable of ramping up quickly, navigating complex codebases with minimal guidance, and making sound technical and product decisions from day one.

Key Responsibilities
Quickly develop a working understanding of the client’s platform and existing codebase to deliver value with minimal hand-holding. Lead feature development across the stack for projects such as: dashboard CRUD and widget-level metric selection; benchmarking tools enhancement; and portfolio reporting tools using ComposeSDK. Collaborate closely with internal engineers, product leads, and stakeholders to ensure alignment between technical delivery and business goals. Write clean, maintainable, well-tested code with a strong focus on performance and scalability. Participate in code reviews, technical discussions, and architecture design. Work independently and collaboratively to meet timelines while maintaining high engineering standards.

Technology Stack
Frontend: React, TypeScript. Backend: Node.js. Database: PostgreSQL. Infrastructure & DevOps: AWS, Docker, GitHub, CI/CD. Monitoring: DataDog. Collaboration & Tools: Jira, Slack.

Required Qualifications
8+ years of experience as a Full Stack Engineer, with a proven track record of delivering complex web applications end-to-end. Deep expertise in React and Node.js, with strong proficiency in TypeScript. Solid understanding of relational databases, preferably PostgreSQL. Experience working in Agile product development environments and independently navigating large, complex codebases. Strong communication skills and the ability to collaborate effectively in distributed teams. Demonstrated ability to think creatively and deliver scalable, maintainable solutions that balance technical and business needs. A product mindset, able to connect engineering work to user value and business outcomes.

Excited to build scalable, end-to-end solutions and take your full stack expertise to the next level? Click the Apply button below and become an Arcgatian!

Posted 1 week ago

Apply

6.0 - 9.0 years

35 - 50 Lacs

Bengaluru

Work from Office

Naukri logo

What You'll Do:
Partner with your business stakeholders to provide them with transparency, data, and resources to make informed decisions. Be a technical leader within and across the teams you work with. Drive high-impact architectural decisions and hands-on development, including inception, design, execution, and delivery following good design and coding practices. Obsessively focus on production readiness for the team, including testing, monitoring, deployment, documentation and proactive troubleshooting. Identify risks and gaps in technical approaches and propose solutions to meet team and project goals. Create proposals and action plans to garner support across the organization. Influence and contribute to the team's strategy and roadmap. Tenacity for learning - curious, and constantly pushing the boundary of what is possible.

We Are a Match Because You Have:
6-9 years of experience in backend software engineering, architecting and implementing robust, distributed web applications. Bachelor's degree in Computer Science, Computer Engineering or an equivalent combination of education and experience. Track record of technical leadership for teams following software development best practices (e.g. SOLID, TDD, GRASP, YAGNI, etc.). Track record of being a hands-on developer efficiently building technically sound systems. Experience building web services with Java and Spring Boot. Experience with Continuous Integration (CI/CD) practices and tools (Buildkite, Jenkins, etc.). Experience architecting solutions leveraging distributed infrastructure (e.g. Docker, Kubernetes, etc.). Experience with Microsoft SQL Server, Aerospike, Redis. Experience leveraging monitoring and logging technologies (e.g. DataDog, Elasticsearch, InfluxDB, etc.).

PS: This role is with one of our clients, a leading name in the retail industry.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Budget: 30 LPA
Education: Engineering degree (B.Tech. or B.E.) in Computer Science or Information Technology. Relevant certifications (Node.js, Kafka, etc.).

Job Summary: We are seeking a skilled and experienced Senior Node.js Developer who possesses a strong understanding of infrastructure, particularly in CI/CD pipelines and Kafka deployments. The ideal candidate will be instrumental in designing, developing, and maintaining scalable and robust backend services, while also playing a key role in automating deployments, managing our messaging infrastructure, and ensuring the operational excellence of our applications. This role requires a blend of deep Node.js expertise and hands-on experience with modern DevOps practices.

Responsibilities: Design, develop, and maintain high-performance, scalable, and secure RESTful APIs and microservices using Node.js and related frameworks. Write clean, maintainable, and well-documented code following best practices and architectural patterns. Implement and manage CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions) to automate software builds, testing, and deployments. Configure, deploy, and manage Kafka clusters and related components for high-throughput, real-time data streaming (see the producer sketch after this listing). Develop and maintain infrastructure as code (IaC) scripts using tools like Terraform, CloudFormation, or Ansible for provisioning and managing cloud resources. Monitor application performance, identify bottlenecks, and implement solutions for optimization and scalability. Collaborate with front-end developers, product managers, and other stakeholders to define requirements and deliver high-quality solutions. Participate in code reviews, contribute to architectural discussions, and mentor junior developers. Troubleshoot and resolve production issues, ensuring high availability and reliability of services. Stay up-to-date with emerging technologies and industry best practices in Node.js development and DevOps.

Required Skills and Qualifications: 5+ years of experience in backend development with a strong focus on Node.js. Proficiency in JavaScript/TypeScript and a deep understanding of asynchronous programming paradigms. Extensive experience with Node.js frameworks like Express.js, NestJS, or similar. Solid understanding of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis). Demonstrable experience in designing, implementing, and maintaining CI/CD pipelines. Hands-on experience with Kafka deployment, configuration, and management for high-volume data streams. Familiarity with containerization technologies (Docker) and orchestration tools (Kubernetes). Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack, Datadog). Strong understanding of version control systems (Git). Experience with microservices architecture and event-driven systems. Experience with testing frameworks (e.g., Jest, Mocha, Chai).
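The producer sketch referenced above (not from the posting): the role is Node.js-based, but the durability settings for high-throughput Kafka publishing look similar in any client; shown here in Python with kafka-python, using hypothetical broker and topic names.

import json
from kafka import KafkaProducer

# Hypothetical broker list and topic name.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    acks="all",               # wait for all in-sync replicas before acknowledging
    retries=5,                # retry transient broker errors
    linger_ms=10,             # small batching window for throughput
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("order-events", {"order_id": 123, "status": "CREATED"})
producer.flush()  # block until buffered records are delivered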

Posted 1 week ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

Title: AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on-call)
Req ID: 325686
We are currently seeking an AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on-call) to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Minimum experience on key skills: 5 to 10 years.
Skills: AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on-call). We are looking for an operational engineer who is ready to work weekends for on-call duty as the primary criterion. Skills we look for: AWS cloud (SQS, SNS, DynamoDB, EKS), SQL (PostgreSQL, Cassandra), Snowflake, ControlM/Autosys/Airflow, ServiceNow, Datadog, Splunk, Grafana, Python/shell scripting.
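For illustration only (not part of the posting): a minimal on-call triage sketch using boto3 to long-poll an SQS queue; the region and queue URL are hypothetical.

import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-queue"  # hypothetical

# Long-poll for up to 10 messages -- a typical step when triaging a
# queue-backlog alert during an on-call shift.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in response.get("Messages", []):
    print(message["MessageId"], message["Body"][:120])
    # Delete only after the message has been handled successfully.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])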

Posted 1 week ago

Apply

7.0 - 12.0 years

8 - 13 Lacs

Pune

Work from Office

Naukri logo

Req ID: 327560
We are currently seeking a Cloud Solution Delivery Advisor to join our team in Pune, Maharashtra (IN-MH), India (IN). Please find the JD below for the Senior Engineer position; please share good profiles with 7+ years of experience.

Senior Engineer - Digital Experience Monitoring (RUM and Synthetics), preferably New Relic but similar tools are acceptable. Essentially, the engagement is an assessment of several apps in Azure PaaS to improve the user experience.

Skills
RUM Expertise: Deep understanding of browser behavior, frontend instrumentation, and user session tracking. Experience with RUM tools: New Relic, Dynatrace, Datadog, Elastic RUM, or AppDynamics. Proficiency in instrumenting SPAs (React, Angular, Vue) using RUM agents. Ability to correlate frontend metrics with backend traces (full-stack traceability).
Synthetic Monitoring: Setup and maintenance of synthetic monitors for uptime, availability, and SLA tracking. Scripted journeys for login flows, payment pages, etc.
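For illustration only (not part of the posting): the role calls for synthetic monitors in New Relic or a similar tool; this is a tool-agnostic sketch of the same idea - a scripted uptime and latency check - using the Python requests library, with a hypothetical URL and latency budget.

import time
import requests

URL = "https://example.com/login"   # hypothetical endpoint under SLA tracking
LATENCY_BUDGET_MS = 1500            # hypothetical threshold

def run_check():
    start = time.monotonic()
    try:
        response = requests.get(URL, timeout=10)
        elapsed_ms = (time.monotonic() - start) * 1000
        ok = response.status_code == 200 and elapsed_ms <= LATENCY_BUDGET_MS
        print(f"status={response.status_code} latency_ms={elapsed_ms:.0f} ok={ok}")
        return ok
    except requests.RequestException as exc:
        print(f"check failed: {exc}")
        return False

if __name__ == "__main__":
    run_check()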

Posted 1 week ago

Apply

2.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Linkedin logo

Line of Service: Internal Firm Services
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: The Application Administrator will be an integral part of the installation, configuration and support of many critical applications at PwC. The Application Administrator will install, test, implement and optimize multiple applications at PwC. Interfacing with internal & external users to troubleshoot issues will be a high priority. The individual should work well within a team environment and be committed to building strong relationships with others.

Responsibilities: Experience executing SQL queries and a basic understanding of the SQL query language is needed. At least 2 years of experience in the installation, configuration and support of the databases in use by the application. Experience with Azure DevOps and software release life cycles into controlled environments is desired. Other experience should include Windows & Linux-based application support; experience with Rancher/Rafay or other Kubernetes systems is desired. The role will require support of go-live activities, enhancements, and application release efforts; computer programming acumen; and the ability to analyze systems and determine how these systems can meet client needs. Navigating Windows & Linux systems via the command line. Scripting experience with PowerShell and Bash, and the ability to interpret scripts. Usage of support automation tools such as Ansible. Experience supporting applications in an Azure IaaS environment. Azure PaaS experience. Ability to triage and troubleshoot ETL processes. Background with network troubleshooting, usage of monitoring tools (Datadog), Airflow, or system administration is a plus. Must be dedicated to sharing knowledge, i.e., documentation, training, etc.

Mandatory Skill Sets: Microsoft Azure services, particularly Azure SQL Database; SQL; experience in DevOps and Azure Kubernetes.

Preferred Skill Sets: Must be self-driven to learn new technologies. Must be a go-getter, ready to jump in and help others. Must be curious. Take ownership of projects, issues, etc. Proven ability to diagnose and troubleshoot complex application problems and leverage tools and resources appropriately to identify solutions that may or may not be already documented. Ability to manage multiple conflicting deadlines and competing priorities.

Years of Experience Required: 4 years of experience
Education Qualification: Bachelor’s degree in Information Technology
Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor Degree. Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure, Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 week ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

Ascentt is building cutting-edge data analytics & AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We’re hiring passionate builders to shape the future of industrial intelligence.

AWS DevOps Engineer - 5+ years of experience

Job Description
Cloud Infrastructure Management: Design, implement, and manage cloud-based infrastructure on AWS and Azure, ensuring optimal scalability, performance, and security.
CI/CD Pipeline Development: Develop and maintain CI/CD pipelines using GitHub Actions for automated code deployments and testing.
System Monitoring and Incident Management: Implement and configure Datadog for comprehensive system monitoring. Develop and maintain Datadog dashboards to visualize system performance and metrics. Set up proactive alerts in Datadog to detect and respond to incidents swiftly, ensuring high system reliability and uptime. Conduct root cause analysis of incidents and implement corrective actions using Datadog insights. (A minimal alert-definition sketch follows this listing.)
Collaboration with AI Teams: Work closely with AI teams to support the operational aspects of LLMs, including deployment strategies and performance tuning.
Infrastructure as Code (IaC): Implement IaC using tools like Terraform or AWS CloudFormation to automate infrastructure provisioning and management.
Container Orchestration: Manage container orchestration systems such as Kubernetes or AWS ECS.
Operational Support for LLMs: Provide operational support for LLMs, focusing on performance optimization and reliability.
Scripting and Automation: Utilize scripting languages such as Python and Bash for automation and task management.
Security and Compliance: Ensure compliance with security standards and best practices, implementing robust security measures.
Documentation: Document system configurations, procedures, and best practices for internal and external stakeholders.
DevOps Collaboration: Work with development teams to optimize deployment workflows, introduce best practices for DevOps, and improve overall efficiency.
Technology and Industry Awareness: Stay up-to-date with emerging technologies and industry trends to suggest improvements and upgrades.

Qualifications and Skills Required
Extensive experience with AWS and Azure cloud platforms. Proficiency in developing CI/CD pipelines using GitHub Actions. Strong experience with Datadog for system monitoring, including implementation, configuration, and maintenance. Demonstrated ability to create and maintain Datadog dashboards for performance visualization. Proven expertise in setting up alerts and conducting incident response with Datadog. Hands-on experience with container orchestration systems such as Kubernetes or AWS ECS. Proficiency in Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Familiarity with operational aspects of Large Language Models (LLMs) is highly desirable. Strong scripting skills in Python, Bash, or similar languages. In-depth knowledge of security standards and best practices. Excellent documentation skills. Proven ability to work collaboratively with development and AI teams. Commitment to staying current with industry trends and emerging technologies.

Education: Graduate (CS)
Certifications/Licenses: AWS certification is a plus
Technical Skills: AWS ECS, S3, Lambda, CloudFront, GitHub Actions, Git, Python, Terraform, VPC, API Gateway
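The alert-definition sketch referenced above (not part of the posting): defining a proactive Datadog alert in code, assuming the legacy datadogpy client; the API keys, metric query, and thresholds are placeholders.

from datadog import initialize, api

# Placeholder credentials -- in practice these come from a secrets manager, not source code.
initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

# A proactive alert of the kind described above: notify when the 5xx error count is elevated.
api.Monitor.create(
    type="metric alert",
    query="sum(last_5m):sum:trace.flask.request.errors{env:prod}.as_count() > 50",  # hypothetical metric
    name="[prod] Elevated 5xx errors",
    message="5xx errors above threshold. Check the service dashboard and recent deploys. @pagerduty",
    tags=["team:platform", "managed-by:script"],
    options={"thresholds": {"critical": 50, "warning": 25}, "notify_no_data": False},
)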

Posted 1 week ago

Apply

10.0 - 15.0 years

2 - 3 Lacs

Hyderabad

Work from Office

Naukri logo

Hiring: Backend Developer (Python)
Location: Hyderabad (Work From Office, all 5 days)
Shift: 2 PM - 11 PM IST
Experience: 10+ Years
CTC: Up to 36 LPA
Notice Period: Immediate to 15 Days

We are seeking a seasoned Backend Developer with strong expertise in Python, Datadog, and modern API development frameworks to join our growing team!

Must-Have Skills:
• Strong experience in building API endpoints using Flask / Django / FastAPI
• Proficiency with asynchronous programming (asyncio)
• Experience with logging libraries and Datadog
• Familiar with PostgreSQL / MongoDB
• API testing using Postman or any API management tool
• Basic exposure to S3 / AWS (optional)
• Experience in gRPC (optional)
• Python libraries: NumPy, Pandas (optional)
• Good communication and team collaboration
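For illustration only (not part of the posting): a minimal sketch combining the FastAPI, asyncio, and logging items above; the endpoint, logger name, and data are hypothetical.

import asyncio
import logging

from fastapi import FastAPI

# Basic logging setup; in the role above these records would typically be shipped to Datadog.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(name)s %(message)s")
logger = logging.getLogger("orders-api")

app = FastAPI()

async def fetch_order(order_id: int) -> dict:
    # Stand-in for an asynchronous PostgreSQL/MongoDB lookup.
    await asyncio.sleep(0.05)
    return {"order_id": order_id, "status": "SHIPPED"}

@app.get("/orders/{order_id}")
async def get_order(order_id: int) -> dict:
    logger.info("fetching order %s", order_id)
    return await fetch_order(order_id)

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)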

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

About Zeller
At Zeller, we’re champions for businesses of all sizes, and proud to be a fast-growing Australian scale-up taking on the ambitious goal of reimagining business banking and payments. We believe in a level playing field, where all businesses benefit from access to smarter payments and financial services solutions that accelerate their cash flow, help them get paid faster, and give them a better understanding of their finances. So we’re hard at work building the tools to make it happen. Zeller is growing fast, backed by leading VCs, and brings together a global team of passionate payment and tech industry professionals. With an exciting roadmap of innovative new products under development, we are building a high-performing team to take on outdated banking solutions. If you are passionate about innovation, thrive in fast-paced environments, embrace a challenge, hate bureaucracy, and can’t think of anything more exciting than disrupting the status quo, then read on to learn more.

About The Role
The Zeller product engineering team owns the software, infrastructure and customer experience that enables more than 85,000 Australian businesses to accept payments and access the financial services they need to run their businesses. As a Senior Application Support Engineer you will be a leading member of the team that shapes and owns Zeller’s commitment to excellent and highly available service delivery.

What you’ll be doing
Deliver projects that improve the service delivery of Zeller’s Application Support team. We are looking for someone to be a senior member of a small team who is still principally involved in hands-on service delivery. Be a primary point of contact for escalated product issues from Zeller’s account and customer success teams. Own and orchestrate the triage, investigation and resolution of complex technical issues, driving the pace of resolution and communicating well-thought-out and reliable direction. Be an expert in the products and workflows you support, and promote and share that knowledge with our partner teams. Using your technical expertise, participate in application monitoring using logs, data stores, internal tools and dashboards (a minimal log-triage sketch follows this listing). Be a part of our incident response team, responding to alerts and bearing some on-call responsibilities.

What Skills And Experience We Are Looking For
Zeller is a product-driven startup with a deep care for the quality of service we provide. Experience in software companies with a customer-facing product is highly valued. You have the ability to manage multiple, competing tasks and priorities with ease in a fast-moving environment. A strong technical background with excellent troubleshooting, analytical and data skills; this should include familiarity with AWS services (or similar), an active SQL skill set, and experience with release management toolsets and service reporting tools (Datadog or similar). Excellent communication skills and the ability to build strong partnerships with engineering, QA, and customer-facing teams. Demonstrated experience participating in change management and incident response processes. Payments experience is highly valued but not required. Excitement and drive to work in a product company that delivers mission-critical financial services.

The tools Zeller uses to get the work done
Familiarity with these services or close equivalents is appreciated, but we do not expect you to have used all of them. Hubspot is our principal CRM and where we track our support tickets. We also use Jira in conjunction with our engineering teams. The systems we support run in browsers, mobile applications, and payment terminals. The backend systems we support use AWS and are principally written in TypeScript on a Lambda, Postgres, DynamoDB stack using an event-driven architecture. We monitor our products using tools and dashboards in products like Datadog and Sentry. Zeller’s payment services integrate with many third parties, particularly point-of-sale systems. Familiarity with POS, or with managing issues with third-party partners, is valued.

Like the rest of our team, you will benefit from
Competitive remuneration; a balanced, progressive, and supportive work environment; excellent parental leave and other leave entitlements; a fully remote role; an annual get-together with the team; endless learning and development opportunities; plenty of remote-friendly fun and social opportunities - we love to come together as a team; an ability to influence and shape the future of Zeller as our company scales both domestically and globally; and being part of one of Australia’s most exciting scale-ups.
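The log-triage sketch referenced above (not from the posting): pulling recent ERROR events from CloudWatch Logs with boto3 while investigating an escalation; the log group, region, and filter pattern are assumptions based on the AWS/Lambda stack described.

import time
import boto3

logs = boto3.client("logs", region_name="ap-southeast-2")
LOG_GROUP = "/aws/lambda/payments-service"   # hypothetical log group

# Pull ERROR-level events from the last 15 minutes while triaging an escalation.
now_ms = int(time.time() * 1000)
response = logs.filter_log_events(
    logGroupName=LOG_GROUP,
    startTime=now_ms - 15 * 60 * 1000,
    endTime=now_ms,
    filterPattern="ERROR",
)

for event in response.get("events", []):
    print(event["timestamp"], event["message"][:200])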

Posted 1 week ago

Apply

4.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

You will be part of a dynamic team that provides 24x7 support to BFS and end-to-end support for all the monitoring tools in a supportive and inclusive environment. Our team works closely with key stakeholders, providing monitoring solutions using a variety of modern technologies. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You’ll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes.

What role will you play?
In this role, you will be responsible for monitoring the core software platforms, analysing and troubleshooting any issues, and automating manual processes. You will collaborate with key stakeholders to investigate issues, implement solutions and drive improvements in the reliability and performance of software systems in the organization.

What You Offer
4 to 9 years of industry experience working as a Site Reliability Engineer with good exposure to production support and incident management; experience with APM tools like Dynatrace, AppDynamics, DataDog, etc., and log monitoring tools such as Sumo Logic and Splunk; good programming skills in any high-level programming language like Java, Python or Golang; and familiarity with public cloud platforms such as AWS or GCP is highly desirable. Amenable to following a hybrid work setup with a standard schedule of 6:30 am - 3:30 pm IST, with one week of mandatory night shift (work-from-home setup), either 3 pm - 12 am or 10 pm - 7 am IST, every 1.5 - 2 months as per requirement.

We love hearing from anyone inspired to build a better future with us; if you're excited about the role or working at Macquarie, we encourage you to apply.

About Technology
Technology enables every aspect of Macquarie, for our people, our customers and our communities. We’re a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow’s technology solutions.

Our commitment to diversity, equity and inclusion
We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Company Profile
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. We are evolving our Cloud Site Reliability Engineering team to move beyond support and operations. As a Lead Cloud SRE Engineer, you will form part of a diverse and inclusive organization that has full ownership of the availability, performance, and scalability of one of the most impactful platforms at LSEG.

Role Profile
In this role, you will be joining our Cloud SRE team within Cloud & Productivity Engineering as a Lead SRE Engineer. This team focuses on applying software engineering practices to IT operations tasks to maintain and improve the availability, scalability and reliability of our Cloud platform hosting LSEG applications. We strive to improve automation and increase the systems' self-healing capabilities. We monitor, measure and optimize the platform’s performance, pushing our capabilities forward and exceeding our customer needs. We also work alongside architects, developers, and engineers to ensure efficient enterprise-scale AWS Landing Zone platforms and products, while playing an active role in decision-making areas such as automation, scalability, capacity, reliability, business continuity, disaster recovery and governance.

Tech Profile/Essential Skills
BS/MS degree in Computer Science, Software Engineering or a related STEM degree, or meaningful professional experience. Proven 5 years' experience in Site Reliability Engineering with a focus on Cloud Platform Landing Zones and services. Proven leadership skills with experience in mentoring and guiding engineering teams. Relevant Cloud certifications such as Azure Administrator Associate (AZ-104). Ability to work in a fast-paced, dynamic environment and adapt to changing priorities. Experience with DevSecOps practices, including automation, continuous integration, continuous delivery, and infrastructure as code using tools such as Terraform and GitLab. 5 years of demonstrable experience creating and maintaining CI/CD pipelines and repositories. Experience working in Agile environments, with demonstrable experience of Agile principles, ceremonies and practices. Experience implementing and managing platform and product observability including dashboarding, logging, monitoring, alerting and tracing with Datadog or cloud-native tooling. Strong problem-solving skills, root cause analysis, and incident/service management. Excellent verbal and written communication skills, with the ability to collaborate effectively with multi-functional teams.

Preferred Skills and Experience
Solid working knowledge in setting up enterprise-scale Azure Landing Zones and hands-on experience with Microsoft’s Cloud Adoption Framework. Proven experience deploying AWS Landing Zones in accordance with the AWS Well-Architected Framework. Proficiency in programming languages such as Python, Java, Go, etc. Sound understanding of financial institutions and markets.

Education and Professional Skills
Relevant professional qualifications. BS/MS degree in Computer Science, Software Engineering or a related STEM degree.
Detailed Responsibilities
Lead, engineer, maintain, and optimize hybrid Cloud Platforms and Services, focusing on automation, reliability, scalability, and performance. Lead and mentor peers, providing guidance and support to ensure high performance and professional growth within the team. Be accountable for the team's work, ensuring high standards and successful project outcomes. Collaborate with Cloud Platform engineering teams, architects, and other cross-functional teams to enhance reliability in the build and release stages for the cloud platform and products. Develop and deploy automation tools and frameworks to reduce toil. Provide multi-functional teams guidance and mentorship on best practices for Cloud products and services. Adhere to DevSecOps best practices and industry standards to optimize the platform release strategy. Continuously seek opportunities for automation and customer self-service to solve technical issues, reduce toil and provide innovative solutions. Participate in Agile ceremonies and activities to meet engineering and business goals. Create and maintain up-to-date, comprehensive documentation for landing zone components, processes, and procedures. Foster a culture of customer excellence and continuous improvement for the SRE function. Follow and adhere to established ITSM processes and procedures (Incident, Request, Change and Problem Management).

Benefits
We are looking for intellectually curious people, passionate about the bigger picture of how the technology industry is evolving, ready to ask difficult questions and deal with complicated scenarios! If you are creative and a problem solver, this is the place to be, as we will be supporting you to fast-forward your career! We enhance each employee’s potential through personal development and a wide range of learning tools, both formal and informal.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law.
Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.

Posted 1 week ago

Apply

10.0 years

8 - 10 Lacs

Hyderābād

On-site

GlassDoor logo

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. The ideal candidate will have a solid background in observability tools (Datadog, Splunk, Kibana), cloud-native infrastructure (AWS, EKS), Python scripting, and GitHub Actions workflows. Experience in healthcare data interoperability and analytics is essential. Primary Responsibilities: Design, implement, and maintain scalable, reliable, and secure infrastructure on AWS and EKS Develop and manage observability and monitoring solutions using Datadog, Splunk, and Kibana Collaborate with development teams to ensure high availability and performance of microservices-based applications Automate infrastructure provisioning, deployment, and monitoring using Infrastructure as Code (IaC) and CI/CD pipelines Build and maintain GitHub Actions workflows for continuous integration and deployment Troubleshoot production issues and lead root cause analysis to improve system reliability Ensure compliance with healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR) Work closely with data engineering and analytics teams to support healthcare data pipelines and analytics platforms Mentor junior engineers and contribute to SRE best practices and culture Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s degree in Engineering (B.Tech) or equivalent in Computer Science, Information Technology, or a related field 10+ years of experience in Site Reliability Engineering, DevOps, or related roles Hands-on experience with AWS services, EKS, and container orchestration Experience with healthcare technology solutions, health data interoperability standards (FHIR, HL7), and healthcare analytics Experience with GitHub Actions or similar CI/CD tools Solid expertise in Datadog, Splunk, Kibana, and other observability tools Deep understanding of microservices architecture and distributed systems Proficiency in Python for scripting and automation Solid scripting and automation skills (e.g., Bash, Terraform, Ansible) Proven excellent problem-solving, communication, and collaboration skills Preferred Qualifications: Certifications in AWS, Kubernetes, or healthcare IT (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) Experience with security and compliance in healthcare environments At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. 
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
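
Since the role pairs GitHub Actions pipelines with Datadog observability, here is one minimal, hypothetical way the two are often wired together: a Python step in a CI job that posts a deployment event to Datadog's Events API so releases can be overlaid on dashboards. The service name, environment variable names, and tags are illustrative assumptions, not Optum's actual setup.

```python
"""
Hedged sketch: post a deployment event to Datadog from a CI job
(e.g., a GitHub Actions step). Assumes DD_API_KEY is supplied as a
secret; service and tag names are placeholders.
"""
import os
import requests


def post_deploy_event(service: str, version: str) -> None:
    # Datadog Events API (v1); the DD-API-KEY header authenticates the request.
    resp = requests.post(
        "https://api.datadoghq.com/api/v1/events",
        headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
        json={
            "title": f"Deployed {service} {version}",
            "text": f"CI pipeline deployed {service} version {version} to EKS.",
            "tags": [f"service:{service}", f"version:{version}", "source:github-actions"],
            "alert_type": "info",
        },
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Hypothetical values a workflow might export before calling this script.
    post_deploy_event(os.environ.get("SERVICE", "claims-api"),
                      os.environ.get("VERSION", "unknown"))
```

A step like this is typically the last job in a deploy workflow, so deployment markers line up with any latency or error spikes in the dashboards.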

Posted 1 week ago

Apply

3.0 years

1 - 5 Lacs

Hyderābād

On-site


JOB DESCRIPTION

There's nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Consumer & Community Banking, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job responsibilities:
• Guides and assists others in the areas of building appropriate level designs and gaining consensus from peers where appropriate
• Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
• Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
• Implements infrastructure, configuration, and network as code for the applications and platforms in your remit
• Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
• Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers (see the error-budget sketch after this listing)
• Supports the adoption of site reliability engineering best practices within your team

Required qualifications, capabilities, and skills:
• Formal training or certification on software engineering concepts and 3+ years applied experience
• Proficient in site reliability culture and principles and familiarity with how to implement site reliability within an application or platform
• Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
• Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.)
• Experience in observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
• Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
• Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
• Familiarity with troubleshooting common networking technologies and issues
• Ability to contribute to large and collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision
• Ability to proactively recognize roadblocks and demonstrate interest in learning technology that facilitates innovation
• Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team
• Ability to initiate and implement ideas to solve business problems

ABOUT US

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM

Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We're proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction.
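
The responsibilities above lean on service level indicators and objectives; the sketch below shows the underlying arithmetic in plain Python: how much of an error budget a window of traffic has burned. The SLO target, request counts, and class name are hypothetical and not specific to JPMorgan Chase's tooling.

```python
"""
Illustrative error-budget calculation for an availability SLO.
All numbers are made up for the example.
"""
from dataclasses import dataclass


@dataclass
class SLOWindow:
    slo_target: float   # e.g. 0.999 means 99.9% of requests should succeed
    good_events: int    # requests that met the SLI
    total_events: int   # all requests in the window

    def error_budget_burned(self) -> float:
        """Fraction of the error budget consumed in this window (can exceed 1.0)."""
        if self.total_events == 0:
            return 0.0
        allowed_failures = (1 - self.slo_target) * self.total_events
        actual_failures = self.total_events - self.good_events
        return actual_failures / allowed_failures if allowed_failures else float("inf")


window = SLOWindow(slo_target=0.999, good_events=998_500, total_events=1_000_000)
# 1,500 failures against an allowance of 1,000 -> 150% of the budget burned.
print(f"Error budget burned: {window.error_budget_burned():.0%}")
```

A burn rate above 100% of the budget for the window is the kind of signal that typically drives proactive alerting before customers notice.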

Posted 1 week ago

Apply

3.0 years

1 - 5 Lacs

Hyderābād

On-site


There's nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Consumer & Community Banking, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job responsibilities:
• Guides and assists others in the areas of building appropriate level designs and gaining consensus from peers where appropriate
• Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
• Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
• Implements infrastructure, configuration, and network as code for the applications and platforms in your remit
• Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
• Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers
• Supports the adoption of site reliability engineering best practices within your team

Required qualifications, capabilities, and skills:
• Formal training or certification on software engineering concepts and 3+ years applied experience
• Proficient in site reliability culture and principles and familiarity with how to implement site reliability within an application or platform
• Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
• Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.)
• Experience in observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
• Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
• Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
• Familiarity with troubleshooting common networking technologies and issues
• Ability to contribute to large and collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision
• Ability to proactively recognize roadblocks and demonstrate interest in learning technology that facilitates innovation
• Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team
• Ability to initiate and implement ideas to solve business problems

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderābād

On-site


About Plane

Plane is an incisive response to config-heavy, opinionated, and restrictive project management software. [Read our manifesto]. In just two years, Plane's grown to #1 in its category on GitHub and become a viable open-core alternative to Jira, Monday, Wrike, Asana, ClickUp, and Linear, not to mention all-in-one tools like Notion or Obsidian. Our growth has come on the back of the product's true flexibility without artificial limits, simple configurations that work out of the box, and thoughtfully packaged features that nurture our customers' growth instead of punishing it. As a modern product start-up, we obsess over new and power users equally.

Our mission is to empower teams everywhere with the simplest, most delightful work management experience on the planet. Our vision is to become the WorkOS of the future with a workbench of unified tools and techniques that intuitively and progressively form a greater whole for knowledge workers. We're well capitalized, backed by OSS Capital, and are revenue-positive. Our coworkers include ex-Microsoft, PayPal, MongoDB, Deloitte, [24]7, Nutanix, and Yahoo! in our 50+ strong human force today.

What you will do
• Develop the backbone for a variety of user-centric features.
• Enhance the efficiency of real-time data synchronization across our platforms.
• Boost database and system performance through advanced caching techniques and connection pooling (a sketch follows this listing).
• Elevate our service's reliability with superior observability, monitoring, and alert systems, enabling prompt incident responses.
• Expand our service's capabilities with strategic architectural and infrastructural upgrades.
• Share our meaningful innovations with the open-source community.
• Set new industry standards for software development practices, aiming to deliver unparalleled quality to the global market.

Skills You'll Need to Bring
• Proficient in programming, with extensive experience working across the full stack.
• Expertise in Python, with a strong ability to navigate the full stack environment.
• A self-starter attitude, with a commitment to taking responsibility for tasks.
• Experience working with monorepos and microservices architecture.
• Demonstrated success in similar roles.
• A dedication to delivering high-quality products.
• 2 to 3 years of proven experience in developing high-quality backend systems.

Tech
• Django/Python, Postgres, RabbitMQ, and Redis in the backend
• Hosted on AWS Cloud with k8s
• Experience working with message queues and scheduling
• Experience using monitoring tools like Datadog and Sentry
• Tools: GitHub, Django, PGAdmin, Postman, Pytest

Why Join Plane?
• Be part of a global product team driving meaningful impact worldwide.
• Thrive in a collaborative, innovation-driven environment that prioritizes continuous learning.
• Experience a vibrant and supportive company culture.
• Join a high-growth organization with exciting opportunities for advancement.
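
One of the bullets above is about caching in a Django/Redis stack; a minimal sketch of that idea, assuming Django's cache framework is configured with a Redis backend, might look like the following. The view, summary payload, and key names are invented for illustration and are not Plane's actual code.

```python
"""
Hedged sketch: cache an expensive per-project summary with Django's cache
framework (Redis-backed via settings). Names and values are placeholders.
"""
from django.core.cache import cache
from django.http import JsonResponse

CACHE_TTL = 60  # seconds; tune per endpoint


def project_summary(request, project_id: int):
    cache_key = f"project-summary:{project_id}"

    def build_summary():
        # Imagine an expensive aggregate query here (issue counts, members, ...).
        return {"project": project_id, "open_issues": 42}

    # get_or_set computes and stores the value only on a cache miss.
    data = cache.get_or_set(cache_key, build_summary, CACHE_TTL)
    return JsonResponse(data)
```

The same pattern extends naturally to invalidating the key on writes, which is usually where the real design work lives.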

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderābād

On-site


Job Requirements

Phenom People is looking for an experienced and motivated Product Manager to join our Product team in Hyderabad, Telangana, India. This is a full-time position. The Associate Product Manager or Product Manager will be responsible for developing and managing the product roadmap, working with stakeholders to define product requirements, and managing the product life cycle. The ideal candidate will have a strong technical background and experience in product management.

Responsibilities:
• Develop and manage the product roadmap
• Work with stakeholders to define product requirements
• Manage the product life cycle
• Monitor product performance and customer feedback
• Identify and prioritize product features
• Develop product pricing and positioning strategies
• Create product marketing plans
• Develop product launch plans
• Analyze market trends and customer needs
• Collaborate with engineering, design, and marketing teams

Requirements (Must-Have):
• 2+ years of product management experience, with at least 2 years in a technical or observability-related role.
• Strong understanding of APM concepts: distributed tracing, metrics aggregation, anomaly detection, alerting, root cause analysis (a tracing sketch follows this listing).
• Familiarity with modern observability stacks: OpenTelemetry, Prometheus, Grafana, Jaeger, Zipkin, ELK/EFK, Datadog, New Relic, AppDynamics, etc.
• Exposure to cloud-native infrastructure: containers, Kubernetes, microservices architecture.
• Experience working with engineers on deeply technical systems and scalable backend architecture.
• Proficiency in creating technically detailed user stories and acceptance criteria.
• Strong problem-solving and analytical skills, with a bias for action and customer empathy.

Nice-to-Have:
• Background in software engineering, DevOps, or site reliability engineering.
• Experience building technical products.
• Understanding of telemetry pipelines, sampling strategies, and correlation between MELT signals.
• Familiarity with SLIs/SLOs, service maps, and incident response workflows.
• Knowledge of integration with CI/CD, synthetic monitoring, or real-user monitoring (RUM).

We prefer candidates with these experiences:
• Experience in product management - worked as a PO or PM in a SaaS product organization
• Experience working on integrations, APIs, etc.
• Experience collaborating with customers and internal business partners
• Experience working with distributed / international teams
• Experience with JIRA or equivalent product development management tools

Minimum Qualifications:
• 1 to 3 years of experience in product management - as a Product Manager, Product Owner, or Associate Product Manager
• Experience in the HR Tech industry is a plus but not mandatory
• Bachelor's degree or equivalent years of experience. MBA is highly desirable.

Benefits:
• Competitive salary for a startup
• Gain experience rapidly
• Work directly with the executive team
• Fast-paced work environment

#LI-JG1
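
Because the role centers on APM concepts such as distributed tracing, a small hedged sketch of what a trace looks like in code may help: the OpenTelemetry Python SDK with a console exporter. Service and span names are placeholders; a real deployment would export spans to a backend such as Jaeger or Datadog instead of the console.

```python
"""
Hedged sketch of distributed tracing with the OpenTelemetry Python SDK.
Span and attribute names are illustrative only.
"""
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle_checkout") as span:
    span.set_attribute("cart.items", 3)
    with tracer.start_as_current_span("charge_payment"):
        pass  # a downstream call would show up as a child span in the trace
```

Nested spans like these are what service maps and root cause analysis views are built from.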

Posted 1 week ago

Apply

5.0 years

18 - 24 Lacs

Hyderābād

On-site


No. of Positions: 2
Position: Observability Engineer
Exp: 5-10 Years
Location: Hyderabad
Mode: 2 Days WFO
Mandatory Skills: Observability, Grafana, and writing queries using Prometheus and Loki (a query sketch follows this listing).
Note: Candidate will be deployed at Vialto premises.

Job Description:
We are looking for a highly skilled Observability Engineer to design, develop, and maintain observability solutions that provide deep visibility into our infrastructure, applications, and services. You will be responsible for implementing monitoring, logging, and tracing solutions to ensure the reliability, performance, and availability of our systems. Working closely with development, infrastructure engineering, DevOps, and SRE teams, you will play a critical role in optimizing system observability and improving incident response.

Key Responsibilities:
● Design and implement observability solutions for monitoring, logging, and tracing across cloud and on-premises environments.
● Develop and maintain monitoring tools such as Prometheus, Grafana, Datadog, New Relic, and AppDynamics.
● Implement distributed tracing using OpenTelemetry, Jaeger, Zipkin, or similar tools to improve application performance and troubleshooting.
● Optimize log management and analysis with tools like Elasticsearch, Splunk, Loki, or Fluentd.
● Create alerting and anomaly detection strategies to proactively identify system issues and reduce mean time to resolution (MTTR).
● Collaborate with development and SRE teams to enhance observability in CI/CD pipelines and microservices architectures.
● Automate observability processes using scripting languages like Python, Bash, or Golang.
● Ensure scalability and efficiency of monitoring solutions to handle large-scale distributed systems.
● Support incident response and root cause analysis by providing actionable insights through observability data.
● Stay up to date with industry trends in observability and site reliability engineering (SRE).

Required Qualifications:
● 3+ years of experience in observability, SRE, DevOps, or a related field.
● Proficiency in observability tools such as Prometheus, Grafana, Datadog, New Relic, or AppDynamics.
● Experience with logging platforms like Elasticsearch, Splunk, Loki, or Fluentd.
● Strong knowledge of distributed tracing (OpenTelemetry, Jaeger, Zipkin).
● Hands-on experience with Azure cloud platforms and Kubernetes.
● Proficiency in scripting languages (Python, Bash, PowerShell) and infrastructure as code (Terraform, Ansible).
● Solid understanding of system performance, networking, and troubleshooting.
● Strong problem-solving and analytical skills.
● Excellent communication and collaboration abilities.

Preferred Qualifications:
● Experience with AI-driven observability and anomaly detection.
● Familiarity with microservices, serverless architectures, and event-driven systems.
● Experience working with on-call rotations and incident management workflows.
● Relevant certifications in observability tools, cloud platforms, or SRE practices.

Job Type: Fresher
Pay: ₹1,800,000.00 - ₹2,400,000.00 per year
Benefits: Provident Fund
Supplemental Pay: Performance bonus
Work Location: In person
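
The mandatory skills call out writing Prometheus queries; as a hedged illustration, the snippet below runs a PromQL instant query against Prometheus's standard HTTP API from Python. The Prometheus URL, metric, and label names are assumptions for demonstration only.

```python
"""
Hedged sketch: run a PromQL instant query via Prometheus's /api/v1/query
HTTP endpoint. URL, metric, and labels are placeholders.
"""
import requests

PROM_URL = "http://prometheus.example.internal:9090"


def instant_query(promql: str) -> list:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Query failed: {body}")
    return body["data"]["result"]


# 5xx error rate per service over the last 5 minutes (hypothetical metric/labels).
for series in instant_query('sum by (service) (rate(http_requests_total{status=~"5.."}[5m]))'):
    print(series["metric"].get("service", "unknown"), series["value"][1])
```

The same query expression would typically live in a Grafana panel or an alerting rule rather than a script; the script form is just the easiest way to experiment.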

Posted 1 week ago

Apply

0 years

5 - 7 Lacs

Gurgaon

On-site


Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description

United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Job overview and responsibilities

As an Engineering Automation Test Manager of Information Technology at United Airlines, you will be responsible for day-to-day supervision and direction of QE Engineer related roles across multiple cross-functional areas. You will directly supervise the quality engineering professionals. In this role you will be accountable for leading quality efforts on multiple projects and/or a large program that consists of multiple testing tracks. You will work closely with US senior managers, managers, and the SME leadership team to influence technical and architectural aspects of computing platforms.

• Resource management, including mentoring, coaching, performance appraisal, capacity planning, and budgeting
• Develop and execute comprehensive test strategies, plans, and schedules to ensure product quality and timely delivery
• Drive test automation initiatives, including in-sprint and intelligent testing, framework and script creation and maintenance, to improve coverage and reduce test cycle completion time
• Build strong partnerships with application development teams, product managers, and other stakeholders to ensure alignment on testing strategy, schedule, and scope
• Continuously evaluate and improve testing processes, standards, metrics, and tools to enhance overall test efficiency
• Manage and mitigate testing-related risks and issues

What's needed to succeed (Minimum Qualifications):
• Bachelor's degree in computer science or computer engineering
• Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC)
• Agile, Scaled Agile, and Waterfall methodologies
• DevOps CI/CD with continuous automation and testing
• Well versed in test automation tools, for example Selenium/BDD, Ready API, JIRA and Zephyr, GitHub (any DevOps tool), Jenkins, Rest Assured, Fiddler, Kibana, Playwright
• Experience in test environment and release management
• Exposure to cloud technologies
• Ability to support production deployments during off / CST hours

What will help you propel from the pack (Preferred Qualifications):
• Airline domain knowledge
• AppD or Dynatrace or Datadog (any one of these APM tools); Seetest or any mobile device cloud platform; sonar scan; security testing tools (any one of them); BrowserStack (or any tool to test different browsers); Harness; LoadRunner

Posted 1 week ago

Apply

9.0 years

6 - 7 Lacs

Chennai

On-site


Total 9 years of experience, with a minimum of 5 years of experience working as a DBT administrator.

• DBT Core & Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud. Administer and manage DBT Cloud environments, including users, permissions, job scheduling, and Git integration. Handle onboarding and enablement of DBT users on the DBT Cloud platform, and work closely with users to support DBT adoption and usage.
• SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks.
• Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management.
• Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling (a sketch follows this listing).
• Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing.
• Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging.
• Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access.
• Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams.

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
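
The orchestration bullet above mentions automating DBT runs with Airflow; a minimal sketch of that pattern, assuming Airflow 2.4+ and a dbt project already present on the worker's filesystem, is shown below. The DAG id, project path, target, and schedule are placeholders, not Virtusa's actual setup.

```python
"""
Hedged sketch: a small Airflow DAG that runs `dbt run` and then `dbt test`
on a daily schedule. Paths and names are illustrative assumptions.
(Older Airflow versions would use schedule_interval instead of schedule.)
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics_project"  # hypothetical project location

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )

    # Only run tests once the models have built successfully.
    dbt_run >> dbt_test
```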

Posted 1 week ago

Apply

3.0 years

4 - 8 Lacs

Surat

On-site


About the Role

We are looking for a DevOps Engineer with 3+ years of hands-on experience in automating, optimizing, and maintaining robust CI/CD pipelines, infrastructure as code, and cloud-based deployments. The ideal candidate will be skilled in bridging the gap between development and operations, ensuring high availability, performance, and scalability across environments.

Contributions of a DevOps Engineer

The capabilities of a DevOps Engineer encompass a wide range of technical skills, soft skills, and domain knowledge. Here are the key contributions they make:
· Toolchain Management
· Documentation & Knowledge Sharing
· Performance Tuning
· Cost Optimization (see the sketch after this listing)
· Customer-Focused Operations
· Release Management

Expectations for a DevOps Engineer
· CI/CD Pipeline Management: Design, implement, and manage continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI, or GitHub Actions.
· Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, CloudFormation, or similar tools.
· Cloud Infrastructure Management: Manage and optimize cloud services (AWS, Azure, or GCP), ensuring secure, scalable, and cost-effective solutions.
· Monitoring & Logging: Set up and maintain monitoring tools like Prometheus, Grafana, ELK stack, or Datadog to ensure system health and reliability.
· Security & Compliance: Implement security best practices including secrets management, access control, and vulnerability scanning.
· Scripting & Automation: Write automation scripts using Bash, Python, or similar to streamline operations.
· Backup & Disaster Recovery: Develop and maintain disaster recovery plans, backup strategies, and high availability configurations.
· Team Collaboration: Collaborate with developers, testers, and system administrators to streamline and secure development workflows.
· Containerization & Orchestration: Build and manage containerized applications using Docker and orchestrate deployments with Kubernetes.

Capabilities of a DevOps Engineer
· Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· Proven Experience: Demonstrable experience as a DevOps Engineer, usually supported by a strong portfolio showcasing relevant projects and accomplishments.
· Certifications: AWS Certified DevOps Engineer, CKA (Certified Kubernetes Administrator), or equivalent.
· Automation Scripting: Competence in writing shell scripts or using languages like Python, Bash, or PowerShell.
· Agile & DevOps Culture Alignment: Familiarity with Agile practices and promoting a DevOps mindset across teams.
· System Automation Expertise: Ability to automate repetitive tasks and infrastructure provisioning using scripting and IaC tools.
· Scalability Planning: Understanding of designing scalable systems that handle high traffic and growth.
Benefits of joining Atologist Infotech
· Paid Leaves
· Leave Encashment
· Friendly Leave Policy
· 5 Days Working
· Festivals Celebrations
· Friendly Environment
· Lucrative Salary packages
· Paid Sick Leave
· Diwali Vacation
· Annual Big Tour
· Festive Off

If the above requirements suit your interest, please call us on +91 9909166110 or send your resume to hr@atologistinfotech.com

Job Type: Full-time
Benefits: Leave encashment, Paid sick time, Paid time off, Provident Fund
Schedule: Fixed shift, Monday to Friday
Supplemental Pay: Overtime pay
Ability to commute/relocate: Surat, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Experience: DevOps: 3 years (Preferred)
Work Location: In person
Speak with the employer: +91 9909166110
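
As a hedged illustration of the cost-optimization and automation duties listed above, the short boto3 script below lists unattached EBS volumes, a common source of silent cloud spend. The region is an assumption, and the script only reports; deciding what to do with the volumes is left to the operator.

```python
"""
Hedged sketch: report EBS volumes in the "available" state (not attached
to any instance), which usually means they are billing without being used.
Region and handling policy are assumptions for illustration.
"""
import boto3


def unattached_volumes(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    # Filter on volume status "available", i.e. created but not attached.
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            yield vol["VolumeId"], vol["Size"]  # size in GiB


if __name__ == "__main__":
    for vol_id, size_gib in unattached_volumes():
        print(f"{vol_id}: {size_gib} GiB not attached to any instance")
```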

Posted 1 week ago

Apply

Exploring Datadog Jobs in India

Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and are actively hiring for Datadog roles.

Average Salary Range

The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.

Related Skills

In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.

Interview Questions

  • What is Datadog and how does it differ from other monitoring tools? (basic)
  • How do you set up custom metrics in Datadog? (medium) (a minimal sketch follows this list)
  • Explain how you would create a dashboard in Datadog to monitor server performance. (medium)
  • What are some key features of Datadog APM (Application Performance Monitoring)? (advanced)
  • Can you explain how Datadog integrates with Kubernetes for monitoring? (medium)
  • Describe how you would troubleshoot an alert in Datadog. (medium)
  • How does Datadog handle metric aggregation and visualization? (advanced)
  • What are some best practices for using Datadog to monitor cloud infrastructure? (medium)
  • Explain the difference between Datadog Logs and Datadog APM. (basic)
  • How would you set up alerts in Datadog for critical system metrics? (medium)
  • Describe a challenging problem you faced while using Datadog and how you resolved it. (advanced)
  • What is anomaly detection in Datadog and how does it work? (medium)
  • How does Datadog handle data retention and storage? (medium)
  • What are some common integrations with Datadog that you have worked with? (basic)
  • Can you explain how Datadog handles tracing for distributed systems? (advanced)
  • Describe a recent project where you used Datadog to improve system performance. (medium)
  • How do you ensure data security and privacy when using Datadog? (medium)
  • What are some limitations of Datadog that you have encountered in your experience? (medium)
  • Explain how you would use Datadog to monitor network traffic and performance. (medium)
  • How does Datadog handle auto-discovery of services and applications for monitoring? (medium)
  • What are some key metrics you would monitor for a web application using Datadog? (basic)
  • Describe a scenario where you had to scale monitoring infrastructure using Datadog. (advanced)
  • How would you implement anomaly detection for a specific metric in Datadog? (medium)
  • What are some best practices for setting up alerts and notifications in Datadog? (medium)
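
For the custom-metrics question flagged above, one common approach is to emit metrics through the local Datadog Agent with the DogStatsD client from the `datadog` Python package. The sketch below assumes an Agent listening on the default StatsD port on localhost; all metric names, tags, and values are illustrative. In Kubernetes, the Agent is often reached via a host port or Unix socket instead.

```python
"""
Hedged sketch: emit custom metrics to a local Datadog Agent via DogStatsD.
Metric and tag names are placeholders.
"""
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)  # local Datadog Agent


def record_checkout(amount: float, payment_method: str) -> None:
    # Count checkouts and track order value, tagged for slicing in dashboards.
    tags = [f"payment_method:{payment_method}", "env:staging"]
    statsd.increment("shop.checkout.count", tags=tags)
    statsd.distribution("shop.checkout.amount", amount, tags=tags)


record_checkout(499.0, "upi")
```

In an interview answer, it is worth adding that the metric only becomes useful once a dashboard widget or monitor is built on top of it, and that tags are what make it sliceable by service, environment, or region.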

Closing Remark

With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies