Home
Jobs

1325 Datadog Jobs - Page 10

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Setup a job Alert
Filter
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.11 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site

Linkedin logo

Experience : 5.11 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Bengaluru) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Wayfair) What do you need for this opportunity? Must have skills required: Java, Microsoft SQL Server, Redis, Aerospike, CI/CD, Docker, Kubernetes, REST, microservices Wayfair is Looking for: About The Job Candidates for this position are preferred to be based in Bangalore, and will be expected to comply with their team's hybrid work schedule requirements. Who We Are: Wayfair runs the largest custom e-commerce large parcel network in the United States, approximately 1.6 million square meters of logistics space. The nature of the network is inherently a highly variable ecosystem that requires flexible, reliable, and resilient systems to operate efficiently. What You’ll Do: Partner with your business stakeholders to provide them with transparency, data, and resources to make informed decisions Be a technical leader within and across the teams you work with Drive high impact architectural decisions and hands-on development, including inception, design, execution, and delivery following good design and coding practices Obsessively focus on production readiness for the team including testing, monitoring, deployment, documentation and proactive troubleshooting Identify risks and gaps in technical approaches and propose solutions to meet team and project goals Create proposals and action plans to garner support across the organization Influence and contribute to the team’s strategy and roadmap Tenacity for learning - curious, and constantly pushing the boundary of what is possible We Are a Match Because You Have: 6+ years of experience in backend software engineering architecting and implementing robust, distributed web applications. Bachelor’s degree in Computer Science, Computer Engineering or equivalent combination of education and experience. Track-record of technical leadership for teams following software development best practices (e.g. SOLID, TDD, GRASP, YAGNI, etc). Track-record of being a hands-on developer efficiently building technically sound systems. Experience building web services with Java, REST, Micro-services. Experience with Continuous Integration (CI/CD) practices and tools (Buildkite, Jenkins, etc.). Experience architecting solutions leveraging distributed infrastructure (e.g.Docker, Kubernetes, etc). Experience with Microsoft SQL Server, Aerospike, Redis. Experience leveraging monitoring and logging technologies (e.g. DataDog, Elasticsearch, InfluxDB, etc). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less

Posted 5 days ago

Apply

5.11 years

0 Lacs

Nashik, Maharashtra, India

On-site

Linkedin logo

Experience : 5.11 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Bengaluru) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Wayfair) What do you need for this opportunity? Must have skills required: Java, Microsoft SQL Server, Redis, Aerospike, CI/CD, Docker, Kubernetes, REST, microservices Wayfair is Looking for: About The Job Candidates for this position are preferred to be based in Bangalore, and will be expected to comply with their team's hybrid work schedule requirements. Who We Are: Wayfair runs the largest custom e-commerce large parcel network in the United States, approximately 1.6 million square meters of logistics space. The nature of the network is inherently a highly variable ecosystem that requires flexible, reliable, and resilient systems to operate efficiently. What You’ll Do: Partner with your business stakeholders to provide them with transparency, data, and resources to make informed decisions Be a technical leader within and across the teams you work with Drive high impact architectural decisions and hands-on development, including inception, design, execution, and delivery following good design and coding practices Obsessively focus on production readiness for the team including testing, monitoring, deployment, documentation and proactive troubleshooting Identify risks and gaps in technical approaches and propose solutions to meet team and project goals Create proposals and action plans to garner support across the organization Influence and contribute to the team’s strategy and roadmap Tenacity for learning - curious, and constantly pushing the boundary of what is possible We Are a Match Because You Have: 6+ years of experience in backend software engineering architecting and implementing robust, distributed web applications. Bachelor’s degree in Computer Science, Computer Engineering or equivalent combination of education and experience. Track-record of technical leadership for teams following software development best practices (e.g. SOLID, TDD, GRASP, YAGNI, etc). Track-record of being a hands-on developer efficiently building technically sound systems. Experience building web services with Java, REST, Micro-services. Experience with Continuous Integration (CI/CD) practices and tools (Buildkite, Jenkins, etc.). Experience architecting solutions leveraging distributed infrastructure (e.g.Docker, Kubernetes, etc). Experience with Microsoft SQL Server, Aerospike, Redis. Experience leveraging monitoring and logging technologies (e.g. DataDog, Elasticsearch, InfluxDB, etc). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less

Posted 5 days ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title : Backend Developer Location : Bangalore, India Experience : 6 to 13 Yrs About the Role : As a Back-End Developer, youll collaborate with the development team to build and maintain scalable, secure, and high-performing back-end systems for our SaaS products. You will play a key role in designing and implementing microservices architectures, integrating databases, and ensuring seamless operation of cloud-based applications. Responsibilities : - Design, develop, and maintain robust and scalable back-end solutions using modern frameworks and tools. - Create, manage, and optimize microservices architectures, ensuring efficient communication between services. - Develop and integrate RESTful APIs to support front-end and third-party systems. - Design and implement database schemas and optimize performance for SQL and NoSQL databases. - Support deployment processes by aligning back-end development with CI/CD pipeline requirements. - Implement security best practices, including authentication, authorization, and data protection. - Collaborate with front-end developers to ensure seamless integration of back-end services. - Monitor and enhance application performance, scalability, and reliability. - Keep up-to-date with emerging technologies and industry trends to improve back-end practices. Your Qualifications : Must-Have Skills : - Bachelors or Masters degree in Computer Science, Software Engineering, or a related field. - Proven experience as a Back-End Developer with expertise in modern frameworks such as Node.js, Express.js, or Django. - Expertise in .NET frameworks including development in C++ and C# for high performance databases - Strong proficiency in building and consuming RESTful APIs. - Expertise in database design and management with both SQL (e.g., PostgreSQL, MS SQL Server) and NoSQL (e.g., MongoDB, Cassandra) databases. - Hands-on experience with microservices architecture and containerization tools like Docker and Kubernetes. - Strong understanding of cloud platforms like Microsoft Azure, AWS, or Google Cloud for deployment, monitoring, and management. - Proficiency in implementing security best practices (e.g., OAuth, JWT, encryption techniques). - Experience with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or Azure DevOps. - Familiarity with Agile methodologies and participation in sprint planning and reviews. Good-to-Have Skills : - Experience with time-series databases like TimescaleDB or InfluxDB. - Experience with monitoring solutions like Datadog or Splunk. - Experience with real-time data processing frameworks like Kafka or RabbitMQ. - Familiarity with serverless architecture and tools like Azure or AWS Lambda Functions. - Expertise in Java backend services and microservices - Hands-on experience with business intelligence tools like Grafana or Kibana for monitoring and visualization. - Knowledge of API management platforms like Kong or Apigee. - Experience with integrating AI/ML models into back-end systems. - Familiarity with MLOps pipelines and managing AI/ML workloads. - Understanding of iPaaS (Integration Platforms as a Service) and related technologies. Key Competencies & Attributes : - Strong problem-solving and analytical skills. - Exceptional organizational skills with the ability to manage multiple priorities. - Adaptability to evolving technologies and industry trends. - Excellent collaboration and communication skills to work effectively in cross-functional teams. 
- Ability to thrive in self-organizing teams with a focus on transparency and trust.

Posted 5 days ago

Apply

0.0 - 1.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Indeed logo

reverseBits is seeking a talented Python Developer to join our dynamic and innovative team. It is an exciting opportunity for a motivated individual with 1-5 years of industry experience to contribute to our cutting-edge projects. Role Description As a Python Backend engineer, you will be crucial in developing and maintaining robust software solutions that solve complex business problems. Your expertise in Python programming, understanding of business needs, and familiarity with cloud services such as AWS and serverless will be essential to your success in this position. Key Responsibilities Design, develop, and maintain high-quality Python-based applications Working with AWS serverless stack and services such as Lambda, DynamoDB, API gateway, etc. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions Write clean, efficient, and reusable code following best practices and coding standards Conduct thorough testing and debugging to ensure software functionality and reliability Participate in code reviews to maintain code quality and improve overall team productivity Stay updated with the latest industry trends and technologies to enhance your technical skills continuously Troubleshoot and resolve software defects and issues reported by users Qualifications 1-5 years of professional experience as a Python Developer, working on complex software projects Strong programming skills with a deep understanding of Python and its frameworks (Flask, Django, or FastAPI) Proven experience in designing and developing RESTful APIs Familiarity with cloud services, particularly AWS, and understanding of how to leverage them in application development Hands-on experience working in AWS ECS, ECR, Lambda, API gateway, DynamoDB, RDS, EC2, Cloudformation Hands-on experience in Docker and kubernetes Solid understanding of software development principles, including object-oriented programming and design patterns Proficiency in database technologies, such as SQL and NoSQL databases Ability to work effectively in an Agile development environment, collaborating with multidisciplinary teams Excellent problem-solving skills and ability to analyse and resolve technical issues Strong communication skills, both verbal and written, with the ability to articulate complex technical concepts to non-technical stakeholders Bonus if you have Knowledge of CI-CD tools and infrastructure as a code services such as terraform, pulumi, etc. Experience working in monitoring tools such as New relic, Grafana, Kibana, Datadog Data pipeline building experience Job Types: Full-time, Permanent Pay: From ₹20,000.00 per month Benefits: Flexible schedule Leave encashment Paid time off Schedule: Day shift Monday to Friday Supplemental Pay: Performance bonus Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Python: 2 years (Required) Django: 1 year (Required) Work Location: In person

Posted 5 days ago

Apply

8.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Linkedin logo

Job Requirements We are looking for a seasoned DevOps Architect to lead the design and implementation of automated, scalable, and secure DevOps pipelines and infrastructure. The ideal candidate will bridge development and operations by architecting robust CI/CD processes, ensuring infrastructure as code (IaC) adoption, promoting a culture of automation, and enabling rapid software delivery across cloud and hybrid environments. Key Roles and Responsibilities Design end-to-end DevOps architecture and tooling that supports development, testing, and deployment workflows. Define best practices for source control, build processes, code quality, and artifact repositories. Collaborate with stakeholders to align DevOps initiatives with business and technical goals. Architect and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, Azure DevOps, CircleCI, or ArgoCD. Ensure pipelines are scalable, efficient, and support multi-environment deployments. Integrate automated testing, security scanning, and deployment verifications into pipelines. Lead the implementation of IaC using tools like Terraform, AWS CloudFormation, Azure ARM, or Pulumi. Enforce version-controlled infrastructure and promote immutable infrastructure principles. Manage infrastructure changes through GitOps practices and reviews. Design and support containerized workloads using Docker and orchestration platforms like Kubernetes or OpenShift. Implement Helm charts, Operators, and auto-scaling strategies. Architect cloud-native infrastructure in AWS, Azure, or GCP for microservices applications. Set up observability frameworks using tools like Prometheus, Grafana, ELK/EFK, Splunk, or Datadog. Implement alerting mechanisms and dashboards for system health and performance. Participate in incident response, root cause analysis, and postmortem reviews. Integrate security practices (DevSecOps) into all phases of the delivery pipeline. Enforce policies for secrets management, access controls, and software supply chain integrity. Ensure compliance with regulations like SOC 2, HIPAA, or ISO 27001. Automate repetitive tasks such as provisioning, deployments, and environment setups. Integrate tools across the DevOps lifecycle, including Jira, ServiceNow, SonarQube, Nexus, etc. Promote the use of APIs and scripting to streamline DevOps workflows. Act as a DevOps evangelist, mentoring engineering teams on best practices. Drive adoption of Agile and Lean principles within infrastructure and operations. Facilitate knowledge sharing through documentation, brown-bag sessions, and training. Work Experience Bachelor's/Master’s degree in Computer Science, Engineering, or related discipline. 8+ years of IT experience, with at least 3 in a senior DevOps role. Deep experience with CI/CD tools and DevOps automation frameworks. Proficiency in scripting (Bash, Python, Go, or PowerShell). Hands-on experience with one or more public cloud platforms: AWS, Azure, or GCP. Strong understanding of GitOps, configuration management (Ansible, Chef, Puppet), and observability tools. Experience managing infrastructure and deploying applications in Kubernetes-based environments. Knowledge of software development lifecycle and Agile/Scrum methodologies. Certifications such as: - AWS Certified DevOps Engineer – Professional - Azure DevOps Engineer Expert - Certified Kubernetes Administrator (CKA) or Developer (CKAD) - Terraform Associate Experience implementing FinOps practices and managing cloud cost optimization. 
Familiarity with service mesh (Istio, Linkerd) and serverless architectures Show more Show less

Posted 5 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Position: DevOps Engineer Experience Required: 4+ years Employment Type: Full-Time Location: Pune Role Summary: We are seeking skilled DevOps Engineers with at least 4 years of experience in managing cloud infrastructure, automation, and modern CI/CD workflows. This role requires strong hands-on expertise in designing, deploying, and maintaining scalable cloud environments using Infrastructure-as-Code (IaC) principles. Candidates must be comfortable working with container technologies, cloud security, networking, and monitoring tools to ensure system efficiency and reliability in large-scale applications. Key Responsibilities: Design and manage cloud infrastructure using platforms like AWS, Azure , or GCP . Write and maintain Infrastructure-as-Code (IaC) using tools such as Terraform or CloudFormation . Develop and manage CI/CD pipelines with tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines , or AWS CodePipeline . Deploy and manage containers using Kubernetes, OpenShift, AWS EKS, AWS ECS , and Docker . Ensure security compliance with frameworks including SOC 2, PCI, HIPAA, GDPR , and HITRUST . Lead and support cloud migration projects from on-premise to cloud infrastructure. Implement and fine-tune monitoring and alerting systems using tools such as Datadog, Dynatrace, CloudWatch, Prometheus, ELK , or Splunk . Automate infrastructure setup and configuration with Ansible, Chef, Puppet , or equivalent tools. Diagnose and resolve complex issues involving cloud performance, networking , and server management . Collaborate across development, security, and operations teams to enhance DevSecOps practices. Required Skills & Experience: 3+ years in a DevOps, cloud infrastructure , or platform engineering role. Strong knowledge and hands-on experience with AWS Cloud . In-depth experience with Kubernetes, ECS, OpenShift , and container orchestration. Skilled in writing IaC using Terraform , CloudFormation , or similar tools. Proficiency in automation using Python, Bash , or PowerShell . Familiar with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD , or Bitbucket Pipelines . Solid background in Linux distributions (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server environments. Strong grasp of networking concepts : VPCs, subnets, load balancers, firewalls, and security groups. Experience working with monitoring/logging platforms such as Datadog, Prometheus, ELK, Dynatrace, etc. Excellent communication skills and a collaborative mindset. Understanding of cloud security practices including IAM policies, WAF, GuardDuty , and vulnerability management . Preferred/Good-to-Have Skills: Exposure to cloud-native security platforms (e.g., AWS Security Hub, Azure Security Center, Google SCC). Familiarity with regulatory compliance standards like SOC 2, PCI, HIPAA, GDPR , and HITRUST . Experience managing Windows Server environments in tandem with Linux. Understanding of centralized logging tools such as Splunk, Fluentd , or AWS OpenSearch . Knowledge of GitOps methodologies using tools like ArgoCD or Flux . Background in penetration testing, threat detection , and security assessments . Proven experience with cloud cost optimization strategies . A passion for coaching, mentoring , and sharing DevOps best practices within the team. Show more Show less

Posted 5 days ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Linkedin logo

Required Qualifications & Skills: 5+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in Cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation Key Responsibilities: Design & manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions). Automate infrastructure provisioning (Terraform, Ansible, Pulumi). Monitor & optimize cloud environments (AWS, GCP, Azure). Implement containerization & orchestration (Docker, Kubernetes - EKS/GKE/AKS). Maintain logging, monitoring & alerting (ELK, Prometheus, Grafana, Datadog). Ensure system security, availability & performance tuning. Manage secrets & credentials (Vault, AWS Secrets Manager). Troubleshoot infrastructure & deployment issues. Implement blue-green & canary deployments. Collaborate with developers to enhance system reliability & productivity Preferred Skills: Certifications (AWS DevOps Engineer, CKA/CKAD, Google Cloud DevOps Engineer). Experience with multi-cloud, microservices, event-driven systems. Exposure to AI/ML pipelines & data engineering workflows. Show more Show less

Posted 5 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Role Overview We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications. Key Responsibilities Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP. Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline. Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker. Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure. Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST. Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk. Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools. Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks. Collaborate with development, security, and operations teams to streamline the DevSecOps process. Must-Have Skills 3+ years of experience in DevOps, cloud infrastructure, or platform engineering. Expertise in at least one major cloud provider: AWS, Azure, or GCP. Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies. Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools. Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation. Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines. Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration. Expertise in networking (VPCs, Subnets, Load Balancing, Security Groups, Firewalls). Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace. Strong communication skills to work with cross-functional teams and external customers. Knowledge of Cloud Security best practices, including IAM, WAF, GuardDuty, CVE scanning, vulnerability management. Good-to-Have Skills Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center). Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST). Exposure to Windows Server administration alongside Linux environments. Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch). GitOps experience with tools like ArgoCD or Flux. Background in penetration testing, intrusion detection, and vulnerability scanning. Experience in cost optimization strategies for cloud infrastructure. Passion for mentoring teams and sharing DevOps best practices. 
Skills: gitlab ci/cd,ci/cd,cloud security,infrastructure-as-code (iac),terraform,github actions,windows server,devops,security,cloud infrastructure,linux,puppet,ci,aws,chef,scripting (python, bash, powershell),infrastructure,azure,cloud,networking,gcp,automation,monitoring tools,kubernetes,log management,ansible,cd,jenkins,containerization,monitoring tools (datadog, prometheus, elk) Show more Show less

Posted 5 days ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Work Location : PAN India Duration : 12 Months (Extendable) Shift : Rotational shifts including night shifts and weekend availability Years of Experience : 8+ Years 🔧 Job Summary We are seeking an experienced and versatile Site Reliability Engineer (SRE) / Observability Engineer to join our project delivery team. The ideal candidate will bring a deep understanding of modern cloud infrastructure, monitoring tools, and automation practices to ensure system uptime, scalability, and performance across a distributed environment. 🎯 Key Responsibilities Site Reliability Engineering Design, build, and maintain scalable, reliable infrastructure. Automate provisioning/configuration using tools like Terraform, Ansible, Chef, or Puppet. Develop automation tools/scripts in Python, Go, Java, or Bash. Administer and optimize Linux/Unix systems and network components (TCP/IP, DNS, load balancers). Deploy and manage infrastructure on AWS or Kubernetes platforms. Build and maintain CI/CD pipelines (e.g., Jenkins, ArgoCD). Monitor production systems with tools such as Prometheus, Grafana, Nagios, Datadog. Conduct postmortems and define SLAs/SLOs to ensure high system reliability. Plan and implement capacity management, failover systems, and auto-scaling mechanisms. Observability Engineering Instrument services for metrics/logs/traces using OpenTelemetry, Prometheus, Jaeger, etc. Manage observability stacks (e.g., Grafana, ELK Stack, Splunk, Datadog, Honeycomb). Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation tools. Build actionable alerts and dashboards to reduce alert fatigue and increase insight. Advocate for observability best practices with developers and define performance KPIs. ✅ Required Skills & Qualifications Proven experience as an SRE or Observability Engineer in production environments. Strong Linux/Unix and cloud infrastructure skills (especially AWS, Kubernetes). Proficient in scripting and automation (Python, Go, Bash, Java). Expertise in observability, monitoring, and alerting systems. Experience in Infrastructure as Code (IaC) and modern CI/CD practices. Strong troubleshooting skills and ability to respond to live production issues. Comfortable with rotational shifts, including nights and weekends. 🔍 Mandatory Technical Skills Ansible AWS Automation Services AWS CloudFormation AWS CodePipeline AWS CodeDeploy AWS DevOps Services Show more Show less

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Job Description You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver Quality Engineering services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts; document problems in an issues log and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance effectiveness of QE strategies. What You Will Do With limited oversight, use your experience and knowledge of testing and testability to influence better software design, promote proper engineering practice, bug prevention strategies, testability, accessibility, privacy, and other advanced quality concepts across solutions Develop test strategies, automate tests using test frameworks and write moderately complex code/scripts to test solutions, products and systems Monitor product development and usage at all levels with an eye for product quality. Create test harnesses and infrastructure as necessary Demonstrate an understanding of test methodologies, writing test plans, creating test cases and debugging What Experience You Need Bachelor's degree in a STEM major or equivalent experience 2-5 years of software testing experience Able to create automated test based on functional and non-functional requirements Self-starter that identifies/responds to priority shifts with minimal supervision. Software build management tools like Maven or Gradle Software testing tools like Cucumber, Selenium Software testing, performance, and quality engineering techniques and strategies Testing technologies: JIRA, Confluence, Office products Cloud technology: GCP, AWS, or Azure Cloud Certification Strongly Preferred What Could Set You Apart Experience with cloud based testing environments(AWS,GCP) Hands-on experience working in Agile environments. Knowledge of API testing tools(Bruno,Swagger) and on SOAP API Testing using SoapUI. Certification in ISTQB or similar or Google cloud certification.. Experience with cutting-edge tools & technologies :Familiarity with the latest tools and technologies such as AI, machine learning and cloud computing. Expertise with cross device testing strategies and automation via device clouds Experience monitoring and developing resources Excellent coding and analytical skills Experience with performance engineering and profiling (e.g. 
Java JVM, Databases) and tools such as Load Runner, JMeter,Gatling Exposure to Application performance monitoring tools like Grafana & Datadog Ability to create good acceptance and integration test automation scripts and integrate with Continuous integration (Jenkins) and code coverage tools (Sonar) to ensure 80% or higher code coverage Experience working in a TDD/BDD environment and can utilize technologies such as JUnit, Rest Assured, Appium, Gauge/Cucumber frameworks, APIs (REST/SOAP). Understanding of Continuous Delivery concepts and can use tools including Jenkins and vulnerability tools such as Sonar,Fortify, etc. Experience in Lamba Testing for Cross browser testing A good understanding of Git version control,including branching strategies , merging and conflict resolution. We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Show more Show less

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Overview Cvent is a global meeting, event, travel, and hospitality technology leader, with more than 4000+ employees worldwide. As a leading cloud-based technology company, we have over 28,000+ customers, including 80% of the Fortune 100 companies, in more than 100 countries. Cvent’s software solutions optimize the entire event management value chain and have enabled clients around the world to manage hundreds of thousands of meetings and events. In addition to helping event planners navigate every aspect of the event process, we also provide an integrated platform to hoteliers to help create qualified demand for their hotels, manage that demand more efficiently, and measure their business performance in real-time. In This Role, You Will As a Lead - Site Reliability Engineer, you'll use your advanced development and operations knowledge to identify and prioritize issues. Find universal solutions to common problems and Mentor And Support Junior Staff. Additionally, You Will Enlighten, Enable and Empower a fast-growing set of multi-disciplinary teams, across multiple applications and locations. Tackle complex development, automation and business process problems. Champion Cvent standards and best practices. Ensure the scalability, performance, and resilience of our suite of products. Work with the development and product team of a new application to establish the right monitoring and alerting strategy. Work with a new acquisition's DevOps team to cross -pollinate best practices, educate and close gaps in Cvent standards. Develop build, test and deployment automation that seamlessly targets multiple on-premises and AWS regions. Help a dev team working on a legacy code base to realize zero-down-time deployments Give back by working on and contributing to Open Source projects Automate all the things! Must Have Here's What You Need: Experience with SDLC methodologies (preferably Agile software development methodology). Experience with software development - Knowledge of Java/Python/Ruby is a must. Preferably good understanding of Object-Oriented Programming concepts. Exposure to managing AWS services / operational knowledge of managing applications in AWS Experience with configuration management tools such as Chef, Puppet, Ansible or equivalent Solid Windows and Linux administration skills. Working with APM, monitoring, and logging tools (New Relic, DataDog, Splunk) Experience in managing 3 tier application stacks / Incident response Experience with build tools such as Jenkins, CircleCI, Harness etc Exposure to containerization concepts - docker, ECS, EKS, Kubernetes Working experience with NoSQL databases such as MongoDB, couchbase, postgres etc Self-motivation and the ability to work under minimal supervision is must. Good To Have F5 load balancing concepts Basic understanding of observability & SLIs/SLOs Message Queues (RabbitMQ). Understanding of basic networking concepts Experience with package managers such as nexus, artifactory or equivalent Good communication skills People management experience Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

56.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

You will be part of a dynamic team that provides 24X7 support to BFS and end to end support for all the monitoring tools in a supportive and inclusive environment. Our team works closely with key stakeholders, providing monitoring solutions using a variety of modern technologies. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You’ll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes. What role will you play? In this role, you will be responsible for monitoring the core software platforms, analysing and troubleshooting any issues and automating the manual processes. You will collaborate with key stakeholders to investigate issues, implement solutions and drive improvements in reliability and performance of software systems in the organization. What You Offer 4 to 9 years of industry experience working as Site Reliability Engineer with good exposure to production support and incident management; Experience with APM tools like Dynatrace, AppDynamics, DataDog, etc., and log monitoring tools such as Sumo Logic, and Splunk; Good programming skills in any high-level programming languages like Java, python or golang; and Familiarity with public cloud platforms such as AWS GCP is highly desirable. Amenable to follow a hybrid work setup with standard schedule of 6:30am – 3:30pm IST with 1 week mandatory night shift (work-from-home setup) - either from 3pm - 12am or 10pm - 7am IST in every 1.5 - 2 months as per requirement. We love hearing from anyone inspired to build a better future with us, if you're excited about the role or working at Macquarie we encourage you to apply. About Technology Technology enables every aspect of Macquarie, for our people, our customers and our communities. We’re a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow’s technology solutions. Our commitment to diversity, equity and inclusion We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Line of Service Advisory Industry/Sector Not Applicable Specialism Microsoft Management Level Senior Associate Job Description & Summary At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: Responsibilities: API, S3 to Azure Blob Storage, AWS to Azure SDK conversion, EKS to AKS migration Mandatory skill sets: Deep experience with AWS and Azure, including migrating services to AWS and Azure Experience with MySQL, Java, Apache, and Tomcat Experience with configuration management tools like Chef, Puppet, or CFEngine Experience with containerization using Docker and Kubernetes/EKS/AKS Experience with CI/CD using Jenkins and Groovy DSL Familiarity with Prometheus, Cortex, Grafana, New Relic, Datadog, and Splunk Knowledge of key protocols including TCP/IP, SSH, DNS, SMTP, SNMP, SSL, HTTP, and LDAP Knowledge of well-known open-source tools for monitoring, trending, and configuration management One certification is mandatory (it should not be AZ-900) Preferred skill sets: NA Years of experience required: 3 to 8 years Education qualification: BTech/BE Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills DevOps Optional Skills Accepting Feedback, Active Listening, Amazon Web Services (AWS), Analytical Thinking, Cloud Administration, Cloud-Based Service Management, Cloud Compliance, Cloud Engineering, Cloud Infrastructure, Cloud Infrastructure Architecture Design, Cloud Infrastructure Optimization, Cloud Migration, Cloud Operations (CloudOps), Cloud Performance Optimization, Cloud Service Delivery, Cloud Strategy, Communication, Creativity, CrowdStrike, Dynatrace APM, Embracing Change, Emotional Regulation, Empathy, FinOps Operating Model {+ 17 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date

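The PwC listing above names "S3 to Azure Blob Storage" and "AWS to Azure SDK conversion" among its responsibilities. The sketch below is one minimal, hypothetical way to express that task in Python, assuming boto3 on the AWS side and azure-storage-blob on the Azure side; the bucket, container, and prefix names are placeholders, and a real migration would add retries, checksum validation, and streaming for large objects, or use a managed tool such as AzCopy or Azure Data Factory.

```python
# Hypothetical sketch of an S3 -> Azure Blob Storage copy; bucket, container,
# and prefix names are placeholders. Suitable for small objects only; large
# objects would need streaming/multipart handling, retries, and checksums.
import os

import boto3
from azure.storage.blob import BlobServiceClient

S3_BUCKET = "legacy-app-assets"      # placeholder source bucket
PREFIX = "reports/2024/"             # placeholder key prefix
CONTAINER = "app-assets"             # placeholder target container

s3 = boto3.client("s3")
blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        data = s3.get_object(Bucket=S3_BUCKET, Key=key)["Body"].read()
        blob_client = blob_service.get_blob_client(container=CONTAINER, blob=key)
        blob_client.upload_blob(data, overwrite=True)
        print(f"copied s3://{S3_BUCKET}/{key} -> {CONTAINER}/{key}")
```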
Posted 6 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Description: You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver Quality Engineering services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts; document problems in an issues log and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance the effectiveness of QE strategies. What You Will Do With limited oversight, use your experience and knowledge of testing and testability to influence better software design, promote proper engineering practice, bug prevention strategies, testability, accessibility, privacy, and other advanced quality concepts across solutions Develop test strategies, automate tests using test frameworks and write moderately complex code/scripts to test solutions, products and systems Monitor product development and usage at all levels with an eye for product quality. Create test harnesses and infrastructure as necessary Demonstrate an understanding of test methodologies, writing test plans, creating test cases and debugging What Experience You Need Bachelor's degree in a STEM major or equivalent experience 2-5 years of software testing experience Able to create automated tests based on functional and non-functional requirements Self-starter who identifies and responds to priority shifts with minimal supervision. Software build management tools like Maven or Gradle Software testing tools like Cucumber, Selenium Software testing, performance, and quality engineering techniques and strategies Testing technologies: JIRA, Confluence, Office products Cloud technology: GCP, AWS, or Azure Cloud Certification Strongly Preferred What Could Set You Apart Experience with cloud-based testing environments (AWS, GCP) Hands-on experience working in Agile environments. Knowledge of API testing tools (Bruno, Swagger) and SOAP API testing using SoapUI. Certification in ISTQB or similar, or a Google Cloud certification. Experience with cutting-edge tools and technologies: familiarity with the latest tools and technologies such as AI, machine learning and cloud computing. Expertise with cross-device testing strategies and automation via device clouds Experience monitoring and developing resources Excellent coding and analytical skills Experience with performance engineering and profiling (e.g. Java JVM, databases) and tools such as LoadRunner, JMeter, Gatling Exposure to application performance monitoring tools like Grafana and Datadog Ability to create good acceptance and integration test automation scripts and integrate with continuous integration (Jenkins) and code coverage tools (Sonar) to ensure 80% or higher code coverage Experience working in a TDD/BDD environment and the ability to utilize technologies such as JUnit, Rest Assured, Appium, Gauge/Cucumber frameworks, APIs (REST/SOAP).
Understanding of Continuous Delivery concepts and the ability to use tools including Jenkins and vulnerability tools such as Sonar, Fortify, etc. Experience with LambdaTest for cross-browser testing. A good understanding of Git version control, including branching strategies, merging and conflict resolution.
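The Equifax posting above asks for automated tests driven by functional and non-functional requirements, typically written with JUnit and Rest Assured in that stack. As a language-neutral illustration (a sketch, not Equifax's actual framework), the pytest + requests example below shows the same idea; the base URL, endpoints, and expected fields are hypothetical.

```python
# Sketch of an automated API test in pytest + requests (Rest Assured/JUnit would
# be the Java equivalent named in the posting). The service URL, endpoints, and
# expected fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"   # placeholder service under test


def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"


def test_create_order_rejects_empty_payload_quickly():
    # Covers a functional check (validation) and a simple non-functional one (latency).
    resp = requests.post(f"{BASE_URL}/orders", json={}, timeout=5)
    assert resp.status_code == 400
    assert resp.elapsed.total_seconds() < 2.0
```

Tests like these would typically run in Jenkins with Sonar coverage gates, matching the CI and code-coverage expectations in the listing.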

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Specialist, Software Development Engineering (React) Job Posting Title: Full Stack Web Developer Work Location - Noida Experience - 5-7 Years Mandatory Skills - React, TypeScript, JavaScript, Redux What does a successful Full Stack Web Developer do at Fiserv? As a Full Stack Web Developer, you can look forward to: Experience in JavaScript (and preferably TypeScript) with good exposure to AngularJS/Angular or ReactJS. Good understanding of OOP in JavaScript and how it applies to code quality. Experience with HTML5, CSS, jQuery, LESS/SASS, Bootstrap and JS. Experience interacting with APIs (REST and GraphQL) Ability to transform design mockups and wireframes into functional components. Developing responsive web pages Knowledge of cross-browser compatibility, responsiveness, and web accessibility standards like ADA. Some knowledge (not mandatory) of React tools including React.js, Webpack, Enzyme, and workflows (Redux – Thunk/Saga, Hooks and Flux). Good knowledge of software engineering best practices, including unit testing, code reviews, design documentation, debugging, troubleshooting, and agile development What You Will Do Owning one or more of the web services; adding new features, resolving bugs, and refactoring/improving the code base. Developing new features, investigating/reproducing/resolving bugs Designing and implementing REST APIs for mobile and web clients including our payment devices, web dashboard, and 3rd party apps. Identifying technical requirements in product meetings and assisting the business team with realistic project planning and feature development. Translating product requirements into functional, maintainable, extensible software that is in line with company objectives. Being responsible for your merchant-facing services and features from development through deployment and production monitoring. Working with the Infrastructure Team to design data models to support large-scale, highly available services. Working with QA to develop test plans and strategies. Writing automated tests for new web features and updating existing tests as needed. Being a team player with the ability to collaborate and share ideas in a strong product setting. Following the Agile SDLC, participating in planning and Scrumban boards. Displaying problem-solving skills and browser debugging capabilities Participating in a regular on-call rotation. What You Will Need To Have Bachelor's degree required; a related technology degree is preferred. 5-8 years of experience in web development using JavaScript-based development Experience with modern web UI technologies (HTML5, CSS3) Experience with API design and REST APIs. Experienced in the day-to-day practicalities of Software Development Lifecycles such as Scrum. What Would Be Great To Have Database technology such as MySQL/Oracle and NoSQL DB. Experience with Jenkins and monitoring tools like Datadog or Grafana. Previous experience with the Ember.js framework. Some experience writing tests; we use Jest, Cypress.io and Selenium.
Experience with package management systems, such as yarn, Bower. Understanding of build systems: Webpack, Rollup. Exposure to CSS pre-compilers, such as Sass or Less. POS Checkout, Payment Gateway or E-Commerce experience Thank You For Considering Employment With Fiserv. Please Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our Commitment To Diversity And Inclusion Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note To Agencies Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address. Show more Show less

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Professional, Software Development Engineering (React) Job Posting Title: Full Stack Web Developer Work Location - Noida Experience - 3-6 Years Mandatory Skills - React, JavaScript, TypeScript, Redux What does a successful Full Stack Web Developer do at Fiserv? As a Full Stack Web Developer, you can look forward to: Experience in JavaScript (and preferably TypeScript) with good exposure to AngularJS/Angular or ReactJS. Good understanding of OOP in JavaScript and how it applies to code quality. Experience with HTML5, CSS, jQuery, LESS/SASS, Bootstrap and JS. Experience interacting with APIs (REST and GraphQL) Ability to transform design mockups and wireframes into functional components. Developing responsive web pages Knowledge of cross-browser compatibility, responsiveness, and web accessibility standards like ADA. Some knowledge (not mandatory) of React tools including React.js, Webpack, Enzyme, and workflows (Redux – Thunk/Saga, Hooks and Flux). Good knowledge of software engineering best practices, including unit testing, code reviews, design documentation, debugging, troubleshooting, and agile development What You Will Do Owning one or more of the web services; adding new features, resolving bugs, and refactoring/improving the code base. Developing new features, investigating/reproducing/resolving bugs Designing and implementing REST APIs for mobile and web clients including our payment devices, web dashboard, and 3rd party apps. Identifying technical requirements in product meetings and assisting the business team with realistic project planning and feature development. Translating product requirements into functional, maintainable, extensible software that is in line with company objectives. Being responsible for your merchant-facing services and features from development through deployment and production monitoring. Working with the Infrastructure Team to design data models to support large-scale, highly available services. Working with QA to develop test plans and strategies. Writing automated tests for new web features and updating existing tests as needed. Being a team player with the ability to collaborate and share ideas in a strong product setting. Following the Agile SDLC, participating in planning and Scrumban boards. Displaying problem-solving skills and browser debugging capabilities Participating in a regular on-call rotation. What You Will Need To Have Bachelor's degree required; a related technology degree is preferred. 5-8 years of experience in web development using JavaScript-based development Experience with modern web UI technologies (HTML5, CSS3) Experience with API design and REST APIs. Experienced in the day-to-day practicalities of Software Development Lifecycles such as Scrum. What Would Be Great To Have Database technology such as MySQL/Oracle and NoSQL DB. Experience with Jenkins and monitoring tools like Datadog or Grafana. Previous experience with the Ember.js framework. Some experience writing tests; we use Jest, Cypress.io and Selenium.
Experience with package management systems, such as yarn, Bower. Understanding of build systems: Webpack, Rollup. Exposure to CSS pre-compilers, such as Sass or Less. POS Checkout, Payment Gateway or E-Commerce experience Thank You For Considering Employment With Fiserv. Please Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our Commitment To Diversity And Inclusion Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note To Agencies Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address. Show more Show less

Posted 6 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Experience : 6.00 + years Salary : USD 45000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: Oyster) (*Note: This is a requirement for one of Uplers' clients - A Renowned Hiring Product Company from USA) What do you need for this opportunity? Must have skills required: AWS IaaS, Azure managed services, IaC, Ansible, Azure architecture, Azure Bicep, CI/CD Pipelines, Cloud engineering, Datadog, PowerShell, SOC 2, Azure DevOps, GitHub A Renowned Hiring Product Company from USA is Looking for: Senior Level Cloud Engineer We are looking for a Senior Level Cloud Engineer to join our team. The ideal candidate should be experienced in cloud engineering and excited to work with Microsoft Azure while supporting strategic migrations away from AWS and GCP. As a Cloud Engineer, you will be responsible for the following: Plan, enable, and support a highly available environment with 99.99% uptime Design and deploy infrastructure changes with zero downtime for customers Manage cloud infrastructure entirely as code using Azure Bicep and PowerShell Create and maintain CI/CD pipelines for automating deployments Monitor the site using rich telemetry solutions like Datadog Support 24x7 services through automated self-healing workflows Ensure system stability off-hours as part of the team's on-call rotation schedule Provide regular security patching for virtual machines Stay updated on new Azure services and DevSecOps practices Engage in the transition from AWS IaaS to Azure managed services Continuously review service design and address technical debt Qualifications: 6-10 years of experience as a Cloud Engineer, with enthusiasm for working with Microsoft Azure while supporting strategic migrations away from AWS and GCP Knowledge of Ansible is preferred Knowledge of scripting languages Understanding of IaC Understanding of the Azure Well-Architected Framework Experience with GitHub or Azure DevOps Experience with compliance standards, including SOC 2, is preferred Educational Requirements: Bachelor’s Degree in Computer Science, Information Technology, or a related field. A Master’s Degree is highly preferred. If you are highly innovative and passionate about cloud infrastructure, democratizing DevOps, and strategic cloud engineering that is vital to the success of the organization, we would love to explore your potential as part of our team. Engagement Type: Job Type: 1 Year Contract Location: 100% Remote Working time: 9:00 AM to 6:00 PM IST Interview Process - 3 Rounds How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
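The listing above emphasises 99.99% uptime, telemetry with Datadog, and automated self-healing workflows, with the actual implementation expected in Azure Bicep and PowerShell. Purely as a sketch of the self-healing idea (in Python rather than PowerShell, with entirely hypothetical URLs and thresholds), the loop below probes a health endpoint and calls a remediation webhook after repeated failures.

```python
# Sketch of a self-healing health check in Python (the role itself expects Bicep
# and PowerShell). The health URL, remediation webhook, and thresholds are placeholders.
import time

import requests

HEALTH_URL = "https://app.example.com/health"              # placeholder endpoint
REMEDIATE_URL = "https://automation.example.com/restart"   # placeholder webhook
FAILURE_LIMIT = 3
INTERVAL_SECONDS = 30


def healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False


def main() -> None:
    failures = 0
    while True:
        if healthy():
            failures = 0
        else:
            failures += 1
            print(f"health check failed ({failures}/{FAILURE_LIMIT})")
            if failures >= FAILURE_LIMIT:
                # Hand off to an automation endpoint instead of restarting in-process.
                requests.post(REMEDIATE_URL, json={"action": "restart"}, timeout=10)
                failures = 0
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```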

Posted 6 days ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote


Experience : 6.00 + years Salary : USD 45000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: Oyster) (*Note: This is a requirement for one of Uplers' clients - A Renowned Hiring Product Company from USA) What do you need for this opportunity? Must have skills required: AWS IaaS, Azure managed services, IaC, Ansible, Azure architecture, Azure Bicep, CI/CD Pipelines, Cloud engineering, Datadog, PowerShell, SOC 2, Azure DevOps, GitHub A Renowned Hiring Product Company from USA is Looking for: Senior Level Cloud Engineer We are looking for a Senior Level Cloud Engineer to join our team. The ideal candidate should be experienced in cloud engineering and excited to work with Microsoft Azure while supporting strategic migrations away from AWS and GCP. As a Cloud Engineer, you will be responsible for the following: Plan, enable, and support a highly available environment with 99.99% uptime Design and deploy infrastructure changes with zero downtime for customers Manage cloud infrastructure entirely as code using Azure Bicep and PowerShell Create and maintain CI/CD pipelines for automating deployments Monitor the site using rich telemetry solutions like Datadog Support 24x7 services through automated self-healing workflows Ensure system stability off-hours as part of the team's on-call rotation schedule Provide regular security patching for virtual machines Stay updated on new Azure services and DevSecOps practices Engage in the transition from AWS IaaS to Azure managed services Continuously review service design and address technical debt Qualifications: 6-10 years of experience as a Cloud Engineer, with enthusiasm for working with Microsoft Azure while supporting strategic migrations away from AWS and GCP Knowledge of Ansible is preferred Knowledge of scripting languages Understanding of IaC Understanding of the Azure Well-Architected Framework Experience with GitHub or Azure DevOps Experience with compliance standards, including SOC 2, is preferred Educational Requirements: Bachelor’s Degree in Computer Science, Information Technology, or a related field. A Master’s Degree is highly preferred. If you are highly innovative and passionate about cloud infrastructure, democratizing DevOps, and strategic cloud engineering that is vital to the success of the organization, we would love to explore your potential as part of our team. Engagement Type: Job Type: 1 Year Contract Location: 100% Remote Working time: 9:00 AM to 6:00 PM IST Interview Process - 3 Rounds How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 6 days ago

Apply

Exploring Datadog Jobs in India

Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and are actively hiring for Datadog roles.

Average Salary Range

The average salary range for Datadog professionals in India varies with experience. Entry-level professionals can expect salaries of roughly INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path for Datadog-focused professionals includes roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually runs from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.

Related Skills

In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.

Interview Questions

  • What is Datadog and how does it differ from other monitoring tools? (basic)
  • How do you set up custom metrics in Datadog? (medium; a code sketch follows this list)
  • Explain how you would create a dashboard in Datadog to monitor server performance. (medium)
  • What are some key features of Datadog APM (Application Performance Monitoring)? (advanced)
  • Can you explain how Datadog integrates with Kubernetes for monitoring? (medium)
  • Describe how you would troubleshoot an alert in Datadog. (medium)
  • How does Datadog handle metric aggregation and visualization? (advanced)
  • What are some best practices for using Datadog to monitor cloud infrastructure? (medium)
  • Explain the difference between Datadog Logs and Datadog APM. (basic)
  • How would you set up alerts in Datadog for critical system metrics? (medium)
  • Describe a challenging problem you faced while using Datadog and how you resolved it. (advanced)
  • What is anomaly detection in Datadog and how does it work? (medium)
  • How does Datadog handle data retention and storage? (medium)
  • What are some common integrations with Datadog that you have worked with? (basic)
  • Can you explain how Datadog handles tracing for distributed systems? (advanced)
  • Describe a recent project where you used Datadog to improve system performance. (medium)
  • How do you ensure data security and privacy when using Datadog? (medium)
  • What are some limitations of Datadog that you have encountered in your experience? (medium)
  • Explain how you would use Datadog to monitor network traffic and performance. (medium)
  • How does Datadog handle auto-discovery of services and applications for monitoring? (medium)
  • What are some key metrics you would monitor for a web application using Datadog? (basic)
  • Describe a scenario where you had to scale monitoring infrastructure using Datadog. (advanced)
  • How would you implement anomaly detection for a specific metric in Datadog? (medium)
  • What are some best practices for setting up alerts and notifications in Datadog? (medium)
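Two of the questions above, on custom metrics and on alerting for critical system metrics, can be answered concretely in code. The sketch below uses the official datadog Python package; the metric names, tags, monitor query, and environment variables are placeholders, and the exact query would depend on the metrics your account actually collects.

```python
# Sketch answers to the custom-metrics and alerting questions above, using the
# official `datadog` Python package. Metric names, tags, the monitor query, and
# the environment variables are placeholders.
import os

from datadog import api, initialize, statsd

initialize(
    api_key=os.environ["DD_API_KEY"],   # assumed to be set
    app_key=os.environ["DD_APP_KEY"],   # assumed to be set
    statsd_host="127.0.0.1",            # assumes a local Datadog Agent
    statsd_port=8125,
)

# 1) Custom metrics via DogStatsD (submitted through the local Agent)
statsd.gauge("checkout.queue_depth", 42, tags=["env:prod", "service:checkout"])
statsd.increment("checkout.orders_processed", tags=["env:prod"])

# 2) A metric alert for a critical system metric via the HTTP API
api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:system.cpu.user{env:prod} by {host} > 90",
    name="High CPU on prod hosts",
    message="CPU above 90% for 5 minutes. Notify the on-call engineer.",
    tags=["team:platform"],
)
```

In an interview, explaining when to submit metrics through the Agent (DogStatsD) versus the HTTP API, and how monitor thresholds and evaluation windows affect alert noise, usually matters more than reciting the exact syntax.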

Closing Remark

With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!
