
1325 Datadog Jobs - Page 42

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 10.0 years

13 - 17 Lacs

Hyderabad

Remote

Source: Naukri

Mode of Interview: 2-3 rounds (virtual/in-person)
Notice: Immediate to 15 days max
Technical Skill Requirements: ServiceNow Business Analyst, ITIL, ITSM, dashboard creation, APM, scripting, Datadog

Role and Responsibilities:
- 6+ years of experience as an SRE engineer, with thorough knowledge of ITIL/ITSM processes
- Certification in the ITIL v4 framework and deep knowledge of ITSM platforms preferable
- Hands-on experience with the APM tool Datadog
- Demonstrable ability to implement complex process workflows and evidence performance through metrics-driven reporting
- Strong understanding of IT operations
- Strong written and verbal communication skills, with the ability to understand and present complex technical information clearly and concisely to a variety of audiences, including executive leadership
- Ability to develop strategic relationships with other teams, departments, business stakeholders, and third parties
- Ability to understand business requirements and define KPIs that showcase the stability of the application in production and give meaningful insights to the business
- Proven troubleshooting experience and a strong incident-reduction focus
- Should be able to surface recurring issues and toil, and suggest automations
- Strong problem-solving skills and the ability to think quickly and execute on short time frames
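The listing above asks for KPIs that demonstrate production stability. As a rough illustration of what such a metric looks like, here is a small Python sketch of an availability SLI and error-budget calculation; the function names and the 99.9% target are illustrative assumptions, not anything from the posting.

```python
# Hypothetical sketch of a stability KPI: an availability SLI and the
# remaining error budget against an assumed 99.9% SLO target.

def availability_sli(total_requests: int, failed_requests: int) -> float:
    """Fraction of requests that succeeded over the measurement window."""
    if total_requests == 0:
        return 1.0  # no traffic: treat the window as fully available
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(sli: float, slo_target: float = 0.999) -> float:
    """Share of the error budget still unspent (1.0 = untouched, <0 = breached)."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return (allowed_failure - actual_failure) / allowed_failure

sli = availability_sli(total_requests=1_000_000, failed_requests=500)
print(round(sli, 4))                          # 0.9995
print(round(error_budget_remaining(sli), 2))  # 0.5 -> half the budget spent
```

A dashboard built from counters like these is what "metrics-driven reporting" usually reduces to: a success ratio per window, compared against the agreed SLO.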

Posted 3 weeks ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Your Role: As a Back-End Developer, you'll collaborate with the development team to build and maintain scalable, secure, and high-performing back-end systems for our SaaS products. You will play a key role in designing and implementing microservices architectures, integrating databases, and ensuring seamless operation of cloud-based applications.

Responsibilities:
- Design, develop, and maintain robust and scalable back-end solutions using modern frameworks and tools.
- Create, manage, and optimize microservices architectures, ensuring efficient communication between services.
- Develop and integrate RESTful APIs to support front-end and third-party systems.
- Design and implement database schemas and optimize performance for SQL and NoSQL databases.
- Support deployment processes by aligning back-end development with CI/CD pipeline requirements.
- Implement security best practices, including authentication, authorization, and data protection.
- Collaborate with front-end developers to ensure seamless integration of back-end services.
- Monitor and enhance application performance, scalability, and reliability.
- Keep up to date with emerging technologies and industry trends to improve back-end practices.

Your Qualifications:

Must-Have Skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a Back-End Developer with expertise in modern frameworks such as Node.js, Express.js, or Django.
- Expertise in .NET frameworks, including development in C++ and C# for high-performance databases.
- Strong proficiency in building and consuming RESTful APIs.
- Expertise in database design and management with both SQL (e.g., PostgreSQL, MS SQL Server) and NoSQL (e.g., MongoDB, Cassandra) databases.
- Hands-on experience with microservices architecture and containerization tools like Docker and Kubernetes.
- Strong understanding of cloud platforms like Microsoft Azure, AWS, or Google Cloud for deployment, monitoring, and management.
- Proficiency in implementing security best practices (e.g., OAuth, JWT, encryption techniques).
- Experience with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or Azure DevOps.
- Familiarity with Agile methodologies and participation in sprint planning and reviews.

Good-to-Have Skills:
- Experience with time-series databases like TimescaleDB or InfluxDB.
- Experience with monitoring solutions like Datadog or Splunk.
- Experience with real-time data processing frameworks like Kafka or RabbitMQ.
- Familiarity with serverless architecture and tools like Azure Functions or AWS Lambda.
- Expertise in Java back-end services and microservices.
- Hands-on experience with business intelligence tools like Grafana or Kibana for monitoring and visualization.
- Knowledge of API management platforms like Kong or Apigee.
- Experience with integrating AI/ML models into back-end systems.
- Familiarity with MLOps pipelines and managing AI/ML workloads.
- Understanding of iPaaS (Integration Platform as a Service) and related technologies.

Key Competencies & Attributes:
- Strong problem-solving and analytical skills.
- Exceptional organizational skills with the ability to manage multiple priorities.
- Adaptability to evolving technologies and industry trends.
- Excellent collaboration and communication skills to work effectively in cross-functional teams.
- Ability to thrive in self-organizing teams with a focus on transparency and trust.
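Several of these listings ask for JWT-based security. As a rough illustration of the mechanics, here is a stdlib-only Python sketch of HS256 signing and verification; a production service should use a maintained library such as PyJWT, and the secret and claims below are made up.

```python
# Minimal HS256 JWT sketch: base64url(header).base64url(payload).base64url(hmac).
# Standard library only; for illustration, not production use.
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> bytes:
    """Base64url-encode without the padding JWTs omit."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict, secret: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = _b64(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(expected.decode(), sig):
        raise ValueError("bad signature")
    body = signing_input.split(".")[1]
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"sub": "user-42", "role": "admin"}, b"dev-secret")
print(verify(token, b"dev-secret")["sub"])  # user-42
```

The constant-time comparison via `hmac.compare_digest` is the detail interviews tend to probe: a plain `==` on signatures can leak timing information.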

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role: Java Full Stack Developer
Location: Hitech City, Hyderabad
Work Mode: Work from Office (Monday to Friday, 5 days)
Work Timings: 1:00 PM to 10:00 PM IST
Experience: 4 to 10 years
Joining: Immediate or within 7 days

Key Responsibilities:
- Develop scalable and robust full-stack applications.
- Design and implement microservices architecture using Spring Boot and Spring Cloud.
- Develop and maintain RESTful APIs and integrate third-party services.
- Create responsive and user-friendly front-end interfaces using Angular and Bootstrap.
- Design and optimize databases using Amazon Aurora, MySQL, or PostgreSQL.
- Implement event-driven architecture with Apache Kafka for real-time messaging.
- Build and manage CI/CD pipelines using tools like Jenkins, Bitbucket, and Docker.
- Deploy and orchestrate services using Kubernetes (K8s).
- Maintain clear and accurate API documentation using Swagger/OpenAPI standards.
- Monitor application performance and health using Datadog or similar tools.
- Follow and enforce cloud-native development principles and security best practices.
- Provide technical leadership and mentorship, and conduct code reviews.
- Collaborate effectively with product managers, DevOps engineers, and other stakeholders to ensure timely and high-quality software delivery.

Required Skills & Qualifications:
- 4+ years of experience in Java and Spring Boot development.
- Hands-on experience with Angular, Bootstrap, and modern front-end practices.
- Strong knowledge of RESTful services, API integration, and microservices.
- Proven expertise in RDBMS (Aurora, MySQL, PostgreSQL).
- Experience with Apache Kafka in high-throughput environments.
- Solid understanding of CI/CD pipelines, containerization, and orchestration tools.
- Proficiency with Docker, Kubernetes, Jenkins, and Bitbucket.
- Familiarity with cloud monitoring tools like Datadog.
- Excellent understanding of API documentation standards and best practices.
- Strong communication and leadership skills.
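This role centers on event-driven architecture with Kafka, which needs a running broker; the in-memory Python sketch below (all names hypothetical) only illustrates the underlying pattern: producers append to a topic log and consumers track their own offsets.

```python
# Toy illustration of the publish/consume pattern behind Kafka-style
# event-driven architecture: an append-only log per topic, with each
# consumer responsible for remembering its own read offset.
from collections import defaultdict

class MiniBroker:
    def __init__(self):
        self._topics = defaultdict(list)  # topic name -> append-only event log

    def produce(self, topic: str, event: dict) -> None:
        self._topics[topic].append(event)

    def consume(self, topic: str, offset: int):
        """Return (events, new_offset): everything at or after `offset`."""
        log = self._topics[topic]
        return log[offset:], len(log)

broker = MiniBroker()
broker.produce("orders", {"id": 1, "status": "created"})
broker.produce("orders", {"id": 1, "status": "paid"})

events, offset = broker.consume("orders", 0)
print([e["status"] for e in events])  # ['created', 'paid']
events, offset = broker.consume("orders", offset)
print(events)                         # [] -- consumer is caught up
```

Because the log is never mutated in place and offsets belong to consumers, multiple consumers can replay the same topic independently, which is the property that makes this architecture attractive for real-time messaging.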

Posted 3 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Location: Noida, India

Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter, and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information, and encrypt data to make the connected world more secure.

Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has played an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace, and Digital Identity and Security markets.

Position Summary: The Database Engineering Specialist is an expert across various database technologies. For this position, non-relational database expertise is mandatory, with a primary focus on Cassandra, as well as expertise in public cloud technology (AWS and/or GCP). The Database Engineering Specialist will be a member of the Engineering team responsible for delivering Thales customer solutions and will manage all responsibilities related to database installation, database deployment strategy, database architecture design, new product software releases, technical project planning, technical platform analysis and troubleshooting, project estimates, and technical decision making, and will provide overall database consulting services to the delivery and operations team.
The Database Engineering Specialist will work in close coordination with product teams, delivery teams, support & operations teams, and management teams to satisfy requirements across solution support, operations, and technical project delivery.

Essential Functions / Key Areas of Responsibility

The Database Engineer's primary responsibilities:
- Member of the global database services 24/7 follow-the-sun on-call support team.
- Build, test, and review technical documentation; use the documentation set as a playbook to manage and apply production change.
- Deploy and maintain database monitoring solutions.
- Develop, design, deploy, and test backup and recovery architectures for customer database platform solutions.
- Develop, design, and deploy database high-availability solutions using database replication technology in both active/passive and active/active architectures.
- Participate in maintenance-window activities for hosted, on-premises, and public cloud platform changes.
- Handle database platform deployment, installation, patching, change management, and third-party software upgrades on internal and external customer platforms.
- Lead datacenter technology architecture decisions by managing third-party software inventory; recommend, plan, and deploy platform upgrade cycles; plan and participate in security audits (GSMA); and develop and execute database migration plans to move customer solutions to new platforms within and between hosted datacenters and public cloud.
- Identify and deploy database hardening procedures on public cloud, hosted, and on-premises platforms.
- Provide database expertise and operations support to the technical support and project delivery teams.
- Lead strategic decision making as part of project delivery teams: analyze project requirements, provide recommendations, define task timelines, and design migration and database deployment strategies to satisfy project delivery.
- Define, plan, track, and execute database delivery tasks as part of the project delivery team, covering database installation, upgrade, hardening, monitoring, and data migration.
- Participate in technical database architecture design on public cloud, hosted, and on-premises platforms.
- Participate in database platform reviews, benchmarking and tuning exercises, and security evaluations; provide technical analysis and proactive recommendations for improvements and/or design changes for both production platforms and new software product releases.

Minimum Requirements: Skills, Experience & Education
- College degree in Computer Science.
- NoSQL databases: 3-5 years of Cassandra administration; experience with other NoSQL databases like MongoDB a plus.
- Relational databases: Oracle administration (Data Guard, GoldenGate, active/passive, active/active) is desired.
- Relational databases: MySQL administration experience is desired.
- Extensive background in public cloud database deployment, management, and migration.
- Expertise in database concepts and in defining standards, processes, and procedures for database deployment methodologies.
- Expert in operating high-profile production database platforms with high SLA and performance expectations.
- Highly experienced in managing change on production database platforms, whether hosted, on-premises, or in the cloud.
- Expert in deploying high-availability database architectures.
- Very good knowledge of all phases of the software development lifecycle: requirements analysis, specification, design, implementation, code review, testing, and release.
- Proactive team player with leadership qualities and a strong technical background.
- Excellent verbal and written communication skills.

Preferred Qualifications
- Highly skilled in Cassandra database administration.
- Skilled in public cloud deployment (CloudFormation, Terraform…), operations, and monitoring (Datadog).
- Skilled in Oracle database and related tools.
- MySQL, MongoDB, and SQL Server experience a plus.
- Knowledge of Kubernetes and Docker.
- Database performance evaluation and platform benchmark participation.

Special Position Requirements
The candidate will need to multitask and, when needed, quickly switch to emergency incidents on production platforms. The position requires the ability to manage tight deadlines, maintain visibility on project delivery goals, and communicate effectively with project teams and management. The candidate should thrive in a fast-paced work environment and be available to work nights and weekends when the situation requires it.

At Thales we provide careers, not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working.
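The Cassandra high-availability work described above rests on tunable consistency: a read/write pair is strongly consistent when the read and write replica counts together exceed the replication factor (R + W > N), because every read quorum then overlaps every write quorum. A minimal sketch, with illustrative values:

```python
# Cassandra-style tunable consistency check: strong consistency requires
# that read replicas + write replicas exceed the replication factor.

def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    """True if every read quorum overlaps every write quorum (R + W > N)."""
    return r + w > n

# Common setting: RF=3 with QUORUM (2-of-3) reads and writes.
print(is_strongly_consistent(n=3, w=2, r=2))  # True
# ONE/ONE on RF=3 trades consistency for latency and availability.
print(is_strongly_consistent(n=3, w=1, r=1))  # False
```

This is the trade-off behind the active/active architectures mentioned above: lowering W or R improves write latency and availability during node loss, at the cost of possibly reading stale data.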
Great journeys start here, apply now!

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Associate Director - Platform Engineering

About the Role: Grade Level (for internal use): 12

The Team: We are looking for a highly self-motivated, hands-on Platform Engineer to lead our DevOps team, focusing on our infrastructure estate and DevOps engineering.

The Impact: This is an excellent opportunity to join us as we transform and harmonize our infrastructure into a unified place, while also developing your skills and furthering your career as we plan to power the markets of the future.

What's in it for you: This is the place to hone your existing infrastructure, DevOps, and leadership skills while being exposed to fresh and divergent technologies.

Responsibilities:
- Maintain a strong understanding of large-scale cloud computing solutions, including setting up and configuring a container platform.
- Work with Azure DevOps, Docker, and Kubernetes or related cloud technologies.
- Communicate and troubleshoot effectively; present solutions to complex problems to technical and non-technical audiences.
- Learn new technologies and grow with the team.
- Set up, configure, and monitor CI/CD pipelines and the container platform; conduct routine maintenance work for smooth operation with guaranteed uptime.
- Onboard applications onto the container platform as demands come.
- Assist various dev and QA teams during their development and testing, following the guidelines provided.
- Work closely with other dev leads and managers in day-to-day operational activities.
- Conduct regular capacity analysis and POCs.
- Develop and maintain the platform automation tools using Terraform, dashboards, and utilities (Java, .NET C#, shell scripting, Python, etc.).
- Set up infrastructure via Infrastructure as Code.
- Lead the team, providing hands-on guidance and a roadmap.

What We're Looking For:
- Bachelor's degree in Computer Science, Engineering, or an equivalent discipline is required.
- 10+ years of relevant work experience managing platforms and/or infrastructure.
- Professional-level hands-on experience with Terraform, Git, CI/CD, Docker, and containerization.
- Proficiency with modern DevOps tools, including GitLab- and GitHub-based CI/CD pipelines.
- Strong experience with application deployment and monitoring.
- Experience with logging frameworks and strategies; Datadog, Prometheus, Splunk, ELK, or similar tools is preferable.
- Good hands-on experience with Linux/Unix and Windows OS.
- Hands-on experience with AWS services (IAM, CloudWatch, S3, EC2, Lambda, SQS, SNS, Step Functions, and others).

Preferred Qualifications:
- Excellent communication (written and verbal) and collaboration skills.
- Excellent presentation skills with senior leadership.
- Detail-oriented and a great team player.
- Willing to provide support coverage for extended hours, leading by example.
- Willing to learn new technology and acquire new skills.

About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing-education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 316300
Posted On: 2025-05-27
Location: Noida, Uttar Pradesh, India

Posted 3 weeks ago

Apply


0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

Job Title: GCP DevOps Engineer
Location: PAN India (Hybrid)
Experience: 10+ years
Job Type: Contract to hire
Notice Period: Immediate joiners
Mandatory Skills: GCP DevOps Engineer

Job Description – GCP

Primary Skills:
• GCP
• Kubernetes (GKE, EKS, AKS)
• Logging and monitoring (Grafana, Splunk, Datadog)
• Networking (Service Mesh, Istio)
• Serverless architecture (GCP Functions, AWS Lambda)

Good to have:
• Monitoring tools (Grafana, Prometheus, etc.)
• Networking (VPC, DNS, Load Balancing)

Responsibilities:
• Design, develop, and maintain a scalable and highly available cloud infrastructure
• Automate and streamline operations and processes
• Monitor and troubleshoot system issues
• Create and maintain documentation
• Develop and maintain tools to automate operational tasks
• Collaborate with software engineers to develop and deploy software applications
• Develop and manage automated deployment pipelines
• Utilize Continuous Integration and Continuous Delivery (CI/CD) tools and practices
• Provision and maintain cloud-based databases
• Optimize resources to reduce costs
• Analyze and optimize system performance
• Work with the development team to ensure code quality and security
• Ensure compliance with security and other industry standards
• Keep up with the latest technologies and industry trends
• Proficient in scripting languages such as Python, Bash, PowerShell, etc.
• Experience with configuration management tools such as Chef, Puppet, and Ansible
• Experience with CI/CD tools such as Jenkins, Travis CI, and CircleCI
• Experience with container-based technologies such as Docker, Kubernetes, and ECS
• Experience with version control systems such as Git
• Understanding of network protocols and technologies
• Ability to prioritize tasks and work independently
• Strong problem-solving and communication skills
• Should be able to implement and maintain a highly available, scalable, and secure cloud infrastructure

Seniority Level: Mid-Senior level
Industry: IT Services and IT Consulting
Employment Type: Contract
Job Functions: Business Development, Consulting
Skills: GCP, DevOps, Terraform, Cloud Infrastructure
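The posting repeatedly asks for scripting (Python, Bash, PowerShell) to "automate operational tasks". A minimal sketch of one common building block in such scripts is retry-with-backoff around a flaky operation; everything here (the function names, the simulated failure) is illustrative, not taken from the posting:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on failure with exponential backoff.

    A common pattern in ops automation: transient API or network
    errors are retried instead of failing the whole run.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "healthy"

result = with_retries(flaky_health_check)
print(result)  # -> healthy (after two retried failures)
```

In a real pipeline the retried callable would be a cloud API call; the backoff keeps automation from hammering a briefly unavailable service.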

Posted 3 weeks ago

Apply

1.0 - 2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Organization Overview

Company Description
QAD is building a world-class SaaS company, and we are growing. We are looking for talented individuals who want to join us on our mission to help solve relevant real-world problems in manufacturing and the supply chain.

Job Description
As an Associate Enterprise Monitoring Engineer, you are responsible for providing system operation monitoring support for Cloud customers and assuring the ongoing uninterrupted operation of the Cloud solution. The associate monitors system performance against standard QAD performance SLAs and analyzes reported issues to help determine next steps. Enterprise Monitoring is responsible for the operation of monitoring tools and processes that enable proactive event and performance monitoring of critical enterprise IT services such as servers, QAD Application/RDBMS services, network devices, operating systems, database engines, storage capacity, and applications across all IT service lines, for early identification and resolution of operational problems or other special conditions. This role is in a structured teaming environment, requiring continued learning and acquisition of product knowledge to maximize preparedness for any new issue. The Associate Technical Consultant plays an important role in the provisioning and delivery of services requested by Cloud and On-Premise customers and in ensuring the smooth conduct of Cloud Operations.

What You'll Do
• Production monitoring
• Non-production monitoring
• Special/enterprise/manual monitoring
• Weekend environments monitoring
• Monitor system performance against standard QAD performance SLAs, as well as custom system monitoring per respective customer contracts or as directed by the Technical Service Delivery Lead; respond to alerts/messages coming out of tools. Immediately notify the Technical Service Delivery Team should any interruptions occur.
• Document issues in the production environment, resolving the issue whenever possible and escalating if necessary.
• Work on assigned severity 2 and 3 incidents and resolve them per defined Service Level Agreements (SLAs).
• Understand the Cloud outage/impairment process and follow it in severity 0 situations, escalating to the right people at the right time for quick outage recovery.
• Escalate alerts, investigate cases for trends, and determine whether a case should be escalated further to SDC Leads.
• Perform basic performance reporting.
• Work with the Release Foundation team on self-healing analysis, automation, and analytics.
• Assist team members (locally, globally, and across teams).
• Document in detail all analysis and correspondence throughout the issue resolution process; provide proactive status updates to customers.

Qualifications
• Bachelor's degree in Engineering/B.Tech/MCA/BSc IT/BSc Comp. Sci.
• 1 to 2 years of technical experience
• Knowledge of Linux and RDBMS (Progress, SQL, MariaDB)
• Proficiency in monitoring tools and platforms (like Datadog, Nagios, Zabbix, Icinga, SolarWinds) and technical troubleshooting
• Understanding of cloud technologies (AWS, Azure, Google Cloud) and their monitoring capabilities
• Linux skills are a must; should be well versed in RDBMS concepts and technical troubleshooting
• Knowledge of or experience with relational database management systems, Java, JavaScript, SQL, HTML5, HTML, XML, and open-source technologies; knowledge of Progress is advantageous
• Candidates with an initial network or Windows certification, basic systems admin skills, or an applications background may also be considered
• Excellent communication and problem-solving skills
• Knowledge of supporting QAD products and related technologies is advantageous

Additional Information
Your health and well-being are important to us at QAD. We provide programs that help you strike a healthy work-life balance.
• Opportunity to join a growing business, launching into its next phase of expansion and transformation.
• Collaborative culture of smart and hard-working people who support one another to get the job done.
• An atmosphere of growth and opportunity, where idea-sharing is always prioritized over level or hierarchy.
• Compensation packages based on experience and desired skill set.

About QAD
QAD Inc. is a leading provider of adaptive, cloud-based enterprise software and services for global manufacturing companies. Global manufacturers face ever-increasing disruption caused by technology-driven innovation and changing consumer preferences. In order to survive and thrive, manufacturers must be able to innovate and change business models at unprecedented rates of speed. QAD calls these companies Adaptive Manufacturing Enterprises. QAD solutions help customers in the automotive, life sciences, packaging, consumer products, food and beverage, high tech and industrial manufacturing industries rapidly adapt to change and innovate for competitive advantage.

QAD is committed to ensuring that every employee feels they work in an environment that values their contributions, respects their unique perspectives and provides opportunities for growth regardless of background. QAD's DEI program is driving higher levels of diversity, equity and inclusion so that employees can bring their whole self to work. We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state or local protected class.
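The monitoring duties above center on comparing system performance against SLA thresholds and responding to the resulting alerts. A tiny, purely illustrative sketch of that decision (the 500 ms threshold and the samples are made up, not QAD's actual SLAs):

```python
# Hypothetical SLA breach check: the kind of comparison a monitoring
# tool applies to metric samples before raising an alert.
SLA_MAX_RESPONSE_MS = 500  # assumed SLA: responses within 500 ms

def breaches(samples, threshold=SLA_MAX_RESPONSE_MS):
    """Return the (timestamp, value) samples exceeding the threshold."""
    return [(ts, ms) for ts, ms in samples if ms > threshold]

samples = [("10:00", 120), ("10:01", 640), ("10:02", 480), ("10:03", 910)]
alerts = breaches(samples)
print(alerts)  # -> [('10:01', 640), ('10:03', 910)]
```

Real monitoring platforms add debouncing and escalation policies on top, but the core check is this threshold comparison.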

Posted 3 weeks ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Rockwell Automation is a global technology leader focused on helping the world's manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that's you, we would love to have you join us!

Summary
We are looking for a cloud ops engineer who will be part of a highly experienced global SRE team maintaining 24/7 availability of the product. You will work to automate, enhance, and maintain a product deployed mostly in AWS. You will report to the SRE Manager and work in hybrid mode.

Your Responsibilities
As a member of our Cloud Services team, you will play a key role in managing our application technologies in the public cloud and in the adoption of new technologies, including open-source technologies. You will help us improve the reliability and delivery of our products. You will join a team to ensure that issues are resolved in an organized manner.

The Essentials - You Will Have
• Bachelor's degree in Computer Science, Computer Engineering, or Cyber Security, or equivalent experience
• Overall minimum 4-5 years of experience
• Experience working with AWS (Amazon Web Services)
• Experience with server operating systems (Windows Server, Linux)
• Experience with containerization platforms: Kubernetes, Docker, Amazon ECS, etc.
• Automation: Git, AWS CodePipeline
• Infrastructure/configuration as code: Terraform, Ansible, CloudFormation
• Experience with IPv4/IPv6, FTP, HTTP (request/response), SSL/TLS, HTML, XML
• Monitoring: AWS CloudWatch, Kibana, Logstash

The Preferred - You Might Also Have
• Working experience with Microsoft Azure
• Scripting knowledge: PowerShell, Bash
• Experience with monitoring: Datadog, Snyk

What We Offer
Our benefits package includes:
• Comprehensive mindfulness programs with a premium membership to Calm
• Volunteer paid time off available after 6 months of employment for eligible employees
• Company volunteer and donation matching program - your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation
• Employee Assistance Program
• Personalized wellbeing programs through our OnTrack program
• On-demand digital course library for professional development, and other local benefits!

At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.

Rockwell Automation's hybrid policy aligns that employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote


About Us
Our leading SaaS-based Global Growth Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we're dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere. Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated. The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work. At G-P, we assist organizations in building exceptional global teams in days, not months - streamlining the hiring, onboarding, and management process to unlock growth potential for all.

About This Position
What we're looking for:

Responsibilities and Skills
• Proficiency in observability tools (New Relic, Datadog) for proactive outage reduction and rapid detection
• Solid understanding of application architectures and networking principles
• In-depth knowledge of DevOps and Site Reliability Engineering (SRE) concepts
• Strong grasp of Unix/Linux systems, including system libraries, file systems, and client-server protocols
• Comprehensive networking expertise encompassing network theory, protocols (TCP/IP, UDP, ICMP, MAC, IP, DNS, OSI), and load balancing
• Proven experience with containerization and orchestration platforms
• Hands-on coding experience in Python, TypeScript, or Golang
• Extensive experience with Infrastructure as Code (IaC) frameworks such as Terraform, CloudFormation, and CDK
• Demonstrated ability to deliver using CI/CD pipelines (AWS CodeDeploy, GitHub Actions, Jenkins)
• Significant experience with data streaming technologies (Kinesis, SQS, Kafka)
• Experience managing remote engineering teams across multiple time zones is a key requirement

We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks.

G-P. Global Made Possible.

G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status. G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at careers@g-p.com.
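The posting asks for in-depth knowledge of SRE concepts; a concrete example of one such concept is the error-budget arithmetic behind an SLO. The 99.9% target and request counts below are illustrative assumptions, not figures from the posting:

```python
def error_budget(slo: float, total_requests: int, failed_requests: int):
    """Return (allowed_failures, budget_remaining_fraction) for an SLO.

    An SLO of 0.999 tolerates 0.1% failed requests; the error budget
    is how much of that tolerance has not yet been consumed.
    """
    allowed = round(total_requests * (1.0 - slo))   # failures the SLO tolerates
    remaining = 1.0 - failed_requests / allowed     # share of budget left
    return allowed, remaining

allowed, remaining = error_budget(slo=0.999, total_requests=1_000_000,
                                  failed_requests=250)
print(allowed)    # -> 1000 failures allowed per million requests
print(remaining)  # -> 0.75, i.e. 75% of the error budget is unspent
```

Teams typically gate risky releases on the remaining budget: plenty left means ship, budget exhausted means focus on reliability work.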

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


We are seeking a highly skilled Lead Python Developer to spearhead the creation, execution, and rollout of innovative Python-based solutions tailored for cloud infrastructure management and automation. You will serve a central role in crafting scalable solutions, incorporating AI/ML technologies, and guiding junior team members to enhance innovation and operational efficacy across cloud platforms.

Responsibilities
• Spearhead the creation, execution, and rollout of Python-based solutions for cloud infrastructure management and automation
• Craft and implement scalable infrastructure as code utilizing Terraform and Azure DevOps
• Employ AI/ML technologies through API integrations for seamless cloud operations, including reporting and orchestration
• Work closely with solution architects and DevOps engineers to transform requirements into programming specifications
• Supervise the development and production deployment of AI models for predictive analytics, leveraging top-tier techniques and Python
• Establish CI/CD pipelines for streamlining testing, deployment, and monitoring of applications
• Guide junior engineers and offer technical leadership in Python programming and cloud architecture
• Manage code quality through comprehensive reviews of Python-based automation tasks
• Apply security and data protection strategies
• Stay abreast of the latest trends in Python programming, AI/ML, cloud computing, and API integrations
• Document development methods, code modifications, and coding standards for educational purposes

Requirements
• 5+ years in Python programming focusing on cloud automation
• 1+ years in a relevant leadership role
• Proficiency with Python frameworks such as Django or Flask
• Familiarity with AI/ML platforms like TensorFlow or PyTorch
• Extensive exposure to cloud services from AWS and Azure
• Proficiency in IaC tools such as Terraform
• Experience integrating AI/ML technologies into cloud management
• Background in container technologies like Docker and Kubernetes
• Strong experience with API design, RESTful services, and external API integrations
• Solid understanding of version control, testing, and CI tools such as Azure DevOps
• Strong grasp of security protocols in cloud settings
• Experience with serverless architectures like AWS Lambda or Azure Functions
• Familiarity with monitoring applications like Prometheus or Datadog
• Proficiency in software design patterns and architectural concepts
• Proficiency in assisting developers with SDK usage and best practices
• Understanding of AI technologies and hands-on experience with LLMs, RAG, and Prompt Engineering

Nice to have
• Additional familiarity with AI/ML platforms such as TensorFlow or PyTorch

We offer
• International projects with top brands
• Work with global teams of highly skilled, diverse peers
• Healthcare benefits
• Employee financial programs
• Paid time off and sick leave
• Upskilling, reskilling, and certification courses
• Unlimited access to the LinkedIn Learning library and 22,000+ courses
• Global career opportunities
• Volunteer and community involvement opportunities
• Opportunity to join and participate in the life of EPAM's Employee Resource Groups
• Award-winning culture recognized by Glassdoor, Newsweek, and LinkedIn

Posted 3 weeks ago

Apply

1.0 - 2.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Organization Overview

Company Description
QAD is building a world-class SaaS company, and we are growing. We are looking for talented individuals who want to join us on our mission to help solve relevant real-world problems in manufacturing and the supply chain.

Job Description
As an Associate Enterprise Monitoring Engineer, you are responsible for providing system operation monitoring support for Cloud customers and assuring the ongoing uninterrupted operation of the Cloud solution. The associate monitors system performance against standard QAD performance SLAs and analyzes reported issues to help determine next steps. Enterprise Monitoring is responsible for the operation of monitoring tools and processes that enable proactive event and performance monitoring of critical enterprise IT services such as servers, QAD Application/RDBMS services, network devices, operating systems, database engines, storage capacity, and applications across all IT service lines, for early identification and resolution of operational problems or other special conditions. This role is in a structured teaming environment, requiring continued learning and acquisition of product knowledge to maximize preparedness for any new issue. The Associate Technical Consultant plays an important role in the provisioning and delivery of services requested by Cloud and On-Premise customers and in ensuring the smooth conduct of Cloud Operations.

What You'll Do
• Production monitoring
• Non-production monitoring
• Special/enterprise/manual monitoring
• Weekend environments monitoring
• Monitor system performance against standard QAD performance SLAs, as well as custom system monitoring per respective customer contracts or as directed by the Technical Service Delivery Lead; respond to alerts/messages coming out of tools. Immediately notify the Technical Service Delivery Team should any interruptions occur.
• Document issues in the production environment, resolving the issue whenever possible and escalating if necessary.
• Work on assigned severity 2 and 3 incidents and resolve them per defined Service Level Agreements (SLAs).
• Understand the Cloud outage/impairment process and follow it in severity 0 situations, escalating to the right people at the right time for quick outage recovery.
• Escalate alerts, investigate cases for trends, and determine whether a case should be escalated further to SDC Leads.
• Perform basic performance reporting.
• Work with the Release Foundation team on self-healing analysis, automation, and analytics.
• Assist team members (locally, globally, and across teams).
• Document in detail all analysis and correspondence throughout the issue resolution process; provide proactive status updates to customers.

Qualifications
• Bachelor's degree in Engineering/B.Tech/MCA/BSc IT/BSc Comp. Sci.
• 1 to 2 years of technical experience
• Knowledge of Linux and RDBMS (Progress, SQL, MariaDB)
• Proficiency in monitoring tools and platforms (like Datadog, Nagios, Zabbix, Icinga, SolarWinds) and technical troubleshooting
• Understanding of cloud technologies (AWS, Azure, Google Cloud) and their monitoring capabilities
• Linux skills are a must; should be well versed in RDBMS concepts and technical troubleshooting
• Knowledge of or experience with relational database management systems, Java, JavaScript, SQL, HTML5, HTML, XML, and open-source technologies; knowledge of Progress is advantageous
• Candidates with an initial network or Windows certification, basic systems admin skills, or an applications background may also be considered
• Excellent communication and problem-solving skills
• Knowledge of supporting QAD products and related technologies is advantageous

Additional Information
Your health and well-being are important to us at QAD. We provide programs that help you strike a healthy work-life balance.
• Opportunity to join a growing business, launching into its next phase of expansion and transformation.
• Collaborative culture of smart and hard-working people who support one another to get the job done.
• An atmosphere of growth and opportunity, where idea-sharing is always prioritized over level or hierarchy.
• Compensation packages based on experience and desired skill set.

About QAD
QAD Inc. is a leading provider of adaptive, cloud-based enterprise software and services for global manufacturing companies. Global manufacturers face ever-increasing disruption caused by technology-driven innovation and changing consumer preferences. In order to survive and thrive, manufacturers must be able to innovate and change business models at unprecedented rates of speed. QAD calls these companies Adaptive Manufacturing Enterprises. QAD solutions help customers in the automotive, life sciences, packaging, consumer products, food and beverage, high tech and industrial manufacturing industries rapidly adapt to change and innovate for competitive advantage.

QAD is committed to ensuring that every employee feels they work in an environment that values their contributions, respects their unique perspectives and provides opportunities for growth regardless of background. QAD's DEI program is driving higher levels of diversity, equity and inclusion so that employees can bring their whole self to work. We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state or local protected class.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Required Skills
• Experience required: 4 to 6 years
• Full-stack experience in Java, React JS, JavaScript, and API development - 3 to 5 years
• Should be able to write and analyze complex SQLs using subqueries, window functions, etc.
• Varied database experience (e.g., HANA, PostgreSQL, Mongo, Oracle, MySQL) - 3 to 5 years
• Experience in DevOps (GitHub Actions, Jenkins) - 3 to 5 years
• Experience in Java and UI test case writing
• Exposure to DOTCOM, Datadog/AppDynamics
• Work experience in an Agile team environment - 3 to 5 years
• Excellent interpersonal and communication skills

Desired Skills
• Experience in AWS services - 2 to 3 years
• Previous experience in programming/software development in languages such as Python, PySpark - 2 to 3 years
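The "complex SQLs using subqueries, window functions" requirement can be illustrated with a small, self-contained sketch. The table and data here are invented, and SQLite (via Python's stdlib) stands in for the relational databases named above, most of which support the same window-function syntax:

```python
import sqlite3

# Made-up orders table; SQLite 3.25+ (bundled with modern Python)
# supports window functions such as RANK() OVER (...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, product TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('east', 'widget', 500), ('east', 'gadget', 900),
        ('west', 'widget', 300), ('west', 'gizmo', 700);
""")

# Rank products by amount within each region - a classic
# window-function interview exercise.
rows = conn.execute("""
    SELECT region, product,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY region, rnk
""").fetchall()
print(rows)
# -> [('east', 'gadget', 1), ('east', 'widget', 2),
#     ('west', 'gizmo', 1), ('west', 'widget', 2)]
```

The same per-group ranking without a window function would need a correlated subquery counting rows with a larger amount in the same region, which is exactly the kind of rewrite such interviews probe.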

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


We are seeking a highly skilled Lead Python Developer to spearhead the creation, execution, and rollout of innovative Python-based solutions tailored for cloud infrastructure management and automation. You will serve a central role in crafting scalable solutions, incorporating AI/ML technologies, and guiding junior team members to enhance innovation and operational efficacy across cloud platforms.

Responsibilities
• Spearhead the creation, execution, and rollout of Python-based solutions for cloud infrastructure management and automation
• Craft and implement scalable infrastructure as code utilizing Terraform and Azure DevOps
• Employ AI/ML technologies through API integrations for seamless cloud operations, including reporting and orchestration
• Work closely with solution architects and DevOps engineers to transform requirements into programming specifications
• Supervise the development and production deployment of AI models for predictive analytics, leveraging top-tier techniques and Python
• Establish CI/CD pipelines for streamlining testing, deployment, and monitoring of applications
• Guide junior engineers and offer technical leadership in Python programming and cloud architecture
• Manage code quality through comprehensive reviews of Python-based automation tasks
• Apply security and data protection strategies
• Stay abreast of the latest trends in Python programming, AI/ML, cloud computing, and API integrations
• Document development methods, code modifications, and coding standards for educational purposes

Requirements
• 5+ years in Python programming focusing on cloud automation
• 1+ years in a relevant leadership role
• Proficiency with Python frameworks such as Django or Flask
• Familiarity with AI/ML platforms like TensorFlow or PyTorch
• Extensive exposure to cloud services from AWS and Azure
• Proficiency in IaC tools such as Terraform
• Experience integrating AI/ML technologies into cloud management
• Background in container technologies like Docker and Kubernetes
• Strong experience with API design, RESTful services, and external API integrations
• Solid understanding of version control, testing, and CI tools such as Azure DevOps
• Strong grasp of security protocols in cloud settings
• Experience with serverless architectures like AWS Lambda or Azure Functions
• Familiarity with monitoring applications like Prometheus or Datadog
• Proficiency in software design patterns and architectural concepts
• Proficiency in assisting developers with SDK usage and best practices
• Understanding of AI technologies and hands-on experience with LLMs, RAG, and Prompt Engineering

Nice to have
• Additional familiarity with AI/ML platforms such as TensorFlow or PyTorch

We offer
• International projects with top brands
• Work with global teams of highly skilled, diverse peers
• Healthcare benefits
• Employee financial programs
• Paid time off and sick leave
• Upskilling, reskilling, and certification courses
• Unlimited access to the LinkedIn Learning library and 22,000+ courses
• Global career opportunities
• Volunteer and community involvement opportunities
• Opportunity to join and participate in the life of EPAM's Employee Resource Groups
• Award-winning culture recognized by Glassdoor, Newsweek, and LinkedIn

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Tamil Nadu, India

Remote


Application Support Engineer (Path to Developer or QA Role)
Location: Salem, Coimbatore or Madurai. Must be willing to work US-based shift hours.
Company: RunLoyal | Atlanta, GA

Summary
RunLoyal, a rapidly growing vertical SaaS company transforming the pet care industry in the U.S., is seeking a passionate, proactive, and technically driven Application Support Engineer to join our team in India. This is not just a support role - it's a launchpad to becoming a full-time developer or QA engineer in our core technology stack: Angular, Java, Flutter, and related frameworks. If you love solving real customer issues, aspire to grow into a developer role, and are motivated by ownership, accountability, and building products that matter, this role is for you.

Key Responsibilities
• Provide timely and empathetic technical support to U.S.-based customers via phone, email, and chat.
• Diagnose and troubleshoot application issues, system bugs, and environment-related errors; escalate as needed.
• Act as the bridge between customers and our engineering team, contributing directly to product quality.
• Document solutions, maintain the internal support knowledge base, and suggest improvements to reduce recurring issues.
• Collaborate with QA and development teams to test and validate product fixes.
• Follow and improve support policies, processes, and service metrics for 24/7 operations.
• Contribute to technical documentation, customer guides, and backend support tools.
• Show progress towards development readiness by learning our tech stack (Angular, Java, Flutter, Spring Boot, AWS).

What We're Looking For
• 2-3 years of experience in application support for a U.S.-based SaaS platform (must-have).
• Real work experience, coursework, or certification in any of our dev stack (Java, Angular, Flutter).
• Solid foundation in MySQL and Linux basics, and exposure to AWS environments.
• Prior experience with tools like Zendesk, Freshdesk, Datadog, New Relic, or similar (please list the tools you've used).
• Strong communication skills (both written and verbal) for U.S. customer interaction.
• Self-starter mindset with a track record of owning issues end-to-end.
• Willingness to work weekends and shifts in a 24/7 environment.

Bonus if you have:
• AWS and SQL certifications.
• Passion for building and growing into a developer role.

RunLoyal Culture
We operate with a start-up mindset, high expectations, and a team that's obsessed with outcomes and customer success. We're building something great - and we want builders who thrive in ambiguity, take ownership, and constantly grow.

We value:
• Kindness: Celebrate wins, assume positive intent, and lift others up.
• Ownership: You are responsible for your outcomes and impact.
• Fearlessness: Speak up, try bold things, and fail fast.
• Curiosity: Always learning and evolving with our customers.
• Discourse over dissonance: Challenge ideas, not people.
• Understanding over consensus: Commit fully once decisions are made.
• Empathy and trust: With customers and teammates alike.

We're not looking for someone who just wants a "job." We're looking for someone excited to be part of a mission-driven team, help customers, and grow into a world-class software engineer.

Why Join Us?
• Clear career path to full-stack development roles (Java, Angular, Flutter).
• Competitive salary and benefits.
• Exposure to U.S. SaaS support standards and real customer impact.
• High-growth team, remote flexibility, and autonomy from Day 1.
• Make a difference in the lives of pet businesses across the country.

How to Apply
Please send your resume and a short note about why you want to start in support and grow into a developer to: 📧 jointhepack@runloyal.com

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description BETSOL is a cloud-first digital transformation and data management company offering products and IT services to enterprises in over 40 countries. BETSOL team holds several engineering patents, is recognized with industry awards, and BETSOL maintains a net promoter score that is 2x the industry average. BETSOL’s open source backup and recovery product line, Zmanda (Zmanda.com), delivers up to 50% savings in total cost of ownership (TCO) and best-in-class performance. BETSOL Global IT Services (BETSOL.com) builds and supports end-to-end enterprise solutions, reducing time-to-market for its customers. BETSOL offices are set against the vibrant backdrops of Broomfield, Colorado and Bangalore, India. We take pride in being an employee-centric organization, offering comprehensive health insurance, competitive salaries, 401K, volunteer programs, and scholarship opportunities. Office amenities include a fitness center, cafe, and recreational facilities. Learn more at betsol.com Job Description Roles & Responsibilities: Triage alerts and analyze security events/logs for threats such as computer viruses, exploits, and malicious attacks. Use critical thinking to bring together information from multiple sources to determine if a threat is present. Conduct security incident response and investigation. Conduct comprehensive security assessments and risk analysis on existing systems and applications. Analyze web traffic for suspicious patterns and potential security breaches. Perform vulnerability assessments and penetration testing. Prepare and provide security documentation and evidence for internal and external audits, ensuring compliance with regulatory requirements and security standards. Stay abreast of the latest cybersecurity trends, threats, and technologies to proactively address emerging risks. Qualifications Bachelor’s degree in computer science, Information Technology, cybersecurity, or a related field. 3+ years of relevant experience. 
- Proficiency in conducting risk assessments, vulnerability assessments, and penetration testing.
- Experience deploying and maintaining email security systems, including anti-phishing, DLP, and encryption technologies, to safeguard sensitive data and mitigate threats.
- Hands-on experience with security tools and technologies such as IDS/IPS, SIEM, and penetration-testing tools like Qualys/Tenable.
- Hands-on troubleshooting skills for security alerts related to firewalls (SonicWall & FortiGate), Microsoft Entra ID/O365, and Windows and Linux servers.
- Strong knowledge of GRC frameworks such as PCI-DSS, ISO 27001:2022 & 9001:2015, and SOC 2 Type II.
- CEH (Certified Ethical Hacker).
- AZ-500 Microsoft Azure Security Technologies or cloud security certifications, with hands-on experience.
- Experience with evidence gathering for compliance regimes such as PCI DSS, SOC 2, HIPAA, and ISO.
- Good understanding of IT infrastructure architecture, both on-prem and on the AWS and Azure clouds.

Tools:
- Vulnerability management: Tenable, QualysGuard, Nessus
- Endpoint protection: Sophos, Bitdefender, Trend Micro, Windows Defender
- SIEM: Wazuh, Datadog, Splunk, Microsoft Sentinel, Sumo Logic
- Email security: Zix email security, Exchange Online Protection, Defender for Office 365
- Compliance standards: ISO ISMS, SOC 2, PCI DSS, HIPAA

Preferred: Any of the following certifications - AWS Certified Security - Specialty, Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Information Systems Auditor (CISA), GIAC certifications, or NIST Cybersecurity Framework (CSF).

Additional Information: NA

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


As Lead Splunk, your role and responsibilities would include:
- Hands-on experience in the SIEM domain.
- Deep understanding of Splunk backend operations (UF, HF, SH, and Indexer Cluster) and architecture.
- Strong knowledge of log management and Splunk SIEM; understanding of log collection, parsing, normalization, and retention practices.
- Expertise in optimizing logs and license usage.
- Solid understanding of designing, deploying, and implementing scalable SIEM architecture.
- Understanding of data parsimony as a concept, especially in terms of German data security standards.
- Working knowledge of integrating Splunk logging infrastructure with third-party observability tools like ELK and Datadog.
- Experience in identifying security and non-security logs and applying appropriate filters to route the logs correctly.
- Expertise in understanding network architecture and identifying the components of impact.
- Proficiency in Linux administration.
- Experience with Syslog.
- Proficiency in scripting languages like Python, PowerShell, or Bash for task automation.
- Expertise with OEM SIEM tools, preferably Splunk.
- Experience with open-source SIEM/log storage solutions like ELK or Datadog.
- Strong documentation skills for creating high-level design (HLD), low-level design (LLD), implementation guides, and operation manuals.

Skills: SIEM, Linux administration, team collaboration, communication skills, architecture design, Python, parsing, normalization, retention practices, PowerShell, data security, log management, Bash, Splunk, log collection, documentation, Syslog, incident response, data analysis

Posted 3 weeks ago

Apply

5.0 - 7.0 years

14 - 18 Lacs

Gurugram

Work from Office


Role: Cloud DevOps Engineer
Location: Noida and Gurgaon
Notice: Immediate joiner

Detailed responsibilities:
- Proactively monitor availability and performance of the client application and infrastructure using key tools; effectively and quickly respond to monitoring alerts, incident tickets, and email requests coming to the team.
- Perform infrastructure, application, and website troubleshooting to quickly resolve issues.
- Escalate issues as needed to Product Engineering / client teams.
- Handle communication and notification on major site issues to the management team.
- Document resolution runbooks and standard operating procedures.
- Work on incidents, analyze application issues/logs, and perform impact analysis and post-incident reviews.
- Participate in and provide feedback during design discussions.
- Experience with build automation and CI/CD.
- Work towards the reduction of toil on a day-to-day basis through automation of repeated jobs; develop tools to improve efficiency; enhance the knowledge base.
- Handle/manage projects.
- Ready to work in shifts.

Primary key skills:
- Hands-on cloud experience on AWS and Azure.
- Understanding of 3-tier production architecture.
- Linux OS administration, support, performance optimization, troubleshooting, security, and hardening of the OS.
- Tomcat, WebSphere, JBoss.
- Apache, Nginx, Jetty, Node.
- Scripting (Bash, Groovy, Perl, Python, etc.).
- Java application troubleshooting and deployments.
- Good knowledge of CI/CD, Azure DevOps, Git.
- Hands-on experience with Jenkins and Octopus Deploy.
- Alfresco CMS.
- Hands-on experience with CloudFront / Cloudflare / Akamai / CDN.
- Hands-on experience working with Web Application Firewalls, security certificates, SSL.
- Strong concepts of infrastructure, networking, VMs, VPNs, subnets, gateways, DNS, Active Directory.
- Good knowledge and experience in SRE.
- Hands-on with monitoring tools like Datadog, New Relic, Site24x7, Pingdom, AppDynamics.
- Knowledge of databases.

Experience: 5-7 years

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


WHO WE ARE: Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, enabling more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value. Combining intuitive enterprise technology solutions and data insights, the Policygenius marketplace, and market-leading products including smartoffice, annuitynet, lifespeed, winflex, TPP, vitalsales Suite, and Exchange Consulting, Zinnia is redesigning the insurance experience for shoppers, advisors, and insurers alike. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders.

WHO YOU ARE: As a Software Development Engineer in Test (SQE III), you will be actively involved in building an enterprise automation solution for the global Zinnia team, covering API, Web UI, and DB testing, to speed up product launches and customize automation testing solutions to insurance client needs. You will be client-facing, will independently run multiple automation projects, and will guide other automation testers. The purpose of the role is to speed up automation coverage by creating more reusable automation scripts.

WHAT YOU'LL DO:
- Develop automated test scripts using the Robot Framework (Python), focusing on keyword-driven and data-driven testing.
- Create automation test strategy, plans, and estimates.
- Guide and mentor automation testing team members.
- Analyze tests using CI/CD tools such as GitHub pipelines or similar, with reporting in TestRail or equivalent.
- Stay updated with AI-driven testing trends and tools similar to Cypress for modern web application testing.
- Ensure compliance with security testing tools and code quality analysis using tools like SonarQube.
- Review and improve automation processes with AI-driven tools and test projects.
- Collaborate with teams using API testing tools like Postman and UI automation tools like Playwright.
- Contribute to QA strategy focusing on shift-left testing methodologies and chaos engineering practices.
- Provide reports and metrics using visualization tools, log analysis with Datadog, and version control with GitHub.
- Utilize AI and ML for test result analysis and intelligent test automation using external tools.
- Prior experience writing advanced test scripts using BDD frameworks like Cucumber with Gherkin syntax or similar is an added advantage.
- Drive automation framework implementation, leveraging grid tools for distributed testing and Docker for containerized environments.
- Act as a subject matter expert, participating in industry conferences and forums to stay abreast of the latest trends.

WHAT YOU'LL NEED:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience (6-7 years) as a quality engineer or in a senior automation testing role.
- Proven experience in developing and maintaining automated test scripts using the Robot Framework (Python).
- Stays up to date with market/technology trends.
- Demonstrated experience in Selenium, Appium, or similar frameworks.
- Proficiency in programming languages such as Python, Java, or C#, and XML, for test automation.
- Advanced SQL ability and the capacity to complete complex database testing.
- Knowledge of Agile methodologies and experience working in Agile teams.
- Excellent problem-solving skills, with strong attention to detail.
- Good understanding of continuous integration and continuous delivery practices.
- Ability to multitask, prioritize work, and manage time effectively in a dynamic environment.
- Strong analytical and decision-making abilities for test strategy and planning.
- Ability to travel independently up to 20% of the time.

Bonus Points:
- Annuity, mutual funds, or life insurance work experience.
- FLMI or other industry- or field-related designations/certifications.

WHAT'S IN IT FOR YOU? At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are looking for a Mid-Senior Site Reliability Engineer to contribute to the design, development, and maintenance of our payment platforms. This role requires a strong background in development, hands-on experience with scalable, high-performance systems, and an understanding of the payments ecosystem. As a key member of the engineering team, you will work on building, optimizing, and maintaining platforms that drive secure, efficient, and reliable payment solutions.

Required Skills and Qualifications:
- 3-5 years of experience as an SRE, Platform Engineer, or DevOps Engineer, with a focus on large-scale, high-availability enterprise systems.
- Experience with containerization technologies (OpenShift, Kubernetes) and orchestration tools.
- Proficiency with monitoring tools such as Splunk, Datadog, or AWS CloudWatch for application and infrastructure monitoring and predictive analysis.
- Solid understanding of security best practices for cloud and on-prem systems (e.g., encryption, access control, firewall management).
- Proficiency with payment protocols such as SWIFT, MTS, and ACH.
- Knowledge and experience with mainframe systems for supporting legacy payments infrastructure, and Java/IIB for supporting modernized payment applications.
- Familiarity with Glassbox for tracking and analyzing customer interactions with payment systems.
- Experience with CI/CD tools (e.g., Jenkins, GitHub) for automating software builds and deployments.
- Experience in payment systems or banking platforms is a plus.
- Familiarity with industry standards and compliance regulations such as ISO 20022, SEPA, and PCI-DSS.
- Proficiency in AWS Cloud and cloud-native tools.
- Experience with on-premise infrastructure management (e.g., VMware, physical servers, network systems).
- Strong experience with Linux/Unix system administration and troubleshooting.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Ansible.
- Perform root cause analysis (RCA) for recurring issues and implement long-term resolutions.
- Collaborate with internal and external teams to resolve escalated technical issues, ensuring minimal downtime and system reliability.
- Assist with patching, upgrades, and system configuration changes for the EPP and related platforms.
- Participate in an on-call rotation to provide 24/7 support for critical systems.
- Strong problem-solving skills and the ability to troubleshoot complex production environments.
- Experience with Linux/Unix environments, basic shell scripting, and database queries.
- Excellent communication skills for cross-team collaboration and vendor interaction.
- Ability to work effectively in a fast-paced, high-demand environment.

Preferred Qualifications:
- Proven hands-on experience managing Kubernetes and Terraform in real-world projects.
- Strong background in implementing and maintaining infrastructure automation and container orchestration solutions.
- Experience with payment gateways, fraud detection systems, or financial technologies.
- Familiarity with serverless architecture using AWS Lambda or similar technologies.
- Certification in AWS (e.g., AWS Certified Solutions Architect, DevOps Engineer).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

India

Remote


Job Title: Data Engineer Lead - AEP
Location: Remote
Experience Required: 12-15 years overall; 8+ years in data engineering; 5+ years leading data engineering teams; cloud migration and consulting experience (GCP preferred)

Job Summary: We are seeking a highly experienced and strategic Lead Data Engineer with a strong background in leading data engineering teams, modernizing data platforms, and migrating ETL pipelines and data warehouses to Google Cloud Platform (GCP). You will work directly with enterprise clients, architecting scalable data solutions and ensuring successful delivery in high-impact environments.

Key Responsibilities:
- Lead end-to-end data engineering projects, including cloud migration of legacy ETL pipelines and data warehouses to GCP (BigQuery).
- Design and implement modern ELT/ETL architectures using Dataform, Dataplex, and other GCP-native services.
- Provide strategic consulting to clients on data platform modernization, governance, and data quality frameworks.
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
- Define and enforce data engineering best practices, coding standards, and CI/CD processes.
- Mentor and manage a team of data engineers; foster a high-performance, collaborative team culture.
- Monitor project progress, ensure delivery timelines, and manage client expectations.
- Engage in technical pre-sales and solutioning, driving excellence in consulting delivery.

Technical Skills & Tools:
- Cloud platforms: strong experience with Google Cloud Platform (GCP), particularly BigQuery, Dataform, Dataplex, Cloud Composer, Cloud Storage, Pub/Sub.
- ETL/ELT tools: Apache Airflow, Dataform, dbt (if applicable).
- Languages: Python, SQL, shell scripting.
- Data warehousing: BigQuery, Snowflake (optional), traditional DWs (e.g., Teradata, Oracle).
- DevOps: Git, CI/CD pipelines, Docker.
- Data modeling: dimensional modeling, Data Vault, star/snowflake schemas.
- Data governance & lineage: Dataplex, Collibra, or equivalent tools.
- Monitoring & logging: Stackdriver, Datadog, or similar.

Preferred Qualifications:
- Proven consulting experience with premium clients or Tier 1 consulting firms.
- Hands-on experience leading large-scale cloud migration projects.
- GCP certification(s) (e.g., Professional Data Engineer, Cloud Architect).
- Strong client communication, stakeholder management, and leadership skills.
- Experience with agile methodologies and project management tools like JIRA.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Required Skills & Qualifications:
- Bachelor's degree in computer science or information technology fields, or equivalent professional experience; a Master's degree is preferred.
- 3+ years of professional site reliability and DevOps experience.
- In-depth familiarity with SRE terminology, including Service Level Objectives (SLOs), Service Level Indicators (SLIs), error budgets, incident management, postmortem analysis, Recovery Time Objective (RTO), and Recovery Point Objective (RPO).
- Ability to identify organization-wide gaps in the SRE domain and identify implementable solutions that contribute to the transformation of the organization.
- Ability to build and lead high-performance SRE teams to consistently achieve business results.
- Expertise with monitoring, APM, and alerting tools like Splunk, Dynatrace, Grafana, Datadog, New Relic, etc.
- Experience with one or more high-level languages such as Python, Go, Java, JavaScript, C#, Ruby, or PHP.
- Experience with CI/CD pipelines like Jenkins, ADO, GitHub Actions, or GitLab.
- Demonstrated expertise in cloud platforms like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- Proficiency in containerization and orchestration technologies such as Docker, Kubernetes, or OpenShift.
- Experience with Infrastructure as Code (IaC) and configuration management tools like Terraform, Ansible, Chef, Puppet, or Salt.
- Exposure to OpenTelemetry (OTel) frameworks and tools.
- Experience with tools like ServiceNow, PagerDuty, xMatters, etc.
- Strong communication skills and the ability to partner across organizations.
- Ability to create technical documents on SRE best practices and processes.

Posted 3 weeks ago

Apply


5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Us
IAMOPS is a DevOps-focused services company helping startups and enterprises build scalable, reliable, and secure infrastructure. Our team thrives on solving complex infrastructure challenges, implementing automation, and working directly with clients to deliver value through modern DevOps practices.

Job Summary
We are seeking a highly capable and experienced Senior DevOps Engineer with 4-5 years of hands-on experience. The ideal candidate must possess deep knowledge of Linux systems, networking, scripting (Bash, Python), and automation tools, with the ability to take ownership of projects, collaborate directly with clients, and lead internal team efforts when required. You'll play a key role in delivering DevOps solutions across various client environments while mentoring junior team members and driving technical excellence.

Key Responsibilities
- Client-facing DevOps delivery: work directly with client stakeholders to gather requirements, understand their infrastructure pain points, and deliver robust DevOps solutions.
- Linux & networking mastery: architect and troubleshoot systems with a strong foundation in Linux internals, process management, the network stack, routing, firewalls, etc.
- Automation & scripting: automate repetitive tasks using Bash and Python scripts; maintain and extend reusable automation assets.
- Infrastructure as Code (IaC): develop and manage infrastructure using tools like Terraform, Ansible, or similar.
- CI/CD ownership: build, maintain, and optimize CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, etc.
- Containerization & orchestration: deploy and manage applications using Docker and Kubernetes in production environments.
- Cloud management: architect and manage infrastructure across AWS, Azure, or GCP; implement cost-effective and scalable cloud strategies.
- Monitoring & logging: implement observability stacks like Prometheus, Grafana, ELK, or cloud-native solutions.
- Mentorship: guide and support junior engineers; contribute to knowledge sharing, code reviews, and internal standards.

Key Requirements
- 4-5 years of hands-on DevOps experience in production environments.
- Strong fundamentals in: Linux administration and troubleshooting; computer networking (firewalls, routing, DNS, load balancing, NAT, etc.); scripting in Bash (required) and Python (preferred).
- Experience with: CI/CD tools (Jenkins, GitLab CI, GitHub Actions); Docker and Kubernetes; cloud platforms (AWS preferred, GCP or Azure); Infrastructure as Code (Terraform, Ansible, or similar).
- Ability to work independently with clients, understand business needs, and translate them into technical solutions.
- Proven experience collaborating with or leading small teams in a fast-paced environment.

Nice to Have
- Cloud or Kubernetes certifications (AWS Certified DevOps Engineer, CKA, etc.).
- Familiarity with GitOps, Helm, and service mesh architectures.
- Exposure to monitoring tools like Datadog, New Relic, or OpenTelemetry.

Soft Skills
- Strong communication skills (written and verbal) to interact effectively with clients and team members.
- Mature problem-solver who can anticipate issues and resolve them proactively.
- Organized and self-motivated with a willingness to take ownership of projects.
- Leadership potential with a collaborative team mindset.

Why Join Us?
- Work with cutting-edge DevOps stacks and innovative startups globally.
- Be part of a collaborative, learning-focused culture.
- Opportunity to grow into technical leadership roles.
- Flexible working environment with a focus on outcomes.

Skills: GitHub Actions, GitLab CI, Jenkins, Terraform, ELK, AWS, Python, basic networking, DevOps, Linux, Grafana, automation, Bash, Ansible, Azure, GCP, Kubernetes, Docker, Prometheus, networking, infrastructure

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Ingram Micro is a leading technology company for the global information technology ecosystem. With the ability to reach nearly 90% of the global population, we play a vital role in the worldwide IT sales channel, bringing products and services from technology manufacturers and cloud providers to business-to-business technology experts. Our market reach, diverse solutions and services portfolio, and digital platform Ingram Micro Xvantage™ set us apart.

Position Summary: We have an excellent opportunity for an experienced AWS Services Business Development Manager (BDM) to join our AWS Services team and play a key role in driving AWS Professional Services and Managed Services sales across our key routes to market, including the vendor, our partner base, and their end customers. The successful candidate will demonstrate exemplary core sales and relationship management skills and will also have sound knowledge of cloud services and how these can be harnessed to transform an organization. The candidate must also have a proven and successful track record in sales and acquiring new relationships. A typical AWS Services BDM will have had two or more years in a similar services sales role.

What you bring to the role: Working in close collaboration with the local (in-country) AWS pre-sales and technical teams, along with the wider Regional Service Centre, the core responsibilities of the role include, but are not limited to, the following:

Strategic Sales Planning
- Identify market opportunities, routes to market, industry trends, and vendor, partner, and end-customer needs to guide the sales approach.
- Collaborate with local Ingram Micro leadership to set clear revenue targets and growth objectives.
- Develop and execute a comprehensive account/territory plan to manage and grow numerous accounts concurrently.
- Define and agree on alignment and a working approach with local Ingram Micro Cloud (resell) BDMs and PDMs for the demarcation/hand-off of services-specific opportunities.

Internal, Channel Partner, and Vendor Relationships
- Forge strong relationships with key AWS individuals, including Sellers, Professional Services, and Partner Development teams.
- Forge strong and trusted relationships with the partner account base, including actively promoting pre-sales, service offerings, and consulting capabilities.
- Be a visible representative of Ingram Micro at partner engagements and key vendor events, supporting the identification and development of future service offerings.
- Evaluate potential collaboration opportunities to enhance product offerings and market reach.

Market Expansion
- Drive market expansion efforts by identifying target segments and developing tailored go-to-market strategies.
- Lead initiatives to penetrate new routes to market and partner/end-customer segments for cloud services adoption.
- Analyze market competition and positioning to differentiate company offerings effectively.
- Support the development of AWS services propositions, solutions, and GTM.

Sales Delivery and Achievement
- Take ownership of complex sales engagements, including RFPs, proposals, and customer presentations.
- Proactively hunt for and generate new business opportunities.
- Meet or exceed revenue and goal targets in a defined territory, including those set by both Ingram Micro and AWS.
- Maintain a robust sales pipeline, including regular updates and reporting.
- Be familiar with, and correctly position, key vendor programs including OLA, MAP, and WAFR.
- Collate, summarise, and accurately report key management account information on a monthly basis.
- Identify other Ingram Micro service and sales opportunities.

Personal Skills Development
- Build and maintain strong relationships with key peers, including the local and regional Cloud teams, AWS Sellers and Partner Development teams, partners, and their end customers.
- Build a solid understanding of the Ingram Micro AWS services portfolio, capabilities, and motions.
- Build and maintain a good awareness of the internal SMEs' areas of specialism.
- Keep up to date with current and future technologies, products, and strategies.
- Build and enhance relationships with peers.

Qualifications and Experience
An AWS Services BDM should also have the following qualifications and experience:

Desirable Qualifications
- Cloud computing related sales or business development experience
- AWS Cloud Practitioner certification
- AWS Associate certification

Expected Experience (Services and Sales Understanding / Positioning)
- AWS Optimization and Licensing Assessment (OLA)
- AWS Migration Acceleration Program (MAP)
- AWS Well-Architected Framework Review (WAFR)
- Compute services (e.g., EC2, containers, serverless)
- Monitoring/observability tools (e.g., CloudWatch, Prometheus, Datadog)
- CI/CD tools (e.g., CodePipeline/CodeBuild, Jenkins, GitHub Actions)
- IaC tools (e.g., Terraform, CloudFormation, Serverless Framework)
- Containers and orchestration tools (e.g., Docker, ECS, Kubernetes)
- Gen AI and machine learning
- Migrations from VMware and/or Azure

Knowledge, Skills, and Characteristics
- Two or more years of experience in a professional and managed services sales role
- Excellent communicator, both verbally and in writing (both local language and English)
- Experienced, mature, influential, assertive, and diplomatic
- Able to network with industry peers and customers
- A flexible approach to work and prepared to 'go the extra mile' to exceed customer expectations
- Applies knowledge and skills to handle complex problems beyond own area of expertise
- Ability and willingness to travel
- English language proficiency is essential

This is not a complete listing of the job duties. It's a representation of the things you will be doing, and you may not perform all of these duties.

Posted 3 weeks ago

Apply

Exploring Datadog Jobs in India

Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and are actively hiring for Datadog roles.

Average Salary Range

The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.

Related Skills

In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.
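Scripting skills and Datadog knowledge often meet in practice at the DogStatsD interface: custom metrics are typically pushed to the local Datadog Agent over UDP (port 8125 by default) as small text datagrams of the form `<name>:<value>|<type>[|#tags]`. The sketch below formats and sends such a datagram using only the standard library; the metric name and tags are illustrative, and in real projects the official `datadog` client library would normally be used instead.

```python
import socket

def dogstatsd_datagram(name, value, metric_type="g", tags=None):
    """Format a metric per the DogStatsD wire protocol:
    <name>:<value>|<type>[|#tag1:v1,tag2:v2]
    where type is e.g. 'g' (gauge), 'c' (counter), 'h' (histogram)."""
    msg = f"{name}:{value}|{metric_type}"
    if tags:
        msg += "|#" + ",".join(tags)
    return msg.encode("utf-8")

def send_metric(name, value, metric_type="g", tags=None,
                host="127.0.0.1", port=8125):
    # Fire-and-forget UDP send to the local Datadog Agent.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(dogstatsd_datagram(name, value, metric_type, tags),
                    (host, port))
    finally:
        sock.close()

# Example: report a gauge with tags (hypothetical metric name).
send_metric("app.queue.depth", 42, "g", tags=["env:prod", "service:payments"])
```

Because UDP is connectionless, the send succeeds even if no Agent is listening, which is why DogStatsD instrumentation adds negligible overhead to application code.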

Interview Questions

  • What is Datadog and how does it differ from other monitoring tools? (basic)
  • How do you set up custom metrics in Datadog? (medium)
  • Explain how you would create a dashboard in Datadog to monitor server performance. (medium)
  • What are some key features of Datadog APM (Application Performance Monitoring)? (advanced)
  • Can you explain how Datadog integrates with Kubernetes for monitoring? (medium)
  • Describe how you would troubleshoot an alert in Datadog. (medium)
  • How does Datadog handle metric aggregation and visualization? (advanced)
  • What are some best practices for using Datadog to monitor cloud infrastructure? (medium)
  • Explain the difference between Datadog Logs and Datadog APM. (basic)
  • How would you set up alerts in Datadog for critical system metrics? (medium)
  • Describe a challenging problem you faced while using Datadog and how you resolved it. (advanced)
  • What is anomaly detection in Datadog and how does it work? (medium)
  • How does Datadog handle data retention and storage? (medium)
  • What are some common integrations with Datadog that you have worked with? (basic)
  • Can you explain how Datadog handles tracing for distributed systems? (advanced)
  • Describe a recent project where you used Datadog to improve system performance. (medium)
  • How do you ensure data security and privacy when using Datadog? (medium)
  • What are some limitations of Datadog that you have encountered in your experience? (medium)
  • Explain how you would use Datadog to monitor network traffic and performance. (medium)
  • How does Datadog handle auto-discovery of services and applications for monitoring? (medium)
  • What are some key metrics you would monitor for a web application using Datadog? (basic)
  • Describe a scenario where you had to scale monitoring infrastructure using Datadog. (advanced)
  • How would you implement anomaly detection for a specific metric in Datadog? (medium)
  • What are some best practices for setting up alerts and notifications in Datadog? (medium)
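Several of the questions above (alerts, thresholds, anomaly detection) come down to Datadog's metric monitor query syntax, which follows the general shape `time_agg(window):space_agg:metric{scope} comparator threshold`. As a study aid, here is a minimal sketch that assembles such a query string; the metric, host, and threshold are illustrative, and a real monitor would be created through the Datadog UI or API rather than built by hand like this.

```python
def monitor_query(time_agg, window, space_agg, metric, scope, comparator, threshold):
    """Assemble a Datadog-style metric monitor query:
    <time_agg>(<window>):<space_agg>:<metric>{<scope>} <comparator> <threshold>"""
    scope_str = ",".join(f"{k}:{v}" for k, v in sorted(scope.items())) if scope else "*"
    return f"{time_agg}({window}):{space_agg}:{metric}{{{scope_str}}} {comparator} {threshold}"

# Alert when a host's average CPU over the last 5 minutes exceeds 90%.
query = monitor_query("avg", "last_5m", "avg", "system.cpu.user",
                      {"host": "web-01"}, ">", 90)
print(query)  # avg(last_5m):avg:system.cpu.user{host:web-01} > 90
```

Being able to read such a query aloud (time aggregation, space aggregation, scope, threshold) is a quick way to demonstrate fluency with Datadog alerting in an interview.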

Closing Remark

With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies