
1516 Datadog Jobs - Page 5

JobPe aggregates listings for convenient browsing, but applications are submitted directly on the original job portal.

3.0 years

1 - 10 Lacs

Hyderābād

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the AI/ML Data Platform team, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
Architect, build, and maintain comprehensive observability systems at scale.
Partner with customers, product managers, and engineering teams to develop systems and align observability objectives across the organization.
Provide architectural guidance and establish standards for observability implementations, with a strong focus on cost optimization and compliance.
Collaborate with cross-functional teams to drive business impact and customer satisfaction through exceptional software delivery.
Evaluate and implement modern data storage solutions (data warehouses, data lakes) to support advanced analytics and dashboards.
Implement best practices for data transformation, real-time processing, and modern data storage solutions to support advanced analytics.
Evaluate and introduce new observability tools and technologies, staying current with industry best practices.

Required qualifications, capabilities, and skills:
Formal training or certification in software engineering concepts and 3+ years of applied experience.
Hands-on practical experience in backend development using Python, Java, Node.js, or Golang.
Expertise with cloud-native observability tools (OpenTelemetry, Grafana, Prometheus, CloudWatch, Datadog, etc.) and their associated agents and collectors.
Experience designing and architecting large-scale distributed systems and cloud-native architectures in AWS.
Excellent communication skills and the ability to present business and technical concepts to stakeholders.
A product-oriented mindset: able to see the big picture and engineer solutions centered on customer needs.
A strong sense of responsibility, urgency, and determination.

Preferred qualifications, capabilities, and skills:
Familiarity with modern front-end technologies.
Experience as a platform software engineer in a dynamic technology company or startup.
Exposure to cloud technologies.
Exposure to AI, ML, or data engineering.
Exposure to developing automation frameworks / AIOps.
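The posting above centers on observability systems of the kind OpenTelemetry and Prometheus clients provide. As a rough illustration of the core idea, here is a minimal, stdlib-only sketch of a metrics collector with counters and latency percentiles; all names and the simulated latencies are hypothetical, and a real system would use an instrumentation library rather than hand-rolled code like this.

```python
# Minimal sketch of the idea behind an observability metrics collector
# (hypothetical, stdlib-only; real systems would use OpenTelemetry or a
# Prometheus client library instead of hand-rolled classes like this).
import math
from collections import defaultdict

class Metrics:
    def __init__(self):
        self.counters = defaultdict(int)      # monotonically increasing counts
        self.latencies = defaultdict(list)    # raw observations per metric name

    def incr(self, name, value=1):
        self.counters[name] += value

    def observe(self, name, seconds):
        self.latencies[name].append(seconds)

    def percentile(self, name, p):
        """Nearest-rank percentile of the recorded observations."""
        xs = sorted(self.latencies[name])
        if not xs:
            return None
        k = max(0, math.ceil(p / 100 * len(xs)) - 1)
        return xs[k]

metrics = Metrics()
for ms in (12, 15, 11, 250, 14):              # simulated request latencies
    metrics.incr("requests_total")
    metrics.observe("request_seconds", ms / 1000)

print(metrics.counters["requests_total"])      # 5
print(metrics.percentile("request_seconds", 50))   # 0.014 (median)
print(metrics.percentile("request_seconds", 100))  # 0.25 (worst case)
```

The percentile view is what makes this more useful than averages: the single 250 ms outlier dominates p100 while leaving the median untouched.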

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

We are seeking a highly capable Azure Engineer with a strong software development background and deep expertise in cloud back-end (BE) baseline architecture. This role is ideal for someone who can design, build, and manage scalable, secure, and high-performing back-end services in Microsoft Azure. The ideal candidate will have hands-on experience with cloud-native application development, microservices architecture, and infrastructure automation.

Key Responsibilities:
Design and develop back-end cloud services using Azure-native technologies (App Services, Functions, API Management, Service Bus, Event Grid, etc.).
Implement scalable and secure cloud architectures aligned with the Azure Well-Architected Framework.
Build APIs and microservices using .NET, Node.js, Python, or similar technologies.
Ensure cloud back-end performance, reliability, and monitoring using Azure Monitor, App Insights, and Log Analytics.
Collaborate with DevOps, security, and front-end teams to ensure seamless integration and CI/CD automation.
Define and enforce coding standards, version control, and deployment strategies.
Implement and maintain cloud governance, cost optimization, and security best practices.
Provide support and troubleshooting for production issues in Azure environments.

Required Skills & Experience:
5+ years of professional experience in software development and cloud engineering.
Strong development skills in .NET Core, C#, Python, Node.js, or Java.
Deep expertise in Azure services relevant to back-end architecture (Functions, Key Vault, API Management, Cosmos DB, Azure SQL, etc.).
Strong understanding of microservices architecture, containerization (Docker), and Kubernetes (AKS).
Hands-on experience with Azure DevOps, GitHub Actions, or similar CI/CD tools.
Solid grasp of Azure identity and access management, including RBAC and Managed Identities.
Experience with unit testing, integration testing, and automated deployments.
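The reliability goals above rest on patterns like retry with exponential backoff, which cloud back-end services apply to transient failures of downstream dependencies. A minimal sketch under stated assumptions: `with_retries` and `flaky_call` are hypothetical names, and real services would also add jitter and respect service-specific retry guidance.

```python
# Minimal sketch of retry with exponential backoff, a common reliability
# pattern in cloud back-end services; with_retries and flaky_call are
# hypothetical, and production code would typically add jitter.
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on exception with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of attempts: propagate
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:                     # fail the first two attempts
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky_call))  # ok (succeeds on the third attempt)
```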
About Loginsoft: For over 20 years, leading companies in Telecom, Cybersecurity, Healthcare, Banking, New Media, and more have relied on Loginsoft as a trusted resource for technology talent. From startups to product companies and enterprises, clients rely on our services. Whether onsite, offsite, or offshore, we deliver. With a track record of successful partnerships with leading technology companies globally, and specifically over the past 6 years with cybersecurity product companies, Loginsoft offers a comprehensive range of security offerings, including Software Supply Chain, Vulnerability Management, Threat Intelligence, Cloud Security, Cybersecurity Platform Integrations, creating content packs for Cloud SIEM, logs onboarding, and more. Our commitment to innovation and expertise has positioned us as a trusted player in the cybersecurity space. Loginsoft continues to provide traditional IT services, including software development and support, QA automation, Data Science and AI, etc.

Expertise in integrations with Threat Intelligence and Security Products: built more than 200 integrations with leading TIP, SIEM, SOAR, and ticketing platforms such as Cortex XSOAR, Anomali, ThreatQ, Splunk, IBM QRadar and Resilient, Microsoft Azure Sentinel, ServiceNow, Swimlane, Siemplify, MISP, Maltego, cryptocurrency digital exchange platforms, Cisco, Datadog, Symantec, Carbon Black, F5, Fortinet, and so on. Loginsoft partners with industry-leading technology vendors such as Palo Alto, Splunk, Elastic, and IBM Security. In addition, Loginsoft offers Research as a Service: we're more than just experts in cybersecurity; we're your accredited in-house research team focused on unraveling the complexities of cybersecurity and future technologies. From application security to threat research, our seasoned professionals have cultivated expertise in every facet of the field. We've earned the trust of over 20 security platform companies, who count on our research and analysis to strengthen their cybersecurity solutions.

Job Overview: Hyderabad, India | 5+ Years Exp | Full-Time Position

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

We are looking for a highly skilled DevOps Engineer with hands-on experience managing and deploying Azure Policies in multi-tenant environments. The ideal candidate will have a deep understanding of Azure governance, compliance, and infrastructure automation to help enforce organizational standards and ensure secure, compliant Azure deployments.

Key Responsibilities:
Design, deploy, and manage Azure Policies, Initiatives, and Blueprints across multiple Azure tenants.
Collaborate with Cloud Security, Networking, and Application teams to enforce security, cost, and operational policies.
Automate policy compliance monitoring and remediation using Azure Policy, Azure Monitor, and Log Analytics.
Integrate policy deployment with CI/CD pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.
Provide governance recommendations and ensure alignment with the Azure Well-Architected Framework.
Troubleshoot policy conflicts, evaluate policy impact, and support ongoing improvements to the cloud governance model.
Maintain documentation and change management for the policy lifecycle.

Required Skills & Experience:
5+ years of experience in a DevOps engineering role.
Strong hands-on experience with Azure tenant environments, including Azure Policy, Management Groups, and Subscriptions.
Solid understanding of Azure governance, security best practices, and policy compliance.
Familiarity with CI/CD pipelines and tools (Azure DevOps, GitHub Actions, etc.).
Experience working in enterprise Azure environments with multiple tenants/subscriptions.

About Loginsoft: see the company profile in the listing above.

Job Overview: Hyderabad, India | 5+ Years Exp | Full-Time Position
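Azure Policy assigns rules ("if" a resource matches a condition, "then" apply an effect such as deny or audit) that Azure Resource Manager evaluates at deployment time. As a conceptual sketch only: real policy definitions are JSON documents evaluated by Azure, not user code, and this simplified Python evaluator with a hypothetical storage-account rule just illustrates the if/then shape.

```python
# Conceptual sketch of how an Azure Policy "deny" rule is structured and
# evaluated (simplified and hypothetical; real Azure Policy definitions are
# JSON documents evaluated by Azure Resource Manager, not by user code).
def evaluate_policy(resource: dict, policy: dict) -> str:
    """Return the policy effect ('deny', 'audit', ...) if the rule matches, else 'allow'."""
    condition = policy["if"]
    field, expected = condition["field"], condition["equals"]
    if resource.get(field) == expected:
        return policy["then"]["effect"]
    return "allow"

# Hypothetical rule: deny storage accounts that allow public blob access.
policy = {
    "if": {"field": "allowBlobPublicAccess", "equals": True},
    "then": {"effect": "deny"},
}

compliant = {"type": "Microsoft.Storage/storageAccounts", "allowBlobPublicAccess": False}
violating = {"type": "Microsoft.Storage/storageAccounts", "allowBlobPublicAccess": True}

print(evaluate_policy(compliant, policy))   # allow
print(evaluate_policy(violating, policy))   # deny
```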

Posted 3 days ago

Apply

20.0 years

0 Lacs

Hyderābād

On-site

We are seeking a highly skilled and motivated Cloud Security Engineer with a strong background in security research, operations, and assurance, along with cloud architecture expertise. The role involves implementing security controls, conducting in-depth assessments of cloud services, and identifying secure configurations and misconfigurations across enterprise cloud environments. You will work closely with development, DevOps, and security teams to ensure that cloud infrastructure meets industry and organizational security standards.

Key Responsibilities:
Implement cloud-native and third-party security controls across AWS, Azure, and/or GCP environments.
Conduct detailed security assessments of cloud services (IaaS, PaaS, SaaS) to ensure compliance with internal policies and industry frameworks.
Identify and remediate misconfigurations and vulnerabilities using automated scanning and manual inspection techniques.
Collaborate with cloud architects and security engineers to design and recommend secure infrastructure patterns.
Stay updated on evolving cloud threats and vulnerabilities, and contribute to threat modeling and risk assessments.
Develop scripts and tools to automate security monitoring and compliance validation.
Document findings and remediation guidance, and contribute to security standards development.

Required Skill Set:
Technical Expertise: Strong knowledge of cloud platforms (AWS, Azure, or GCP). Hands-on experience with CSPM (Cloud Security Posture Management) tools and cloud-native security services. Deep understanding of IAM, encryption, network security, and data protection within cloud environments. Experience with CI/CD security integration and DevSecOps practices. Familiarity with security standards such as CIS Benchmarks, NIST, and ISO 27001.
Security Domains: Security Research (evaluating and analyzing security trends, tools, and techniques); Security Operations (incident detection, log analysis, SIEM tools, and response processes); Security Assurance (risk assessments, compliance audits, and policy enforcement); Cloud Architecture (secure cloud design patterns and service integrations).
Tools & Languages: Prisma Cloud, Wiz, AWS Security Hub, Azure Defender, GCP Security Command Center; scripting in Python.
Soft Skills: Strong analytical and problem-solving abilities. Excellent verbal and written communication skills. A team player with cross-functional collaboration experience, able to manage priorities in a fast-paced environment.

About Loginsoft: see the company profile in the listing above.

Job Overview: Hyderabad, India | 10+ Years Exp | Full-Time Position

Posted 3 days ago

Apply

2.0 years

6 - 8 Lacs

India

On-site

About the Role: We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.

Responsibilities:
1. Infrastructure & Cloud Management: Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP). Implement containerization (Docker, Kubernetes) and microservices orchestration. Optimize infrastructure cost, scalability, and performance.
2. CI/CD & Automation: Build and maintain CI/CD pipelines for automated deployments. Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation. Implement GitOps practices for streamlined deployments.
3. Security & Compliance: Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards. Implement role-based access controls, encryption, and network security best practices. Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits.
4. Monitoring & Incident Management: Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.). Optimize system reliability and automate incident response mechanisms. Improve MTTR (Mean Time to Recovery) and system uptime KPIs.
5. Collaboration & Process Improvement: Work closely with development and QA teams to streamline deployments. Improve DevSecOps practices and cloud security policies. Participate in architecture discussions and performance tuning.

Required Skills & Qualifications:
2+ years of experience in DevOps, cloud infrastructure, and automation.
Hands-on experience with AWS and Kubernetes.
Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.).
Experience with Terraform, Ansible, or CloudFormation.
Strong knowledge of Linux, shell scripting, and networking.
Experience with cloud security, monitoring, and logging solutions.

Nice to Have:
Experience in healthcare or other regulated industries.
Familiarity with serverless architectures and AI-driven infrastructure automation.
Knowledge of big data pipelines and analytics workflows.

What You'll Gain:
Opportunity to build and scale mission-critical healthcare infrastructure.
Work in a fast-paced startup environment with cutting-edge technologies.
Growth potential into Lead DevOps Engineer or Cloud Architect roles.

Job Type: Full-time. Pay: ₹600,000.00 - ₹800,000.00 per year. Schedule: Day shift, morning shift. Work Location: In person. Speak with the employer: +91 9575285285
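The monitoring goals above include improving MTTR, which is simply the average duration from incident start to recovery. A minimal sketch with hypothetical incident data:

```python
# Minimal sketch of computing MTTR (Mean Time to Recovery) from incident
# open/close timestamps; the incident records below are hypothetical.
from datetime import datetime

incidents = [
    ("2025-06-01 10:00", "2025-06-01 10:30"),  # 30 min outage
    ("2025-06-03 22:15", "2025-06-03 23:45"),  # 90 min outage
    ("2025-06-05 08:00", "2025-06-05 08:30"),  # 30 min outage
]

def mttr_minutes(records):
    """Average minutes between incident start and recovery."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in records
    ]
    return sum(durations) / len(durations)

print(mttr_minutes(incidents))  # 50.0
```

In practice this number would be pulled from an incident tracker or monitoring system rather than a hand-written list, but the KPI itself reduces to this average.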

Posted 3 days ago

Apply

2.0 years

5 - 7 Lacs

Gurgaon

On-site

Ahom Technologies Pvt. Ltd. is looking for a skilled and proactive DevOps Engineer to join our growing technology team. The ideal candidate will be responsible for ensuring seamless integration, continuous delivery, and stable deployment of applications across environments. You will collaborate closely with developers, QA, and infrastructure teams to automate processes, monitor performance, and enhance the CI/CD pipeline.

Key Responsibilities:
Design, implement, and manage CI/CD pipelines for application deployment.
Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation.
Monitor application performance and availability using tools such as Prometheus, Grafana, Datadog, or the ELK Stack.
Maintain and manage cloud infrastructure on AWS, Azure, or GCP.
Ensure security best practices, compliance, and system hardening.
Develop and maintain scripts/tools to support infrastructure and deployments.
Manage containers and orchestration using Docker and Kubernetes.
Troubleshoot and resolve infrastructure, network, and deployment-related issues.
Support development teams in implementing automation and DevOps best practices.

Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
2-5 years of experience as a DevOps Engineer or SRE.
Hands-on experience with Linux systems, shell scripting, and Git.
Expertise in tools like Jenkins, GitLab CI, Bamboo, or CircleCI.
Proficiency with cloud services (AWS/GCP/Azure) and serverless architecture.
Experience with Docker, Kubernetes, and container orchestration.
Good understanding of networking, security, and load balancing.
Excellent problem-solving, documentation, and communication skills.

Preferred Qualifications:
Knowledge of Agile methodologies and experience working in cross-functional teams.
Familiarity with IaC (Infrastructure as Code) and configuration management.

Why Work With Us?
Exposure to modern DevOps tools and cloud technologies.
Collaborative and growth-driven environment.
Flexible work culture and continuous learning support.
Opportunities to work on enterprise-scale systems.

*Only candidates from Delhi NCR need apply. *Immediate joiners preferred.

Job Type: Full-time. Pay: ₹500,000.00 - ₹700,000.00 per year. Benefits: Provident Fund. Schedule: Day shift.

Application Question(s): We want to fill this position urgently; are you an immediate joiner? Do you have experience with Docker, Kubernetes, and container orchestration? Are you proficient in JavaScript deployment and networking? Are you proficient in Docker, Nginx, and Apache?

Work Location: In person. Speak with the employer: +91 9267985735. Application Deadline: 16/06/2025. Expected Start Date: 23/06/2025

Posted 3 days ago

Apply

5.0 years

8 - 8 Lacs

Bengaluru

On-site

As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business and your career. Our Service Management function is transforming into a truly global, data- and standards-driven organization, employing best-in-class tools and practices across all disciplines of Technology Operations. This will drive ever-greater stability and consistency of service across the technology estate as we drive towards an optimal customer and employee experience.

About You: You're a fit for the role of Database Engineer if your background includes:
5+ years of DBA responsibilities in an MS SQL Server and/or Postgres environment.
2+ years of working experience with AWS and Azure.
Experience leading MS SQL Server and/or Postgres installation, configuration, and upgrade efforts.
Performance monitoring and tuning skills.
Experience leading problem-solving and analytical efforts in high-pressure situations.
Backup/recovery experience, including disaster recovery planning.
Ability to quickly learn and adapt to changes in database technology.
Excellent analytical and problem-solving skills.
Ability to work effectively as part of a team, via excellent verbal and written communication, as well as independently toward assigned goals.
Experience with Datadog and ServiceNow.
Experience with scripting languages such as Python and PowerShell.
Strong quantitative, analytical, communication, and verbal skills with a strong customer service focus.

About the Role: In this role as Database Engineer, you will draw on:
Over 5 years managing databases with MS SQL Server and Postgres.
More than 2 years using AWS and Azure.
Experience leading efforts in setting up, configuring, and upgrading MS SQL Server and Postgres.
Skill in monitoring and improving database performance.
Experience solving problems under pressure.
Knowledge of backup, recovery, and disaster recovery planning.
Quickness in learning new database technologies.
Strong problem-solving and analytical skills.
Effectiveness as a team player and communicator, able to work independently too.
Familiarity with tools like Datadog and ServiceNow.
Experience with scripting languages like Python and PowerShell.
Excellent analytical and communication skills with a focus on customer service.

Nice-to-have skills/education: Bachelor's degree (Computer Science, technical, or engineering degree preferred); work experience will be taken into consideration in lieu of education. #LI-VN1

What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. 
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 3 days ago

Apply

70.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


hackajob is collaborating with Zipcar to connect them with exceptional tech professionals for this role.

SDE 2 (Java Engineer), Zipcar Engineering

Who are we? Glad you asked! Avis Budget Group is a leading provider of mobility options, with brands including Avis, Budget & Budget Truck, and Zipcar. With more than 70 years of experience and 11,000 locations in 180 countries, we are shaping the future of our industry and want you to join us in our mission. Zipcar is the world's leading car-sharing network, found in urban areas and on university campuses in more than 500 cities and towns. Our team is smart, creative, and fun, and we're driven by a mission: to enable simple and responsible urban living. Apply today to get connected to an exciting career, a supportive family of employees, and a world of opportunities.

What is ABG's strategy in India? At our India Build Center, you will play a key role in driving the digital transformation narrative of ABG. Being at the core of ABG's growth strategy, we will develop technology-led offerings that would position Avis and its brands as the best vehicle rental company in the world. Our goal is to create the future of customer experience through technology. The India Build Center is based in Bengaluru, India. We are currently located at WeWork Kalyani Roshni Tech Hub in Marathahalli on Outer Ring Road, strategically close to product companies and multiple tech parks like Embassy Tech Park, ETV, RMZ Ecospace, Kalyani Tech Park, EPIP Zone, and ITPL, among others.

The Fine Print: We encourage Zipsters to bring their whole selves to work - unique perspectives, personal experiences, backgrounds, and however they identify. We are proud to be an equal opportunity employer - M/F/D/V. This document does not constitute a promise or guarantee of employment. This document describes the general nature and level of this position only. Essential functions and responsibilities may change as business needs require.
This position may be with any affiliate of Avis Budget Group.

SDE 2 (Java Engineer). Location: Bengaluru, India | 100% on-site

The Impact You'll Make: We are looking for a talented and passionate engineer to make their mark on the development and maintenance of Zipcar's back-end services. These are the underlying services that support our car-sharing mobile and web ecommerce products, the primary driver of $9B in annual revenue. This role requires a resourceful individual, a persistent problem solver, and a strong hands-on engineer. This is a great opportunity to have a big impact as part of a growing team in the midst of technology and product transformation. Watch our talk at a recent AWS re:Invent conference.

What You'll Do:
Build a deep understanding of existing systems.
Write robust and testable code that meets design requirements.
Review code developed by other developers, providing feedback on style, functional correctness, testability, and efficiency.
Work independently and participate in and contribute to architecture discussions.
Identify and resolve existing critical technical debt.
Build transparent systems with proper monitoring, observability, and alerting.
Plan for robust build, test, and deployment automation.
Work with product stakeholders and front-end developers to understand the essence of requirements and to provide pragmatic solutions.
Work within an Agile framework.

What We're Looking For:
5-8 years of professional experience designing, writing, and supporting highly available web services.
5+ years of experience writing Java applications.
Experience with event-driven architecture (1+ years).
Experience analyzing complex data flows, batch or real-time processing (3+ years).
Experience with RabbitMQ or similar (3+ years).
Experience with Postgres or similar (3+ years).
Strong experience with NoSQL databases such as MongoDB and Cassandra/DataStax (3+ years).
Strong experience with AWS and CI/CD environments (3+ years).
Experience writing and consuming web services using Java/Spring Boot.
Experience developing React/Node.js apps and services is a plus.
Understanding of distributed systems: performance bottlenecks, fault tolerance, and data consistency concerns.
Experience with Kubernetes for containerized application management.
Experience building mission-critical systems running 24x7.
Desire to work within a team of engineers at all levels of experience.
Familiarity with tools like Datadog, Grafana, or similar products to monitor web services.
Good written and spoken communication skills.

Perks You'll Get:
Health insurance for yourself and your immediate family.
Life, accident, and disability insurance.
Paid time off comparable with similar technology companies in India.
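The event-driven architecture this role emphasizes rests on the producer/consumer pattern: services publish events to a broker such as RabbitMQ, and workers consume them asynchronously. A minimal, stdlib-only sketch of that pattern, with `queue.Queue` standing in for the broker and hypothetical event names:

```python
# Minimal sketch of the producer/consumer pattern behind event-driven
# services; queue.Queue stands in for a real broker such as RabbitMQ,
# and the event names are hypothetical.
import queue
import threading

events = queue.Queue()
processed = []

def consumer():
    while True:
        event = events.get()
        if event is None:              # sentinel: shut the worker down
            break
        processed.append(f"handled:{event['type']}")

worker = threading.Thread(target=consumer)
worker.start()

for evt_type in ("trip_started", "trip_ended"):
    events.put({"type": evt_type})     # producer publishes events

events.put(None)                       # signal shutdown
worker.join()
print(processed)  # ['handled:trip_started', 'handled:trip_ended']
```

A real broker adds what the in-process queue cannot: durability across restarts, acknowledgements and redelivery, and fan-out to multiple consumers.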

Posted 3 days ago

Apply

2.0 years

6 - 7 Lacs

Bhopal

On-site

About the Role:
We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.

Responsibilities
1. Infrastructure & Cloud Management
Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)
Implement containerization (Docker, Kubernetes) and microservices orchestration
Optimize infrastructure cost, scalability, and performance
2. CI/CD & Automation
Build and maintain CI/CD pipelines for automated deployments
Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation
Implement GitOps practices for streamlined deployments
3. Security & Compliance
Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards
Implement role-based access controls, encryption, and network security best practices
Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits
4. Monitoring & Incident Management
Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)
Optimize system reliability and automate incident response mechanisms
Improve MTTR (Mean Time to Recovery) and system uptime KPIs
5. Collaboration & Process Improvement
Work closely with development and QA teams to streamline deployments
Improve DevSecOps practices and cloud security policies
Participate in architecture discussions and performance tuning

Required Skills & Qualifications
2+ years of experience in DevOps, cloud infrastructure, and automation
Hands-on experience with AWS and Kubernetes
Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)
Experience with Terraform, Ansible, or CloudFormation
Strong knowledge of Linux, shell scripting, and networking
Experience with cloud security, monitoring, and logging solutions

Nice to Have
Experience in healthcare or other regulated industries
Familiarity with serverless architectures and AI-driven infrastructure automation
Knowledge of big data pipelines and analytics workflows

What You'll Gain
Opportunity to build and scale mission-critical healthcare infrastructure
Work in a fast-paced startup environment with cutting-edge technologies
Growth potential into Lead DevOps Engineer or Cloud Architect roles

Job Type: Full-time
Pay: ₹600,000.00 - ₹700,000.00 per year
Schedule: Day shift, fixed shift
Work Location: In person
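The MTTR metric called out in the monitoring responsibilities is simply the average time from detection to resolution across incidents. A minimal sketch in Python, with hypothetical incident timestamps for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected_at, resolved_at) pairs.
incidents = [
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 10, 45)),
    (datetime(2024, 1, 9, 14, 0), datetime(2024, 1, 9, 15, 30)),
    (datetime(2024, 1, 21, 2, 0), datetime(2024, 1, 21, 2, 20)),
]

def mttr(incidents):
    """Mean Time to Recovery: average (resolved - detected) duration."""
    total = sum(((end - start) for start, end in incidents), timedelta())
    return total / len(incidents)

print(mttr(incidents))  # average recovery time across the three incidents
```

Tracking this number per week or per service is what makes "improve MTTR" a measurable KPI rather than an aspiration.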

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

RESPONSIBILITIES
Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas).
Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
Proficiency in scripting languages like Python and Bash (Go or similar is a nice plus).
Experience with monitoring, logging, and alerting systems for AI/ML workloads.
Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
Familiarity with prompt engineering, model fine-tuning, and inference serving.
Experience with secure AI deployment and compliance frameworks.
Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
Ability to work with a high level of initiative, accuracy, and attention to detail.
Ability to prioritize multiple assignments effectively.
Ability to meet established deadlines.
Ability to interact successfully, efficiently, and professionally with staff and customers.
Excellent organization skills.
Critical thinking ability ranging from moderately to highly complex.
Flexibility in meeting the business needs of the customer and the company.
Ability to work creatively and independently with latitude and minimal supervision.
Ability to utilize experience and judgment in accomplishing assigned goals.
Experience navigating organizational structure.

Posted 3 days ago

Apply


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
This is your chance to change the path of your career and guide multiple teams to success at one of the world's leading financial institutions. As a Manager of Software Engineering at JPMorgan Chase within the AI/ML & Data Platform team, you lead multiple teams and manage day-to-day implementation activities by identifying and escalating issues and ensuring your team’s work adheres to compliance standards, business requirements, and tactical best practices.

Job Responsibilities
Oversee and guide a team of software engineers dedicated to developing tools for observability, data pipelines, financial operations, and orchestration.
Collaborate with customers, product managers, and Site Reliability Engineering (SRE) teams to create systems and synchronize observability goals throughout the organization.
Offer architectural guidance and set standards for observability implementations, emphasizing cost efficiency and compliance.
Work with cross-functional teams to enhance business impact and ensure customer satisfaction through outstanding software delivery.
Assess and adopt modern data storage solutions, such as data warehouses and data lakes, to facilitate advanced analytics and dashboard capabilities.
Implement best practices for data transformation, real-time processing, and modern data storage solutions to support advanced analytics.
Review and integrate new observability tools and technologies, keeping abreast of industry best practices.

Required Qualifications, Capabilities, And Skills
Formal training or certification on software engineering concepts and 5+ years applied experience.
Experience in managing people and leading technical teams.
Hands-on experience in backend development using languages such as Python, Java, Node.js, or Golang.
Experience working as an Infrastructure or Platform Software Engineer in a fast-paced technology company or startup.
Proficiency in cloud-native observability tools, including OpenTelemetry, Grafana, Prometheus, CloudWatch, Datadog, and related agents and collectors.
Experience in designing and architecting large-scale distributed systems and cloud-native architecture on AWS.
Excellent communication skills with the ability to convey and present business and technical concepts to stakeholders.

Preferred Qualifications, Capabilities, And Skills
Familiarity working in AI, ML, or Data engineering.
Familiarity with developing automation frameworks and AIOps.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within the Infrastructure Platform team, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team, sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job Responsibilities
Guides and assists others in building appropriate level designs and gaining consensus from peers where appropriate
Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers
Demonstrates strong analytical skills to diagnose and resolve complex technical issues, performing root cause analysis and implementing preventive measures
Manages incidents and coordinates response efforts
Drives initiatives for process and system improvements
Supports the adoption of site reliability engineering best practices within your team
Completes the SRE Bar Raiser Program

Required Qualifications, Capabilities, And Skills
Formal training or certification as a Site Reliability Engineer in an enterprise infrastructure environment and 3+ years applied experience
Proficient in site reliability culture and principles, and familiar with how to implement site reliability within an application or platform
Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., cloud, artificial intelligence, Android, etc.)
Experience in observability such as white-box and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
Experience with CI/CD pipelines and tools like Jenkins, GitLab CI, or CircleCI
Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
Proficiency in scripting languages like Python
Experience with cloud platforms like AWS, Google Cloud, or Azure
Understanding of infrastructure as code (IaC) using tools like Terraform or Ansible

Preferred Qualifications, Capabilities, And Skills
Strong communication skills to collaborate with cross-functional teams
Skills in planning for future growth and scalability of systems
Experience with Data Protection solutions such as Cohesity or Commvault

About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: SRE Engineer with GCP Cloud (immediate joiners only, no notice period)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience: 7+ years

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering, operations, and DevOps to come up with efficient ways of managing and operating applications. The role requires a high level of responsibility and accountability to deliver technical solutions.

Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services.

Experience Required: 6–10 years of SRE or infrastructure engineering experience in cloud-native environments.

Mandatory:
• Cloud: GCP (GKE, Load Balancing, VPN, IAM)
• Observability: Prometheus, Grafana, ELK, Datadog
• Containers & Orchestration: Kubernetes, Docker
• Incident Management: On-call, RCA, SLIs/SLOs
• IaC: Terraform, Helm
• Incident Tools: PagerDuty, OpsGenie

Nice to Have:
• GCP Monitoring, SkyWalking
• Service Mesh, API Gateway
• GCP Spanner, MongoDB (basic)

Scope:
• Drive operational excellence and platform resilience
• Reduce MTTR, increase service availability
• Own incident and RCA processes

Roles and Responsibilities:
• Define and measure Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and manage error budgets across services.
• Lead incident management for critical production issues - drive root cause analysis (RCA) and postmortems.
• Create and maintain runbooks and standard operating procedures for high-availability services.
• Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption.
• Coordinate cross-functional war-room sessions during major incidents and maintain response logs.
• Develop and improve automated system recovery, alert suppression, and escalation logic.
• Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture.
• Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems.
• Analyze performance metrics and conduct regular reliability reviews with engineering leads.
• Participate in capacity planning, failover testing, and resilience architecture reviews.
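The SLI/SLO/error-budget work described above reduces to simple arithmetic: an availability SLI is the fraction of good events, and the error budget is how much of the allowed failure rate (1 − SLO) has been consumed. A hedged sketch in Python; the request counts and the 99.9% SLO target are illustrative, not from the posting:

```python
def availability_sli(good_events: int, total_events: int) -> float:
    """SLI: fraction of requests that were successful."""
    return good_events / total_events

def error_budget_remaining(sli: float, slo: float) -> float:
    """Fraction of the error budget left, given a measured SLI and a target SLO."""
    allowed_errors = 1.0 - slo   # e.g. 0.001 for a 99.9% SLO
    actual_errors = 1.0 - sli
    return 1.0 - actual_errors / allowed_errors

sli = availability_sli(good_events=999_250, total_events=1_000_000)
print(f"SLI: {sli:.4%}")                                              # 99.9250%
print(f"Budget left: {error_budget_remaining(sli, slo=0.999):.1%}")   # 25.0%
```

A negative result from `error_budget_remaining` means the budget is exhausted, which is typically the trigger for freezing risky releases until reliability recovers.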

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the GCP/Azure Cloud Engineer, you will be responsible for designing, implementing and optimizing scalable, resilient cloud infrastructure on the Google Cloud and Azure platforms. This role involves deploying, automating and maintaining cloud-based applications, services and tools to ensure high availability, security and performance. The ideal candidate will have in-depth knowledge of GCP and Azure services and architecture best practices, along with strong experience in infrastructure automation, monitoring and troubleshooting.

We count on you for:
Design and implement secure, scalable and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform or the platforms' native templating (Google Cloud Deployment Manager, Azure ARM/Bicep), ensuring efficient, repeatable and consistent infrastructure deployments
Implement and manage security practices such as Identity and Access Management, network security and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration and troubleshooting steps, and share knowledge with team members
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new GCP/Azure services, features and best practices, providing recommendations for process and architecture improvements

Education And Experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture or infrastructure role
Proven experience with core GCP/Azure services, including compute, object storage, managed databases, serverless functions, virtual networking and IAM
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, Google Cloud Deployment Manager or Azure ARM/Bicep
Strong scripting skills in Python, Bash or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure
Knowledge of networking fundamentals and experience with GCP/Azure virtual networks, security groups, VPN and routing
Proficiency in monitoring and logging tools, either native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity expertise: understanding of cybersecurity principles, best practices and frameworks; knowledge of encryption, identity management, access controls and other security measures within cloud environments
Preferably certifications such as Google Cloud Professional DevOps Engineer, Google Cloud Professional Cloud Architect, Azure Administrator Associate or Azure Solutions Architect Expert

Skills And Behavioral Competencies
Excellent problem solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the AWS Cloud Engineer, you will be responsible for designing, implementing and optimizing scalable, resilient cloud infrastructure on the Google Cloud and AWS platforms. This role involves deploying, automating and maintaining cloud-based applications, services and tools to ensure high availability, security and performance. The ideal candidate will have in-depth knowledge of GCP and AWS services and architecture best practices, along with strong experience in infrastructure automation, monitoring and troubleshooting.

We count on you for:
Design and implement secure, scalable and highly available cloud infrastructure using AWS services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation or AWS CDK, ensuring efficient, repeatable and consistent infrastructure deployments
Implement and manage security practices such as Identity and Access Management, network security and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration and troubleshooting steps, and share knowledge with team members
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new AWS services, features and best practices, providing recommendations for process and architecture improvements

Education And Experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture or infrastructure role
Proven experience with AWS services, including EC2, S3, RDS, Lambda, VPC, IAM and CloudFormation
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation or AWS CDK
Strong scripting skills in Python, Bash or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with AWS
Knowledge of networking fundamentals and experience with AWS VPC, security groups, VPN and routing
Proficiency in monitoring and logging tools, either native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity expertise: understanding of cybersecurity principles, best practices and frameworks; knowledge of encryption, identity management, access controls and other security measures within cloud environments
Preferably certifications such as AWS Certified DevOps Engineer, AWS Certified SysOps Administrator or AWS Certified Solutions Architect

Skills And Behavioral Competencies
Excellent problem solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site


Job Summary
We are looking for an experienced DevOps Lead to join our technology team and drive the design, implementation, and optimization of our DevOps processes and infrastructure. You will lead a team of engineers to ensure smooth CI/CD workflows, scalable cloud environments, and high availability for all deployed applications. This is a hands-on leadership role requiring a strong technical foundation and a collaborative mindset.

Key Responsibilities
Lead the DevOps team and define best practices for CI/CD pipelines, release management, and infrastructure automation.
Design, implement, and maintain scalable infrastructure using tools such as Terraform, CloudFormation, or Ansible.
Manage and optimize cloud services (e.g., AWS, Azure, GCP) for cost, performance, and security.
Oversee monitoring, alerting, and logging systems (e.g., Prometheus, Grafana, ELK, Datadog).
Implement and enforce security, compliance, and governance policies in cloud environments.
Collaborate with development, QA, and product teams to ensure reliable and efficient software delivery.
Lead incident response and root cause analysis for production issues.
Evaluate new technologies and tools to improve system efficiency and reliability.

Required Qualifications
Bachelor's or master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in DevOps or SRE roles, with at least 2 years in a lead or managerial capacity.
Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI/CD).
Expertise in infrastructure as code (IaC) and configuration management.
Proficiency in scripting languages (e.g., Python, Bash).
Deep knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Experience with version control (Git), artifact repositories, and deployment strategies (blue/green, canary).
Solid understanding of networking, DNS, firewalls, and security protocols.

Preferred Qualifications
Certifications (e.g., Azure Certified DevOps Engineer, CKA/CKAD).
Experience in a regulated environment (e.g., HIPAA, PCI, SOC 2).
Exposure to observability platforms and chaos engineering practices.
Background in agile/scrum.

Skills:
Strong leadership and team-building capabilities.
Excellent problem-solving and troubleshooting skills.
Clear and effective communication, both written and verbal.
Ability to work under pressure and adapt quickly in a fast-paced environment.
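Of the deployment strategies listed above, a canary rollout is the simplest to sketch: route a small, gradually increasing fraction of traffic to the new version and watch its error rate before promoting it. An illustrative Python sketch; the backend names and weights are hypothetical, and real routing would live in a load balancer or service mesh:

```python
import random

def pick_backend(canary_weight: float) -> str:
    """Route one request: 'canary' with probability canary_weight, else 'stable'."""
    return "canary" if random.random() < canary_weight else "stable"

# A rollout gradually raises the weight (e.g. 5% -> 25% -> 50% -> 100%),
# dropping back to 0% if the canary's error rate regresses.
random.seed(42)  # seeded only to make this sketch reproducible
requests = [pick_backend(0.05) for _ in range(10_000)]
print(requests.count("canary"))  # roughly 500 of the 10,000 requests
```

Blue/green differs in that the weight jumps from 0% to 100% in one step, with the old environment kept warm for instant rollback.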

Posted 3 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


About Chargebee:
Chargebee is a leading provider of billing and monetization solutions for thousands of businesses at every stage of growth — from early-stage startups to global enterprises. With our powerful suite of multi-product solutions, Chargebee helps businesses unlock unparalleled revenue growth, experiment with new offerings and monetization models, and stay globally compliant as they scale. Chargebee counts innovative companies like Zapier, Freshworks, DeepL, Condé Nast, and Pret A Manger amongst its global customer base and is proud to have been consistently recognized by customers as a Leader in Subscription Management on G2. With headquarters in North Bethesda, Maryland, our 1000+ team members work remotely throughout the world, including in India, Europe and the US.

About the Job:
We are looking for enthusiastic individuals to join our Cloud Reliability Engineering Team, capable of contributing to our cloud initiatives and managing our global infrastructure. As a Senior Software Engineer, you will lead the design, implementation and management of cloud capabilities with a key focus on reliability, uptime and cost. If you see yourself having the following qualities across the Technical, Collaboration and Execution pillars, we would like you to be part of our team.

Technical Quality
• Lead the design and implementation of complex feature development.
• Clearly understand how changes impact existing dependent features or capabilities.
• Mentor new team members, ensuring adherence to high technical standards.
• Participate actively, with proactive suggestions, in design decisions, code reviews and SOP creation.
• Ensure coding/testing standards and internal SOPs are upheld.

Collaboration Skills
• Collaborate across other teams to achieve feature goals.
• Facilitate knowledge sharing and foster a collaborative learning environment.

Execution
• Take a lead role in project execution, ensuring high-quality and timely deliverables.
• Contribute to and motivate Software Engineers in achieving tool KPIs.

Qualifications/Skills Required
• Bachelor's degree in Computer Science or equivalent, certification in relevant technologies, or equivalent practical experience.
• Excellent problem-solving and logical-thinking capabilities.
• 2-5 years of experience working in AWS cloud across resource provisioning, scaling, maintenance and troubleshooting.
• Proven experience working with Linux, Terraform, GitHub Actions, Datadog or equivalent, and Splunk or equivalent.
• Prior experience with cloud capabilities including infra engineering, infra change management, CI/CD, release management, monitoring, support/troubleshooting and reliability best practices.
• Excellent communication skills.

We are Globally Local
With a diverse team across four continents, and customers in over 60 countries, you get to work closely with a global perspective right from your own neighborhood.

We value Curiosity
We believe the next great idea might just be around the corner. Perhaps it's that random thought you had ten minutes ago. We believe in creating an ecosystem that fosters a desire to seek out hard questions, and then figure out answers to them.

Customer! Customer! Customer!
Everything we do is driven towards enabling our customers' growth. This means no matter what you do, you will always be adding real value to a real business problem. It's a lot of responsibility, but also a lot of fun.
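One of the reliability best practices a role like this deals with daily is handling transient cloud or API failures by retrying with capped exponential backoff and jitter. A minimal, self-contained sketch; the function name, delays and attempt counts are illustrative, not from any Chargebee system:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry fn() with capped exponential backoff and full jitter.

    A common pattern for transient failures; parameters here are
    illustrative and should be tuned per service.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise                                   # out of attempts
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))        # full jitter
```

The jitter spreads retries out in time so that many clients recovering from the same outage do not hammer the service in synchronized waves.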

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Client:
Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it is a digital engineering and IT services company helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Performance Tester
Key Skills: AWS, JMeter, AppDynamics, New Relic, Splunk, Datadog
Job Locations: Chennai, Pune
Experience: 6.5-7 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
Experience, Skills and Qualifications:
• Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g. AWS)
• Working knowledge of cloud platforms like AWS and key AWS services (preferably with an AWS Solutions Architect certification), and DevOps tools like CloudFormation and Terraform
• Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
• Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus and Lighthouse
• Monitoring/logging tools (such as AppDynamics, New Relic, Splunk, Datadog)
• Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
• Docker, Kubernetes and cloud-native development; container orchestration frameworks, Kubernetes clusters, pods and nodes, vertical/horizontal pod autoscaling concepts, high availability
• Planning, estimating, designing, executing and analysing output from performance tests
• Working in an agile environment, a "DevOps" team or a similar multi-skilled team in a technically demanding function
• Jenkins and CI/CD pipelines, including pipeline scripting
• Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
• Programming and scripting language skills in Java, Shell, Scala, Groovy and Python, and knowledge of security mechanisms such as OAuth
• Tools like GitHub, Jira and Confluence
• Assisting resiliency/production support teams with performance incident root cause analysis
• Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
• High levels of judgment and decision making, able to rationalize and present the background and reasoning for direction taken
• Strong stakeholder management and excellent communication skills
• Extensive knowledge of risk management and mitigation
• Strong analytical and problem-solving skills
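Performance test analysis of the kind described above usually reports tail latencies (p95/p99) rather than averages, because averages hide the slow requests users actually notice. A minimal nearest-rank percentile sketch; the sample latencies are made up for illustration:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile over a list of samples.

    samples: e.g. response times in ms; p: percentile in (0, 100].
    Illustrative helper, not tied to any specific tool's convention.
    """
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))   # 1-based nearest rank
    return ordered[rank - 1]

# Made-up latency samples (ms) from a hypothetical test run.
latencies = [120, 85, 430, 95, 110, 240, 105, 90, 980, 130]
p50 = percentile(latencies, 50)   # typical user experience
p95 = percentile(latencies, 95)   # tail latency, the tuning target
```

Here p50 is a modest 110 ms while p95 is dominated by the 980 ms outlier, which is exactly the signal percentile reporting is meant to surface.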

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

On-site


Construction is the 2nd largest industry in the world (4x the size of SaaS!). But unlike software (with observability platforms such as AppDynamics and Datadog), construction teams lack automated feedback loops to help projects stay on schedule and on budget. Without this observability, construction wastes a whopping $3T per year because glitches aren't detected fast enough to recover. Doxel AI exists to bring computer vision to construction, so the industry can deliver what society needs to thrive. From hospitals to data centers, from foremen to VPs of construction, teams use Doxel to make better decisions every day. In fact, Doxel has contributed to the construction of the facilities that provide many of the products and services you use every day. We have LLM-driven automation, classic computer vision, deep learning ML object detection, a low-latency 3D three.js web app, and a complex data pipeline powering it all in the background. We're building out new workflows, analytics dashboards, and forecasting engines. We're at an exciting stage of scale as we build upon our growing market momentum. Our software is trusted by Shell Oil, Genentech, HCA Healthcare, Kaiser, Turner, Layton and several others. Join us in bringing AI to construction!

The Role
As a Navisworks Engineer, your mission is to revolutionize construction job sites by creating powerful tools within Navisworks that capture, process, and visualize project data for Doxel customers. You will collaborate with VDC engineers and the Product, Design, Backend, and CV/ML teams to deliver seamless, responsive, and visually compelling user experiences. Your work will directly influence how foremen, project managers, and executives make mission-critical decisions on job sites worldwide.
What You'll Do
• Develop and maintain AEC plugins using .NET (C#) and other relevant technologies to enhance construction workflows
• Design and implement robust Windows-based applications that integrate with Doxel's backend APIs and data pipelines
• Collaborate closely with Product Managers, Designers, Backend Engineers, VDC engineers and operations to deliver seamless end-to-end solutions
• Optimize Navisworks plugin performance for handling large, complex 3D models efficiently
• Ensure robust testing, monitoring, and debugging practices to maintain high software quality
• Stay up to date with Navisworks API updates and best practices for plugin development
• Mentor engineers and promote best practices in Windows and plugin development

What You'll Bring To The Team
• 3+ years of professional experience in software development with expertise in Navisworks plugin development
• Strong C#/.NET programming skills with experience in the Navisworks API and Windows application development
• Experience with Autodesk Forge, the Revit API, or BIM data integration is a plus
• Strong understanding of 3D model visualization, performance optimization, and state management
• Experience with RESTful APIs, database integrations, and modern software development practices
• Proficiency in debugging, profiling, and optimizing Windows-based applications
• Experience with CI/CD pipelines, automated testing, and deployment
• Independence and strong problem-solving and soft skills, with the ability to debug and fix issues efficiently
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field

Doxel also provides comprehensive health/dental/vision benefits for employees and their families, an unlimited PTO policy, and a flexible work environment, among other benefits. Doxel is an equal opportunity employer and actively seeks diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site


Job Role: Computer and Information Systems Managers for Workflow Annotation Specialist
Project Type: Contract-based / Freelance / Part-time – 1 Month

Job Overview:
We are seeking domain experts to participate in a Workflow Annotation Project. The role involves documenting and annotating the step-by-step workflows of key tasks within the candidate's area of expertise. The goal is to capture real-world processes in a structured format for AI training and process optimization purposes.

Domain Expertise Required:
• Plan and deliver IT projects on time and within scope
• Supervise technical and project staff
• Oversee IT infrastructure and operations
• Enforce information security policies and protocols
• Manage vendor contracts and service agreements
• Align technology strategy with overall business objectives

Tools & Technologies You May Have Worked With:
Project & task management: Jira, Microsoft Project, Smartsheet
Monitoring & analytics: Datadog, Splunk
Security tools: Nessus, Qualys
Service management: ServiceNow, Zendesk
Cloud platforms: AWS Console, Azure Portal, Google Cloud Console
Enterprise systems: SAP, Oracle ERP
Collaboration tools: Slack, Microsoft Teams

Open Source / Free Software Experience:
Project management: OpenProject, Taiga, Kanboard
Monitoring & visualization: Zabbix, Prometheus + Grafana
Security tools: OpenVAS
Version control & DevOps: GitLab Community Edition (CE)
Collaboration & support: Rocket.Chat, osTicket
ERP systems: Odoo Community Edition

Posted 3 days ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site


At AlgoSec, what you do matters! Over 2,200 of the world's leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks. Join our global team, securing application connectivity, anywhere.

AlgoSec is seeking a Site Reliability Engineer for the SRE team in India.

Reporting to: Head of SRE
Location: Gurgaon, India
Direct Employment

Responsibilities
• Ensure the reliability, scalability, and performance of our company's production environment, including a complex architecture with multiple servers, deployments and various cloud technologies.
• Collaborate with cross-functional teams, work independently, and prioritize effectively in a fast-paced environment.
• Oversee and enhance monitoring capabilities for the production environment and ensure optimal performance and functionality across the technology stack.
• Demonstrate flexibility to support our 24/7 operations and willingness to participate in on-call rotations to ensure timely incident response and resolution.
• Address and resolve unexpected service issues while also creating and implementing tools and automation to proactively mitigate the likelihood of future problems.

Requirements
• Minimum 5 years of experience in SRE/DevOps positions for SaaS-based products.
• Experience in managing mission-critical production environments.
• Experience with version control tools like Git, Bitbucket, etc.
• Experience in establishing CI/CD procedures with Jenkins.
• Working knowledge of databases.
• Experience in effectively managing AWS infrastructure, demonstrating proficiency across multiple AWS services including networking, EC2, VPC, EKS, ELB/NLB, API Gateway, Cognito, and more.
• Experience with monitoring tools like Datadog, ELK, Prometheus, Grafana, etc.
• Experience in understanding and managing Linux infrastructure.
• Experience in Bash or Python.
• Experience with IaC tools like CloudFormation, CDK, or Terraform.
• Experience in Kubernetes and container management.
• Excellent written and verbal communication skills in English, allowing for effective and articulate correspondence.
• Strong teamwork, a positive demeanor, and a high level of integrity.
• Exceptional organizational abilities, thorough attention to detail, and high commitment to tasks at hand.
• Sharp intellect, adeptness at picking up new information quickly, and strong self-motivation.

Advantages
• Additional cloud services knowledge (Azure, GCP, etc.)
• Understanding of Java, Maven, and Node.js based applications.
• Experience in serverless architecture.

AlgoSec is an Equal Opportunity Employer (EEO), committed to creating a friendly, inclusive environment that is a pleasure to work in, and where there is an unbiased acceptance of others. AlgoSec believes that diversity and an inclusive company culture are key drivers of creativity, innovation and performance. Furthermore, a diverse workforce and the maintenance of an atmosphere that welcomes versatile perspectives will enhance our ability to fulfill our vision.
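SRE practice commonly frames reliability targets like those above as an error budget: the amount of downtime an availability SLO permits over a window. The arithmetic is simple; the 99.9% SLO and 30-day window in this sketch are illustrative values, not AlgoSec's:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (minutes) for an availability SLO over a window.

    e.g. a 99.9% SLO over 30 days permits ~43.2 minutes of downtime.
    Illustrative SRE arithmetic; choose slo/window to match the service.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)   # ~43.2 minutes per 30 days
```

Teams then spend the budget deliberately (releases, maintenance) and freeze risky changes once it is exhausted, which turns "reliability, uptime and cost" trade-offs into a number rather than an argument.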

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Staff Engineer - Site Reliability Engineering, Assurant, GCC-India

This job is responsible for basic administration, support, planning, implementation and monitoring of systems and infrastructure across various platforms. This includes an understanding of standard engineering patterns in one domain and the ability to provide solution expansion of existing infrastructure platforms. This position will be based at our Hyderabad, India location.

What will be my duties and responsibilities in this job?
40% - With moderate direction, has working knowledge of the job role, demonstrating practical application of technical skills and interpersonal core competencies
30% - Provide technical support and in-depth problem analysis capabilities
20% - Manage several small to medium-scale projects and/or tasks of various complexities across the enterprise
10% - Serve as an advocate for enterprise customers
In some instances, this grade level can be designated as 80% operational support / 20% solution delivery

Digital Forensic Capabilities:
• Initiate forensic investigations into IT systems to identify the cause of failures and breaches.
• Help recover and analyze data from compromised systems using specialized forensic and monitoring tools (Azure Monitor, Dynatrace, Datadog, etc.)
• Prepare detailed reports on investigation findings, including methods used and evidence discovered.
• Stay updated on the latest trends and advancements in digital forensics and cybersecurity.
• Develop automation to diagnose potential problems and raise alerts before they occur.
• Wireshark certification (WCNA) preferred

Project Management awareness:
• Understand the basics of project management skills (i.e. planning/coordination, communications, problem solving, etc. – no certification required, but nice to have)
• Scrum practice understanding (no certification required, but nice to have)

DevOps engineer capabilities:
• Automation and scripting with moderate programming capabilities (e.g. Python, Visual Studio Code, and automation tools like Power Automate, PowerApps, etc.)
• Monitoring/logging

Disaster Recovery / Resiliency capabilities:
• Knowledge and some experience with disaster recovery practices/planning
• Business continuity understanding

Miscellaneous:
• Excellent communication and collaboration skills
• Problem solving, continuous learning, adaptability
• Other projects and assignments may be assigned to accommodate the changing needs of the department and the company

What are the requirements needed for this position?
May include skills in the following areas: infrastructure / network / server / industry monitoring and performance testing tools / service management process; working knowledge of technology methodologies (life-cycle management, Agile, ITIL, Waterfall); intermediate knowledge of Windows and Unix/Linux operating systems; technology infrastructures in distributed/cloud configurations; broad network IP, relational database and AD/LDAP directory understanding. Good proficiency in PowerShell, PowerApps, or equivalent scripting languages. 3+ years of experience in Site Reliability Engineering.

What is the preferred experience, skills, and knowledge needed for this position?
Wireshark certification (WCNA) preferred
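"Develop automation to diagnose potential problems and alerts before they occur" is often approached with rolling-window checks: alert on a sustained trend rather than a single spike. A minimal sketch; the class name, threshold and window size are illustrative, not from any Assurant tooling:

```python
from collections import deque

class TrendAlert:
    """Alert when a metric's rolling average crosses a threshold.

    A rolling average ignores a one-off spike but catches sustained
    degradation before it becomes an outage. All names and values
    here are illustrative.
    """
    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record a sample; True means the rolling mean has breached."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False    # not enough data yet to judge a trend
        return sum(self.samples) / len(self.samples) > self.threshold

cpu_alert = TrendAlert(threshold=80.0, window=3)   # e.g. CPU percent
```

With a window of 3, one 95% spike stays silent, but three consecutive high readings trip the alert, giving the on-call engineer lead time before users notice.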

Posted 3 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Skills Required
• Strong understanding of DevOps principles and practices; solid grasp of CI/CD, IaC, containerization, and orchestration
• Experience with cloud platforms and services like AWS, Azure, GCP or other cloud providers
• Proficiency in scripting languages such as Bash, Python or PowerShell for automation
• Experience with DevOps tools and technologies like Jenkins, GitHub, GitLab, Docker, OpenShift, Kubernetes, ArgoCD, Tekton, Terraform, Ansible, and monitoring/APM tools like Elastic, Dynatrace, Datadog, Grafana, Prometheus, etc.
• Strong communication and collaboration skills; problem-solving and analytical skills

Roles And Responsibilities
• Analyze the client's IT environment to achieve better efficiency using DevOps tools and methodologies in on-premise or cloud environments
• Prepare a gap analysis document and/or design a proposed DevOps automation adoption roadmap covering CI/CD from requirements to solution
• Architect the DevOps tooling infrastructure and define the interface requirements among the various toolsets integrated with various DevOps platforms
• Provide mentorship to DevOps engineers, project delivery and support in the areas of build/test/deploy lifecycle activities and Application Performance Management
• Research and evaluate emerging technologies and industry and market trends to assist in project and/or solution offerings for development/operations activities

Experience: 10+ years of overall experience in the field of IT, with a minimum of 2-3 years of experience as a DevOps Lead / Subject Matter Expert

(ref:hirist.tech)

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Software Engineer
• This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
• Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
• It's a chance to hone your existing technical skills and advance your career
• We're offering this role at associate level

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be expected to:
• Design, deploy, and manage Kubernetes clusters using Amazon EKS
• Develop and maintain Helm charts for deploying containerized applications
• Build and manage Docker images and registries for microservices
• Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation)
• Monitor and troubleshoot Kubernetes workloads and cluster health
• Support CI/CD pipelines for containerized applications
• Collaborate with development and DevOps teams to ensure seamless application delivery
• Ensure security best practices are followed in container orchestration and cloud environments
• Optimize performance and cost of cloud infrastructure

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST APIs, API Gateway, Kafka and API development.

You'll also need:
• 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch
• Strong expertise in Kubernetes architecture, networking, and resource management
• Proficiency in Docker and container lifecycle management
• Experience in writing and maintaining Helm charts for complex applications
• Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions
• Solid understanding of Linux systems, shell scripting, and networking concepts
• Experience with monitoring tools like Prometheus, Grafana, or Datadog
• Knowledge of security practices in cloud and container environments

Preferred Qualifications:
• AWS Certified Solutions Architect or AWS Certified DevOps Engineer
• Experience with service mesh technologies (e.g., Istio, Linkerd)
• Familiarity with GitOps practices and tools like ArgoCD or Flux
• Experience with logging and observability tools (e.g., ELK stack, Fluentd)
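The Kubernetes workload scaling this role manages follows the Horizontal Pod Autoscaler rule documented upstream: desired replicas = ceil(current replicas × current metric / target metric). A sketch of that formula alone; real autoscalers add tolerances and stabilization windows omitted here, and the example values are illustrative:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core Kubernetes HPA scaling rule:

        desired = ceil(current_replicas * current_metric / target_metric)

    e.g. 4 pods averaging 90% CPU against a 60% target scale to 6.
    Tolerance bands and stabilization windows are intentionally omitted.
    """
    return math.ceil(current_replicas * current_metric / target_metric)

desired = desired_replicas(4, 90.0, 60.0)   # -> 6
```

The same rule scales down: 3 pods at 40% against an 80% target yields 2, since the ratio shrinks the replica count and the ceiling rounds conservatively.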

Posted 3 days ago

Apply