3.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health (see the sketch below).
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
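The CloudWatch log-monitoring duty above is the kind of task that is usually scripted rather than done by hand. Below is a minimal, illustrative Python sketch of such a check; the log group name and query are hypothetical, and the calls are the standard boto3 CloudWatch Logs Insights API (start_query / get_query_results), not anything specific to this role.

```python
import time
import boto3

# Hypothetical log group; replace with the application's actual group.
LOG_GROUP = "/app/payments-api"

logs = boto3.client("logs")

def count_recent_errors(minutes: int = 60) -> int:
    """Run a CloudWatch Logs Insights query counting ERROR lines in the last N minutes."""
    end = int(time.time())
    start = end - minutes * 60
    query = logs.start_query(
        logGroupName=LOG_GROUP,
        startTime=start,
        endTime=end,
        queryString="filter @message like /ERROR/ | stats count() as errors",
    )
    # Insights queries are asynchronous, so poll until the query finishes.
    while True:
        result = logs.get_query_results(queryId=query["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)
    for row in result.get("results", []):
        for field in row:
            if field["field"] == "errors":
                return int(field["value"])
    return 0

if __name__ == "__main__":
    print(f"Errors in the last hour: {count_recent_errors()}")
```

A script like this would normally run on a schedule and feed an alerting or ticketing workflow rather than print to stdout.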
Technologies and Tools

Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We're Celonis, the global leader in Process Mining technology and one of the world's fastest-growing SaaS firms. We believe there is a massive opportunity to unlock productivity by placing data and intelligence at the core of business processes - and for that, we need you to join us.

The Team:
Our team is responsible for building Celonis’ end-to-end Task Mining solution. Task Mining is the technology that allows businesses to capture user interaction (desktop) data, so they can analyze how people get work done, and how they can do it even better. We own all the related components, e.g. the desktop client, the related backend services, the data processing capabilities, and Studio frontend applications.

The Role:
Celonis is looking for a Senior Software Engineer to build new features and increase the reliability of our Task Mining solution. You would contribute to the development of our Task Mining Client, so expertise in C# and the .NET Framework is required; knowledge of Java and Spring Boot is a plus.

The work you’ll do:
- Implement highly performant and scalable desktop components to improve our existing Task Mining software
- Own the implementation of end-to-end solutions: leading the design, implementation, build and delivery to customers
- Increase the maintainability, reliability and robustness of our software
- Continuously improve and automate our development processes
- Document procedures, concepts, and share knowledge within and across teams
- Manage complex requests from support, finding the right technical solution and managing the communication with stakeholders
- Occasionally work directly with customers, including getting to know their system in detail and helping them debug and improve their setup

The qualifications you need:
- 2-6 years of professional experience building .NET applications
- Passion for writing clean code that follows SOLID principles
- Hands-on experience in C# and the .NET Framework
- Experience in user interface development using WPF and MVVM
- Familiarity with Java and the Spring framework is a plus
- Familiarity with containerization technologies (e.g., Docker)
- Experience with REST APIs and/or distributed microservice architectures
- Experience with monitoring and log analysis tools (e.g., Datadog)
- Experience in writing and setting up unit and integration tests
- Experience in refactoring legacy components
- Able to supervise and coach junior colleagues
- Experience interacting with customers is a plus
- Strong communication skills

What Celonis Can Offer You:
- Pioneer Innovation: Work with the leading, award-winning process mining technology, shaping the future of business.
- Accelerate Your Growth: Benefit from clear career paths, internal mobility, a dedicated learning program, and mentorship opportunities.
- Receive Exceptional Benefits: Including generous PTO, hybrid working options, company equity (RSUs), comprehensive benefits, extensive parental leave, dedicated volunteer days, and much more.
- Prioritize Your Well-being: Access to resources such as gym subsidies, counseling, and well-being programs.
- Connect and Belong: Find community and support through dedicated inclusion and belonging programs.
- Make Meaningful Impact: Be part of a company driven by strong values that guide everything we do: Live for Customer Value, The Best Team Wins, We Own It, and Earth Is Our Future.
- Collaborate Globally: Join a dynamic, international team of talented individuals.
- Empowered Environment: Contribute your ideas in an open culture with autonomous teams.
About Us:
Celonis makes processes work for people, companies and the planet. The Celonis Process Intelligence Platform uses industry-leading process mining and AI technology and augments it with business context to give customers a living digital twin of their business operation. It’s system-agnostic and without bias, and provides everyone with a common language for understanding and improving businesses. Celonis enables its customers to continuously realize significant value across the top, bottom, and green line. Celonis is headquartered in Munich, Germany, and New York City, USA, with more than 20 offices worldwide. Get familiar with the Celonis Process Intelligence Platform by watching this video.

Celonis Inclusion Statement:
At Celonis, we believe our people make us who we are and that “The Best Team Wins”. We know that the best teams are made up of people who bring different perspectives to the table. And when everyone feels included, able to speak up and knows their voice is heard - that's when creativity and innovation happen.

Your Privacy:
Any information you submit to Celonis as part of your application will be processed in accordance with Celonis’ Accessibility and Candidate Notices. By submitting this application, you confirm that you agree to the storing and processing of your personal data by Celonis as described in our Privacy Notice for the Application and Hiring Process. Please be aware of common job offer scams, impersonators and frauds. Learn more here.
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
AJA Consulting Services LLP, founded by Phaniraj Jaligama, is committed to empowering youth and creating employment opportunities in both IT and non-IT sectors. With a focus on skill development, AJA provides exceptional resource augmentation, staffing solutions, intern pool management, and corporate campus engagements for a diverse range of clients. Through its flagship CODING TUTOR platform, AJA trains fresh graduates and IT job seekers in full-stack development, enabling them to transition seamlessly into industry roles. Based in Hyderabad, AJA operates from a state-of-the-art facility in Q City.

Role Description
We're hiring a Senior DevOps/Site Reliability Engineer with 5–6 years of hands-on experience in managing cloud infrastructure, CI/CD pipelines, and Kubernetes environments. You’ll also mentor junior engineers and lead real-time DevOps initiatives.

🔧 What You’ll Do
* Build and manage scalable, fault-tolerant infrastructure (AWS/GCP/Azure)
* Automate CI/CD with Jenkins, GitHub Actions or CircleCI
* Work with IaC tools: Terraform, Ansible, CloudFormation
* Set up observability with Prometheus, Grafana, Datadog (see the sketch below)
* Mentor engineers on best practices, tooling, and automation

✅ What You Bring
* 5–6 years in DevOps/SRE roles
* Strong scripting (Bash/Python/Go) and automation skills
* Kubernetes & Docker expertise
* Experience in production monitoring, alerting, and RCA
* Excellent communication and team mentorship skills

💡 Bonus: GitOps, Service Mesh, ELK/EFK, Vault

📩 Apply now by emailing your resume to a.malla@ajacs.in
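As a rough illustration of the Prometheus-based observability work mentioned above, here is a minimal Python sketch that flags hosts with high CPU via Prometheus's HTTP query API. The endpoint URL, metric expression, and threshold are assumptions for the example; only the standard /api/v1/query endpoint is relied on.

```python
import requests

# Hypothetical Prometheus endpoint and threshold; adjust for the real environment.
PROMETHEUS_URL = "http://prometheus.internal:9090"
CPU_QUERY = 'avg by (instance) (1 - rate(node_cpu_seconds_total{mode="idle"}[5m]))'
THRESHOLD = 0.85

def instances_over_threshold():
    """Return instances whose 5-minute average CPU usage exceeds the threshold."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": CPU_QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    hot = []
    for series in resp.json()["data"]["result"]:
        _, value = series["value"]  # instant vector sample: [unix_timestamp, "value-as-string"]
        if float(value) > THRESHOLD:
            hot.append((series["metric"].get("instance", "unknown"), float(value)))
    return hot

if __name__ == "__main__":
    for instance, usage in instances_over_threshold():
        print(f"High CPU on {instance}: {usage:.0%}")
```

In practice a check like this would usually live in an alerting rule rather than a script, but the same query API is handy for ad hoc reports and dashboards.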
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Senior DevOps Engineer to join our Life Sciences & Healthcare DevOps team. This is an exciting opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you love coding in Python or any scripting language, have experience with Linux, and ideally have worked in a cloud environment, we’d love to hear from you! We specialize in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have experience in these areas, we’d be eager to connect with you.

About You – Experience, Education, Skills, And Accomplishments
- At least 7+ years of professional software development experience and 5+ years as a DevOps Engineer or similar, with experience in various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Spinnaker, Docker, Packer, Ansible, CloudFormation, Terraform, or similar CI/CD orchestrator tools.
- At least 3+ years of AWS experience managing resources in some subset of the following services: S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue and Lambda (see the sketch below).
- 5+ years of experience with Bash/Python scripting.
- Wide knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols.
- Be on-call as needed for critical production issues.
- Good understanding of SDLC, patching, releases, and basic systems administration activities.

It would be great if you also had
- AWS Solutions Architect certifications.
- Python programming experience.

What will you be doing in this role?
- Design, develop and maintain the product's cloud infrastructure architecture, including microservices, as well as developing infrastructure-as-code and automated scripts meant for building or deploying workloads in various environments through CI/CD pipelines.
- Collaborate with the rest of the Technology engineering team, the cloud operations team and application teams to provide end-to-end infrastructure setup.
- Design and deploy secure, resilient, and scalable Infrastructure as Code per our developer requirements while upholding the InfoSec and Infrastructure guardrails through code.
- Keep up with industry best practices, trends, and standards; identify automation opportunities and design and develop automation solutions that improve operations, efficiency, security, and visibility.
- Take ownership and accountability of the performance, availability, security, and reliability of the product(s) running across public cloud and multiple regions worldwide.
- Document solutions and maintain technical specifications.

Product you will be developing
The products rely on container orchestration (AWS ECS, EKS), Jenkins, various AWS services (such as OpenSearch, S3, IAM, EC2, RDS, VPC, Route53, Lambda, CloudFront), Databricks, Datadog, and Terraform, and you will be working to support the development team that builds them.

About The Team
The Life Sciences & HealthCare Content DevOps team mainly focuses on DevOps operations on production infrastructure related to Life Sciences & HealthCare Content products. Our team consists of five members and reports to the DevOps Manager. As a team, we provide DevOps support for 40+ different application products internal to Clarivate, which are the source for customer-facing products. The team is also responsible for the change process on the production environment, incident management and monitoring, and customer-raised/internal user service requests.

Hours of Work
Shift timing 12 PM to 9 PM.
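Much of the day-to-day AWS work described in this posting amounts to small checks and automations. The sketch below is one hedged example in Python: it compares running versus desired task counts for ECS services using standard boto3 calls (list_clusters, list_services, describe_services). The cluster layout and output format are illustrative only.

```python
import boto3

ecs = boto3.client("ecs")

def report_unhealthy_services():
    """Flag ECS services whose running task count does not match the desired count."""
    for cluster_arn in ecs.list_clusters()["clusterArns"]:
        paginator = ecs.get_paginator("list_services")
        # describe_services accepts at most 10 services per call, so page in tens.
        for page in paginator.paginate(cluster=cluster_arn, PaginationConfig={"PageSize": 10}):
            if not page["serviceArns"]:
                continue
            described = ecs.describe_services(
                cluster=cluster_arn, services=page["serviceArns"]
            )
            for svc in described["services"]:
                if svc["runningCount"] != svc["desiredCount"]:
                    print(
                        f"{cluster_arn.split('/')[-1]}/{svc['serviceName']}: "
                        f"{svc['runningCount']}/{svc['desiredCount']} tasks running"
                    )

if __name__ == "__main__":
    report_unhealthy_services()
```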
Must provide on-call support during non-business hours per week based on team bandwidth.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 1 day ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
Experience: 6+ years

- Mandatory to have working experience as an SRE Lead in the Retail domain or as a Site Reliability Engineer (SRE) at a customer work location in the e-commerce domain.
- Should be able to work on rotational shifts.
- Must have experience in Production Application Support of OMS (preferably Blue Yonder).
- Must know how retail platform upstream and downstream integrations work with various tracks such as dot com, warehouse management, stores, etc.
- Must have skills in any of the automation languages like Python, shell, or Java to automate periodic OMS/SRE tasks.
- Should know how to gather SRE requirements, both technical and non-technical, from the customer.
- Must have experience interacting with Level 2 and Level 3 dev support in eCommerce platforms.
- Hands-on experience in monitoring, logging, alerting, dashboarding, and report generation in any monitoring tool (such as AppDynamics, Splunk, Dynatrace, Datadog, CloudWatch, ELK, Prometheus, or New Relic).
- Must have knowledge of the ITIL framework, specifically alerts, incident and change management, CAB, production deployments, risk and mitigation plans, SLAs, and SLIs.
- Should be able to lead P1 calls, brief the customer about the P1, and proactively bring leads/customers into the P1 calls until RCA.
- Experience working with Postman.
- Should have knowledge of building and executing SOPs and runbooks, and handling ITSM platforms (Jira/ServiceNow/BMC Remedy).
- Must know how to work with the Dev team and cross-functional teams across time zones.
- Should be able to generate WSR/MSR by extracting tickets from ITSM platforms (see the sketch below).
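Generating a WSR/MSR from ticket data, as required above, is usually automated. The following Python sketch shows one possible approach against a Jira-style ITSM backend using its standard /rest/api/2/search endpoint; the instance URL, project key, and JQL filter are hypothetical, and a ServiceNow or BMC Remedy variant would use that tool's own REST API instead.

```python
import os
import requests

# Hypothetical Jira instance and project; credentials come from the environment.
JIRA_URL = "https://example.atlassian.net"
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])
JQL = "project = OMS AND created >= -7d ORDER BY priority DESC"

def weekly_ticket_summary():
    """Pull last week's OMS tickets and tally them by priority for a weekly status report."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": JQL, "fields": "priority,status", "maxResults": 200},
        auth=AUTH,
        timeout=15,
    )
    resp.raise_for_status()
    counts = {}
    for issue in resp.json()["issues"]:
        # Some issues may have no priority set; bucket those under "None".
        priority = (issue["fields"].get("priority") or {}).get("name", "None")
        counts[priority] = counts.get(priority, 0) + 1
    return counts

if __name__ == "__main__":
    for priority, count in sorted(weekly_ticket_summary().items()):
        print(f"{priority}: {count}")
```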
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools

Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools

Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
The Senior Specialist Technical Support Engineer delivers technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization.

How will you make an impact?
- Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS.
- Interface with various R&D groups, Customer Support teams, Business Partners and Customers globally to address CSS Recording and Compliance application-related product issues and resolve high-level issues.
- Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar.
- Collaborate with development and operations teams to identify root causes and implement long-term solutions.
- Provide technical guidance and mentorship to junior support engineers.
- Act as an escalation point for critical customer issues, ensuring timely resolution and communication.
- Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency.
- Participate in on-call rotations and incident response efforts.

Have you got what it takes?
- 10+ years of experience in technical support, site reliability engineering, or performance engineering roles.
- Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation.
- Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments.
- Strong knowledge of networking, load balancing, and distributed systems.
- Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Excellent communication and customer-facing skills.

Preferred Qualifications:
- AWS certifications (e.g., Solutions Architect, DevOps Engineer).
- Experience with observability platforms (e.g., Prometheus, Grafana, Splunk).
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience working in ITIL or similar support frameworks.

What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7554
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.

NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Panasonic Avionics Corporation, a leading provider of in-flight entertainment and communication solutions, is seeking a dynamic and experienced Architect to join their esteemed team in Pune.

Experience: 10-17 years
Work Mode: Onsite - Work from Office only
Location: Pune

About the Role:
As a Principal Engineer, you will:
- Design and implement scalable, high-performance systems for digital platforms.
- Lead the development of Android applications, middleware services, and AWS cloud solutions.
- Architect low-latency networking systems and secure communication protocols for IoT/enterprise.
- Harness big data, machine learning, and edge computing to enable real-time decision-making.
- Build RESTful APIs, optimize CI/CD pipelines, and manage infrastructure using AWS CloudFormation.
- Collaborate with clients and cross-functional teams to deliver tailored, innovative solutions.

Key Skills We’re Looking For:
- 15+ years of experience in web/mobile development, middleware design, and AWS cloud.
- Expertise in C/C++, Java, Python, Kotlin, and networking protocols (TCP/IP).
- Proficiency in big data tools (Spark, Kafka), ML frameworks, and edge computing.
- Hands-on experience with CI/CD (GitLab), monitoring tools (CloudWatch, Datadog), and Agile methodologies.
- Strong leadership, communication, and client engagement skills.

Education & Preferences:
- Bachelor’s degree (required) in Computer Science or related field; Master’s preferred.
- Familiarity with multimedia streaming, IoT, or Agile/Scrum is a plus.

Interested candidates, please share your updated profile with Sam.Thilak@antal.com
Posted 1 day ago
6.0 years
0 Lacs
India
On-site
Job Title: Site Reliability Engineer

About noon
noon.com is a technology leader with a simple mission: to be the best place to buy and sell things. In doing this we hope to accelerate the digital economy of the Middle East, empowering regional talent and businesses to meet the full range of consumers' online needs. noon operates without boundaries; we are aggressively and voraciously ambitious. Starting in 2017 with noon.com, the region’s homegrown e-commerce platform and leading online shopping destination, noon is now a digital ecosystem of products and services - noon, noon Food, Noon in Minutes, NowNow, SIVVI, noon One, and noon Pay. At noon we have the courage to pursue what seems impossible, we work hard to get things done, we go to great lengths to ensure that the experience of everyone from our customers to our sellers or noon Bandidos is stellar but above all, we are grateful for the opportunities we have. If you feel the above values resonate with you – you will enjoy this incredible journey with us!

Job Description
As a Site Reliability Engineer (SRE) at noon payments, you will play a crucial role in maintaining and enhancing the reliability, availability, and performance of our cloud-based infrastructure and services. You will be responsible for automating deployments, optimizing systems, and ensuring seamless performance across our platforms. This position requires a strong foundation in cloud infrastructure management, particularly with Azure AKS and GCP GKE, alongside hands-on experience with Azure DevOps and monitoring tools like Datadog.

You will:
- Cloud Infrastructure Management: Manage and optimize cloud environments across Azure and GCP, ensuring efficient resource utilization, high system availability, and scalability (AKS/GKE).
- Infrastructure as Code: Utilize Terraform for infrastructure provisioning, ensuring consistent and scalable deployments, and managing infrastructure via Azure DevOps pipelines.
- Configuration Management: Implement and manage system configurations using Ansible to ensure consistency and streamline updates across different environments.
- Continuous Integration/Continuous Deployment (CI/CD): Develop, maintain, and optimize CI/CD pipelines within Azure DevOps to automate testing and deployment processes, reducing time from development to production.
- Monitoring and Observability: Set up and maintain comprehensive monitoring and observability solutions using Datadog to track system health and performance and proactively detect issues.
- Container Orchestration: Deploy, manage, and optimize Kubernetes clusters to support scalable and resilient application deployments (see the sketch below).
- Incident Management: Participate in a 24/7 on-call or roster-based team to respond to incidents, conduct root cause analysis, and implement solutions to minimize downtime and ensure system reliability.
- Performance Tuning: Continuously monitor system performance, identify bottlenecks, and implement optimizations to improve efficiency and response times.
- Capacity Planning: Plan and manage system capacity to ensure resources meet current and future demands, enabling seamless service delivery.
- Collaboration: Work closely with Network Operations Center (NOC) and DevOps teams to troubleshoot issues, optimize deployment processes, and drive continuous improvement.
- Documentation: Create and maintain detailed documentation for system configurations, deployment processes, and incident reports.
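Routine Kubernetes health checks like those implied by the container-orchestration responsibility above are commonly scripted. A minimal sketch with the official Kubernetes Python client is shown below; it simply lists pods that are not in a Running or Succeeded phase and assumes kubeconfig-based access, which would differ for an in-cluster job.

```python
from kubernetes import client, config

def list_unhealthy_pods():
    """Report pods that are not Running/Succeeded across all namespaces."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    list_unhealthy_pods()
```

A production version would also inspect container restart counts and readiness conditions, since a pod can report Running while its containers are crash-looping.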
Skill Requirements
- Bachelor’s degree in Computer Science, Information Technology or any other related discipline, or equivalent related experience.
- Cloud, ITIL, CKA certifications are a plus.
- 6+ years of directly related or relevant experience, preferably in information security.
- Extensive experience with cloud platforms such as Azure, GCP, and Huawei Cloud.
- Proficiency with Terraform for infrastructure automation and Ansible for configuration management.
- Hands-on experience with Kubernetes for container orchestration, mainly AKS and GKE.
- Expertise in monitoring and observability tools such as Datadog.
- Familiarity with Azure VMSS and GCP MIG for virtual machine scaling and management.
- Experience in a 24/7 on-call or roster-based team environment, focusing on system uptime and incident response.
- Strong understanding of SRE processes and best practices for system reliability, availability, and performance.
- Excellent problem-solving skills and the ability to handle complex technical issues under pressure.
- Effective communication skills and a collaborative approach to working with diverse teams.
- Experience with payment gateway projects or similar high-transaction systems is preferred.
- Additional knowledge in advanced monitoring techniques, performance tuning, and capacity planning is a plus.

Who will excel?
We’re looking for candidates who thrive in a fast-paced, dynamic start-up environment. We’re searching for problem solvers, people who operate with a bias for action and have a deep understanding of the importance of resourcefulness over reliance. Candor is our only default. Demanding unequivocal high standards should be non-negotiable because quality matters. We want people who are radically candid, cohorts who commit to settling for nothing but the best - in hiring, in accepting work from colleagues, and in your own work. Ours is not an easy mission, but it is a meaningful one. Every hire must actively raise the bar of talent in the company to help us reach our vision.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities

Role description
- Design, develop, and implement solutions using Oracle BRM 12 (or above), Java Spring Boot, and related technologies.
- Customize and extend BRM functionality through opcode development and configuration.
- Develop and maintain integrations between BRM and other systems using APIs and messaging queues.
- Troubleshoot and resolve complex issues related to BRM, Java applications, and system integrations.
- Write efficient database queries and shell scripts for automation and data analysis.
- Work with cloud technologies (e.g., AWS, GCP, Azure) to deploy and manage applications.
- Utilize and manage data in various databases (Oracle, DynamoDB, NoSQL).
- Integrate with messaging queues (Kafka, AWS SQS).
- Contribute to the design and implementation of microservices.
- Monitor application performance and identify areas for optimization.
- Participate in code reviews and provide constructive feedback.
- Collaborate effectively with other developers, testers, and business stakeholders.
- Provide support during US business hours for a few hours.

Must-Have Skills
- Oracle BRM 12 (or above): Experience with opcode customization and configuration.
- Java Development: Proficiency in Java, with hands-on experience using Spring Boot.
- Database Queries: Strong experience with SQL, PL/SQL, and shell scripting for automation and data analysis.
- Cloud Technologies: Hands-on experience with at least one of the major cloud platforms (AWS, GCP, Azure).
- Messaging Systems: Experience with systems like Kafka and AWS SQS (see the sketch below).
- Microservices: Understanding of microservice design patterns and their implementation.
- Debugging & Troubleshooting: Excellent debugging skills and problem-solving ability.
- Communication: Strong written and verbal communication skills to work with diverse teams.

Good-to-Have Skills
- Monitoring Tools: Familiarity with tools like ChaosSearch, Kibana, Grafana, Datadog.
- Containerization: Experience with Docker and Kubernetes.
- Apache Airflow: Experience with workflow orchestration.
- Additional Cloud Platforms: Knowledge of other cloud platforms beyond AWS, GCP, or Azure.

Experience Range
5+ years of hands-on experience with Oracle BRM, Java Spring Boot, and cloud technologies.

Qualifications
Education: Bachelor’s degree in Computer Science or a related field.

Skills: Oracle BRM, Java Spring Boot, GCP
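As a hedged illustration of the SQS integration work listed above, the sketch below consumes one batch of messages from a queue with standard boto3 calls. The queue URL and the print-based "processing" step are placeholders; a real BRM integration would hand each message to the billing workflow and apply the team's own error handling and retry policy.

```python
import boto3

# Hypothetical queue URL; the real billing-events queue would be configured elsewhere.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/billing-events"

sqs = boto3.client("sqs")

def drain_once(max_messages: int = 10):
    """Receive, process, and delete up to one batch of messages from the queue."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=10,  # long polling reduces empty receives
    )
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Delete only after successful processing so failed messages are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    drain_once()
```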
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
NVIDIA is searching for a highly motivated software engineer for the NVIDIA NetQ team that is building a next-gen network management and telemetry system in the cloud using modern design principles at internet scale. NVIDIA NetQ is a highly scalable, modern network operations toolset that provides visibility, troubleshooting, and validation of your Cumulus fabrics in real time. NetQ utilizes telemetry and delivers actionable insights about the health of your data center network, integrating the fabric into your DevOps ecosystem.

What you'll be doing:
- Building and maintaining infrastructure components like NoSQL DBs (Cassandra, Mongo), TSDBs, Kafka, etc.
- Maintaining CI/CD pipelines to automate the build, test, and deployment process and building improvements on the bottlenecks.
- Managing tools and enabling automation for redundant manual workflows via Jenkins, Ansible, Terraform, etc.
- Enabling scans and handling of security CVEs for infrastructure components.
- Enabling triage and handling of production issues to improve system reliability and servicing for customers.

What we need to see:
- 5+ years of experience in complex microservices-based architectures and a Bachelor's degree.
- Highly skilled in Kubernetes and Docker/containerd.
- Experienced with modern deployment architecture for non-disruptive cloud operations, including blue-green and canary rollouts.
- Automation expert with hands-on skills in frameworks like Ansible & Terraform.
- Strong knowledge of NoSQL DBs (preferably Cassandra), Kafka/Kafka Streams and Nginx.
- Expert in AWS, Azure or GCP.
- Good programming background in languages like Scala or Python.
- Knows best practices and discipline of managing a highly available and secure production infrastructure.

Ways to stand out from the crowd:
- Experience with APM tools like Dynatrace, Datadog, AppDynamics, New Relic, etc.
- Skills in Linux/Unix administration.
- Experience with Prometheus/Grafana.
- Implemented highly scalable log aggregation systems in the past using the ELK stack or similar.
- Implemented robust metrics collection and alerting infrastructure.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative, passionate and self-motivated, we want to hear from you! NVIDIA is leading the way in ground-breaking developments in Artificial Intelligence, High-Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services.

JR1998880
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for rapid and reliable product delivery
- Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible (see the sketch below)
- Monitor and maintain system performance, reliability, and scalability
- Manage cloud infrastructure (AWS, Azure, or GCP) with a focus on cost, performance, and security
- Implement and maintain logging, monitoring, and alerting solutions (e.g., Prometheus, Grafana, ELK, Datadog)
- Ensure infrastructure security best practices including secrets management, access controls, and compliance
- Collaborate with development teams to ensure DevOps best practices are followed across the lifecycle
- Troubleshoot production issues and lead root cause analysis
- Support containerization and orchestration using Docker and Kubernetes

Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 5+ years of experience in a DevOps, SRE, or similar role in a product-focused environment
- Proficient with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc.
- Strong experience with AWS, Azure, or Google Cloud
- Hands-on experience with infrastructure-as-code (Terraform, Ansible, etc.)
- Solid understanding of containerization (Docker) and orchestration (Kubernetes)
- Experience with scripting languages (Bash, Python, etc.)
- Knowledge of monitoring/logging tools like ELK, Prometheus, Grafana, or Datadog
- Strong communication and collaboration skills
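Infrastructure-as-code automation like the Terraform item above often includes a scheduled drift check. The Python sketch below wraps `terraform plan -detailed-exitcode` (a real Terraform flag whose exit code 2 means changes are pending); the working-directory handling and reporting are illustrative assumptions.

```python
import subprocess
import sys

def terraform_has_drift(workdir: str) -> bool:
    """Run `terraform plan -detailed-exitcode` and report whether changes are pending.

    Exit codes: 0 = no changes, 1 = error, 2 = changes present.
    """
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
    )
    if result.returncode == 1:
        raise RuntimeError("terraform plan failed")
    return result.returncode == 2

if __name__ == "__main__":
    drift = terraform_has_drift(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("Drift detected" if drift else "Infrastructure matches code")
```

In a CI/CD setting this kind of check is typically a pipeline stage that fails the build or opens a ticket when drift is detected.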
Posted 1 day ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Job Description
You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver Quality Engineering services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts; document problems in an issues log and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance effectiveness of QE strategies.

What You Will Do
- Be viewed as a lead across the team, engaging and energizing teams to achieve aggressive goals.
- Ensure enforcement of testing policies, standards and guidelines to drive a consistent testing framework across the business.
- Demonstrate an understanding of test methodologies, writing test plans/test strategies, creating test cases, defect reporting and debugging.
- Define test cases and create scripts based on assessment and understanding of product specifications and the test plan.
- Automate defined test cases and test suites per project and plan.
- Develop test automation using automation frameworks.
- Conduct rigorous testing to validate product functionality per the test plan and record testing results and defects in the test management tool, JIRA.
- Create defects as a result of test execution with correct severity and priority.
- Conduct functional and non-functional testing, analyzing performance metrics and identifying bottlenecks to optimize system performance.
- Collaborate with peers, Product Owners and the Test Lead to understand product functionality and specifications to create effective test cases and test automation.
- Collaborate with development teams to integrate automated tests into the CI/CD pipeline.
- Participate in security testing activities to identify and mitigate vulnerabilities.
- Maintain thorough and accurate quality reports/metrics and dashboards to ensure visibility of product quality, builds and environments.
- Ensure communications are thorough and accurate for all work documentation including status updates.
- Review all requirements/acceptance criteria to assure completeness and coverage.
- Actively participate in root cause analysis and problem-solving activities to prevent defects and improve product quality.
- Propose and implement process improvements to enhance the overall quality assurance process.
- Work with team leads to track and determine prioritization of defect fixes.

What Experience You Need
- BS or MS degree in Computer Science or Business, or equivalent job experience required.
- 4+ years of software testing and automation experience.
- Expertise and skill in programming languages like Core Java, Python or JavaScript.
- Able to create automated tests based on functional and non-functional requirements.
- Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS.
- Understanding of SQL and experience working with databases like MySQL, PostgreSQL, or Oracle.
- Good understanding of software development methodologies (preferably Agile) and testing methodologies.
- Proficiency in working with test automation frameworks created for web and API automation using Selenium, Appium, TestNG, Rest Assured, Karate, Gauge, Cucumber, Bruno.
- Experience with performance testing tools (JMeter, Gatling).
- Knowledge of security testing concepts.
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to lead and motivate teams.
- Self-starter that identifies/responds to priority shifts with minimal supervision.
- Software build management tools like Maven or Gradle.
- Testing technologies: JIRA, Confluence, Office products.
- Knowledge of a test management tool: Zephyr.

What could set you apart
- Experience with cloud-based testing environments (AWS, GCP).
- Hands-on experience working in Agile environments.
- Knowledge of API testing tools (Bruno, Swagger) and of SOAP API testing using SoapUI.
- Certification in ISTQB or similar, or Google Cloud certification.
- Experience with cutting-edge tools and technologies: familiarity with the latest tools and technologies such as AI, machine learning and cloud computing.
- Expertise with cross-device testing strategies and automation via device clouds.
- Experience monitoring and developing resources.
- Excellent coding and analytical skills.
- Experience with performance engineering and profiling (e.g. Java JVM, databases) and tools such as LoadRunner, JMeter, Gatling.
- Exposure to application performance monitoring tools like Grafana & Datadog.
- Ability to create good acceptance and integration test automation scripts and integrate with continuous integration (Jenkins) and code coverage tools (Sonar) to ensure 80% or higher code coverage.
- Experience working in a TDD/BDD environment and able to utilize technologies such as JUnit, Rest Assured, Appium, Gauge/Cucumber frameworks, APIs (REST/SOAP).
- Understanding of Continuous Delivery concepts and ability to use tools including Jenkins and vulnerability tools such as Sonar, Fortify, etc.
- Experience with LambdaTest for cross-browser testing.
- A good understanding of Git version control, including branching strategies, merging and conflict resolution.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks.

Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car.
Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 day ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role:
We are seeking an accomplished and visionary DevOps Leader to spearhead our entire DevOps function. In this pivotal role, you will be the strategic architect and technical authority, responsible for guiding the evolution and optimization of our infrastructure, operations, and deployment practices. You will lead the DevOps team, ensuring our systems are highly available, scalable, secure, fault-tolerant, and cost-efficient. This position demands a blend of deep technical expertise, exceptional leadership, and a commitment to fostering a culture of operational excellence across the engineering organization.

Key Responsibilities:

Strategic DevOps Leadership & Architecture:
- Lead the entire DevOps organization, taking full ownership of the architecture and technical leadership of the entire DevOps infrastructure and team.
- Define, communicate, and execute the long-term DevOps strategy, roadmap, and vision, aligning it directly with broader business and engineering objectives.
- Drive the adoption of cutting-edge practices in infrastructure as code, continuous integration/delivery, and site reliability engineering.

Platform Operations & Reliability Engineering:
- Deploy, manage, and operate scalable, highly available, fault-tolerant, and cost-optimized systems in a dynamic production environment.
- Set up and champion the application monitoring framework, establishing robust logging, alerting, and performance monitoring best practices and processes across all engineering teams.
- Oversee incident response, root cause analysis, and proactive measures to ensure maximum uptime and system health.

Platform Security & Compliance Management:
- Manage platform security and compliance, ensuring the entire platform consistently meets the latest security and compliance requirements.
- Proactively identify vulnerabilities and critical business risks within our infrastructure and applications.
- Collaborate strategically with engineering teams to plan and drive the timely closure of all identified security and compliance gaps.
- Implement and enforce security-first principles throughout the operational lifecycle.

Team Leadership & Development:
- Recruit, mentor, coach, and develop a high-performing team of DevOps/SRE engineers, cultivating a culture of innovation, continuous learning, and shared ownership.
- Provide clear direction, set performance expectations, and foster career growth for team members.

Technical Vendor Management & Negotiation:
- Manage critical vendor relationships (tech), including cloud providers, SaaS tools, and specialized services.
- Lead engagement with vendors during critical issues, driving accountability across defined Service Level Agreements (SLAs) to ensure optimal performance and support.

Qualifications:
- Bachelor's or Master's degree in Computer Science or Engineering.
- 12+ years of progressive experience in DevOps or Infrastructure roles, with at least 5+ years in a leadership position managing and mentoring DevOps teams.
- Proven track record of architecting, deploying, and managing highly scalable, secure, and resilient cloud-native infrastructure (specifically AWS).
- Expert-level proficiency in CI/CD methodologies and tools (e.g., Jenkins, GitLab CI, ArgoCD, Spinnaker).
- Deep expertise with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible.
- Extensive experience with containerization (Docker) and container orchestration platforms (Kubernetes).
Strong background in setting up and leveraging comprehensive observability stacks (monitoring, logging, tracing – e.g., Prometheus, Grafana, ELK Stack, Datadog). Demonstrated ability to implement and enforce robust security practices and manage compliance frameworks (e.g., ISO 27001, SOC 2). Strong experience in vendor management, including contract negotiations and driving SLA adherence. Exceptional leadership, strategic thinking, and problem-solving abilities. Excellent communication, interpersonal, and stakeholder management skills, with the ability to influence technical and non-technical audiences. Preferred Qualifications: Experience in a high-growth, fast-paced SaaS or logistics technology environment. Relevant industry certifications (e.g., AWS Certified DevOps Engineer - Professional, Certified Kubernetes Administrator). Experience with advanced networking concepts, distributed systems, and microservices architectures. Proficiency in programming/scripting languages such as Python, Go, or Java for automation and tooling development Show more Show less
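For a concrete flavor of the alerting work described in this posting, here is a minimal sketch (not part of the posting itself) that creates a CPU-utilization alarm with boto3; the alarm name, instance ID, and SNS topic ARN are hypothetical placeholders.

```python
# Minimal sketch: create a CloudWatch CPU alarm for one EC2 instance (boto3).
# The instance ID and SNS topic ARN below are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",          # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                # evaluate 5-minute averages
    EvaluationPeriods=3,                       # three consecutive breaches
    Threshold=80.0,                            # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical topic
)
```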
Posted 1 day ago
20.0 years
0 Lacs
Greater Bengaluru Area
On-site
Who is Forcepoint? Forcepoint simplifies security for global businesses and governments. Forcepoint’s all-in-one, truly cloud-native platform makes it easy to adopt Zero Trust and prevent the theft or loss of sensitive data and intellectual property no matter where people are working. 20+ years in business. 2.7k employees. 150 countries. 11k+ customers. 300+ patents. If our mission excites you, you’re in the right place; we want you to bring your own energy to help us create a safer world. All we’re missing is you!
SRE Role – Job Description
Forcepoint is seeking a Site Reliability Engineer to join our Site Reliability Engineering Team. The SRE role will focus on elevating application and service performance and availability in support of our organization’s fast-evolving enterprise technology needs. The SRE role actively targets risk to service availability for employees and customers by partnering with Engineering and Operations teams, leveraging modern observability tooling and service restoration methodologies focused on automation and infrastructure as code where possible. The ideal candidate will have a broad background spanning both applications and infrastructure, with direct experience with multiple coding languages and core SRE practices & methodologies.
Job Description:
Monitor, measure and improve the reliability, availability and scalability of IT infrastructure, applications and services
Identify manual routine operational practices and build robust automation capabilities using code and modern tools
Collaborate with Product Developers and business stakeholders to gather requirements for enabling and improving performance monitoring for applications and services
Engage in incident response and participate in post-mortem analysis to investigate root cause and capture contributing factors for remediation
Perform analytics on previous incidents and trend/usage patterns to better predict issues and take proactive actions
Design and build custom tools as needed to support process optimization, challenging the status quo and improving operational efficiency
Participate in 24x7 rotational shifts & on-call for handling production operation issues
Engage in service capacity planning and demand forecasting, software performance analysis and system tuning
Create meaningful dashboards/reports for application telemetry and infrastructure health for proactively identifying performance constraints and bottlenecks
Requirements:
University degree and 2+ years of related experience, or equivalent work experience.
Strong understanding of cloud-based architecture and cloud operations.
Hands-on experience with Amazon Web Services and/or equivalent public cloud technology
Experience in administration/build/management of Linux systems
Foundational understanding of infrastructure and platform technology stacks
Strong understanding of networking concepts and theories, such as different protocols (TCP/IP, UDP, routing protocols, etc.), VLAN configuration, DNS, OSI layers, and load balancing
Understanding of security architecture and certificate management
Working knowledge of infrastructure and application monitoring platforms such as Grafana Cloud, SolarWinds, New Relic, Datadog, etc.
Working knowledge of incident response and alerting platforms such as PagerDuty, Opsgenie, xMatters, etc.
Understanding of the core DevOps practices (CI/CD pipeline, release management, etc.)
Ability to write code using any one modern programming language (Python, JavaScript, Ruby, etc.).
Additional scripting skills are preferred Configuration management platform understanding and experience (Chef/Puppet/Ansible) Prior experience in Cloud management automation tools (Terraform/CloudFormation etc.) is crucial Experience with source code management software and API automation is crucial Cloud certifications or equivalent experience is highly regarded Service availability oriented mindset with a pro-active approach to problem solving. An ideal candidate should be able to develop automated solutions to prevent recurring problems Possesses the ability and willingness to challenge the status-quo and optimize current procedures and processes Strong sense of ownership and an ability to drive cross-functional process improvement Possesses excellent inter-personal, written and verbal communications skills Analytical and logical approach to problem-solving and a willingness to automate repetitive tasks and reduce manual/re-active workload Ability and willingness to coach and mentor Team members and colleagues Don’t meet every single qualification? Studies show people are hesitant to apply if they don’t meet all requirements listed in a job posting. Forcepoint is focused on building an inclusive and diverse workplace – so if there is something slightly different about your previous experience, but it otherwise aligns and you’re excited about this role, we encourage you to apply. You could be a great candidate for this or other roles on our team. The policy of Forcepoint is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to affirmatively seek to advance the principles of equal employment opportunity. Forcepoint is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by sending an email to recruiting@forcepoint.com. Applicants must have the right to work in the location to which you have applied. Show more Show less
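As an illustration of the availability-focused mindset this role calls for, a minimal error-budget calculation sketch follows; the SLO target and request counts are made-up example numbers, not figures from the posting.

```python
# Minimal sketch: compute remaining error budget against an availability SLO.
# The SLO target and request counts are made-up example numbers.

SLO_TARGET = 0.999          # 99.9% availability objective (assumed)

def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent (can be negative)."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    if allowed_failures == 0:
        return 0.0
    return 1 - (failed_requests / allowed_failures)

if __name__ == "__main__":
    # Example: 2.5M requests this month, 1,800 of them failed.
    remaining = error_budget_remaining(2_500_000, 1_800)
    print(f"Error budget remaining: {remaining:.1%}")
```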
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Lead DevOps Engineer Location: Hyderabad, Ahmedabad Early Joiners Preferred.
Responsibilities:
Lead the design, implementation, and management of an enterprise container orchestration platform using Rafey and Kubernetes. Oversee the onboarding and deployment of applications on Rafey platforms utilizing AWS EKS and Azure AKS. Develop and maintain CI/CD pipelines to ensure efficient and reliable application deployment using Azure DevOps. Collaborate with cross-functional teams to ensure seamless integration and operation of containerized applications. Implement and manage infrastructure as code using tools such as Terraform. Ensure the security, reliability, and scalability of containerized applications and infrastructure. Mentor and guide junior DevOps engineers, fostering a culture of continuous improvement and innovation. Monitor and optimize system performance, troubleshooting issues as they arise. Stay up-to-date with industry trends and best practices, incorporating them into the team's workflows.
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 10+ years of experience in DevOps, with a focus on container orchestration platforms. Extensive hands-on experience with Kubernetes, EKS, and AKS. Good to have: knowledge of the Rafey platform (a Kubernetes management platform). Proven track record of onboarding and deploying applications on Kubernetes platforms including AWS EKS and Azure AKS. Hands-on experience writing Kubernetes manifest files. Strong knowledge of Kubernetes Ingress and Ingress Controllers. Strong knowledge of Azure DevOps CI/CD pipelines and automation tools. Proficiency in infrastructure as code tools (e.g., Terraform). Excellent problem-solving skills and the ability to troubleshoot complex issues. Knowledge of secret management and RBAC configuration. Hands-on experience with Helm charts. Strong communication and collaboration skills. Experience with cloud platforms (AWS, Azure) and container orchestration. Knowledge of security best practices in a DevOps environment.
Preferred Skills:
Strong cloud knowledge (AWS & Azure). Strong Kubernetes knowledge. Experience with other enterprise container orchestration platforms and tools. Familiarity with monitoring and logging tools (e.g., Datadog). Understanding of network topology and system architecture. Ability to work in a fast-paced, dynamic environment.
Good to Have:
Rafey platform knowledge (a K8s management platform). Hands-on experience with GitOps technology.
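To make the Kubernetes-facing expectations more tangible, here is a minimal sketch (using the official Python client rather than raw manifests) that flags deployments whose ready replica count lags the desired count; access to the cluster via a local kubeconfig is assumed.

```python
# Minimal sketch: list deployments whose ready replicas lag the desired count.
# Assumes a local kubeconfig with access to the target cluster (EKS, AKS, etc.).
from kubernetes import client, config

def find_lagging_deployments():
    config.load_kube_config()                      # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    lagging = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            lagging.append((dep.metadata.namespace, dep.metadata.name, ready, desired))
    return lagging

if __name__ == "__main__":
    for ns, name, ready, desired in find_lagging_deployments():
        print(f"{ns}/{name}: {ready}/{desired} replicas ready")
```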
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Application Operations Engineer Job Description Years of experience: 3+ years Shift Time: 11 AM to 8 PM or 12 PM to 9 PM Employment Type: Full Time Work Model: Hybrid
Benefits and Perks: Cutting-edge technology. Medical insurance (which covers spouse, up to two children, and parents or parents-in-law). Continuing Education Program Policy. Annual EBITDA bonus. Paid maternity leave. Paid parental leave.
About the Position: An Application Operations Engineer is responsible for ensuring the smooth operation and performance of software applications, primarily in a production environment. This role combines development and operations skills to deploy software, operate and monitor systems, support applications, monitor their performance, and handle incident management.
Responsibilities: Deployment of applications to different environments (Quality Assurance, Alpha, Release Candidate, Production). Execution of automated smoke tests. System monitoring and response times to meet SLAs. Monitoring and acting on system monitoring alerts. Development, testing, documentation and communication of system monitors. Working closely with developers to ensure that new features and applications meet operational standards.
Qualifications and Skills Required: 3+ years of experience as an Application Operations Engineer or similar role. Bachelor's degree in Computer Science, IT Operations or a related field (may be substituted for relevant work experience). Understanding of application deployment strategies including CI/CD using pipelines. Knowledge of backend, frontend and API development. Experience with programming languages, preferably TypeScript. Knowledge of application hosting services (e.g., Kubernetes, NGINX, Azure App Services). Understanding of SQL databases and database deployment strategies (SQL Azure). Familiarity with version control systems (e.g., Git). Knowledge of monitoring tools (Kibana dashboards, Site24x7, Datadog or Azure Monitor). Strong troubleshooting skills and the ability to quickly identify the root cause of application issues. Good communication skills for reporting incidents, collaborating on optimizations, and presenting to the team. Familiarity with application lifecycle management and operational practices. Experience with JIRA. Interested candidates can share their resume at harsha.palan@transimpact.com
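For illustration, a minimal post-deployment smoke-test sketch of the kind this posting mentions; the base URL and endpoint paths are hypothetical placeholders, not the employer's actual services.

```python
# Minimal sketch: post-deployment smoke test hitting a few read-only endpoints.
# The base URL and endpoint paths are hypothetical placeholders.
import sys
import requests

BASE_URL = "https://qa.example.internal"          # assumed environment URL
ENDPOINTS = ["/health", "/api/version", "/api/status"]

def run_smoke_tests() -> bool:
    ok = True
    for path in ENDPOINTS:
        try:
            resp = requests.get(BASE_URL + path, timeout=10)
            passed = resp.status_code == 200
        except requests.RequestException:
            passed = False
        print(f"{'PASS' if passed else 'FAIL'}  {path}")
        ok = ok and passed
    return ok

if __name__ == "__main__":
    sys.exit(0 if run_smoke_tests() else 1)       # non-zero exit fails the pipeline stage
```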
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AWS Cloud Engineer Experience Required: 3–5 Years Location: On-Site (Gurugram) Job Type: Full-Time Job Summary: We are looking for a skilled and proactive AWS Cloud Engineer with 3–5 years of hands-on experience in cloud computing to join our IT team. The successful candidate will be responsible for designing, implementing, and maintaining cloud infrastructure on Amazon Web Services (AWS) to support our scalable and secure digital platforms. Key Responsibilities: Design, deploy, and manage cloud infrastructure using AWS services such as EC2, S3, RDS, Lambda, CloudFormation, VPC, and more. Implement and maintain Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or AWS CDK. Monitor and improve cloud system performance, reliability, and cost efficiency using tools such as AWS CloudWatch, CloudTrail, and Trusted Advisor. Set up and manage CI/CD pipelines using Jenkins, AWS CodePipeline, or GitLab CI/CD. Ensure cloud infrastructure is secure and compliant with security standards through proper configuration of IAM roles, policies, security groups, and encryption. Collaborate with development, DevOps, and security teams to optimize cloud architecture and support application deployments. Automate operational tasks to improve system performance and reduce manual interventions. Troubleshoot cloud-based issues and provide 2nd/3rd level support for AWS environments. Keep up to date with AWS service updates and recommend new solutions where appropriate. Required Skills: Strong experience with AWS core services (EC2, S3, RDS, IAM, VPC, etc.). Proficiency in scripting and automation (Python, Bash, or PowerShell). Experience with Docker and container orchestration (ECS, EKS, or Kubernetes). Good understanding of networking concepts like VPNs, subnets, routing, and firewalls in AWS. Familiarity with version control tools (Git, GitHub, Bitbucket). Strong problem-solving and troubleshooting abilities. Experience working in Agile/Scrum development environments. Preferred Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. AWS Certification (e.g., Solutions Architect Associate, SysOps Administrator, or DevOps Engineer). Experience with monitoring tools (Datadog, Prometheus, ELK Stack, or similar). Exposure to hybrid or multi-cloud environments. Knowledge of serverless computing (AWS Lambda, API Gateway, DynamoDB). Soft Skills: Excellent communication and collaboration skills. Strong analytical thinking and attention to detail. Ability to manage tasks independently and drive initiatives forward. A passion for learning and staying current with cloud technologies. Show more Show less
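As a small example of the scripting and automation side of this role, a sketch that reports running EC2 instances missing a required tag; the "Owner" tag key and region are assumed conventions, not requirements of the posting.

```python
# Minimal sketch: report running EC2 instances that are missing a required tag.
# The "Owner" tag key and the region are assumed conventions.
import boto3

REQUIRED_TAG = "Owner"

def untagged_instances(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing the tag:", untagged_instances())
```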
Posted 1 day ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: Monitoring Tools - Datadog Job Location: Chennai/Bangalore/Hyderabad Exp Range: 7 to 10 years
Desired Competencies (Technical/Behavioral Competency): Administrators of monitoring applications, notification platforms and integrated systems. Governors of best practices, maintenance, support and upgrades to the monitoring systems, platforms, and applications. Provide technical expertise and assistance to all Application, Infrastructure, Security & Network support teams to enable swift monitoring solutions to safeguard the environment. Should be able to upskill & deliver with all technologies in the fast-evolving & ever-changing IT industry in and around the monitoring space (Azure, AWS monitors, etc.). Should possess strong OS knowledge (Linux, UNIX & Windows). Should possess moderate knowledge of middleware & MQ-related technologies.
Posted 1 day ago
4.0 years
0 Lacs
India
Remote
Job Title: DevOps Engineer Exp: 4+ Years Job Type: Remote Shift Timings: 4PM – 12PM
Skills required:
• Experience in infrastructure, DevOps, or SRE roles with increasing responsibility
• Experience with Kubernetes, Terraform, containerization, and at least one major cloud provider (AWS preferred)
• Strong knowledge of system design, networking, and reliability principles
• Experience with observability tools (e.g., Prometheus, Grafana, Datadog) and incident response practices
• Web application building with databases, APIs, Kubernetes, authentication
Nice to have:
• Experience supporting data pipelines, ML workloads, or complex orchestration systems
• Familiarity with the data/ML tooling ecosystem (e.g., Airflow, dbt, Spark, Dremio, etc.)
• Previous experience in a startup or high-growth environment
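To illustrate the observability tooling listed above, a minimal Prometheus exporter sketch using the prometheus_client library; the metric names and port are arbitrary examples, not part of the posting.

```python
# Minimal sketch: expose custom application metrics for Prometheus to scrape.
# Metric names and the port are arbitrary examples.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

JOBS_PROCESSED = Counter("jobs_processed_total", "Jobs processed by the worker")
QUEUE_DEPTH = Gauge("job_queue_depth", "Current number of queued jobs")

if __name__ == "__main__":
    start_http_server(8000)                      # metrics served at :8000/metrics
    while True:
        JOBS_PROCESSED.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))   # stand-in for a real queue lookup
        time.sleep(5)
```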
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: The next evolution of AI powered cyber defense is here. With the rise of cloud and modern technologies, organizations struggle with the vast amount of data and thereby security alerts generated by their existing security tools. Cyberattacks continue to get more sophisticated and harder to detect in the sea of alerts and false positives. According to the Forrester 2023 Enterprise Breach Benchmark Report, a security breach costs organizations an average of $3M and takes organizations over 200 days to investigate and respond. AiStrike’s platform aims at reducing the time to investigate and respond to threats by over 90%. Our approach is to leverage the power of AI and machine learning to adopt an attacker mindset to prioritize and automate cyberthreat investigation and response. The platform reduces alerts by 100:5 and provides detailed context and link analysis capabilities to investigate the alert. The platform also provides collaborative workflow and no code automation to cut down the time to respond to threats significantly. If you have the desire to join the next evolution of cyber defense, are willing to work hard and learn fast, and be part of building something special, this is the company for you. We are seeking a highly skilled and experienced hands-on Principal Software Engineer with over 10+ years of proven expertise in the field. As a Principal Architect, you will play a crucial role in leading the architecture, designing, and implementing scalable cloud solutions for our Cloud-native SaaS products. The ideal candidate will have significant experience and a strong background in object-oriented design and coding skills, with hands-on experience in Java and Python. Roles and Responsibilities: Manage overarching product/platform architecture, and technology selection and make sure that the design and development of all projects follow the architectural vision Design and architect scalable cloud solutions for Cloud-native SaaS development projects in line with the latest technology and practices Successfully communicate, evangelize, and implement the architectural vision across teams and products Design and coordinate projects of significant size and complexity Work with containerization technologies and orchestration software such as Kubernetes on cloud platforms like AWS and Azure. Develop and implement Microservices-based architecture using Java, SpringBoot, ReactJS, NextJS, and other relevant technologies. Implement secure design principles and practices, ensuring the integrity and confidentiality of our systems. Collaborate with cross-geography cross-functional teams to define and refine requirements and specifications. Deploy workloads at scale in AWS EKS/ECS environments and others as needed Create automation and use monitoring tools to efficiently build, deploy and support cloud implementations. Implement DevOps methodologies and tools for continuous integration and delivery. Utilize APM and Monitoring Tools like ELK, Splunk, Datadog, Dynatrace, and Appdynamics for cloud-scale monitoring. Work with potential customers to understand their environment. Provide technical leadership, architecture guidance, and mentorship to the teams. Have a clear focus on scale, cost, security, and maintainability. Stay updated on industry best practices, emerging technologies, and cybersecurity trends. Skills and Qualifications: 10+ years of overall experience in software development and architecture. 
In-depth knowledge and experience in cloud-native SaaS development and architecture. Proficient in Java, Python, RESTful APIs, API Gateway, Kafka, and microservices communications. Experience with RDBMS and NoSQL databases (e.g., Neo4j, MongoDB, Redis). Experience working with graph databases such as Neo4j. Expertise in containerization technologies (Docker) and Kubernetes. Hands-on experience with secure DevOps practices. Familiarity with Multi-Factor Authentication and Single Sign-On principles. Excellent verbal and written communication skills. Self-starter with strong organizational and problem-solving skills. Prior experience in deploying workloads at scale in AWS EKS/ECS/Fargate. Knowledge of cloud-scale APM and monitoring tools (ELK, Splunk, Datadog, etc.). Previous experience in cybersecurity products is desirable but not mandatory.
Preferred: AWS Certified Solutions Architect – Professional or similar certification, including certifications on other cloud platforms. Commitment, teamwork, integrity and customer focus.
AiStrike is committed to providing equal employment opportunities. All qualified applicants and employees will be considered for employment and advancement without regard to race, color, religion, creed, national origin, ancestry, sex, gender, gender identity, gender expression, physical or mental disability, age, genetic information, sexual or affectional orientation, marital status, status regarding public assistance, familial status, military or veteran status or any other status protected by applicable law.
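For a flavor of the Python microservice work this posting describes, a minimal FastAPI sketch with a health check and an alert-ingest endpoint; the route names and fields are illustrative assumptions, not AiStrike's actual API.

```python
# Minimal sketch: a small FastAPI microservice with a health check and an
# alert-ingest endpoint. Routes and fields are illustrative, not a real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="alert-ingest-example")

class Alert(BaseModel):
    source: str          # e.g. "cloudtrail", "guardduty" (assumed values)
    severity: str
    message: str

@app.get("/healthz")
def healthz():
    return {"status": "ok"}

@app.post("/alerts")
def ingest_alert(alert: Alert):
    # In a real service this would enqueue the alert (e.g. to Kafka) for triage.
    return {"accepted": True, "source": alert.source, "severity": alert.severity}

# Run locally with: uvicorn alert_service:app --reload   (module name assumed)
```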
Posted 1 day ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Construction is the 2nd largest industry in the world (4x the size of SaaS!). But unlike software (with observability platforms such as AppDynamics and Datadog), construction teams lack automated feedback loops to help projects stay on schedule and on budget. Without this observability, construction wastes a whopping $3T per year because glitches aren’t detected fast enough to recover. Doxel AI exists to bring computer vision to construction, so the industry can deliver what society needs to thrive. From hospitals to data centers, from foremen to VPs of construction, teams use Doxel to make better decisions every day. In fact, Doxel has contributed to the construction of the facilities that provide many of the products and services you use every day. We have classic computer vision, deep learning ML object detection, a low-latency 3D three.js web app, and a complex data pipeline powering it all in the background. We’re building out new workflows, analytics dashboards, and forecasting engines. We’re at an exciting stage of scale as we build upon our growing market momentum. Our software is trusted by Shell Oil, Genentech, HCA Healthcare, Kaiser, Turner, Layton and several others. Join us in bringing AI to construction!
The Role: As a Backend Engineer, your mission is to architect and build the resilient and scalable systems that power the intelligence behind Doxel’s AI-driven construction platform. You'll tackle complex infrastructure and data engineering challenges, shaping how terabytes of real-time jobsite data are processed, stored, and served. Your work will enable smarter decision-making for some of the world’s largest construction projects. You’ll collaborate closely with our product, frontend, computer vision, and 3D teams to design seamless end-to-end solutions, from field data capture to actionable insights. If you love solving real-world problems at scale and thrive in a fast-paced, mission-driven team, this role is for you.
Who You Are: You’re a backend engineer with deep curiosity and a strong grasp of building distributed systems. You care about clean, efficient code, resilient infrastructure, and scalable architectures. You’re hungry to solve tough technical problems and improve systems that directly impact the real world, especially in industries like construction where every decision counts. Bonus: you’ve worked on data-heavy platforms or have experience integrating machine learning/computer vision pipelines into production.
What You’ll Do: Design, build, and scale robust backend systems and pipelines for data ingestion, processing, and analytics. Develop high-performance APIs that power real-time 3D visualizations, dashboards, and mobile tools. Build and optimize data pipelines that support models and business logic at scale. Collaborate across CV/ML, frontend, design, and product teams to deliver end-to-end features. Ensure system reliability, observability, monitoring, and graceful degradation for mission-critical tools. Leverage AI tools, cloud infrastructure, containerization, and CI/CD best practices; perform thoughtful code reviews; and drive backend engineering best practices.
What You Bring to Doxel: 4+ years of professional experience as a backend or systems engineer. Strong proficiency in modern backend languages like Python, Go, or Node.js. A test-driven development approach: you are someone who believes testing is not an afterthought; it's the foundation.
You write tests before code, care deeply about code quality, and value software you can trust. Experience designing and maintaining scalable APIs and microservices. Experience working with computer vision algorithms at scale is a plus. Proven track record of building and maintaining data pipelines, ideally with real-world complexity. Deep knowledge of distributed systems, asynchronous processing, and message queues. Experience working with cloud platforms (AWS/GCP/Azure) and containerized environments (Docker, Kubernetes). Comfortable with CI/CD, monitoring, testing, and automation. Excellent debugging, profiling, and system design skills. Great communicator and collaborator; able to break down complex problems clearly. Bonus: experience working with computer vision models, 3D data, or unstructured media pipelines.
Perks & Benefits: Comprehensive health, dental, and vision insurance for you and your family. Unlimited PTO + flexible work environment. Generous parental leave. Opportunity to work at the cutting edge of AI and real-world impact. A culture that values autonomy, ownership, and meaningful engineering.
Doxel is an equal opportunity employer and actively seeks diversity across our team. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
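In the test-first spirit described above, a minimal pytest sketch: the test is written against a small, hypothetical progress-calculation helper, which is then implemented just enough to make it pass.

```python
# Minimal sketch of a test-first workflow: the tests below are written first,
# then percent_complete() is implemented just enough to make them pass.
# The function and its semantics are hypothetical examples.
import pytest

def percent_complete(installed: int, planned: int) -> float:
    """Return installed/planned as a percentage, guarding against bad input."""
    if planned <= 0:
        raise ValueError("planned quantity must be positive")
    return 100.0 * installed / planned

def test_percent_complete_happy_path():
    assert percent_complete(25, 100) == pytest.approx(25.0)

def test_percent_complete_rejects_zero_plan():
    with pytest.raises(ValueError):
        percent_complete(10, 0)
```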
Posted 1 day ago
0 years
0 Lacs
India
Remote
Job Title: AWS Lead Engineer Location: Remote Employment Type: Full-time
About the Role: We are seeking an AWS DevOps Engineer to design, deploy, and optimize a real-time data streaming platform on AWS. You will work with cutting-edge cloud technologies, ensuring scalability, security, and high performance using Kubernetes, Terraform, CI/CD, and monitoring tools.
Key Responsibilities:
✔ Design & maintain AWS-based streaming solutions (Lambda, S3, RDS, VPC)
✔ Manage Kubernetes (EKS) – Helm, ArgoCD, IRSA
✔ Implement Infrastructure as Code (Terraform)
✔ Automate CI/CD pipelines (GitHub Actions)
✔ Monitor & troubleshoot using Datadog/Splunk
✔ Ensure security best practices (Snyk, SonarCloud)
✔ Collaborate with teams to integrate data products
Must-Have Skills:
🔹 AWS (IAM, Lambda, S3, VPC, CloudWatch)
🔹 Kubernetes (EKS) & Helm/ArgoCD
🔹 Terraform (IaC)
🔹 CI/CD (GitHub Actions)
🔹 Datadog/Splunk monitoring
🔹 Docker & Python/Go scripting
Nice-to-Have:
🔸 AWS Certifications (DevOps/Solutions Architect)
🔸 Splunk/SDLC experience
Why Join Us? Work with modern cloud & DevOps tools. Collaborative & innovative team. Growth opportunities in AWS & DevOps.
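As a small illustration of the serverless side of this stack, a minimal AWS Lambda handler sketch that logs incoming S3 object-created events; it follows the standard S3 event shape, and the log fields are arbitrary choices rather than the employer's pipeline.

```python
# Minimal sketch: Lambda handler that logs S3 object-created events.
# Follows the standard S3 event structure; what you do with each object
# (validate, transform, forward to a stream) is left out.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        logger.info(json.dumps({"bucket": bucket, "key": key, "size": size}))
        processed += 1
    return {"processed": processed}
```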
Posted 1 day ago
10.0 years
0 Lacs
India
Remote
Job Title: Senior Backend Engineer – Python & Microservices Location: Remote Experience Required: 8–10+ years 🚀 About the Role: We’re looking for a Senior Backend Engineer (Python & Microservices) to join a high-impact engineering team focused on building scalable internal tools and enterprise SaaS platforms. You'll play a key role in designing cloud-native services, leading microservices architecture, and collaborating closely with cross-functional teams in a fully remote environment. 🔧 Responsibilities: Design and build scalable microservices using Python (Flask, FastAPI, Django) Develop production-grade RESTful APIs and background job systems Architect modular systems and drive microservice decomposition Manage SQL & NoSQL data models (PostgreSQL, MongoDB, DynamoDB, ClickHouse) Implement distributed data pipelines using Kafka, RabbitMQ, and SQS Apply best practices in rate limiting, security, performance optimisation, logging, and observability (Grafana, Datadog, CloudWatch) Deploy services in cloud environments (AWS preferred, Azure/GCP acceptable) using Docker, Kubernetes, and EKS Contribute to CI/CD and Infrastructure as Code (Jenkins, Terraform, GitHub Actions) ✅ Requirements: 8–10+ years of hands-on backend development experience Strong proficiency in Python (Flask, FastAPI, Django, etc.) Solid experience with microservices and containerised environments (Docker, Kubernetes, EKS) Expertise in REST API design, rate limiting, and performance tuning Familiarity with SQL & NoSQL (PostgreSQL, MongoDB, DynamoDB, ClickHouse) Experience with cloud platforms (AWS preferred; Azure/GCP also considered) CI/CD and IaC knowledge (GitHub Actions, Jenkins, Terraform) Exposure to distributed systems and event-based architectures (Kafka, SQS) Excellent written and verbal communication skills 🎯 Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science or a related field Certifications in Cloud Architecture or System Design Experience integrating with tools like Zendesk, Openfire, or similar chat/ticketing platforms Show more Show less
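To illustrate the event-driven side of this stack, a minimal kafka-python consumer sketch; the topic name, broker address, and consumer group are assumed placeholders, not details from the posting.

```python
# Minimal sketch: consume JSON events from a Kafka topic with kafka-python.
# Topic name, broker address, and group id are assumed placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ticket-events",                                 # assumed topic
    bootstrap_servers=["localhost:9092"],            # assumed broker
    group_id="backend-worker-example",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real worker this would update a database row or trigger a background job.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```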
Posted 1 day ago