
18017 Terraform Jobs - Page 48

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Cloud Network Developer, your main responsibility will be to ensure that automated and consumable solutions delivered through IaC constructs can support a stable, secure, resilient, and agile GCP network environment. You will collaborate with your Cloud Network Delivery peer group to provide operational support, contribute to technical design, and implement best-of-breed technical standards in an evolving manner. This role is crucial in the continued growth and technical advancement of our cloud environments, requiring agility to handle context switching during project delivery and operational support response. With a strong focus on customer satisfaction, high-quality engineering, and a curiosity-driven approach, this role plays a critical part in shaping our technical future.

Key Qualifications:
- Bachelor's Degree or equivalent practical experience.
- 7 years of experience in software engineering or developer roles.
- 5 years of experience in delivering GCP infrastructure as code.
- Expert-level understanding of programming languages like Python or Go.
- Advanced understanding of Terraform, including custom module creation.
- Advanced understanding of GCP Cloud Run.
- Proficiency in Git for version control in IaC, documentation, and policy applications.
- Experience with GCP Network constructs.
- Knowledge of traditional networking theory and foundational concepts.
- Familiarity with Jira and ServiceNow.
- Ability to provide baseline health monitoring to facilitate error budgeting and target specific SLI and SLO measurements.
- Self-starting capabilities and the ability to thrive in an autonomous environment.

Responsibilities:
- Design and support modular IaC solutions focusing on consumable and repeatable architectures in the GCP network space.
- Collaborate with various members of the Cloud Network Delivery team, including engineering and product management, to support ongoing network offerings and project-based deliverables.
- Participate in an on-call rotation and serve as an escalation point when necessary to address production environment needs.
- Implement projects using common automation and state-based constructs such as Terraform and Cloud Build.
- Clearly communicate and articulate how technical components can support different environments within our cloud platform to peer groups, partner teams, and internal stakeholders.
- Engage with vendor teams to influence and adopt future product offerings within the GCP network space, emphasizing automation opportunities to minimize manual operations.
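The error budgeting against SLO targets mentioned in this posting can be illustrated with a short calculation; the 99.9% availability SLO and request counts below are invented for the example, not taken from the listing:

```python
# Hypothetical error-budget math for an availability SLI against a 99.9% SLO.
# All numbers are illustrative; real SLIs/SLOs come from your monitoring stack.

def error_budget_remaining(slo: float, good_events: int, total_events: int) -> float:
    """Fraction of the error budget still unspent (negative once the SLO is violated)."""
    allowed_failures = (1.0 - slo) * total_events   # bad events the SLO budgets for
    actual_failures = total_events - good_events    # bad events actually observed
    if allowed_failures == 0:
        return 1.0 if actual_failures == 0 else float("-inf")
    return 1.0 - actual_failures / allowed_failures

# Example: 10,000,000 requests in the window at a 99.9% SLO budgets 10,000 failures;
# 4,000 observed failures leaves 60% of the budget.
remaining = error_budget_remaining(slo=0.999, good_events=9_996_000, total_events=10_000_000)
print(f"{remaining:.0%} of the error budget remains")  # 60% of the error budget remains
```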

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior DevOps Engineer at CLOUDSUFI, a Google Cloud Premier Partner and Data Science organization, you will play a crucial role in designing, implementing, and maintaining robust infrastructure and deployment pipelines. Your expertise in modern DevOps tools and practices, including CI/CD pipelines, infrastructure as code, and cloud-native environments, will be instrumental in supporting our development and operations teams. With a focus on leveraging the power of data to transform businesses, we are looking for a passionate individual who values human relationships and aims to elevate the quality of lives for our family, customers, partners, and the community.

Key Responsibilities:
- Design, implement, and maintain scalable infrastructure and deployment pipelines.
- Utilize your advanced expertise in Terraform for infrastructure as code.
- Manage Kubernetes applications effectively using Helm.
- Collaborate with the team using GitHub for version control and workflows.
- Ensure efficient container orchestration and management with Kubernetes.
- Leverage your in-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Apply strong scripting and automation skills (e.g., Python, Bash) to streamline processes.
- Demonstrate problem-solving skills, attention to detail, and effective communication in agile environments.

Mandatory Skills: GCP, DevOps, Terraform, Kubernetes, Docker, CI/CD, GitHub Actions, Helm Charts

Required Experience:
- 4+ years in DevOps, infrastructure automation, or related fields.
- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).
- Proficiency in additional CI/CD tools like Jenkins or GitLab CI/CD.
- Knowledge of cloud platforms such as AWS or Azure.

If you are a highly skilled Senior DevOps Engineer with a strong background in DevOps practices and a passion for leveraging data to drive business transformation, we encourage you to apply for this exciting opportunity at CLOUDSUFI.
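Much of the Terraform pipeline work described in postings like this involves inspecting planned changes before applying them. The sketch below summarizes resource actions from Terraform's machine-readable plan format (the output of `terraform show -json plan.tfplan`); the embedded JSON is a hand-written stand-in for a real plan, with only the fields the code reads:

```python
import json
from collections import Counter

# Minimal sketch: tally planned actions from Terraform's JSON plan representation.
# The sample below is a hand-written stand-in for `terraform show -json` output;
# only the "resource_changes" fields we actually read are included.
sample_plan = json.dumps({
    "resource_changes": [
        {"address": "google_compute_network.vpc", "change": {"actions": ["create"]}},
        {"address": "google_compute_subnetwork.web", "change": {"actions": ["update"]}},
        {"address": "google_compute_firewall.old", "change": {"actions": ["delete"]}},
        {"address": "google_dns_record_set.app", "change": {"actions": ["no-op"]}},
    ]
})

def summarize_plan(plan_json: str) -> Counter:
    """Count planned actions (create/update/delete/no-op) across resource changes."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            counts[action] += 1
    return counts

print(summarize_plan(sample_plan))
```

A check like this is often wired into CI as a gate, e.g. failing the pipeline when any "delete" action appears in a production plan.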

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Cloud Platform Engineer
Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, and creates a proof of architecture to test architecture viability, security and performance.
Must have skills: Data Modeling Techniques and Methodologies
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Cloud Platform Engineer, you will engage in the design, construction, testing, and deployment of cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructures. Your typical day will involve collaborating with cross-functional teams to ensure the architecture's viability, security, and performance, while also creating proofs of concept to validate your designs. You will be responsible for deploying infrastructure and platform environments, ensuring that all components work harmoniously to meet organizational goals and client needs. Your role will require a proactive approach to problem-solving and a commitment to delivering high-quality solutions in a dynamic environment.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with team members to identify and address potential challenges in cloud application deployment.
- Develop and maintain documentation related to cloud architecture and deployment processes.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Modeling Techniques and Methodologies.
- Good To Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform.
- Strong understanding of cloud architecture principles and best practices.
- Experience with infrastructure as code tools like Terraform or CloudFormation.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Modeling Techniques and Methodologies.
- This position is based in Pune.
- 15 years of full time education is required.

Posted 1 week ago

Apply

15.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Cloud Technology Architecture
Good to have skills: No Technology Specialization
Minimum 15 year(s) of experience is required
Educational Qualification: Minimum 15 years of graduation

Job Requirements:

Key Responsibilities:
- Minimum 3 years' experience in developing cloud-native containerized apps for GCP, AWS or Kubernetes, leveraging PaaS and SaaS offerings from GCP, AWS or Azure.
- Minimum 5 years' experience as a Technical Architect and/or delivery lead for designing and delivering distributed applications, fault tolerance and recovery, performance engineering, scaling, and low-latency application designs.
- Experience in large/medium scale cloud migration and journey-to-cloud app modernization for movement to cloud.

Technical Experience:
- Automated cloud deployments using GCP templates, AWS CloudFormation or Terraform.
- Java/J2EE.
- Creation of landing zones and cloud foundation; cloud services needed for apps.
- Cloud: AWS 8, GCP 4, Azure 2 (any cloud).
- Containers.
- DevSecOps.

Professional Attributes:
- Understanding of key high-level cloud concepts for Identity and Access Management, network security, geo-redundancy, data synchronization, encryption, hybrid multi-tenant cloud architectures, service SLA monitoring, and cost optimization.
- Proficient with Continuous Integration/Delivery pipelines, Mature Dev.

Additional Info:
(a) Overall experience between 14-18 years.
(b) Architect or qualified certifications from GCP/AWS/Azure.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure utilizing knowledge in continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must have skills: Kubernetes
Good to have skills: Ansible on Microsoft Azure, Terraform, Jenkins
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves utilizing your expertise in continuous integration, delivery, and deployment, as well as cloud technologies and container orchestration. You will work on ensuring that systems are secure against potential threats while collaborating with various teams to enhance the development process and streamline operations.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and optimize the performance of CI/CD pipelines.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in DevOps, EKS, Helm Charts, Ansible, Terraform and Docker.
- Experience and skills in setting up infrastructure on AWS cloud with EKS and Helm charts.
- Proficiency in developing CI/CD pipelines using Jenkins/GitHub or other CI/CD tools.
- Ability to debug and fix issues in environment setup and in CI/CD pipelines.
- Knowledge of and experience with automating infrastructure and application setup using Ansible and Terraform.
- Good To Have Skills: Experience with continuous integration and continuous deployment tools.
- Strong understanding of cloud services and infrastructure management.
- Familiarity with containerization technologies such as OpenShift or other container platforms.
- Experience in scripting languages for automation and configuration management.

Additional Information:
- The candidate should have a minimum of 5 years of experience in DevOps.
- This position is based in Hyderabad.
- 15 years of full time education is required.
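The Helm chart work mentioned above largely revolves around layering values files on top of chart defaults. The merge semantics can be sketched in Python; this mirrors the idea behind `helm install -f base.yaml -f override.yaml`, but it is a simplified illustration, not Helm's actual implementation:

```python
# Simplified sketch of Helm-style values merging: later (override) values win,
# nested maps merge recursively, and non-map values are replaced wholesale.
# Illustration only; Helm's real merge logic lives in the Helm codebase.

def merge_values(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)  # recurse into nested maps
        else:
            merged[key] = value  # scalars and lists are replaced, not merged
    return merged

base = {"image": {"repository": "nginx", "tag": "1.25"}, "replicas": 2}
override = {"image": {"tag": "1.27"}, "replicas": 3}
print(merge_values(base, override))
# {'image': {'repository': 'nginx', 'tag': '1.27'}, 'replicas': 3}
```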

Posted 1 week ago

Apply

20.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Who is Forcepoint? Forcepoint simplifies security for global businesses and governments. Forcepoint’s all-in-one, truly cloud-native platform makes it easy to adopt Zero Trust and prevent the theft or loss of sensitive data and intellectual property no matter where people are working. 20+ years in business. 2.7k employees. 150 countries. 11k+ customers. 300+ patents. If our mission excites you, you’re in the right place; we want you to bring your own energy to help us create a safer world. All we’re missing is you!

What You Will Be Doing
- Helping drive innovation through improvements to the CI/CD pipeline
- Appropriately and efficiently automating through tools such as Terraform, Ansible and Python
- Supporting teams who use our tools, such as Engineering, Professional Services and Sales Engineering
- Running our infrastructure as code via git and webhooks
- Building a strong DevOps culture while also fostering strong collaboration with all areas of development, product, and QA
- Documenting all the things!
- Mentoring junior team members and fostering a collaborative, process-mature team
- Being passionate about championing new tools and processes and helping adoption across our organization
- Championing DevSecOps best practices across the team and organization

Responsibilities Include
- Develop and support automated, scalable solutions to deploy and manage our global infrastructure.
- Implement effective CI/CD processes and pipelines.
- Build integrations between services to create fully automated processes.
- Maintain and improve the functionality of automation tools for infrastructure provisioning, configuration, and deployment.
- Identify opportunities to optimize current solutions and perform hands-on troubleshooting of problems related to systems and performance in Prod/QA/Dev environments.

Preferred Skills
- BS in Computer Science or similar degree and 5+ years of related experience, or equivalent work experience
- Experience working with automated server configuration and deployment tools
- Strong working knowledge of AWS; certifications preferred
- Proficiency working in Linux environments, particularly with customer-facing systems
- Knowledge of IP networking, VPNs, DNS, load balancing, and firewalling
- Strong working knowledge of Infrastructure as Code (Terraform/CloudFormation)
- Nginx, Apache, HAProxy
- Object-oriented programming experience in a language such as Java, Python or C++
- Deployment and support of containers such as Docker, Kubernetes
- GitHub, Jenkins, Artifactory

Minimum Qualifications
- Strong practical Linux-based systems administration skills in a cloud or virtualized environment
- Scripting (Bash/Python)
- Automation (Ansible/Chef/Puppet)
- CI/CD tools, primarily Jenkins
- Familiarity with Continuous Integration and development pipeline processes
- AWS, Google Cloud Compute, Azure (at least one)
- Prior success in automating a real-world production environment
- Experience in implementing monitoring tools and fine-tuning metrics for optimal monitoring
- Excellent written and oral communication skills; ability to communicate effectively with technical and non-technical staff

Don’t meet every single qualification? Studies show people are hesitant to apply if they don’t meet all requirements listed in a job posting. Forcepoint is focused on building an inclusive and diverse workplace, so if there is something slightly different about your previous experience, but it otherwise aligns and you’re excited about this role, we encourage you to apply. You could be a great candidate for this or other roles on our team.
The policy of Forcepoint is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to affirmatively seek to advance the principles of equal employment opportunity. Forcepoint is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by sending an email to recruiting@forcepoint.com. Applicants must have the right to work in the location to which you have applied.

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Devo, the cloud-native logging and security analytics company, empowers security and operations teams to maximize the value of all their data. Only the Devo platform delivers the powerful combination of real-time visibility, high-performance analytics, scalability, multi-tenancy, and low TCO crucial for monitoring and securing business operations as enterprises accelerate their shift to the cloud. Headquartered in Boston, Mass., Devo is backed by Insight Partners, Georgian, and Bessemer Venture Partners. Learn more at www.devo.com.

Job Summary: The Cloud Monitoring Operator is responsible for continuously monitoring outages, faults, critical events, and abnormalities in the systems and services of Devo and its customers. This position handles level 1 technical support for the different systems and services. This is an entry-level learning position. Monitoring Operations staff receive close supervision and support from the Monitoring Operations and Cloud Operations teams, and are mentored and trained by senior peers.

Responsibilities:
- Provide level 1 support for the different Devo systems and services.
- Monitor all Devo systems.
- Prioritize and address events detected by monitoring systems.
- Follow documented procedures to resolve issues.
- Independently run tasks and projects and lead initiatives.
- Work both independently and as part of a team.
- Prioritize tasks effectively, set deadlines and document designs and procedures.
- Protect internal and external Enterprise Information Assets.
- Create documentation for common issues and resolutions.
- Escalate issues and communicate details thoroughly.
- Monitor the platform, suggest future architectural needs and ensure scalability.
- Identify areas for improvement and take ownership.
- Operators can expect the possibility of working night shifts, weekends and public holidays.

Requirements:
- Solid understanding of Linux system administration.
- Solid understanding of TCP/IP and networking technologies.
- Solid understanding of monitoring tools such as Grafana, Prometheus and Dynatrace.
- Management of basic services such as web servers, application servers and databases.
- Familiarity with private/public/hybrid cloud computing technologies.
- Familiarity with at least one programming language.
- Good troubleshooting and debugging skills.
- Experience with Git.
- Knowledge of information security best practices.
- Infrastructure as code knowledge is a plus (CloudFormation, Terraform, etc.).
- Knowledge of at least one of these configuration management tools is a plus: Puppet, Chef, Salt or Ansible.

Why work at Devo?
- Focus on Security and Data Analytics: If you’re passionate about security operations, data analytics, or enterprise-level IT infrastructure, we will offer you a chance to be part of a platform that helps organizations monitor and secure their systems in an increasingly digital world. You will have the opportunity to work with innovative products that solve real-world challenges.
- Career growth: You’ll join a company where we value our people and provide the tremendous opportunities that come with a hyper-growth organization. Our development programs include company-paid job-related technical certifications, personal development plans based on career paths, and full support for internal job movements as part of career development.
- Work-Life Balance: We promote a healthy work-life balance with flexible working conditions, including remote work opportunities.
- Multicultural environment: With offices and clients globally, we offer a chance to work in a multicultural environment, giving our employees international exposure and the opportunity to collaborate across regions.
- Comprehensive benefits: Medical health insurance (at Devo, we believe in taking care of not just our employees, but also their families). Life insurance. Meal vouchers, commuting allowance, and childcare vouchers. Employee referral program (get a bonus for helping friends get jobs at Devo!). Employee Stock Option Plan. Gender and diversity initiatives to increase visibility, inclusion and sense of belonging.
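Level 1 monitoring of the kind this posting describes often boils down to evaluating metric samples against alert thresholds. The sketch below shows the general idea in Python; the metric names and thresholds are invented, and real deployments define such rules declaratively in tools like Prometheus or Grafana rather than in application code:

```python
# Toy alert evaluator in the spirit of Prometheus-style threshold rules.
# Metric names and thresholds are invented for illustration only.

RULES = {
    "cpu_utilization": {"warn": 0.80, "crit": 0.95},  # fraction of capacity
    "disk_used": {"warn": 0.85, "crit": 0.95},
}

def evaluate(samples: dict) -> list:
    """Return (metric, severity) pairs for samples breaching a rule, critical first."""
    alerts = []
    for metric, value in samples.items():
        rule = RULES.get(metric)
        if rule is None:
            continue  # no rule defined for this metric
        if value >= rule["crit"]:
            alerts.append((metric, "critical"))
        elif value >= rule["warn"]:
            alerts.append((metric, "warning"))
    return sorted(alerts, key=lambda a: a[1] != "critical")  # critical sorts first

print(evaluate({"cpu_utilization": 0.97, "disk_used": 0.88, "load": 1.2}))
# [('cpu_utilization', 'critical'), ('disk_used', 'warning')]
```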

Posted 1 week ago

Apply

14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Head Digital Works is a pioneering force in Indian online skill gaming, evolving from a 2006 garage startup to a leader with over 80 million users and brands like A23 Rummy, A23 Poker, and Adda52. Over nearly two decades, it has shaped India’s real money gaming market through innovation, player safety, and exceptional user experiences. Focused on sustainable growth and trust-driven relationships, HDW continues to invest in technology and talent to build immersive gaming ecosystems and drive the future of digital entertainment in India.

Role Overview: We’re looking for a seasoned and visionary leader to spearhead our cloud infrastructure function with deep expertise in DevOps, DevSecOps, and AWS technologies. This pivotal role blends strategic foresight, technical excellence, and a collaborative spirit to build secure, scalable, and innovative infrastructure that powers our enterprise.

What You’ll Lead & Shape:

Leadership & Strategy
- Define and deliver the strategic roadmap for cloud infrastructure, DevOps, and DevSecOps in alignment with business priorities.
- Lead a high-impact team of cloud engineers, SREs, and DevOps professionals.
- Champion infrastructure modernization, cloud-native transformation, and cost-efficient practices.

Cloud Infrastructure (AWS)
- Architect multi-account AWS environments with automation, scalability, and resilience.
- Implement and audit AWS Well-Architected Framework principles.
- Oversee cloud operations, including monitoring, alerting, and incident response using CloudWatch, CloudTrail, and third-party tools.

DevOps / DevSecOps
- Drive the full lifecycle of DevOps: infrastructure provisioning (IaC), CI/CD automation, and secure deployments.
- Leverage IaC tools like Terraform, AWS CDK, and CloudFormation.
- Integrate security practices such as vulnerability scanning, secrets management, and compliance automation (SOC 2, ISO 27001, etc.).

Governance, Security & Compliance
- Establish and enforce governance for IAM, security policies, and cloud configurations.
- Collaborate with InfoSec teams to uphold enterprise-grade security standards.
- Set up infrastructure health checks, anomaly detection, and regulatory compliance.

Cross-Functional Collaboration
- Partner with engineering, cybersecurity, and product teams to craft efficient CI/CD pipelines.
- Influence engineering culture toward cloud-first and DevSecOps excellence.
- Act as a technical escalation point for complex infrastructure challenges.

What You Bring & Your Expertise:
- 14+ years in cloud infrastructure and DevOps/DevSecOps domains; 4-6 years leading distributed technical teams.
- Expert-level proficiency in AWS (VPC, EC2, Lambda, IAM, S3, RDS, etc.).
- Proven hands-on experience with Terraform, GitOps practices, CI/CD using Jenkins or GitLab, and container orchestration (EKS/Kubernetes).
- Strong grasp of DevSecOps principles, secure software pipelines, and cloud cost governance.

Preferred Qualifications:
- AWS Professional Certifications (Solutions Architect, DevOps Engineer, or Security Specialty).
- Experience with compliance: SOC 2, PCI DSS, HIPAA, and ISO 27001.
- Exposure to hybrid/multi-cloud environments (Azure/GCP).
- Familiarity with SRE frameworks (SLI/SLO/SLA tracking).
- Domain experience in gaming or BFSI is a strong plus.

Why Head Digital Works? At Head Digital Works, innovation meets ownership. Our engineering culture thrives on autonomy, trust, and transparency. You’ll engage with cutting-edge technologies and contribute to business-critical systems in a collaborative, diverse, and rapidly evolving environment. Expect openness, ideas-driven teams, and leadership that values your voice.

What we offer:
- Industry-Leading Compensation
- Comprehensive Mediclaim Coverage
- Accelerated Career Growth
- Excellence-Driven Recognition Programs
- Inclusive & Collaborative Work Culture
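The cloud-governance duties described above (tagging policies, configuration checks) can be illustrated with a toy compliance check. The required tag keys below are invented for the example; production setups usually enforce this with AWS Config rules, SCPs, or policy-as-code tools rather than ad-hoc scripts:

```python
# Toy tag-compliance check of the kind cloud governance teams automate.
# The required tag keys are invented; real enforcement typically uses
# AWS Config rules or policy-as-code, not ad-hoc scripts like this.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resources: list) -> dict:
    """Map each non-compliant resource id to the set of tag keys it is missing."""
    report = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            report[res["id"]] = missing
    return report

inventory = [
    {"id": "i-0abc", "tags": {"owner": "platform", "cost-center": "42", "environment": "prod"}},
    {"id": "i-0def", "tags": {"owner": "data"}},
    {"id": "vol-0123", "tags": {}},
]
print(missing_tags(inventory))
# i-0def is missing cost-center/environment; vol-0123 is missing all three
```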

Posted 1 week ago

Apply

0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site

The ideal candidate must be self-motivated with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
- Strong experience with AWS IaaS architectures
- Hands-on experience in deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces
- Experience in deploying and troubleshooting either Windows or Linux operating systems
- Experience with AWS SSO and RBAC
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins
- Experience working with ITSM processes and tools such as Remedy and ServiceNow
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations

Responsibilities
- Planning, automation, implementation, and maintenance of the AWS platform and its associated services
- Provide SME / L2-and-above level technical support
- Carry out deployment and migration activities
- Mentor and provide technical guidance to L1 engineers
- Monitor AWS infrastructure and perform routine maintenance and operational tasks
- Work on ITSM tickets and ensure adherence to support SLAs
- Work on change management processes
- Excellent analytical and problem-solving skills; exhibits excellent service to others

Location: Noida, Uttar Pradesh, India
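Operational automation against AWS APIs of the sort described above routinely needs retry logic around throttled calls. A generic backoff helper can be sketched as follows; the `flaky_call` function is a stand-in for a real API request, and the delay values are arbitrary:

```python
import time
import random

# Generic retry-with-exponential-backoff helper of the kind wrapped around
# throttled cloud API calls (e.g. AWS rate limits). Delays are arbitrary and
# flaky_call below is a stand-in for a real API request.

def retry(fn, attempts: int = 5, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: surface the last error
            # exponential backoff with jitter to avoid retry storms
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")  # fail the first two attempts
    return "ok"

print(retry(flaky_call))  # succeeds on the third attempt: ok
```

AWS SDKs such as boto3 ship their own configurable retry modes, so a hand-rolled helper like this is mainly useful for CLIs and tools without built-in retries.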

Posted 1 week ago

Apply

0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site

The ideal candidate must be self-motivated with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
- Strong experience with AWS IaaS architectures
- Hands-on experience in deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces
- Experience in deploying and troubleshooting either Windows or Linux operating systems
- Experience with AWS SSO and RBAC
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins
- Experience working with ITSM processes and tools such as Remedy and ServiceNow
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations

Responsibilities
- Planning, automation, implementation, and maintenance of the AWS platform and its associated services
- Provide SME / L2-and-above level technical support
- Carry out deployment and migration activities
- Mentor and provide technical guidance to L1 engineers
- Monitor AWS infrastructure and perform routine maintenance and operational tasks
- Work on ITSM tickets and ensure adherence to support SLAs
- Work on change management processes
- Excellent analytical and problem-solving skills; exhibits excellent service to others

Location: Noida, Uttar Pradesh, India

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Reference # 322746BR
Job Type: Full Time

Your role
Are you innovative and passionate about building secure and reliable solutions? We are looking for Data Engineers and DevSecOps Engineers to join our team in building the Enterprise Data Mesh at UBS. We are open to adapting the role to suit your career aspirations and skillset.

Responsibilities include:
- Design/document, develop, review, test, release, and support Data Mesh components, platforms, and environments.
- Contribute to agile ceremonies, e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives.
- Comply with the firm’s applicable policies and processes.
- Collaborate with other teams and divisions using Data Mesh services, related guilds and other Data Mesh Services teams.
- Ensure delivery deadlines are met.

Your team
You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO, working primarily in a local team with some interactions with other teams and divisions. We provide many services as part of our firmwide Data Mesh strategy to automate and scale data management, improving time-to-market for data and reducing data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, and ontology models.

Your expertise
Experience in the following (or similar transferable skills):
- Hands-on delivery in any of the following (or related): data transformations, Spark, Python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development.
- Agile software practices and tools, performance testing, unit and integration testing.
- Identifying root causes and designing and implementing the solution.
- Collaborating with other teams to achieve common goals.
- Learning and reskilling in new technologies.

About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Reference # 322766BR Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. 
Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
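The Data Observability posting above aims to "reduce data downtime". One simple way to quantify that metric — a sketch under assumed conventions, with `total_downtime` as an illustrative helper rather than a UBS implementation — is to merge overlapping outage intervals before summing, so double-reported incidents are not counted twice:

```python
def total_downtime(intervals):
    """Sum the length of possibly overlapping (start, end) outage
    intervals, in whatever units the endpoints use (e.g. minutes).

    Intervals are sorted and merged first so overlapping incident
    windows don't inflate the downtime figure.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous window: extend it instead.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum(end - start for start, end in merged)

# Two overlapping incidents (10-30, 20-40) and one separate (50-55).
print(total_downtime([(10, 30), (20, 40), (50, 55)]))  # 35
```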

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About KOKO Networks KOKO Networks is a venture-backed climate technology company with a team of 1,000+ staff across East Africa, India, and the United Kingdom. Our mission is to imagine and deliver technology that improves life in the world’s fastest growing markets. Our core lines of business are: KOKO Fuel, a clean, affordable, and renewable bioethanol household cooking solution delivered via a network of smart fuel ATMs and leveraging existing downstream liquid fuels infrastructure; and KOKO Climate, which retails the certified emissions reductions that are generated by transitioning households from deforestation-based charcoal and other dirty fuels to KOKO Fuel. In 2021, KOKO was selected as the world’s leading emerging markets climate technology solution by the Financial Times and the IFC, and, in 2022, KOKO received the Keeling Curve Prize for climate impact. Your Role We are looking for a Senior Embedded Software Engineer to lead the development of firmware for KOKO’s IoT based hardware platforms that provide our customers with smart delivery of fuel and other goods through our unique last mile distribution network. As such you will play a key role in enabling KOKO’s long term success, including our expansion to new markets. 
In this position on the engineering team, you’ll have the opportunity to contribute to and add value across multiple KOKO products, at different stages of the product life cycle, and be part of the wider engineering team in delivering best-in-class solutions. What You Will Do Develop the embedded software that powers our network hardware and the tools that enable us to test, build, diagnose and repair them Collaborate cross-functionally with electronics engineers, software engineers, product managers, and others to integrate embedded software into products Support existing devices and manage firmware improvements in our installed networks Provide recommendations for continuous improvement in the functionality of our systems Work alongside other engineers on the team to elevate technology and consistently apply best practices Deliver new features to production environments and support them in operation KOKO’s current technology stack includes (but is not limited to) WordPress, Vue 3, responsive layout, AWS, Kubernetes, Terraform, Python, Flask, Postgres, Kotlin, Java, Firebase, C++, Celery, message queues, Odoo ERP, and Git.
What You Will Bring To KOKO 2+ years of relevant experience working as an Embedded Engineer Participate in building the engineering culture at KOKO Strong, proven experience coding (2+ years) and good design principles in C/C++ for embedded systems Solid and demonstrable understanding of Linux and programming for Linux (e.g. Raspberry Pi) Experience programming in both RTOS and bare metal environments Ability to develop and debug low-level protocols (e.g. I2C, SPI) and higher-level communication protocols (e.g. BLE, USB) Strong troubleshooting skills for investigating reported issues, debugging and fixing field problems Knowledge of IoT “edge” device development and strong understanding of network protocols (HTTP, MQTT) and interfaces (TCP/IP, TLS) Knowledge of Git and CI/CD pipelines A good grasp of electronics - ability to read schematics and data sheets Other skills: Experience in developing commercial products - proven experience developing for hardware at scale Strong analytical skills, working with engineering best practices to get the best results Team player, conscientious and with a strong collaborative ethos We are looking for a highly technical leader with significant experience in scaling and maintaining complex, mission-critical systems, and making key architectural decisions along the way.
Aside from being an expert in your core skill areas, you should be comfortable leading and coaching, building rapport across the team, and doing what’s necessary to ensure KOKO’s products always deliver. What We Offer We believe that our people are critical for our ambitious growth plans in Kenya and beyond. We want to build an organization where people thrive, feel included, grow professionally, and enjoy having a high impact through their work. We offer: a competitive salary plus a quarterly cash bonus; annual compensation reviews - we reward great work; discounted health insurance with no-cost financing for you and your dependents (in Kenya); 21 days of annual leave plus public holidays plus examination leave; ongoing investment in you and your skills, incl. full access to over 5,000 online courses; and the right equipment for the job - a choice of MacBook, Windows, or Linux laptop.
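The KOKO role above involves developing and debugging low-level protocols such as I2C and SPI, where a common building block is an integrity checksum on each message. As a hedged sketch — this follows the widely used CRC-8 with polynomial 0x07 (the SMBus Packet Error Code variant used on I2C), not KOKO's actual firmware, and is in Python for illustration rather than embedded C:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 as used by the SMBus Packet Error Code (PEC) on I2C:
    polynomial 0x07, initial value 0x00, no reflection, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# Standard CRC-8 check value for the ASCII string "123456789".
print(hex(crc8(b"123456789")))  # 0xf4
```

On a microcontroller the same algorithm is usually table-driven or hardware-accelerated, but the bit-by-bit form above is the easiest to verify against a datasheet.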

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Reference # 322748BR Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. 
Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a proactive and technically skilled DevOps & Java Support Engineer to join our Core Operations team. The ideal candidate will have hands-on experience in Java-based application support, CI/CD pipelines, infrastructure automation, and production monitoring. This role is critical in ensuring the stability, scalability, and performance of our core systems. Primary Responsibilities Provide L2 support for Java & microservices based applications in production and staging environments Monitor system health and performance using tools like Prometheus, Grafana, ELK, or equivalent Troubleshoot and resolve incidents, perform root cause analysis, and implement preventive measures Develop and maintain CI/CD pipelines using Jenkins, GitLab CI, or similar tools Automate infrastructure provisioning and configuration using tools like Ansible, Terraform, or CloudFormation Collaborate with development, QA, and infrastructure teams to ensure smooth deployments and releases Participate in on-call rotations and incident response processes Maintain documentation for operational procedures, runbooks, and support guides Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility 
of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualification B.E/B.Tech At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
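The Optum role above covers incident response and automation of remediation. A standard building block there is retrying transient failures with exponential backoff; the sketch below (with `backoff_delays` as an illustrative helper, not Optum tooling) computes the capped delay schedule such a retry loop would use:

```python
def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0):
    """Delays (in seconds) before each retry attempt: base * 2**attempt,
    clamped at cap so late retries don't wait unboundedly long."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Production retry loops typically add random jitter to these delays so many clients recovering at once don't retry in lockstep.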

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 322748BR Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. 
Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 322746BR Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Data Engineers and DevSecOps Engineers to join our team in building the Enterprise Data Mesh at UBS. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Mesh components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Mesh services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing many services as part of our Data Mesh strategy firmwide to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. 
Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 322766BR Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. 
Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for developing and deploying machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms. Translates application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Develops novel ways to use machine learning to solve problems and discover new products. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as resource for colleagues with less experience. Job Description About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. 
We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally. What You’ll Do Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference Engineer features to enrich alerts using service relationships, business context, change history, and topological data Apply NLP and ML techniques to classify and structure logs and unstructured alert messages Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals Conduct A/B testing, offline validation, and live performance monitoring of ML models Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers and leadership Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly Participate in on-call rotations and provide operational support as needed Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field 5+ years of experience building and deploying ML solutions in
production environments 2+ years working with AIOps, observability, or real-time operations data Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow) Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs) Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases Ability to collaborate effectively with SREs and platform teams, and participate in Agile/DevOps workflows Clear written and verbal communication skills to present findings to technical and non-technical stakeholders Comfortable working across Git, Confluence, JIRA, and collaborative agile environments Nice To Have Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.)
Experience working in streaming media, OTT platforms, or large-scale consumer services Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, & event-based alerting tools Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation) Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME) Experience with containerized model deployment using Docker or Kubernetes Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation Experience designing APIs in Python or Go to expose models as services Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts Certifications in cloud architecture, ML engineering, or data science specializations Experience producing deliverables such as Confluence pages, white papers, presentations, test results, technical manuals, and formal recommendations and reports Contributions to the company such as patents or Application Programming Interfaces (APIs) Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. 
Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years
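The "entropy-based prioritization" responsibility above can be sketched in a few lines: alerts whose message templates are rare carry more information and get surfaced ahead of noisy, repetitive ones. The scoring scheme and field names below are illustrative assumptions, not the actual model used at Comcast.

```python
import math
from collections import Counter

def surprisal_scores(alerts):
    """Score each alert by the self-information (surprisal) of its
    message template: -log2(p), where p is the empirical frequency.
    Rare alerts score high; repetitive, noisy alerts score low."""
    counts = Counter(a["template"] for a in alerts)
    total = len(alerts)
    return [
        {**a, "score": -math.log2(counts[a["template"]] / total)}
        for a in alerts
    ]

# Hypothetical alert stream: one critical rarity buried under noise
alerts = (
    [{"template": "disk_latency_high"}] * 96   # noisy, low-value signal
    + [{"template": "db_replica_down"}] * 3    # uncommon
    + [{"template": "cert_expired"}]           # rare, likely critical
)
ranked = sorted(surprisal_scores(alerts), key=lambda a: a["score"], reverse=True)
print(ranked[0]["template"])  # rarest alert ranks first: cert_expired
```

Production systems would combine a signal like this with service topology, change history, and business context rather than frequency alone.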

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Engineer, Cybersecurity NielsenIQ is maturing its Product Security programs and is recruiting a Product Security Engineer who will be responsible for supporting the rollout of DevSecOps capabilities and practices across all geographies and business units. As the Product Security Engineer, you will be responsible for the integration, maintenance and analysis of the tools and technologies used in securing NIQ products/applications throughout their development. You will oversee application security capabilities within a multi-national matrixed environment. The Product Security Engineer will have the opportunity to replace the current Static and Dynamic Application Security Testing tools and advocate for the tech stack used for monitoring. This position will involve working closely with development/engineering teams, business units, technical and non-technical stakeholders, educating them and driving the adoption and maturity of NIQ’s Product & Application Security programs. Responsibilities Collaborate within Product Security Engineering and Cybersecurity teams to support delivery of its strategic initiatives. Work with engineering teams (Developers, SREs & QAs) to ensure that products are secure on delivery and implement provided security capabilities. Actively contribute to building and maintaining Product Security team security tools and services, including integrating security tools into the CI/CD process Report on security key performance indicators (KPIs) to drive improvements across engineering teams’ security posture. Contribute to the Product Security Engineering team security education program and become an advocate within the organization’s DevSecOps and application security community of practice. Review IaaS/PaaS cloud architecture roadmaps and recommend baseline security controls and hardening requirements, supporting threat modelling of NIQ’s products. 
Qualifications 3+ years of experience working in a technical/hands-on application security, development, or DevOps professional environment. Working knowledge of the web stack, web security and common vulnerabilities (e.g., SQLi, XSS, and beyond). Good coding experience (Python is most desirable, or a similar programming language). Experience deploying containers using CI/CD pipeline tools like GitHub Actions, GitLab Pipelines, Jenkins, and Terraform or Helm. Self-starter, technology and security hobbyist, enthusiast. Lifelong learner with endless curiosity. Bonus Points if you: Have experience building serverless functions in Cloud environments. Have knowledge of Cloud Workload Protection. Have experience using SAST and DAST tools. Demonstrated engagement in security conferences, training, learning, associations is highly desired and fully supported. Ability to think like a hacker. Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee Assistance Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? 
Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
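The secrets-scanning work mentioned in this role is normally delegated to dedicated tools (e.g., GitHub Advanced Security), but the core idea can be sketched briefly. The patterns below are simplified illustrations, not a production ruleset:

```python
import re

# Simplified detection patterns; real scanners ship hundreds of
# provider-specific, validated patterns with far fewer false positives.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return (line_number, rule_name) for every match in the text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Hypothetical file content with a fake AWS-style key on line 1
sample = "key = AKIAABCDEFGHIJKLMNOP\nmsg = 'hello'\n"
print(scan(sample))  # [(1, 'aws_access_key_id')]
```

In a CI/CD pipeline, a check like this would run pre-merge and fail the build on any finding, with remediation routed to a vaulting platform.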

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role Grade Level (for internal use): 09 The Team The current team is composed of highly skilled engineers with a solid development background who build and manage tier-0 platforms in AWS cloud environments. In their role, they will play a pivotal part in shaping the platform architecture and engineering. Their additional tasks include exploring new innovative tools that benefit the organization's needs, developing services and tools around the platform, establishing standards, creating reference implementations, and providing support for application integrations on an as-needed basis. The Impact This role is instrumental in constructing and maintaining dependable production systems within cloud environments. The team bears the crucial responsibility for ensuring high availability, minimizing latency, optimizing performance, enhancing efficiency, overseeing change management, implementing robust monitoring practices, responding to emergencies, and strategically planning for capacity. The impact of this team is pivotal for the organization, given its extensive application portfolio, necessitating a steadfast commitment to achieving and maintaining a 99.9% uptime, thus ensuring the reliability and stability of the firm's digital infrastructure. What’s In It For You S&P Global is an employee-friendly company with various benefits and with a primary focus on skill development. The technology division has a wide variety of yearly goals that help the employee train and certify in niche technologies like: Generative AI, Transformation of applications to CaaS, CI/CD/CD gold transformation, Cloud modernization, Develop leadership skills and business knowledge training. Essential Duties & Responsibilities As part of a global team of Engineers, deliver highly reliable technology products. Strong focus on developing robust solutions meeting high-security standards. Build and maintain new applications/platforms for growing business needs. 
Design and build future state architecture to support new use cases. Ensure scalable and reusable architecture as well as code quality. Integrate new use cases and work with global teams. Work with/support users to understand issues, develop root cause analysis and work with the product team for the development of enhancements/fixes. Become an integral part of a high-performing global network of engineers/developers working from Colorado, New York, and India to help ensure 24x7 reliability for critical business applications. As part of a global team of engineers/developers, deliver continuous high reliability to our technology services. Strong focus towards developing permanent fixes to issues and heavy automation of manual tasks. Provide technical guidance to junior-level resources. Work on analyzing/researching alternative solutions and developing/implementing recommendations accordingly. Qualifications Required: Bachelor's/MS degree in Computer Science, Engineering or a related subject Good written and oral communication skills. Must have 3+ years of working experience in Java with Spring technology Must have API development experience Work experience with asynchronous/synchronous messaging using MQ, etc. Ability to use CI/CD flow and distribution pipelines to deploy applications Working experience with DevOps tools such as Git, Azure DevOps, Jenkins, Maven Solid understanding of Cloud technologies and managing infrastructures Experience in developing, deploying & debugging cloud applications Strong knowledge of Functional programming, Linux, etc. Nice To Have Experience in building single-page applications with Angular or ReactJS in conjunction with Python scripting. 
Working experience with API Gateway, Apache and Tomcat server, Helm, Ansible, Terraform, CI/CD, Azure DevOps, Jenkins, Git, Splunk, Grafana, Prometheus, Jaeger (or other OTEL products), Flux, LDAP, OKTA, Confluent Platform, Active MQ, AWS, Kubernetes Location: Hyderabad, India Hybrid model: twice a week work from office is mandatory. Shift time: 12 pm to 9 pm IST. About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. 
Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. 
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 311026 Posted On: 2025-07-30 Location: Hyderabad, Telangana, India
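The 99.9% uptime commitment mentioned in this role translates directly into an error budget; a quick sketch of the arithmetic:

```python
def error_budget_minutes(slo, window_days):
    """Allowed downtime (minutes) in the window for a given SLO.
    E.g., a 99.9% SLO leaves a 0.1% budget of the total window."""
    return (1 - slo) * window_days * 24 * 60

# A 99.9% SLO over a 30-day window
budget = error_budget_minutes(0.999, 30)
print(round(budget, 1))  # 43.2 minutes of allowed downtime
```

This is why "three nines" teams invest heavily in monitoring and change management: roughly 43 minutes of monthly downtime consumes the entire budget.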

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role Description Role Proficiency: Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution and/or provide mentorship (hierarchical or lateral) to junior associates Outcomes 1) Update SOP with updated troubleshooting instructions and process changes 2) Mentor new team members in understanding customer infrastructure and processes 3) Perform analysis for driving incident reduction 4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution 5) Contribute to planning and successful migration of platforms 6) Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution 7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions Measures Of Outcomes 1) SLA adherence 2) Time-bound resolution of elevated tickets - OLA 3) Manage ticket backlog timelines - OLA 4) Adhere to defined process – number of NCs in internal/external audits 5) Number of KB articles created 6) Number of incidents and change tickets handled 7) Number of elevated tickets resolved 8) Number of successful change tickets 9) % completion of all mandatory training requirements Resolution Outputs Expected: Understand Priority and Severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA Execute change control tickets as documented in the implementation plan Troubleshooting Troubleshoot based on available information from previous tickets or by consulting with seniors Participate in online knowledge forums for reference Convert the new steps into a KB article Perform logical/analytical troubleshooting Escalation/Elevation Escalate within the organization/to customer peers in case of resolution delay Understand the OLA between delivery layers (L1, L2, L3, etc.) and adhere to it 
Elevate to the next level; work on elevated tickets from L1 Tickets Backlog/Resolution Follow up on tickets based on agreed timelines; manage ticket backlogs/last activity as per the defined process Resolve incidents and SRs within agreed timelines Execute change tickets for infrastructure Installation Install and configure tools, software, and patches Runbook/KB Update the KB with new findings Document and record troubleshooting steps as a knowledge base Collaboration Collaborate with different towers of delivery for ticket resolution; within SLA, resolve L1 tickets with help from the respective tower Collaborate with other team members for timely resolution of tickets Actively participate in team/organization-wide initiatives Coordinate with UST ISMS teams for resolving connectivity-related issues Stakeholder Management Lead the customer calls and vendor calls Organize meetings with different stakeholders Take ownership of the function's internal communications and related change management Strategic Define the strategy on data management, policy management, and data retention management Support definition of the IT strategy for the function’s relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned Process Adherence Thorough understanding of organization- and customer-defined processes Suggest process improvements and CSI ideas Adhere to the organization's policies and business conduct Process/Efficiency Improvement Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance Process Implementation Coordinate and monitor IT process implementation within the function Compliance Support information governance activities and audit preparations within the function 
Act as a function SPOC for IT audits in local sites (incl. preparation, interfacing with the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management) Coordinate overall objective setting and preparation, and facilitate the process in order to achieve consistent objective setting in the function's job description Coordination Support for CSI across all services in CIS and beyond Training On-time completion of all mandatory training requirements of organization and customer Provide on-floor training and one-to-one mentorship for new joiners Complete certification of respective career paths Performance Management Update FAST Goals in NorthStar; track, report, and seek continuous feedback from peers and manager Set goals for team members and mentees and provide feedback Assist new team members in understanding the customer environment Skill Examples 1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers 2) Modify/create runbooks based on suggested changes from juniors or newly identified steps 3) Ability to work on an elevated server ticket and solve it 4) Networking: a. Troubleshooting skills in static and dynamic routing protocols b. Should be capable of running NetFlow analyzers in different product lines 5) Server: a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, patch management b. Excellent troubleshooting skills in various technologies like AD replication, DNS issues, etc. c. Skills in managing high-availability solutions like failover clustering, VMware clustering, etc. 6) Storage and Backup: a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management b. Skilled in core fabric technology, storage design and implementation; hands-on experience with backup and storage command-line interfaces c. 
Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and de-commissioning, replication setup and management d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies e. Review the technical diagrams and architecture diagrams and modify the SOP and documentation based on business requirements f. Ability to perform the ITSM functions for the storage and backup team and review the quality of the ITSM process followed by the team 7) Cloud: a. Skilled in any one of the cloud technologies - AWS, Azure, GCP 8) Tools: a. Skilled in administration and configuration of monitoring tools like CA UIM, SCOM, SolarWinds, Nagios, ServiceNow, etc. b. Skilled in SQL scripting c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements 9) Monitoring: a. Skills in monitoring of infrastructure and application components 10) Database: a. Data modeling and database design; database schema creation and management b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained c. Backup and recovery d. Web-specific tech expertise for e-Biz, Cloud, etc.; examples of this type of technology include XML, CGI, Java, Ruby, firewalls, SSL, and so on e. Migrating database instances to new hardware and new versions of software, from on-premise to cloud-based databases and vice versa 11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations Knowledge Examples 1) Good understanding of customer infrastructure and related CIs 2) ITIL Foundation certification 3) Thorough hardware knowledge 4) Basic understanding of capacity planning 5) Basic understanding of storage and backup 6) Networking: a. Hands-on experience with routers, switches, and firewalls b. Should have minimum knowledge and hands-on experience with BGP c. 
Good understanding of load balancers and WAN optimizers d. Advanced backup and restore knowledge in backup tools 7) Server: a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks b. Knowledge of AD group policy management, group policy tools, and troubleshooting GPOs c. Basic AD object creation, DNS concepts, DHCP, DFS d. Knowledge of tools like SCCM and SCOM administration 8) Storage and Backup: a. Subject Matter Expert in any of the storage and backup technologies 9) Tools: a. Proficient in the understanding and troubleshooting of the Windows and Linux families of operating systems 10) Monitoring: a. Strong knowledge of ITIL processes and functions 11) Database: a. Knowledge of general database management b. Knowledge of OS, system, and networking skills Additional Comments Role - Cloud Engineer Primary Responsibilities Engineer and support a portfolio of tools including: o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot) o Ansible Automation Platform, Env0, Docker Desktop o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas Design and implement automation for self-service adoption, access provisioning, and compliance monitoring Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows Participate in Agile sprints, sprint planning, and cross-team technical initiatives Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage) Key Projects You May Lead or Support GitHub secrets scanning and remediation with integration to HashiCorp Vault Lifecycle management of developer access across tools like GitHub and Teleport 
Upgrades to container orchestration environments and automation platforms (EKS, AKS) Technical Skills and Experience Proficiency with Terraform (IaC) and Ansible Strong scripting experience in Python, PowerShell, or Bash Experience operating in cloud environments (AWS, Azure, or GCP) Familiarity with secure development practices and DevSecOps tooling Exposure to or experience with: o CI/CD automation (GitHub Actions) o Monitoring and incident management platforms (Datadog, PagerDuty) o Identity providers (AzureAD, Okta) o Containers and orchestration (Docker, Kubernetes) o Secrets management and vaulting platforms Soft Skills and Attributes Strong cross-functional communication skills with technical and non-technical stakeholders Ability to work independently while knowing when to escalate or align with other engineers or teams Comfort managing complexity and ambiguity in a fast-paced environment Ability to balance short-term support needs with longer-term infrastructure automation and optimization Proactive, service-oriented mindset focused on enabling secure and scalable development Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
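Access-lifecycle automation like the GitHub/Teleport project above often starts with a stale-account sweep. A minimal, tool-agnostic sketch; the 90-day threshold and the record shape are assumptions, and real implementations would pull last-activity data from the provider's API:

```python
from datetime import datetime, timedelta, timezone

def stale_accounts(accounts, now, max_idle_days=90):
    """Return usernames whose last activity is older than the threshold,
    i.e., candidates for access review or automated revocation."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(u for u, last_seen in accounts.items() if last_seen < cutoff)

# Hypothetical last-activity records keyed by username
now = datetime(2025, 8, 1, tzinfo=timezone.utc)
accounts = {
    "alice": datetime(2025, 7, 28, tzinfo=timezone.utc),   # active
    "bob": datetime(2025, 2, 1, tzinfo=timezone.utc),      # idle ~6 months
    "carol": datetime(2024, 12, 15, tzinfo=timezone.utc),  # idle >7 months
}
print(stale_accounts(accounts, now))  # ['bob', 'carol']
```

A self-service workflow would then open a review ticket (e.g., in ServiceNow) rather than revoke access unilaterally.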

Posted 1 week ago

Apply

7.0 years

0 Lacs

Patna, Bihar, India

On-site

Roles and Responsibilities: Ensure the reliability, performance, and scalability of our database infrastructure. Work closely with application teams to ship solutions that integrate seamlessly with our database systems. Analyze solutions and implement best practices for supported data stores (primarily MySQL and PostgreSQL). Develop and enforce best practices for database security, backup, and recovery. Work on the observability of relevant database metrics and make sure we reach our database objectives. Provide database expertise to engineering teams (for example, through reviews of database migrations, queries, and performance optimizations). Work with peers (DevOps, Application Engineers) to roll out changes to our production environment and help mitigate database-related production incidents. Work on automation of database infrastructure and help engineering succeed by providing self-service tools. On-call support on rotation with the team. Support and debug database production issues across services and levels of the stack. Document every action so your learning turns into repeatable actions and then into automation. Perform regular system monitoring, troubleshooting, and capacity planning to ensure scalability. Create and maintain documentation on database configurations, processes, and procedures. Mandatory Qualifications: Have at least 7 years of experience running MySQL/PostgreSQL databases in large environments. Awareness of cloud infrastructure (AWS/GCP). Have knowledge of the internals of MySQL/PostgreSQL. Knowledge of load balancing solutions such as ProxySQL to distribute database traffic efficiently across multiple servers. Knowledge of tools and methods for monitoring database performance. Strong problem-solving skills and ability to work in a fast-paced environment. Excellent communication and collaboration skills to work effectively within cross-functional teams. 
Knowledge of caching (Redis/ElastiCache) Knowledge of scripting languages (Python) Knowledge of infrastructure automation (Terraform/Ansible) Familiarity with DevOps practices and CI/CD pipelines.
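Much of the query-performance work described above reduces to aggregating query fingerprints from a slow-query log. A stdlib-only sketch over a simplified log shape (the record format here is an assumption; tools like pt-query-digest do this properly for MySQL):

```python
from collections import defaultdict

def digest(entries):
    """Aggregate (query_fingerprint, duration_s) pairs into per-query
    totals, sorted by cumulative time — the usual first cut at finding
    which statements dominate database load."""
    totals = defaultdict(lambda: {"count": 0, "total_s": 0.0})
    for fingerprint, duration in entries:
        totals[fingerprint]["count"] += 1
        totals[fingerprint]["total_s"] += duration
    return sorted(totals.items(), key=lambda kv: kv[1]["total_s"], reverse=True)

# Hypothetical normalized slow-log entries (literals replaced with ?)
entries = [
    ("SELECT * FROM orders WHERE user_id = ?", 0.8),
    ("SELECT * FROM orders WHERE user_id = ?", 1.1),
    ("UPDATE inventory SET qty = ? WHERE sku = ?", 0.3),
    ("SELECT * FROM orders WHERE user_id = ?", 0.9),
]
top = digest(entries)
print(top[0][0])  # the query consuming the most cumulative time
```

Ranking by cumulative rather than per-execution time matters: a moderately slow query run thousands of times usually hurts more than one pathological outlier.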

Posted 1 week ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Req ID: 332013 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AWS DevOps Engineer to join our team in Pune, Mahārāshtra (IN-MH), India (IN). Key Responsibilities Development & Build: Build and maintain a robust, scalable real-time data streaming platform leveraging AWS and Confluent Cloud Infrastructure. AWS Services: Strong knowledge of AWS services, particularly those relevant to stream processing and serverless components like Lambda Functions. Performance Monitoring: Continuously monitor and troubleshoot streaming platform performance issues to ensure optimal functionality. Collaboration: Work closely with cross-functional teams to onboard various data products into the streaming platform and support existing implementations. Version Control: Manage code using Git, ensuring best practices in version control are followed. Infrastructure as Code (IaC): Apply expertise in Terraform for efficient infrastructure management. CI/CD Practices: Implement robust CI/CD pipelines using GitHub Actions to automate deployment workflows. Monitor expiration of service principal secrets or certificates, use the Azure DevOps REST API to automate renewal, and implement alerts and documentation for debugging failed connections. Mandatory Skillsets The candidate must have: Strong proficiency in AWS services, including IAM Roles, Access Control RBAC, S3, Containerized Lambda Functions, VPC, Security Groups, RDS, MemoryDB, NACL, CloudWatch, DNS, Network Load Balancer, Directory Services and Identity Federation, AWS Tagging Configuration, Certificate Management, etc. Hands-on experience in Kubernetes (EKS), with expertise in Imperative and Declarative approaches for managing resources/services like Pods, Deployments, Secrets, ConfigMaps, DaemonSets, Services, IRSA, Helm Charts, and deployment tools like ArgoCD. 
Expertise in Datadog , including integration, monitoring key metrics and logs, and creating meaningful dashboards and alerts. Strong understanding of Docker , including containerisation and image creation. Excellent programming skills in Python and Go , capable of writing efficient scripts. Familiarity with Git concepts for version control. Deep knowledge of Infrastructure as Code principles , particularly Terraform. Experience with CI/CD tools , specifically GitHub Actions. Understanding of security best practices , including knowledge of Snyk, Sonar Cloud, and Code Scene. Nice-to-Have Skillsets Prior experience with streaming platforms, particularly Apache Kafka (including producer and consumer applications). Knowledge of unit testing around Kafka topics, consumers, and producers. Experience with Splunk integration for logging and monitoring. Familiarity with Software Development Life Cycle (SDLC) principles. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. 
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
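The credential-expiration monitoring responsibility in the listing above can be sketched as a small check. This is an illustrative sketch only: the record shape is an assumption, and fetching the expiry timestamps (the posting names the Azure DevOps REST API for renewal automation) is deliberately left out.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical input shape: credential name -> expiry timestamp (UTC).
# In practice these values would be fetched from the relevant API
# (endpoint and auth omitted; this is an illustrative sketch only).
def expiring_credentials(expiries, now=None, window_days=30):
    """Return, sorted, the credential names that are already expired
    or will expire within `window_days` of `now` — i.e. the ones an
    alerting step should flag for renewal."""
    now = now or datetime.now(timezone.utc)
    threshold = now + timedelta(days=window_days)
    return sorted(
        name for name, expires_at in expiries.items()
        if expires_at <= threshold
    )

if __name__ == "__main__":
    now = datetime(2024, 6, 1, tzinfo=timezone.utc)
    expiries = {
        "sp-streaming-prod": datetime(2024, 6, 10, tzinfo=timezone.utc),
        "sp-streaming-dev": datetime(2025, 1, 1, tzinfo=timezone.utc),
    }
    print(expiring_credentials(expiries, now=now))  # ['sp-streaming-prod']
```

A scheduled CI job could run such a check and raise an alert whenever the returned list is non-empty, well before a secret silently expires and connections start failing.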

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Nice to meet you! We’re a leader in data and AI. Through our software and services, we inspire customers around the world to transform data into intelligence - and questions into answers. We’re also a debt-free, multi-billion-dollar organization on our path to IPO-readiness. If you're looking for a dynamic, fulfilling career coupled with flexibility and a world-class employee experience, you'll find it here.

About The Job
Role: DevOps Engineer

Responsibilities
- Enable stream-aligned DevOps teams to deliver rapid value by providing platforms they can build on
- Enable Continuous Integration and Delivery by providing a standardized pipeline experience for teams
- Design novel solutions using both public and private cloud platforms to solve business needs
- Work with Information Security and others to understand what needs to be handled by the platforms we support
- Construct Infrastructure as Code routines that ensure cloud services have the configuration needed for ongoing support
- Support legacy environments while working with teams to migrate to DevOps practices and cloud adoption

What We’re Looking For
- You’re curious, passionate, authentic, and accountable. These are our values and they influence everything we do.
- You have a Bachelor's degree in computer science, information technology, or a similar quantitative field.
- You have a passion for automation and for empowering others through self-help.
- You have experience writing scripts and/or APIs in a modern language (Python, Go, PowerShell, etc.).
- You have experience delivering solutions in one or more public clouds, e.g., Microsoft Azure or Amazon AWS.
- You have familiarity with Continuous Integration and Continuous Delivery (CI/CD).
- You have familiarity with fundamental cloud, security, networking, and distributed computing concepts.
- You have familiarity administering one of the following platforms: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, or Tomcat.

The Nice-to-Haves
- Experience providing services as a platform that other teams can build from
- Experience with monitoring and writing self-healing routines to ensure platform uptime
- Experience with Python or PowerShell: writing scripts, applications, or APIs
- Experience with Ansible, Terraform, or other Infrastructure as Code/configuration management tools
- Experience developing and managing applications in Microsoft’s Azure cloud
- Experience with Git, GitHub, and GitHub Actions for source control and Continuous Integration/Delivery
- Experience supporting applications in an enterprise environment on both Linux and Windows
- Experience working in an Agile, sprint-based environment
- Knowledge of the use and/or administration of Kubernetes

Other Knowledge, Skills, And Abilities
- Strong oral and written communication skills
- Strong prioritization and analytical skills
- Ability to work independently and as part of a global team
- Ability to manage time across multiple projects
- Ability to communicate designs and decisions to peers and internal customers
- Ability to produce clear and concise system and process documentation
- Ability and willingness to participate in an after-hours on-call rotation

Required Skills: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, Tomcat, Python, PowerShell, CI/CD
Cloud: Azure
Experience: 3 to 7 years

Diverse and Inclusive
At SAS, it’s not about fitting into our culture – it’s about adding to it. We believe our people make the difference. Our diverse workforce brings together unique talents and inspires teams to create amazing software that reflects the diversity of our users and customers. Our commitment to diversity is a priority to our leadership, all the way up to the top; and it’s essential to who we are. To put it plainly: you are welcome here.
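The responsibility above of ensuring IaC-managed cloud services carry the configuration needed for ongoing support can be sketched as a compliance step a standardized pipeline might run. The resource shape and the required tag set here are illustrative assumptions, not SAS's actual conventions.

```python
# Minimal sketch of a pipeline compliance check: verify that each
# IaC-managed resource carries the tags needed for ongoing support.
# REQUIRED_TAGS is a hypothetical example set, not a real convention.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(resources, required=REQUIRED_TAGS):
    """Map each non-compliant resource name to the sorted list of
    required tags it is missing; compliant resources are omitted."""
    report = {}
    for res in resources:
        absent = required - set(res.get("tags", {}))
        if absent:
            report[res["name"]] = sorted(absent)
    return report

if __name__ == "__main__":
    resources = [
        {"name": "vm-app-01",
         "tags": {"owner": "platform", "environment": "prod", "cost-center": "1234"}},
        {"name": "storage-logs", "tags": {"owner": "platform"}},
    ]
    # Only the non-compliant resource appears in the report.
    print(missing_tags(resources))  # {'storage-logs': ['cost-center', 'environment']}
```

A pipeline step like this can fail the build when the report is non-empty, giving stream-aligned teams immediate feedback instead of leaving misconfigured resources to be found during an incident.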

Posted 1 week ago

Apply