5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About The Role
Solutions Engineers at Confluent drive not only the early-stage evaluation within the sales process, but also play a crucial role in enabling ongoing value realization for customers, all while helping them move up the adoption maturity curve. In this role you’ll partner with Account Executives to be the key technical advisor in service of the customer. You’ll be instrumental in surfacing the customers’ stated or implicit business needs and coming up with technical designs that best meet those needs. You may find yourself at times facilitating art-of-the-possible discussions and storytelling to inspire customers to adopt new patterns with confidence, and at other times driving creative solutioning to help get past difficult technical roadblocks. Overall, we look to Solutions Engineers to be a key cog within the Customer Success Team, fostering an environment of sustained success for the customer and incremental adoption of Confluent’s technology.
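The "data in motion" idea the role centers on can be sketched in a few lines of Python. This is a toy illustration of event streaming with per-consumer offsets, using only the standard library; it is not Confluent's or Kafka's API, and the event and consumer names are made up.

```python
# Toy event-streaming sketch: producers append events to a shared log and
# each consumer tracks its own offset, so data is reacted to as it arrives
# rather than queried at rest. Illustration only; not Kafka's API.
from collections import deque

log = deque()                           # stands in for a topic/partition
offsets = {"billing": 0, "alerts": 0}   # each consumer's read position

def produce(event: dict) -> None:
    log.append(event)

def consume(consumer: str):
    """Return events this consumer has not yet seen, advancing its offset."""
    start = offsets[consumer]
    new_events = list(log)[start:]
    offsets[consumer] = len(log)
    return new_events

produce({"type": "order_created", "amount": 42})
produce({"type": "order_shipped"})

print(len(consume("billing")))  # billing sees both events
print(len(consume("billing")))  # nothing new on the second poll
print(len(consume("alerts")))   # alerts independently sees both
```

The key property, independent consumer offsets over an append-only log, is what lets many downstream systems react to the same stream at their own pace.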
What You Will Do
- Help advance new and innovative data streaming use cases from conception to go-live
- Execute on and lead technical proofs of concept
- Conduct discovery and whiteboard sessions to develop new use cases
- Provide thought leadership by delivering technical talks and workshops
- Guide customers with hands-on help and best practices to drive operational maturity of their Confluent deployment
- Analyze customer consumption trends and identify optimization opportunities
- Work closely with product and engineering teams, and serve as a key product advocate across the customer, partner, and industry ecosystem
- Forge strong relationships with key customer stakeholders and serve as a dependable partner for them

What You Will Bring
- 5+ years of Sales/Pre-Sales/Solutions Engineering or similar customer-facing experience in the software sales or implementation space
- Experience with event-driven architecture, data integration and processing techniques, database and data warehouse technologies, or related fields
- First-hand exposure to cloud architecture, migrations, deployment, and application development
- Experience with DevOps/Automation, GitOps, or Kubernetes
- Ability to read and write Java, Python, or SQL
- Clear, consistent demonstration of self-starter behavior, a desire to learn new things, and a drive to tackle hard technical problems
- Exceptional presentation and communication capabilities, with confidence presenting to a highly skilled and experienced audience, ranging from developers to enterprise architects and C-level executives

What Gives You An Edge
- Technical certifications: cloud developer/architect, data engineering and integration
- Familiarity with solution or value selling
- A challenger mindset and an ability to positively influence people's opinions

Ready to build what's next? Let’s get in motion.

Come As You Are
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives.
And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About SpotDraft
SpotDraft is an end-to-end CLM for high-growth companies. We are building a product to ensure convenient, fast, and easy contracting for businesses. We know the potential to be unlocked if legal teams are equipped with the right kind of tools and systems. So here we are, building them. Currently, customers like PhonePe, Chargebee, Unacademy, Meesho, and Cred use SpotDraft to streamline contracting within their organisations. On average, SpotDraft saves legal counsels within the company 10 hours per week and helps close deals 25% faster.

Job Summary
As a Jr. DevOps Engineer, you will be responsible for planning, building, and optimizing the cloud infrastructure and CI/CD pipelines for the applications which power SpotDraft. You will work closely with product teams across the organization and help them ship code and reduce manual processes. You will work directly with engineering leaders, including the CTO, to deliver the best experience for users by ensuring high availability of all systems. We follow the GitOps pattern to deploy infrastructure using Terraform and ArgoCD. We leverage tools like Sentry, DataDog, and Prometheus to efficiently monitor our Kubernetes clusters and workloads.

Key Responsibilities
- Develop and maintain CI/CD workflows on GitHub
- Provision and maintain cloud infrastructure on GCP and AWS using Terraform
- Set up logging, monitoring, and alerting of applications and infrastructure using DataDog and GCP
- Automate deployment of applications to Kubernetes using ArgoCD, Helm, Kustomize, and Terraform
- Design and promote efficient DevOps processes and practices
- Continuously optimize infrastructure to reduce cloud costs

Requirements
- Proficiency with Docker and Kubernetes
- Proficiency in Git
- Proficiency in any scripting language (Bash, Python, etc.)
- Experience with any of the major clouds
- Experience working on Linux-based infrastructure
- Experience with open-source monitoring tools like Prometheus
- Experience with any ingress controller (nginx, traefik, etc.)

Working at SpotDraft
When you join SpotDraft, you will be joining an ambitious team that is passionate about creating a globally recognized legal tech company. We set each other up for success and encourage everyone in the team to play an active role in building the company. An opportunity to work alongside one of the most talent-dense teams. An opportunity to build your professional network through interacting with influential and highly sought-after founders, investors, venture capitalists, and market leaders. Hands-on impact and space for complete ownership of end-to-end processes. We are an outcome-driven organisation and trust each other to drive outcomes whilst being audacious with our goals.

Our Core Values
- Our business is to delight Customers
- Be Transparent. Be Direct
- Be Audacious
- Outcomes over everything else
- Be 1% better every day
- Elevate each other
- Be passionate. Take Ownership
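The GitOps pattern this posting mentions (Terraform plus ArgoCD) boils down to a reconciliation loop: compare the desired state declared in Git with the live state in the cluster and compute what must change to converge. A minimal Python sketch of that loop, with hypothetical resource names and a deliberately simplified state model; this is an illustration of the idea, not ArgoCD's implementation.

```python
# Toy GitOps reconciliation: diff desired state (from Git) against live state
# (from the cluster) and emit the create/update/prune actions needed to
# converge. Resource names and specs are hypothetical.
desired = {"web": {"replicas": 3, "image": "web:1.4"},
           "worker": {"replicas": 2, "image": "worker:1.1"}}
live = {"web": {"replicas": 3, "image": "web:1.3"},
        "legacy": {"replicas": 1, "image": "legacy:0.9"}}

def reconcile(desired: dict, live: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name}")   # declared but missing
        elif live[name] != spec:
            actions.append(f"update {name}")   # drifted from Git
    for name in live:
        if name not in desired:
            actions.append(f"prune {name}")    # running but no longer declared
    return sorted(actions)

print(reconcile(desired, live))  # ['create worker', 'prune legacy', 'update web']
```

Running this loop continuously, rather than pushing changes imperatively, is what makes Git the single source of truth in a GitOps setup.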
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Augnito is the next-gen Voice AI powering the healthcare industry. We empower medical professionals and streamline clinical workflows with cloud-based AI speech recognition that offers ergonomic data entry with 99%+ accuracy, without the need for voice profile training, from any device, anywhere. Augnito helps streamline clinical workflows, makes healthcare intelligence securely accessible, and ensures that physicians have more time to concentrate on their primary concern: patient care. Our solutions are currently in use at more than 500 hospitals, across more than 25 countries. We don't adhere to the traditional 9-to-5 work style; instead, we are a closely-knit group of dreamers, builders, and innovators firmly committed to pushing the boundaries of what's possible in healthcare.

What You’ll Do
- Manage Cloud & Containerized Environments: Administer and optimize multi-cloud infrastructure, leveraging Docker and Kubernetes to ensure scalability, security, and high availability.
- Automate & Implement Infrastructure as Code (IaC): Streamline on-premises setups through automation and adopt IaC (Terraform, CFT) for efficient provisioning and configuration management.
- Oversee & Troubleshoot On-Prem Infrastructure: Deploy, configure, and resolve issues across Windows and Linux environments, ensuring system stability and minimal downtime.
- Enhance Monitoring & Incident Response: Set up robust monitoring and alerting systems (Prometheus, Grafana) to improve observability and respond proactively to incidents.
- Drive Continuous Learning & Innovation: Expand expertise in cloud operations, automation, and DevOps while exploring new tools and best practices under mentorship.

What You Bring
- Educational Background: Bachelor's/Master’s degree in Software Engineering, Computer Science, IT, or a related field.
- On-Prem & Cloud Expertise: 2+ years of experience managing Windows and Linux systems along with cloud infrastructure (AWS/GCP/Azure hands-on required).
- Containerization & Orchestration: Strong knowledge of Docker, Kubernetes, Terraform/CFT, kOps, and monitoring tools like Prometheus and Grafana.
- Automation & Scripting: Proficiency in Bash and Python, with experience in CI/CD pipelines (Jenkins, GitOps, etc.).
- Cloud & Infrastructure Services: Solid understanding of serverless architectures, cloud networking, storage, and automation across major cloud platforms.

Augnito India Pvt. Ltd. is an equal opportunities employer. We are committed to providing equal opportunities throughout employment, including in the recruitment, training, and development of employees (including promotion, transfers, and assignments). Augnito will not tolerate any act of discrimination in the workplace, including but not limited to discrimination on the grounds of gender, gender identity, national or ethnic origin, marital or domestic partnership status, pregnancy status, carer’s responsibilities, sexual orientation, race, color, religious belief, disability, age, or any other protected characteristic. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Augnito will be based on merit, qualifications, and abilities. Our objective is to attract applications from the best possible candidates and to retain the best people.
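Monitoring-and-alerting work of the kind described above often hinges on sustained-breach rules rather than single-sample thresholds, so that one-off spikes don't page anyone. A toy Python sketch of that idea, loosely inspired by Prometheus-style "for" clauses but implementing neither its API nor its exact semantics; the metric samples are synthetic.

```python
# Toy sustained-breach alert rule: fire only when the last `for_samples`
# observations all exceed the threshold. Sample data is synthetic.
def should_alert(samples, threshold, for_samples):
    if len(samples) < for_samples:
        return False  # not enough history to confirm a sustained breach
    return all(s > threshold for s in samples[-for_samples:])

cpu = [41.0, 55.0, 93.0, 91.5, 96.2]  # percent utilization, oldest first

print(should_alert(cpu, threshold=90.0, for_samples=3))  # sustained breach
print(should_alert(cpu, threshold=90.0, for_samples=4))  # 55.0 breaks the run
```

Tuning the window length trades alert latency against noise, the same trade-off a real Prometheus `for:` duration expresses.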
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble, and Metro.

We’re looking for a Backend Engineer (.NET) who expects more from their career. This role offers an opportunity to build scalable, high-performance, multi-tenant solutions within the platform engineering team. You'll play a key role in enhancing code quality and engineering excellence, contributing to critical architecture decisions, and enabling multi-tenant capabilities across distributed systems in a data-driven environment.

Key Technical Skills Required
- Strong proficiency in C#, .NET Core, and RESTful API development.
- Experience with asynchronous programming, concurrency control, and event-driven architecture (Pub/Sub, Kafka, etc.).
- Deep understanding of object-oriented programming, data structures, and algorithms.
- Experience with unit testing frameworks and a TDD approach to development.
- Hands-on experience with Docker and Kubernetes (K8s) for containerized applications.
- Strong knowledge of performance tuning, security best practices, and observability (monitoring/logging/alerting).
- Experience with CI/CD pipelines, GitOps workflows, and infrastructure as code (Terraform, Helm, or similar).
- Exposure to multi-tenant architectures, with a strong understanding of NFRs, including tenant isolation strategies, secure resource partitioning, performance profiling across tenants, shared vs. isolated resource models, and scalable, resilient design patterns for onboarding and operating multiple tenants concurrently.
- Proficiency in relational databases (PostgreSQL preferred) and exposure to NoSQL solutions.

What You Can Expect From Us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.

Our approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.
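The asynchronous, event-driven patterns the skills list names (Pub/Sub, Kafka) can be illustrated with a tiny in-process publish/subscribe loop. Although the role itself is .NET-focused, it is sketched here in Python with asyncio for self-containment; it stands in for no real broker, and the topic and message names are made up.

```python
# Toy in-process Pub/Sub: subscribers get their own queue per topic, and
# publishing fans a message out to every subscriber queue asynchronously.
import asyncio

subscribers = {}  # topic -> list of subscriber queues

def subscribe(topic):
    q = asyncio.Queue()
    subscribers.setdefault(topic, []).append(q)
    return q

async def publish(topic, message):
    # Fan out to every queue subscribed to this topic
    for q in subscribers.get(topic, []):
        await q.put(message)

async def main():
    inbox = subscribe("orders")
    await publish("orders", "order-1 created")
    await publish("orders", "order-1 shipped")
    return [await inbox.get(), await inbox.get()]

received = asyncio.run(main())
print(received)
```

The decoupling shown here, where publishers know nothing about who consumes, is the core property that real brokers like Kafka provide at scale with durability and ordering guarantees.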
For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)
Posted 1 month ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position: OpenShift Engineer
Experience: Maximum 7 years

Key Responsibilities:
- Design, implement, and manage Red Hat OpenShift Container Platform environments, ensuring reliable operation and optimal performance.
- Develop and maintain automation scripts and tools using technologies such as Ansible, Terraform, and scripting languages to enhance efficiency in deployment, management, and scaling of OpenShift clusters.
- Continuously monitor OpenShift infrastructure and deployed applications, proactively addressing potential issues and providing solutions to minimize downtime.
- Work closely with application development teams to facilitate seamless deployments, scalability, and performance improvements within containerized environments.
- Create and regularly update detailed documentation, including system configurations, troubleshooting methodologies, and best practices, to ensure operational clarity and knowledge transfer.

Skills Required:
- Extensive hands-on experience with Red Hat OpenShift Container Platform (OCP).
- Strong proficiency in Kubernetes, Docker, and related container orchestration technologies.
- Familiarity with scripting languages such as Bash and Python, and with automation tools like Ansible and Terraform.
- Solid knowledge of Linux system administration, networking fundamentals, and infrastructure security principles.
- Comprehensive understanding of CI/CD pipelines, GitOps practices, and DevOps methodologies.
- Exceptional analytical and troubleshooting skills combined with the ability to solve complex technical problems efficiently.
EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
8.0 - 13.0 years
22 - 37 Lacs
Bengaluru
Work from Office
We are hiring for a Technical Architect position requiring a deep understanding of designing cloud-native architecture. Candidates should have strong experience in AWS (primary), microservices, serverless architecture, REST APIs, GraphQL, event-driven systems, and scalable, containerized SaaS platforms, and will drive the architectural vision across design, development, deployment, monitoring, and optimization.

Role: Technical Architect (Designing Cloud-Native Architecture)
Location: Bangalore
Experience: 8 - 15 years

Responsibilities:
- Design and implement cloud-native, scalable, and secure architectures using AWS (Azure/GCP experience is a plus).
- Architect and guide the development of microservices, serverless applications, REST APIs, and GraphQL interfaces.
- Design event-driven systems and containerized SaaS platforms with high availability and resilience.
- Lead technical design reviews and ensure alignment with business and technical goals.
- Collaborate with DevOps teams to establish CI/CD pipelines and GitOps-based infrastructure.
- Apply Infrastructure as Code (IaC) principles using tools such as Terraform and YAML.
- Promote engineering best practices, including security, performance optimization, and monitoring.
- Champion secure coding practices, including OAuth, JWT, and data encryption strategies.
- Support and mentor engineering teams through the full development cycle, from design to deployment and monitoring.
- Leverage monitoring and observability tools like CloudWatch, Prometheus, and Grafana to ensure platform reliability.
- Embrace and promote the use of AI developer tools (e.g., GitHub Copilot, CodeWhisperer) for productivity enhancement.
- (Optional) Work on specialized projects involving media workflows (image, 3D, video) or collaboration tools.

Requirements:
- Proven experience designing cloud-native architectures, primarily on AWS (Azure or GCP is a plus).
- Strong hands-on expertise in:
  - Programming languages: Java, Python, Node.js, TypeScript
  - Databases: PostgreSQL, MongoDB, DynamoDB
  - CI/CD pipelines: GitOps, Terraform, YAML
- Deep understanding of microservices, serverless, and event-driven paradigms.
- Strong foundation in security best practices, including authentication and encryption.
- Experience with monitoring and observability tools like CloudWatch, Prometheus, and Grafana.

Interested candidates can share their CV at sayali.dhomse@acxtech.co.in or +91 8356033048.
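The secure-coding items above mention JWT; the signing scheme behind an HS256 JWT is essentially an HMAC over the base64url-encoded header and payload. A minimal Python sketch using only the standard library; the secret and claims are made-up examples, and a real system should use a vetted JWT library rather than hand-rolled code.

```python
# Sketch of HS256-style token signing/verification: header.payload.signature,
# where the signature is HMAC-SHA256 over "header.payload". Illustration only;
# the secret and claims are hypothetical.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # URL-safe base64 without padding, as JWTs use
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sig, expected)

secret = b"demo-secret"  # hypothetical key, for illustration only
token = sign_token({"sub": "user-123", "scope": "read"}, secret)
forged = token.rsplit(".", 1)[0] + "." + "A" * 43  # wrong signature

print(verify_token(token, secret))
print(verify_token(forged, secret))
```

The `compare_digest` call is the non-obvious detail: naive `==` comparison of signatures can leak timing information to an attacker.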
Posted 1 month ago
2.2 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: DevOps Engineer (Kubernetes)
CTC: 4-5.5 LPA
Location: Noida Sector-62

Job Summary
We are looking for a skilled DevOps Engineer with Kubernetes expertise to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable cloud infrastructure, automating deployments, and ensuring high availability of applications. You will work closely with development and operations teams to optimize CI/CD pipelines and enhance system performance.

Key Responsibilities
- Design, deploy, and manage Kubernetes clusters for containerized applications.
- Automate application deployment and infrastructure provisioning.
- Monitor and optimize Kubernetes workloads for scalability, security, and performance.
- Implement and manage CI/CD pipelines using Jenkins or GitHub CI/CD.
- Ensure high availability, disaster recovery, and fault tolerance of cloud-based systems.
- Collaborate with development teams to containerize applications and improve software delivery processes.
- Implement and manage observability tools like Prometheus, Grafana, and the ELK Stack for logging and monitoring.
- Manage security policies, RBAC (Role-Based Access Control), and network policies within Kubernetes.
- Optimize cloud costs and resource utilization in Kubernetes environments.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
- 2.2-3 years of experience in DevOps, cloud, AWS, and infrastructure engineering.
- Hands-on experience with Kubernetes (K8s), Docker, and container orchestration.
- Proficiency in cloud platforms like AWS (preferred).
- Strong scripting skills in Bash, Python, or Go.
- Experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Knowledge of networking concepts, security best practices, and microservices architecture.
- Familiarity with GitOps workflows and tools like ArgoCD or Flux.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
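The Kubernetes RBAC work named in the responsibilities reduces to a simple question: does any role bound to a subject grant a given verb on a given resource? A toy Python model of that check follows; the roles, bindings, and user names are hypothetical, and this is a conceptual sketch, not the Kubernetes API.

```python
# Simplified Kubernetes-style RBAC evaluation: roles grant (verb, resource)
# pairs, bindings attach roles to users, and a request is allowed only if
# some bound role permits it. All data here is hypothetical.
ROLES = {
    "pod-reader": {("get", "pods"), ("list", "pods"), ("watch", "pods")},
    "deployer":   {("create", "deployments"), ("update", "deployments")},
}

BINDINGS = {  # user -> roles bound to them
    "alice": ["pod-reader"],
    "bob":   ["pod-reader", "deployer"],
}

def is_allowed(user, verb, resource):
    return any((verb, resource) in ROLES[r] for r in BINDINGS.get(user, []))

print(is_allowed("alice", "list", "pods"))           # allowed by pod-reader
print(is_allowed("alice", "create", "deployments"))  # denied: no deployer role
print(is_allowed("bob", "create", "deployments"))    # allowed by deployer
```

Real RBAC adds namespaces, groups, and wildcard rules, but the allow-if-any-rule-matches, deny-by-default shape is the same.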
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview
We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience with Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
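Networking expertise of the kind listed (VPCs, subnets, security groups) often starts with CIDR arithmetic, the kind of layout planning done before any Terraform is written. A small Python sketch using the standard library's `ipaddress` module; the /16 VPC and /20 subnet sizes are illustrative examples, not a recommended production layout.

```python
# Carve a hypothetical VPC CIDR into equal subnets and check membership,
# using only the stdlib ipaddress module. Sizes are illustrative.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Splitting a /16 into /20s adds 4 prefix bits -> 2**4 = 16 subnets
subnets = list(vpc.subnets(new_prefix=20))
print(len(subnets))              # 16
print(subnets[0])                # 10.0.0.0/20
print(subnets[0].num_addresses)  # 4096 addresses per /20

# Check whether a host address falls inside a given subnet
print(ipaddress.ip_address("10.0.5.7") in subnets[0])  # True
```

Sanity-checking a planned layout this way before encoding it in Terraform catches overlapping or undersized ranges early, when they are still cheap to change.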
Posted 1 month ago
10.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Who Are We❓
Step into the world of Mrsool—where convenience meets innovation! As one of the largest delivery platforms in the Middle East and North Africa (MENA) region, Mrsool has captivated users with its unique and seamless experience, earning it the highest ratings among all major delivery platforms on both Apple's App Store and Google's Play Store. What sets Mrsool apart is its commitment to providing an unmatched "order anything from anywhere" experience. Using Generative AI, we analyze customer instructions in real-time and search across 100,000+ restaurants and stores to find exactly what they need. Our cutting-edge technology, combined with a vast fleet of dedicated on-demand couriers, ensures fast and reliable delivery—no matter how far or remote the location may be. Our commitment to a flawless, personalized experience has earned the trust of millions across the region, making Mrsool the go-to delivery app for a generation that demands both convenience and excellence. Whether it's a late-night craving, a forgotten item, or a special gift for a loved one, Mrsool is here to deliver, quite literally. We take pride in the convenience we offer, empowering you to get what you need when you need it, all at the tap of a button.

The Job in a Nutshell💡
We are seeking a skilled and experienced DevOps Lead to drive and optimize our software development and deployment pipelines. The ideal candidate will have strong expertise in automation, CI/CD, cloud infrastructure, and security best practices. You will collaborate closely with development, operations, and security teams to enhance system reliability, scalability, and efficiency. If you're eager to take on this rewarding opportunity, we'd love to hear from you. Apply today!
What You Will Do💡
- Lead the design, implementation, and maintenance of scalable, secure, and resilient DevOps infrastructure
- Architect and manage CI/CD pipelines to streamline software development and deployment
- Oversee cloud infrastructure (AWS) and ensure optimal performance and cost efficiency
- Implement and maintain infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible
- Monitor system performance, security, and uptime using advanced observability tools
- Automate repetitive tasks to improve efficiency and reduce human error
- Collaborate with software engineers to enhance development workflows and ensure smooth production deployments
- Define and enforce DevOps best practices, security policies, and compliance standards
- Manage incident response and troubleshoot production issues to ensure high availability
- Mentor and guide junior DevOps engineers to build a high-performing DevOps culture

Requirements

What Are We Looking For❓
- Bachelor's/Master's degree in Computer Science, IT, or a related field
- 10+ years of experience in a DevOps role
- Strong expertise in cloud platforms (AWS) and container orchestration (Kubernetes, Docker)
- Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, CircleCI, etc.)
- Proficiency in scripting languages (Python, Bash, or Shell) and automation frameworks
- Deep understanding of monitoring, logging, and alerting tools (Prometheus, Grafana, ELK, Datadog, etc.)
- Experience with configuration management tools like Ansible, Puppet, or Chef
- Strong knowledge of networking, security best practices, and compliance frameworks
- Experience with version control systems (Git) and GitOps principles
- Excellent problem-solving, communication, and leadership skills

Who Will Excel❓
- Certifications in AWS, Kubernetes, or DevOps-related domains
- Experience with serverless computing and microservices architecture
- Familiarity with Service Mesh (Istio, Linkerd) and API Gateway technologies
- Experience with Chaos Engineering and disaster recovery planning

Benefits

What We Offer You❗
- Inclusive and Diverse Environment: We foster an inclusive and diverse workplace that values innovation and provides flexibility
- Competitive Compensation: Our compensation packages are competitive and include potential share options. Additionally, you will benefit from a performance-based commission/incentive structure, rewarding your achievements
- Personal Growth and Development: We are committed to your professional development, offering regular training and an annual learning stipend to help you advance your career in a fast-paced, dynamic environment
- Autonomy and Mentorship: You'll enjoy a degree of autonomy in your role, supported by mentorship and ambitious goals that drive both your personal success and the company's growth.
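Incident response and automation against flaky infrastructure, both named in this role, commonly lean on retries with capped exponential backoff and jitter so that many failing clients don't retry in lockstep. A small Python sketch of the delay schedule; the base, cap, and seed values are illustrative only, and the seed exists purely to make the sketch reproducible.

```python
# Capped exponential backoff with full jitter: each retry picks a random
# delay between 0 and min(cap, base * 2**attempt). Values are illustrative.
import random

def backoff_delays(attempts, base=0.5, cap=30.0, seed=42):
    """Yield one jittered delay (in seconds) per retry attempt."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield rng.uniform(0, ceiling)

delays = list(backoff_delays(6))
print(len(delays))
print(all(0 <= d <= 30.0 for d in delays))
```

The jitter is the part people forget: without it, synchronized retries from many clients can re-create the very overload (a "thundering herd") that caused the failures.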
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

Who We Are:
Airia is an enterprise AI full-stack platform to quickly and securely modernize all workflows, deploy industry-leading models, provide instant time to value, and create impactful ROI. Airia provides complete AI lifecycle integration, protects corporate data, and simplifies AI adoption across the enterprise.

Who You Are:
You are a proactive, solutions-driven engineer who thrives in high-speed environments where change is constant and growth is exponential. You’re energized by the challenge of building infrastructure from scratch, and you don’t shy away from ambiguity or responsibility. You take ownership, solve problems hands-on, and move fast. You’re excited about working at the intersection of DevOps and AI, supporting cutting-edge applications that directly solve real customer problems.

What You Will Do:
As a DevOps Engineer at Airia, you will design, build, and maintain scalable cloud infrastructure on Azure, leveraging tools like Terraform for automation. You’ll develop and manage CI/CD pipelines using GitHub Actions, orchestrate containerized applications with Kubernetes, and implement monitoring solutions with Prometheus and Grafana. Collaborating closely with cross-functional teams, you’ll integrate DevOps practices into the development process, ensuring security, compliance, and high performance. Additionally, you’ll troubleshoot infrastructure issues, document processes, and continuously seek ways to optimize and improve the overall system.

Core Responsibilities
- Design and Implement Infrastructure as Code (IaC): Utilize Terraform to automate and manage infrastructure across Azure and Cloudflare, ensuring consistent and scalable deployments.
- Orchestrate Containerized Applications: Leverage Kubernetes for deploying, scaling, and managing containers, ensuring high availability and resilience.
Monitor, Optimize, and Ensure Observability of System Performance: Implement solutions using Prometheus and Grafana to track and optimize system performance, identify bottlenecks, and maintain system health. Collaborate on CI/CD Pipeline Development: Develop and maintain continuous integration and continuous deployment pipelines, automating the software delivery process and streamlining workflows. Ensure Security and Compliance (DevSecOps): Apply best practices for security and compliance within the DevOps lifecycle, including managing secrets, conducting security scans, and adhering to regulatory requirements. Troubleshoot and Resolve Infrastructure and Application Issues: Diagnose and resolve infrastructure-related problems across development and production environments, ensuring minimal disruption. What We Need From You BS/MS in Computer Science, Information Technology, or a related Engineering field 5+ years of industry experience as a DevOps Engineer or in a similar role, supporting cloud-based infrastructure and automation Infrastructure as Code (IaC): Proven experience with Terraform for automating infrastructure provisioning and management Containerization and Orchestration: Deep understanding of Kubernetes and Docker for deploying, managing, and scaling containerized applications Cloud Experience: Hands-on experience with Azure or AWS in production environments GitOps: Practical experience with GitOps (preferably FluxCD) Monitoring and Observability: Hands-on experience using Prometheus and Loki for monitoring, alerting, and visualizing system performance metrics CI/CD Pipelines: Strong experience in designing, maintaining, and automating CI/CD pipelines (preferably with GitHub Actions) Scripting and Automation: Proficiency in scripting languages like Bash, Python, or PowerShell to automate tasks and improve efficiency Security Knowledge: Experience implementing security measures within the DevOps lifecycle, including managing secrets and ensuring 
compliance Problem-Solving Skills: Ability to troubleshoot and resolve complex infrastructure issues in a timely and efficient manner Communication Skills: Ability to work effectively within a cross-functional team, sharing knowledge and collaborating on projects. Nice To Have Experience with DevSecOps, security compliance, and penetration testing Familiarity with Active Directory, cloud networking, and Azure-specific networking setups Exposure to high-load systems and MLOps pipelines Skills: azure,cloudflare,ci,gitops,bash,infrastructure,kubernetes,prometheus,cd,cloud,github actions,docker,grafana,devops,powershell,security,python,terraform
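The monitoring responsibilities above (Prometheus and Grafana) boil down to evaluating alerting rules over recent samples. As a rough, stdlib-only illustration, here is a Python sketch of a sliding-window error-rate check; the class name, window size, and 10% threshold are invented for the example and are not part of any Prometheus API.

```python
from collections import deque

class ErrorRateAlert:
    """Toy sliding-window error-rate check, loosely mimicking a
    Prometheus-style alerting rule such as
    rate(http_errors[5m]) / rate(http_requests[5m]) > 0.05."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # True means the request failed
        self.threshold = threshold

    def observe(self, failed: bool) -> None:
        self.samples.append(failed)

    def firing(self) -> bool:
        # No data yet: nothing to alert on.
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold

alerter = ErrorRateAlert(window=50, threshold=0.10)
for i in range(50):
    alerter.observe(i % 5 == 0)   # 20% of requests fail
print(alerter.firing())  # 20% > 10%, so the alert fires
```

In a real deployment this logic lives in Prometheus alerting rules and Grafana panels rather than application code; the sketch only shows the shape of the computation.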
Posted 1 month ago
0 years
0 Lacs
India
On-site
Job Description: As an L3 AWS Support Engineer, you will be responsible for providing advanced technical support for complex AWS-based solutions. You will troubleshoot and resolve critical issues, architect solutions, and provide technical leadership to the support team.

Key Responsibilities:
- Architectural Oversight: Design, implement, and optimize cloud architectures for performance, security, and scalability; conduct Well-Architected Framework reviews
- Complex Troubleshooting: Resolve critical issues involving hybrid environments, multi-region setups, and service interdependencies; debug Lambda functions, API Gateway configurations, and other advanced AWS services
- Security: Implement advanced security measures like GuardDuty, AWS WAF, and Security Hub; conduct regular security audits and compliance checks (e.g., SOC 2, GDPR)
- Automation & DevOps: Develop CI/CD pipelines using CodePipeline, Jenkins, or GitLab; automate infrastructure scaling, updates, and monitoring workflows; automate the provisioning of EKS clusters and associated AWS resources using Terraform or CloudFormation; develop and maintain Helm charts for consistent application deployments; implement GitOps workflows
- Disaster Recovery & High Availability: Design and test failover strategies and disaster recovery mechanisms for critical applications
- Cluster Management and Operations: Design, deploy, and manage scalable and highly available EKS clusters; manage Kubernetes objects like Pods, Deployments, StatefulSets, ConfigMaps, and Secrets; implement and manage Kubernetes resource scheduling, scaling, and lifecycle management
- Team Leadership: Provide technical guidance to Level 1 and 2 engineers; create knowledge-sharing sessions and maintain best-practices documentation
- Cost Management: Implement resource tagging strategies and cost management tools to reduce operational expenses

Required Skills and Qualifications:
- Technical Skills: Deep understanding of AWS core services and advanced features; strong expertise in AWS automation, scripting (Bash, Python, PowerShell), and the CLI; experience with AWS CloudFormation and Terraform; knowledge of AWS security best practices, identity and access management, and networking
- Capacity Planning: Analyze future resource needs and plan capacity accordingly
- Performance Optimization: Identify and resolve performance bottlenecks
- Migration and Modernization: Lead complex migration and modernization projects
- Soft Skills: Excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work independently and as part of a team; customer-focused approach

Certifications (Preferred):
- AWS Certified Solutions Architect - Professional
- AWS Certified DevOps Engineer - Professional
- AWS Certified Security - Specialty
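The cost-management duties above lean on resource tagging. The sketch below illustrates the idea of a tag-compliance audit in plain Python; the required tag set and the record shape are hypothetical stand-ins for what you might assemble from AWS's tagging APIs, not actual AWS SDK calls.

```python
# Hypothetical tagging policy; real policies vary per organization.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged_resources(resources):
    """Return ARNs of resources missing any required tag.

    `resources` is a list of dicts shaped roughly like entries you could
    assemble from AWS resource-tagging output (illustrative, not an SDK type).
    Tag keys are compared case-insensitively.
    """
    missing = []
    for res in resources:
        tags = {t.lower() for t in res.get("Tags", {})}
        if not REQUIRED_TAGS <= tags:  # subset check: are all required tags present?
            missing.append(res["ResourceARN"])
    return missing

inventory = [
    {"ResourceARN": "arn:aws:ec2:...:instance/i-1",
     "Tags": {"Owner": "data-team", "Cost-Center": "42", "Environment": "prod"}},
    {"ResourceARN": "arn:aws:ec2:...:instance/i-2",
     "Tags": {"Owner": "data-team"}},
]
print(untagged_resources(inventory))  # ['arn:aws:ec2:...:instance/i-2']
```

A report like this is typically fed into cost dashboards or used to drive remediation tickets; the placeholder `...` in the ARNs is deliberate.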
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: As an L3 AWS Support Engineer, you will be responsible for providing advanced technical support for complex AWS-based solutions. You will troubleshoot and resolve critical issues, architect solutions, and provide technical leadership to the support team.

Key Responsibilities:
- Architectural Oversight: Design, implement, and optimize cloud architectures for performance, security, and scalability; conduct Well-Architected Framework reviews
- Complex Troubleshooting: Resolve critical issues involving hybrid environments, multi-region setups, and service interdependencies; debug Lambda functions, API Gateway configurations, and other advanced AWS services
- Security: Implement advanced security measures like GuardDuty, AWS WAF, and Security Hub; conduct regular security audits and compliance checks (e.g., SOC 2, GDPR)
- Automation & DevOps: Develop CI/CD pipelines using CodePipeline, Jenkins, or GitLab; automate infrastructure scaling, updates, and monitoring workflows; automate the provisioning of EKS clusters and associated AWS resources using Terraform or CloudFormation; develop and maintain Helm charts for consistent application deployments; implement GitOps workflows
- Disaster Recovery & High Availability: Design and test failover strategies and disaster recovery mechanisms for critical applications
- Cluster Management and Operations: Design, deploy, and manage scalable and highly available EKS clusters; manage Kubernetes objects like Pods, Deployments, StatefulSets, ConfigMaps, and Secrets; implement and manage Kubernetes resource scheduling, scaling, and lifecycle management
- Team Leadership: Provide technical guidance to Level 1 and 2 engineers; create knowledge-sharing sessions and maintain best-practices documentation
- Cost Management: Implement resource tagging strategies and cost management tools to reduce operational expenses

Required Skills and Qualifications:
- Technical Skills: Deep understanding of AWS core services and advanced features; strong expertise in AWS automation, scripting (Bash, Python, PowerShell), and the CLI; experience with AWS CloudFormation and Terraform; knowledge of AWS security best practices, identity and access management, and networking
- Capacity Planning: Analyze future resource needs and plan capacity accordingly
- Performance Optimization: Identify and resolve performance bottlenecks
- Migration and Modernization: Lead complex migration and modernization projects
- Soft Skills: Excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work independently and as part of a team; customer-focused approach

Certifications (Preferred):
- AWS Certified Solutions Architect - Professional
- AWS Certified DevOps Engineer - Professional
- AWS Certified Security - Specialty
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview

We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking issues, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like Argo CD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.

Skills: GitLab CI/CD, CI/CD, cloud security, Infrastructure-as-Code (IaC), Terraform, GitHub Actions, Windows Server, DevOps, security, cloud infrastructure, Linux, Puppet, AWS, Chef, scripting (Python, Bash, PowerShell), infrastructure, Azure, cloud, networking, GCP, automation, Kubernetes, log management, Ansible, Jenkins, containerization, monitoring tools (Datadog, Prometheus, ELK)
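The CI/CD pipelines named above are, at bottom, dependency graphs of stages. A minimal sketch using Python's standard-library `graphlib` shows how stage ordering (the role played by `needs:` in GitHub Actions or GitLab CI) can be resolved; the four stage names are invented for illustration.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical pipeline: each stage maps to the stages it depends on,
# analogous to `needs:` in GitHub Actions or `needs` in GitLab CI.
stages = {
    "build":  set(),
    "test":   {"build"},
    "scan":   {"build"},
    "deploy": {"test", "scan"},
}

# static_order() yields stages so every dependency runs before its dependents.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

`TopologicalSorter` also supports incremental scheduling (`prepare`/`get_ready`/`done`), which maps naturally onto running independent stages like `test` and `scan` in parallel.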
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Category: Administration
Main location: India, Tamil Nadu, Chennai
Position ID: J0625-1026
Employment Type: Full Time

Position Description:

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: DevOps Automation
Position: Senior Systems Engineer/Lead Analyst
Experience: 7+ yrs
Category: IT Infrastructure
Main location: Bangalore
Position ID: J0625-1026
Employment Type: Full Time
Qualification: Bachelor's degree in Computer Science or a related field or higher, with a minimum of 3 years of relevant experience.

Job Description:
- Deep knowledge of source version control (GitHub, GitLab, Bitbucket, etc.)
- Knowledge of container registry management (Quay Registry, GitLab registry, etc.)
- Deep knowledge of Kubernetes, manifests, and resource management (Secret, ConfigMap, StorageClass, Service, ReplicaSet, Deployment, Ingress, Route, StatefulSet, Persistent Volumes, etc.)
- Knowledge of cloud-native CI/CD (continuous integration/continuous delivery) with Jenkins, Tekton, etc.
- Knowledge of containerization (Docker, Podman, etc.)
- Building and managing cloud-native applications using Podman, Docker, Buildah, Buildpacks, etc.
- Knowledge of application deployment using GitOps/Argo CD
- Deep knowledge of creating application images and k8s deployment scripts, with extensive usage of Helm, Kustomize, and OpenShift templates
- Experience with technologies such as SSO, SAML 2.0, OAuth 2.0, OpenID Connect, and Role-Based Access Control (RBAC)
- Experience with Azure AD
- Experience with JavaScript, .NET, Java, Node.js, Quarkus, web servers, application servers, directory servers, etc.
- Microservice architectures
- Knowledge of application deployment strategies (Blue/Green, A/B, etc.)
- Knowledge of container security
- Agile practice experience
- Provide clear documentation for activities and possess strong communication skills

Must-Have Skills:
- DevOps
- GitOps
- DevSecOps
- Service mesh
- Helm, Kustomize
- Kubernetes
- Linux (Red Hat Enterprise Linux, CentOS, Debian, or Ubuntu)
- Git

Good-to-Have Skills:
- Excellent customer-interfacing skills
- Excellent written and verbal communication skills
- Participating in daily standups and weekly reviews
- Strong attention to detail and outstanding analytical and problem-solving skills
- Understanding of business and emerging technologies in the relevant industry (Banking/CIAM), with a strong understanding of market and technology trends in areas of specialization

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Life at CGI: It is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.

Skills: English, Linux, OpenShift, Server - Linux, Unix
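The deployment strategies the posting above calls for (Blue/Green, A/B) can be sketched abstractly. Below is a toy Python model of a blue/green cutover; in a real Kubernetes or OpenShift setup the "switch" is a Service selector or Route change, not an in-process variable, and the class and method names here are invented for illustration.

```python
class BlueGreenRouter:
    """Minimal sketch of blue/green deployment: traffic points at one of
    two identical environments; a new version is deployed to the idle
    environment, then traffic is flipped in a single atomic step, which
    also makes rollback a second flip."""

    def __init__(self):
        self.live, self.idle = "blue", "green"

    def deploy(self, version: str) -> str:
        # Releases always land on the environment NOT serving traffic.
        return f"deploying {version} to {self.idle}"

    def cutover(self) -> str:
        # Atomic swap: the freshly deployed environment becomes live.
        self.live, self.idle = self.idle, self.live
        return self.live

router = BlueGreenRouter()
print(router.deploy("v2"))   # deploying v2 to green
print(router.cutover())      # green
```

A/B (or canary) differs in that traffic is split by percentage or by request attribute rather than flipped wholesale.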
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Kenvue is currently recruiting for a: Sr Engineer

What we do: At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent.

Who We Are: Our global team is ~22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights and innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage - and have brilliant opportunities waiting for you! Join us in shaping our future - and yours.

Role reports to: Senior Manager
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What you will do

Who we are: At Kenvue, we believe there is extraordinary power in everyday care. Built on over a century of heritage and propelled forward by science, our iconic brands—including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID®—are category leaders trusted by millions of consumers who use our products to improve their daily lives. Our employees share a digital-first mindset, an approach to innovation grounded in deep human insights, and a commitment to continually earning a place for our products in consumers’ hearts and homes.

What will you do: The Senior Engineer, Kubernetes is a hands-on engineer responsible for designing, implementing, and managing the Cloud Native Kubernetes-based platform ecosystem and solutions for the organization. This includes developing and implementing containerization strategies and developer workflows, designing and deploying the Kubernetes platform, and ensuring high availability and scalability of Kubernetes infrastructure aligned with modern GitOps practices.

Key Responsibilities:
- Implement platform capabilities and the containerization plan using Kubernetes, Docker, service mesh, and other modern containerization tools and technologies.
- Design and collaborate with other engineering stakeholders in developing architecture patterns and templates for the application runtime platform, such as K8s cluster topology, traffic shaping, API, CI/CD, and observability, aligned with DevSecOps principles.
- Automate Kubernetes infrastructure deployment and management using tools such as Terraform, Jenkins, and Crossplane to develop self-service platform workflows.
- Serve as a member of the microservices platform team, working closely with the Security and Compliance organization to define controls.
- Develop self-service platform capabilities focused on developer workflows such as API, service mesh, external DNS, cert management, and K8s lifecycle management in general.
- Participate in a cross-functional IT Architecture group that reviews designs from an enterprise cloud platform perspective.
- Optimize Kubernetes platform infrastructure for high availability and scalability.

What we are looking for

Qualifications:
- Bachelor’s Degree required, preferably in a STEM field.
- 5+ years of progressive experience in a combination of development and design in areas of cloud computing.
- 3+ years of experience in developing cloud-native platform capabilities based on Kubernetes (preferably EKS and/or AKS).
- Strong Infrastructure as Code (IaC) experience on public cloud (AWS and/or Azure)
- Experience working on large-scale, highly available, cloud-native, multi-tenant infrastructure platforms on public cloud, preferably in a consumer business
- Expertise in building platforms using tools like Kubernetes, Istio, OpenShift, Linux, Helm, Terraform, and CI/CD.
- Experience working on high-scale, critically important products running across public clouds (AWS, Azure) and private data centers is a plus.
- Strong hands-on development experience with one or more of the following languages: Go, Scala, Java, Ruby, Python
- Prior experience on a team involved in re-architecting and migrating monolith applications to microservices is a plus
- Prior experience with observability, through tools such as Prometheus, Elasticsearch, Grafana, Datadog, or Zipkin, is a plus.
- Must have a solid understanding of continuous development and deployment in AWS and/or Azure.
- Understanding of the basic Linux kernel and Windows Server operating systems
- Experience with Bash and PowerShell scripting.
- Must be results-driven, a quick learner, and a self-starter
- Cloud engineering experience is a plus.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
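Since the role above centers on Kubernetes platform templating, a small illustration of what a manifest template amounts to: `kubectl` accepts JSON as well as YAML, so a minimal Deployment manifest can be generated with nothing but the standard library. The function name, defaults, and the `demo`/`nginx` values are illustrative, not any team's actual template.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict.

    Keeping the label set in one place guarantees the selector matches the
    pod template labels, which the Kubernetes API requires.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# json.dumps(...) of this structure can be piped to `kubectl apply -f -`.
print(json.dumps(deployment_manifest("demo", "nginx:1.27"), indent=2)[:80])
```

In practice this kind of templating is what Helm charts and Kustomize overlays do declaratively; the sketch just makes the underlying data structure visible.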
Posted 1 month ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview

We are seeking an Associate Manager - Data IntegrationOps to support and assist in managing data integration and operations (IntegrationOps) programs within our growing data organization. In this role, you will help maintain and optimize data integration workflows, ensure data reliability, and support operational excellence. This position requires a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support.

- Support the management of Data IntegrationOps programs by assisting in aligning with business objectives, data governance standards, and enterprise data strategies.
- Monitor and enhance data integration platforms by implementing real-time monitoring, automated alerting, and self-healing capabilities to help improve uptime and system performance under the guidance of senior team members.
- Assist in developing and enforcing data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization.
- Support the standardization and automation of data integration workflows, including report generation and dashboard refreshes.
- Collaborate with cross-functional teams to help optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security.
- Provide assistance in Data & Analytics technology transformations by supporting full sustainment capabilities, including data platform management and proactive issue identification with automated solutions.
- Contribute to promoting a data-first culture by aligning with PepsiCo’s Data & Analytics program and supporting global data engineering efforts across sectors.
- Support continuous improvement initiatives to help enhance the reliability, scalability, and efficiency of data integration processes.
- Engage with business and IT teams to help identify operational challenges and provide solutions that align with the organization’s data strategy.
- Develop technical expertise in ETL/ELT processes, cloud-based data platforms, and API-driven data integration, working closely with senior team members.
- Assist with monitoring, incident management, and troubleshooting in a data operations environment to ensure smooth daily operations.
- Support the implementation of sustainable solutions for operational challenges by helping analyze root causes and recommending improvements.
- Foster strong communication and collaboration skills, contributing to effective engagement with cross-functional teams and stakeholders.
- Demonstrate a passion for continuous learning and adapting to emerging technologies in data integration and operations.

Responsibilities
- Support and maintain data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members.
- Assist in developing API-driven data integration solutions using REST APIs and Kafka to ensure seamless data movement across platforms.
- Contribute to the deployment and management of cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, working closely with the team.
- Help automate data pipelines and participate in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins.
- Monitor system reliability using observability tools such as Splunk, Grafana, Prometheus, and other custom monitoring solutions, reporting issues as needed.
- Assist in end-to-end data integration operations by testing and monitoring processes to maintain service quality and support global products and projects.
- Support the day-to-day operations of data products, ensuring SLAs are met and assisting in collaboration with SMEs to fulfill business demands.
- Support incident management processes, helping to resolve service outages and ensuring the timely resolution of critical issues.
- Assist in developing and maintaining operational processes to enhance system efficiency and resilience through automation.
- Collaborate with cross-functional teams like Data Engineering, Analytics, AI/ML, CloudOps, and DataOps to improve data reliability and contribute to data-driven decision-making.
- Work closely with teams to troubleshoot and resolve issues related to cloud infrastructure and data services, escalating to senior team members as necessary.
- Support building and maintaining relationships with internal stakeholders to align data integration operations with business objectives.
- Engage directly with customers, actively listening to their concerns, addressing challenges, and helping set clear expectations.
- Promote a customer-centric approach by contributing to efforts that enhance the customer experience and empower the team to advocate for customer needs.
- Assist in incorporating customer feedback and business priorities into operational processes to ensure continuous improvement.
- Contribute to the work intake and Agile processes for data platform teams, ensuring operational excellence through collaboration and continuous feedback.
- Support the execution of Agile frameworks, helping drive a culture of adaptability, efficiency, and learning within the team.
- Help align the team with a shared vision, ensuring a collaborative approach while contributing to a culture of accountability.
- Mentor junior technical team members, supporting their growth and ensuring adherence to best practices in data integration.
- Contribute to resource planning by helping assess team capacity and ensuring alignment with business objectives.
- Remove productivity barriers in an agile environment, assisting the team to shift priorities as needed without compromising quality.
- Support continuous improvement in data integration processes by helping evaluate and suggest optimizations to enhance system performance.
- Leverage technical expertise in cloud and computing technologies to support business goals and drive operational success.
- Stay informed on emerging trends and technologies, helping bring innovative ideas to the team and supporting ongoing improvements in data operations.

Qualifications
- 9+ years of technology work experience in a large-scale, global organization - CPG (Consumer Packaged Goods) industry preferred.
- 4+ years of experience in Data Integration, Data Operations, and Analytics, supporting and maintaining enterprise data platforms.
- 4+ years of experience working in cross-functional IT organizations, collaborating with teams such as Data Engineering, CloudOps, DevOps, and Analytics.
- 1+ years of leadership/management experience supporting technical teams and contributing to operational efficiency initiatives.
- 4+ years of hands-on experience in monitoring and supporting SAP BW processes for data extraction, transformation, and loading (ETL): managing process chains and batch jobs to ensure smooth data load operations and identifying failures for quick resolution; debugging and troubleshooting data load failures and performance bottlenecks in SAP BW systems; validating data consistency and integrity between source systems and BW targets.
- Strong understanding of SAP BW architecture, InfoProviders, DSOs, Cubes, and MultiProviders.
- Knowledge of SAP BW process chains and event-based triggers to manage and optimize data loads.
- Exposure to SAP BW on HANA and knowledge of SAP’s modern data platforms.
- Basic knowledge of integrating SAP BW with other ETL/ELT tools like Informatica IICS, PowerCenter, DDH, and Azure Data Factory.
- Knowledge of ETL/ELT tools such as Informatica IICS, PowerCenter, Teradata, and Azure Data Factory.
- Hands-on knowledge of cloud-based data integration platforms such as Azure Data Services, AWS Redshift, Snowflake, and Google BigQuery.
- Familiarity with API-driven data integration (e.g., REST APIs, Kafka) and supporting cloud-based data pipelines.
- Basic proficiency in Infrastructure-as-Code (IaC) tools such as Terraform, GitOps, Kubernetes, and Jenkins for automating infrastructure management.
- Understanding of Site Reliability Engineering (SRE) principles, with a focus on proactive monitoring and process improvements.
- Strong communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders.
- Ability to effectively advocate for customer needs and collaborate with teams to ensure alignment between business and technical solutions.
- Interpersonal skills to help build relationships with stakeholders across both business and IT teams.
- Customer obsession: enthusiastic about ensuring high-quality customer experiences and continuously addressing customer needs.
- Ownership mindset: willingness to take responsibility for issues and drive timely resolutions while maintaining service quality.
- Ability to support and improve operational efficiency in large-scale, mission-critical systems.
- Some experience leading or supporting technical teams in a cloud-based environment, ideally within Microsoft Azure.
- Able to deliver operational services in fast-paced, transformation-driven environments.
- Proven capability in balancing business and IT priorities, executing solutions that drive mutually beneficial outcomes.
- Basic experience with Agile methodologies, and an ability to collaborate effectively across virtual teams and different functions.
- Understanding of master data management (MDM) and data standards, and familiarity with data governance and analytics concepts.
- Openness to learning new technologies, tools, and methodologies to stay current in the rapidly evolving data space.
- Passion for continuous improvement and keeping up with trends in data integration and cloud technologies.
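Several of the duties above concern ensuring data-pipeline SLAs are met. A minimal, stdlib-only sketch of a duration-SLA check follows; the run-record shape and the two-hour threshold are invented for illustration and do not reflect the API of Informatica, Azure Data Factory, or any other scheduler named in the posting.

```python
from datetime import datetime, timedelta

def sla_breaches(runs, max_duration=timedelta(hours=2)):
    """Flag pipeline runs whose duration exceeded the SLA.

    `runs` maps a pipeline name to a (start, end) pair of datetimes;
    both the shape and the default SLA are hypothetical examples.
    Returns breaching pipeline names, sorted for stable reporting.
    """
    return sorted(name for name, (start, end) in runs.items()
                  if end - start > max_duration)

runs = {
    "sap_bw_daily_load": (datetime(2024, 1, 1, 2, 0), datetime(2024, 1, 1, 5, 30)),
    "crm_sync":          (datetime(2024, 1, 1, 2, 0), datetime(2024, 1, 1, 2, 45)),
}
print(sla_breaches(runs))  # ['sap_bw_daily_load']
```

In a real IntegrationOps setup this check would run against scheduler metadata and feed an alerting channel, with separate SLAs per pipeline.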
Posted 1 month ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
Requirements 5+ years of experience in DevOps. Proficient with Azure (compute, storage, networking, AKS) and AWS services. Strong hands-on experience with Terraform, Ansible, and Kubernetes. Experience with Argo CD, Helm, Traefik, and Consul. Solid understanding of CI/CD using Azure DevOps, GitHub, or GitLab. Familiarity with Databricks integration and management. Monitoring and observability experience with Dynatrace. Strong scripting skills (e.g., Bash, Python, PowerShell). Roles And Responsibilities CI/CD Pipeline Development: Design, implement, and maintain scalable CI/CD pipelines using GitHub Actions, GitLab CI, or Azure DevOps to automate build, test, and deployment processes. Infrastructure as Code (IaC): Utilize Terraform or Python scripting to automate infrastructure provisioning and management on Azure Cloud. Containerization & Orchestration: Deploy and manage containerized applications using Kubernetes (AKS), ensuring high availability and scalability. GitOps Implementation: Implement and manage GitOps workflows using ArgoCD to automate application deployments and maintain configuration consistency. Monitoring & Alerting: Set up and maintain monitoring and alerting systems using tools like Dynatrace, Prometheus, Grafana, or Azure Monitor to ensure system reliability and performance. Incident Management: Participate in on-call rotations to respond to and resolve production incidents promptly. Collaboration: Work closely with development and operations teams to integrate DevOps best practices and ensure smooth application delivery. About Us We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. 
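The GitOps workflow in the responsibilities above (ArgoCD keeping cluster state in sync with Git) reduces to a reconcile loop over desired-versus-live state. A minimal, tool-agnostic sketch of that diff step, with hypothetical resource names:

```python
def diff_state(desired, live):
    """Compute GitOps-style drift between desired state (from Git) and
    live state (from the cluster).

    Both arguments map resource name -> manifest (any comparable value).
    Returns the actions a reconciler would take. Illustrative only; ArgoCD
    itself works on structured Kubernetes objects, not plain dicts.
    """
    create = sorted(set(desired) - set(live))
    delete = sorted(set(live) - set(desired))
    update = sorted(k for k in desired.keys() & live.keys() if desired[k] != live[k])
    return {"create": create, "update": update, "delete": delete}
```

Running the loop repeatedly against the Git revision is what keeps configuration consistent: any manual cluster change shows up as an `update` or `delete` on the next pass.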
Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe Why work for Material? In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is a right fit for you. Here's a bit about who we are and highlights of what we offer. Who We Are & What We Care About Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft, and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science & systems approach uniquely equips us to bring a rich frame of reference to our work. A community focused on learning and making an impact. Material is an outcomes focused company. We create experiences that matter, create new value and make a difference in people's lives. What We Offer Professional Development and Mentorship. Hybrid work mode with remote friendly workplace. 
Great Place To Work Certified, 6 times in a row. Health and Family Insurance. 40+ leaves per year along with maternity & paternity leaves. Wellness, meditation and counselling sessions. (ref:hirist.tech)
Posted 1 month ago
5.0 years
5 - 9 Lacs
Cochin
On-site
We are looking for someone who thrives in automation, system observability, and high-scale operations, while also supporting CI/CD and deployment pipelines. You will blend operational execution with engineering rigor to support system reliability, incident response, and automation at scale. This role provides a unique opportunity to grow into full-fledged SRE responsibilities while working in tight coordination with our global reliability strategy. Responsibilities: Maintain, standardize, and enhance CI/CD pipelines (GitHub Actions, Azure Pipelines, GitLab). Automate testing, deployment, and rollback processes. Champion end-to-end CI/CD workflow reliability—including build validation, environment consistency, and deployment rollbacks. Deploy and manage observability tools (Datadog, Grafana, Prometheus, ELK). Assist in root cause analysis using telemetry and logs. Maintain alerting systems and participate in incident drills. Shadow and support Houston-based SRE team during follow-the-sun incident response. Create postmortem documentation for incidents and track remediation tasks. Develop scripts and tooling to reduce operational toil. Contribute to performance tuning of PostgreSQL and containerized services. Assist in distributed system optimization efforts (AKKA.NET knowledge is a bonus). Participate in rollout strategies, canary releases, and availability planning. Requirements: 5+ years in DevOps, SRE, or Infrastructure Engineering. Strong scripting ability (Python, Bash, PowerShell). Experience in managing Kubernetes clusters and container-based deployments. Working knowledge of SQL databases and performance optimization. Hands-on experience with CI/CD tools and source control systems (GitHub, GitLab). Exposure to monitoring and observability platforms (Datadog, Prometheus, ELK). Experience with incident management and postmortems. Familiarity with distributed systems (bonus: AKKA.NET or similar frameworks). 
Infrastructure as Code (Terraform) and GitOps practices. Exposure to global operations teams and 24/7 handover workflows. Every day, the oil and gas industry’s best minds put more than 150 years of experience to work to help our customers achieve lasting success. We Power the Industry that Powers the World Throughout every region in the world and across every area of drilling and production, our family of companies has provided the technical expertise, advanced equipment, and operational support necessary for success—now and in the future. Global Family We are a global family of thousands of individuals, working as one team to create a lasting impact for ourselves, our customers, and the communities where we live and work. Purposeful Innovation Through purposeful business innovation, product creation, and service delivery, we are driven to power the industry that powers the world better. Service Above All This drives us to anticipate our customers’ needs and work with them to deliver the finest products and services on time and on budget.
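The canary releases and rollback planning listed in the responsibilities follow one pattern: shift a slice of traffic, compare the canary's error rate against baseline, and abort on the first regression. A toy sketch of that decision loop (the stage percentages, tolerance, and metric shape are all assumptions):

```python
def canary_decision(baseline_error_rate, canary_error_rate, tolerance=0.01):
    """Promote only if the canary's error rate stays within `tolerance`
    of the baseline's."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"

def run_canary(stages, metrics):
    """Walk traffic stages (e.g. 5% -> 25% -> 100%); abort at the first
    bad reading.

    `metrics` maps traffic percentage -> (baseline_error_rate,
    canary_error_rate), standing in for whatever the observability
    stack (Datadog, Prometheus, ...) actually reports.
    """
    for pct in stages:
        baseline, canary = metrics[pct]
        if canary_decision(baseline, canary) == "rollback":
            return ("rollback", pct)
    return ("promoted", stages[-1])
```

Real rollouts would also bake in soak time per stage and multiple signals (latency, saturation), but the promote-or-rollback skeleton is the same.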
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
Immediate joining (WFH). InfraSingularity aims to revolutionize the Web3 ecosystem as a pioneering investor and builder. Our long-term vision is to establish ourselves as the first-of-its-kind in this domain, spearheading the investment and infrastructure development for top web3 protocols. At IS, we recognize the immense potential of web3 technologies to reshape industries and empower individuals. By investing in top web3 protocols, we aim to fuel their growth and support their journey towards decentralization. Additionally, our plan to actively build infrastructure with these protocols sets us apart, ensuring that they have the necessary foundations to operate in a decentralized manner effectively. We embrace collaboration and partnership as key drivers of success. By working alongside esteemed web3 VCs like WAGMI and more, we can leverage their expertise and collective insights to maximize our impact. Together, we are shaping the future of the Web3 ecosystem, co-investing, and co-building infrastructure that accelerates the adoption and growth of decentralized technologies. Together with our portfolio of top web3 protocols (Lava, Sei, and Anoma) and our collaborative partnerships with top protocols (EigenLayer, Avail, PolyMesh, and Connext), we are creating a transformative impact on industries, society, and the global economy. Join us on this groundbreaking journey as we reshape the future of finance, governance, and technology. About the Role We are looking for a Senior Site Reliability Engineer (SRE) to take ownership of our multi-cloud blockchain infrastructure and validator node operations. This role is critical in ensuring high performance, availability, and resilience across a range of L1/L2 blockchain protocols. If you're passionate about infrastructure automation, system reliability, and emerging Web3 technologies, we’d love to talk. 
What You’ll Do Own and operate validator nodes across multiple blockchain networks, ensuring uptime, security, and cost-efficiency. Architect, deploy, and maintain infrastructure on AWS, GCP, and bare-metal for protocol scalability and performance. Implement Kubernetes-native tooling (Helm, FluxCD, Prometheus, Thanos) to manage deployments and observability. Collaborate with our Protocol R&D team to onboard new blockchains and participate in testnets, mainnets, and governance. Ensure secure infrastructure with best-in-class secrets management (HashiCorp Vault, KMS) and incident response protocols. Contribute to a robust monitoring and alerting stack to detect anomalies, performance drops, or protocol-level issues. Act as a bridge between software, protocol, and product teams to communicate infra constraints or deployment risks clearly. Continuously improve deployment pipelines using Terraform, Terragrunt, and GitOps practices. Participate in on-call rotations and incident retrospectives, driving post-mortem analysis and long-term fixes. Our Stack Cloud & Infra: AWS, GCP, bare-metal Containerization: Kubernetes, Helm, FluxCD IaC: Terraform, Terragrunt Monitoring: Prometheus, Thanos, Grafana, Loki Secrets & Security: HashiCorp Vault, AWS KMS Languages: Go, Bash, Python, Typescript Blockchain: Ethereum, Polygon, Cosmos, Solana, Foundry, OpenZeppelin What You Bring 4+ years of experience in SRE/DevOps/Infra roles—ideally within FinTech, Cloud, or high-reliability environments. Proven expertise managing Kubernetes in production at scale. Strong hands-on experience with Terraform, Helm, and GitOps workflows. Deep understanding of system reliability, incident management, fault tolerance, and monitoring best practices. Proficiency with Prometheus and PromQL for custom dashboards, metrics, and alerting. Experience operating secure infrastructure and implementing SOC2/ISO27001-aligned practices. Solid scripting in Bash, Python, or Go. 
Clear and confident communicator—capable of interfacing with both technical and non-technical stakeholders. Nice-to-Have First-hand experience in Web3/blockchain/crypto environments. Understanding of staking, validator economics, slashing conditions, or L1/L2 governance mechanisms. Exposure to smart contract deployments or working with Solidity, Foundry, or similar toolchains. Experience with compliance-heavy or security-certified environments (SOC2, ISO 27001, HIPAA). Why Join Us? Work at the bleeding edge of Web3 infrastructure and validator tech. Join a fast-moving team that values ownership, performance, and reliability. Collaborate with protocol engineers, researchers, and crypto-native teams. Get exposure to some of the most interesting blockchain ecosystems in the world.
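The Prometheus/PromQL proficiency asked for above usually starts with validator uptime. The idea behind a PromQL expression like `avg_over_time(up[1h])` can be mimicked in plain Python for illustration (the SLO value here is an assumption, not a protocol requirement):

```python
def availability(samples):
    """Fraction of scrape samples where the target reported up (1) vs
    down (0). A plain-Python stand-in for what PromQL's
    avg_over_time(up[1h]) returns for one target."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)

def breaches_slo(samples, slo=0.999):
    """True when observed availability falls below the SLO target
    (99.9% assumed for illustration)."""
    return availability(samples) < slo
```

An alerting rule is then just this comparison evaluated continuously, firing when `breaches_slo` holds for long enough to rule out a single missed scrape.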
Posted 1 month ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Position Overview We are looking for a skilled and proactive DevOps Engineer to join our team. This role is integral to ensuring the reliability, scalability, and efficiency of our development and production environments. The ideal candidate will have experience with Terraform, Azure cloud infrastructure, Azure Kubernetes Service (AKS), and Azure DevOps. You'll be part of a collaborative team focused on automating processes, optimizing infrastructure, and supporting a robust CI/CD pipeline. Required Skills And Qualifications Proven experience with Terraform for infrastructure provisioning and management. Strong expertise in Azure, including Azure Kubernetes Service (AKS), networking, and resource management. Hands-on experience with Azure DevOps, including creating and managing pipelines. Solid understanding of containerization and orchestration using Docker and Kubernetes. Experience with monitoring tools and best practices for cloud infrastructure (Prometheus is preferred). Familiarity with GitOps principles and version control systems (e.g., Git). Strong troubleshooting skills and a proactive approach to identifying and resolving issues. (ref:hirist.tech)
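The CI/CD pipelines described above share one core behavior: run stages in order and roll back what was applied when a stage fails. A deliberately tiny, tool-agnostic sketch of that control flow (this is not an Azure DevOps API; stage names and callables are illustrative):

```python
def run_pipeline(stages, rollback):
    """Run pipeline stages in order; on the first failure, invoke
    `rollback` with the stages already completed and stop.

    `stages` is a list of (name, callable) pairs where the callable
    returns True on success. Returns (status, failed_stage, completed).
    """
    completed = []
    for name, step in stages:
        if step():
            completed.append(name)
        else:
            rollback(completed)
            return ("failed", name, completed)
    return ("succeeded", None, completed)
```

In a real Azure DevOps or GitHub Actions pipeline the stages would be build/test/deploy jobs and the rollback a redeploy of the last known-good artifact, but the sequencing logic is this simple.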
Posted 1 month ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Description Profile Description We’re seeking someone to join our team as an (Associate) AI Platform Engineer for pricing model implementation projects required to support and enhance mission-critical Credit Risk data infrastructure, as well as to contribute to strategic initiatives. Enterprise Technology Enterprise Technology & Services (ETS) delivers shared technology services for Morgan Stanley supporting all business applications and end users. ETS provides capabilities for all stages of Morgan Stanley’s software development lifecycle, enabling productive coding, functional and integration testing, application releases, and ongoing monitoring and support for over 3,000 production applications. ETS also delivers all workplace technologies (desktop, mobile, voice, video, productivity, intranet/internet) in integrated configurations that boost the personal productivity of employees. Application and end user functions are delivered on a scalable, secure, and reliable infrastructure composed of seamlessly integrated datacenter, network, compute, cloud, storage, and database functions. Architecture & Modernization Drives development of the global firm strategy to define modern architectures and guardrails to reduce legacy debt, while partnering with app dev to accelerate the adoption of modern capabilities. Software Engineering This is an Associate position that develops and maintains software solutions that support business needs. Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. 
Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What you’ll do in the role: Develop tooling and self-service capabilities for deploying AI solutions for the firm. Collaborate with other developers to enhance the developer experience when building and deploying AI applications. Have a platform mindset and build common, reusable solutions to scale Generative AI use cases using pre-trained models as well as fine-tuned models. Leverage Kubernetes/OpenShift to develop modern containerized workloads. Leverage container registries like JFrog Artifactory, container packaging/configuration management technologies like Helm & Kustomize, and GitOps deployment methods to orchestrate, manage and deploy these workloads. Integrate with capabilities such as large-scale vector stores for embeddings. Author best practices on the Generative AI ecosystem, when to use which tools, available models such as GPT, Llama, Hugging Face etc. and libraries such as LangChain. Analyse, investigate, and implement GenAI solutions focusing on Agentic Orchestration and Agent Builder frameworks. Contribute to major design decisions and product selection for building Generative AI solutions, inclusive of app authentication, service communication, state externalization, container layering strategy and immutability. Ensure AI platforms are reliable, scalable, and operational (e.g. blueprints for upgrade/release strategies such as Blue/Green; logging/monitoring/metrics; automation of system management tasks). Participate in all the team’s Agile/Scrum ceremonies. 
What you’ll bring to the role: At least 4 years’ relevant experience would generally be expected to find the skills required for this role. Strong hands-on Application Development background in at least one prominent programming language, preferably Python (Flask or FastAPI). Broad understanding of data engineering (SQL, NoSQL, Big Data, Kafka, Redis), data governance, data privacy and security. Experience in development, management, and deployment of Kubernetes workloads, preferably on OpenShift. Experience with designing, developing, and managing RESTful services for large-scale enterprise solutions. Hands-on experience with multiprocessing, multithreading, asynchronous I/O, performance profiling in at least one prominent programming language, preferably Python. Practitioner of unit testing, performance testing and BDD/acceptance testing. Understanding of OAuth 2.0 protocol for secure authorization. Proficiency with OpenTelemetry tools including Grafana, Loki, Prometheus, and Cortex. Demonstrated experience in DevOps, understanding of CI/CD (Jenkins) and GitOps. Ability to articulate technical concepts effectively to diverse audiences. Strong desire and ability to influence development teams and help them adopt AI. Demonstrated ability to work effectively and collaboratively in a global organization, across time zones, and across organizations. Understanding of deep learning and of Machine Learning frameworks such as TensorFlow or PyTorch. Understanding of Information Security and secure coding practices. Experience in building cloud and container native applications. Knowledge of DevOps and Agile practices. Excellent communication skills Desired Skills Good knowledge of Microservice based architecture, industry standards, for both public and private cloud. Good understanding of modern Application configuration techniques. Hands on experience with Cloud Application Deployment patterns like Blue/Green. 
Good understanding of State sharing between scalable cloud components (Kafka, dynamic distributed caching). Good knowledge of various DB engines (SQL, Redis, Kafka, etc) for cloud app storage. Experience building AI applications, preferably Generative AI and LLM based apps. Deep understanding of AI agents, Agentic Orchestration, Multi-Agent Workflow Automation, along with hands-on experience in Agent Builder frameworks such as LangChain and LangGraph. Experience working with Generative AI development, embeddings, fine-tuning of Generative AI models. Understanding of ModelOps/MLOps/LLMOps. Understanding of SRE techniques. What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. 
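The "large-scale vector stores for embeddings" mentioned in the role can be illustrated with a toy in-memory store doing cosine-similarity lookup. Real deployments use a dedicated store with approximate-nearest-neighbor indexing; every name below is hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class InMemoryVectorStore:
    """Toy stand-in for a production vector store: exact, brute-force
    top-k retrieval over stored embeddings."""

    def __init__(self):
        self._items = {}

    def add(self, doc_id, embedding):
        self._items[doc_id] = embedding

    def top_k(self, query, k=1):
        scored = sorted(self._items.items(),
                        key=lambda kv: cosine(query, kv[1]),
                        reverse=True)
        return [doc_id for doc_id, _ in scored[:k]]
```

Retrieval-augmented generation then reduces to: embed the user query, call `top_k`, and feed the retrieved documents to the model as context.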
We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the V Team Life. What You'll Be Doing... You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes based container platforms (EKS, OCP, OKE, and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have a high level of technical expertise and daily hands-on implementation working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and process (Jenkins, JIRA, GitLab, ArgoCD), Python, Shell Scripts, or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services. Automation and testing of different platform deployments, maintenance, and decommissioning Full Stack Development Participate in POC (Proof of Concept) technical evaluations for new technologies for use in the cloud What we're looking for... You'll need to have: Bachelor's degree or four or more years of experience. 
GitOps CI/CD workflows (ArgoCD, Flux) and working in Agile Ceremonies Model Address Jira tickets opened by platform customers Strong expertise in SDLC and Agile Development Experience in design, development, and implementation of scalable React/Node based applications (Full stack developer) Experience with development of HTTP/RESTful APIs, Microservices Experience with Serverless Lambda Development, AWS Event Bridge, AWS Step Functions, DynamoDB, Python Database experience (RDBMS, NoSQL, etc.) Familiarity integrating with existing web application portals Strong backend development experience with languages including Golang (preferred), Spring Boot, and Python. Experience with GitLab CI/CD, Jenkins, Helm, Terraform, Artifactory Strong development of K8S tools/components which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh-related issues Strong Terraform and/or Ansible and Bash scripting experience Effective code review, quality, performance tuning experience, test-driven development Certified Kubernetes Application Developer (CKAD) Excellent cross-collaboration and communication skills Even better if you have one or more of the following: Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, Xray, etc. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, Sonarqube, etc. 
Experience with monitoring tools like NewRelic (NRDOT), OTLP Certified Kubernetes Administrator (CKA) Certified Kubernetes Security Specialist (CKS) Red Hat Certified OpenShift Administrator Development experience with the Operator SDK Experience creating validating and/or mutating webhooks Familiarity with creating custom EnvoyFilters for Istio service mesh and cost optimization tools like Kubecost, CloudHealth to implement right sizing recommendations If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. Where you'll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Diversity and Inclusion We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more. Locations Hyderabad, India Chennai, India
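The serverless Lambda development listed in the requirements typically centers on a small handler behind an HTTP API. A minimal sketch in the API Gateway proxy-event shape (route and field names are illustrative, not Verizon's actual services):

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda-style handler behind an HTTP API.

    Expects an API Gateway-like event carrying `pathParameters` and
    returns the proxy-integration response shape (statusCode + body).
    """
    params = event.get("pathParameters") or {}
    name = params.get("name")
    if not name:
        return {"statusCode": 400,
                "body": json.dumps({"error": "name required"})}
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"})}
```

The same handler shape plugs into EventBridge or Step Functions invocations; only the `event` payload differs, which is why keeping parsing at the edge of the handler pays off.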
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the V Team Life. What You'll Be Doing... You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes based container platforms (EKS, OCP, OKE and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have sound technical expertise and daily hands-on implementation working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and process (Jenkins, JIRA, GitLab, ArgoCD), Python, Shell Scripts or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services. Automation and testing of different platform deployments, maintenance and decommissioning Full Stack Development What we're looking for... You'll need to have: Bachelor's degree or two or more years of experience. 
Address Jira tickets opened by platform customers GitOps CI/CD workflows (ArgoCD, Flux) and working in Agile Ceremonies Model Expertise of SDLC and Agile Development Design, develop and implement scalable React/Node based applications (Full stack developer) Experience with development with HTTP/RESTful APIs, Microservices Experience with Serverless Lambda Development, AWS Event Bridge, AWS Step Functions, DynamoDB, Python, RDBMS, NoSQL, etc. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, Sonarqube, etc. Familiarity integrating with existing web application portals and backend development experience with languages to include Golang (preferred), Spring Boot, and Python. Experience with GitLab, GitLab CI/CD, Jenkins, Helm, Terraform, Artifactory Development of K8S tools/components which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh related issues Experience with Terraform and/or Ansible Experience with Bash scripting Effective code review, quality, performance tuning experience, test-driven development Certified Kubernetes Application Developer (CKAD) Excellent cross-collaboration and communication skills Even better if you have one or more of the following: Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, Xray, etc. 
Networking of Microservices Solid understanding of Kubernetes networking and troubleshooting Experience with monitoring tools like NewRelic Working experience with Kiali, Jaeger Lifecycle management and assisting app teams on how they could leverage these tools for their observability needs K8S SRE Tools for Troubleshooting Certified Kubernetes Administrator (CKA) Certified Kubernetes Security Specialist (CKS) Red Hat Certified OpenShift Administrator Your benefits package will vary depending on the country in which you work. *subject to business approval Where you'll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Diversity and Inclusion We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more. Locations: Hyderabad, India; Chennai, India
Posted 1 month ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description We are looking for someone who thrives in automation, system observability, and high-scale operations, while also supporting CI/CD and deployment pipelines. You will blend operational execution with engineering rigor to support system reliability, incident response, and automation at scale. This role provides a unique opportunity to grow into full-fledged SRE responsibilities while working in tight coordination with our global reliability strategy. Responsibilities Maintain, standardize, and enhance CI/CD pipelines (GitHub Actions, Azure Pipelines, GitLab). Automate testing, deployment, and rollback processes. Champion end-to-end CI/CD workflow reliability—including build validation, environment consistency, and deployment rollbacks. Deploy and manage observability tools (Datadog, Grafana, Prometheus, ELK). Assist in root cause analysis using telemetry and logs. Maintain alerting systems and participate in incident drills. Shadow and support Houston-based SRE team during follow-the-sun incident response. Create postmortem documentation for incidents and track remediation tasks. Develop scripts and tooling to reduce operational toil. Contribute to performance tuning of PostgreSQL and containerized services. Assist in distributed system optimization efforts (AKKA.NET knowledge is a bonus). Participate in rollout strategies, canary releases, and availability planning. Requirements 5+ years in DevOps, SRE, or Infrastructure Engineering. Strong scripting ability (Python, Bash, PowerShell). Experience in managing Kubernetes clusters and container-based deployments. Working knowledge of SQL databases and performance optimization. Hands-on experience with CI/CD tools and source control systems (GitHub, GitLab). Exposure to monitoring and observability platforms (Datadog, Prometheus, ELK). Experience with incident management and postmortems. Familiarity with distributed systems (bonus: AKKA.NET or similar frameworks). 
- Infrastructure as Code (Terraform) and GitOps practices.
- Exposure to global operations teams and 24/7 handover workflows.

About Us
Every day, the oil and gas industry’s best minds put more than 150 years of experience to work to help our customers achieve lasting success.

We Power the Industry that Powers the World
Throughout every region in the world and across every area of drilling and production, our family of companies has provided the technical expertise, advanced equipment, and operational support necessary for success, now and in the future.

Global Family
We are a global family of thousands of individuals, working as one team to create a lasting impact for ourselves, our customers, and the communities where we live and work.

Purposeful Innovation
Through purposeful business innovation, product creation, and service delivery, we are driven to better power the industry that powers the world.

Service Above All
This drives us to anticipate our customers’ needs and work with them to deliver the finest products and services on time and on budget.
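The role's emphasis on SLO-driven reliability and scripting to reduce operational toil can be sketched with a small example. This is an illustration only: the function names and the 99.9% availability target are assumptions for the sketch, not figures taken from the posting.

```python
# Minimal sketch of SRE-style error-budget arithmetic, assuming a
# rolling 30-day window and an illustrative 99.9% availability SLO.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total allowed downtime (in minutes) for the window at a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative if overspent)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget

# At 99.9% over 30 days the budget is 43.2 minutes, so 21.6 minutes of
# recorded downtime leaves half the budget unspent.
```

Tooling like this typically feeds alerting thresholds and availability-planning discussions of the kind the responsibilities above describe.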
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
Location: Remote (Work from Anywhere)
Type: Full-Time | Contract-Based | Flexible
Experience: 1–3 Years
Industry: SaaS, AI, GenAI, Startup Tech
Education: Bachelor’s degree in Computer Science, Engineering, or a related field (required)

About HYI.AI
HYI.AI is a Virtual Assistance and GenAI platform built for startups, entrepreneurs, and tech innovators. We specialize in offering virtual talent solutions, GenAI tools, and custom AI/ML deployments to help founders and businesses scale smarter and faster. We’re on a mission to power the next wave of digital startups globally, and we’re looking for talented DevOps Engineers to join us remotely.

Role Overview
We are seeking a DevOps Engineer with strong experience in cloud infrastructure, CI/CD pipelines, and system automation. You will play a critical role in designing, building, and maintaining scalable, secure, and high-performing infrastructure for product teams. This role is ideal for professionals who are passionate about automation, system reliability, and driving continuous delivery in a startup environment.
Key Responsibilities
- Set up and maintain CI/CD pipelines for development and production environments
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools
- Monitor, troubleshoot, and optimize system performance and uptime
- Implement automated deployment strategies and rollback processes
- Ensure security, compliance, and disaster recovery standards are met
- Collaborate with engineering teams to streamline build, release, and deployment cycles

What We’re Looking For
- Hands-on experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Proficiency with at least one cloud platform (AWS, GCP, or Azure)
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Strong knowledge of CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI, CircleCI)
- Skill in scripting and automation (e.g., Bash, Python, YAML)
- Familiarity with configuration management tools (e.g., Ansible, Terraform, Helm)

Preferred Skills
- Version control and GitOps workflows
- Monitoring and alerting tools (e.g., Prometheus, Grafana, Datadog)
- Experience with cloud cost optimization and scaling strategies
- Security best practices for DevOps environments
- Exposure to Agile methodologies and working with distributed teams

What You’ll Gain
- Flexible freelance engagements with high-growth global startups
- Autonomy in driving infrastructure and DevOps strategies
- A remote-first professional environment with purpose-driven work
- Access to premium opportunities, with no bidding and no uncertainty
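As a hedged illustration of the "automated deployment strategies and rollback processes" responsibility above, one common shape is a canary-analysis check that compares a new release's error rate against the stable baseline. The function name and thresholds below are assumptions made for this sketch, not anything the posting prescribes.

```python
# Illustrative canary-analysis helper: roll back when the canary's error
# rate breaches a hard ceiling, or is worse than the stable baseline by
# more than a relative margin. Default thresholds are made up for the sketch.

def should_rollback(baseline_error_rate: float,
                    canary_error_rate: float,
                    absolute_ceiling: float = 0.05,
                    relative_margin: float = 2.0) -> bool:
    """Decide whether a canary release should be rolled back."""
    # Hard ceiling: never tolerate more than 5% errors, regardless of baseline.
    if canary_error_rate > absolute_ceiling:
        return True
    # With a perfectly clean baseline, any canary errors are a regression.
    if baseline_error_rate == 0:
        return canary_error_rate > 0.0
    # Otherwise, allow up to relative_margin times the baseline error rate.
    return canary_error_rate > baseline_error_rate * relative_margin
```

In practice a check like this would read rates from a monitoring platform such as Prometheus or Datadog and gate the promotion step of the pipeline.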
Posted 1 month ago