0 years
0 Lacs
Anupgarh, Rajasthan, India
On-site
32890BR Noida

Job Description

Overview of Job Function:
This Technical Manager will provide strategic leadership to the Cloud Infra and Deployment team based in India. A technically advanced manager, they provide subject matter expertise, advice, and consulting on automation, PlatformOps, and cloud infrastructure best practices, and coordinate work with managers and all engineering stakeholders. This manager will be responsible for exceeding expectations around continuous delivery; deployments (blue-green / zero-downtime / canary); maintenance window management and schedules; deployment SLAs, SLIs, and SLOs; incident response; change failure rate; lead time and MTTR; and people management and strategy implementation in a fast-paced public cloud environment, preferably AWS.

Responsibilities

Principal Duties and Essential Responsibilities:
As a Technical Manager – Infra Management based in Bengaluru, you will lead a group of infrastructure engineers responsible for OS patching and upgrades, reboot orchestration, DB upgrades, security upgrades and patching, golden image management and updates, maintenance window scheduling and management, reporting, MS support interactions, public cloud support interactions, and provisioning/decommissioning of cloud infrastructure. You will work closely with your counterparts in the US to drive the adoption, operation, orchestration, securing, monitoring, and optimization of Verint's cloud platforms. You will contribute to and communicate our vision and mission in close collaboration with client counterparts. Additionally, you will play a key role in planning and delivering capabilities that contribute to objectives and initiatives at the department level. You will be responsible for growing and coaching engineers, unlocking creativity, and inspiring them to build the best solutions, extensively reducing toil for developers, engineers, and internal clients. You are comfortable with ambiguity, yet you excel at learning and driving clarity.

You take end-to-end ownership of your area and embrace iteration, believing that failing fast and failing early is a key part of building a great team and great technology. You will work to break down silos, collaborating closely with platform, product, and engineering leaders across Verint to ensure alignment with our vision.

People Leadership
- Inspire and empower multiple multi-functional and cross-functional teams
- Directly lead architects and engineers across multiple teams
- Nurture, grow, and develop management and engineering talent in the team
- Charter and create L&D paths

Technical Leadership
- Continuous delivery and deployment
- Maintenance window management
- Building automation and extensively reducing manual toil
- Cloud infrastructure optimization, right-sizing, and cost management
- Incident management, monitoring, and alerting improvements
- Continuous quality and process improvement
- Concentrated focus on DevSecOps: securing cloud infrastructure and remediating vulnerabilities

Architecture and Product Strategy
- Thought partner for Product to define, shape, and deliver the roadmap
- Stakeholder engagement and management
- Architectural and security guidance
- Driving innovation in your own team

Qualifications

Minimum Requirements:
- BS degree in Computer Science or a related technical degree, or equivalent
- 14+ years of industry experience
- 10+ years managing and maintaining public cloud infrastructure, preferably AWS or Azure
- 5+ years managing people
- 5+ years in a leadership and architectural role in CD, automation, and delivery
- 3+ years implementing highly efficient PaaS, SRE, and DevOps practices
- 3+ years servicing customers in fast-paced agile environments
- 3+ years in one or more of: Terraform, AWS, K8s, Datadog, change management, GitOps, Harness, Jenkins, Ansible
- 3+ years leading cloud infrastructure control, optimization initiatives, and cost management

Additional Requirements:
- Strong oral and written communication skills
- Ability to transform technical knowledge into business language
- Ability to communicate technical topics to non-technical personnel
- Strong analytical and problem-solving skills
- Strong leadership skills, with the ability to perform complex scheduling, control, and planning functions
- Demonstrated ability to influence individuals, lead teams, and work effectively with executive management

Qualifications: Bachelor's Degree
Range of Years of Experience: Min 10, Max 17
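The delivery metrics this role is measured on (change failure rate, MTTR) reduce to simple arithmetic over deployment and incident records. A minimal sketch with illustrative, invented data:

```python
# Hedged sketch: computing the delivery metrics named in the posting
# (change failure rate, MTTR) from hypothetical deployment/incident records.
from datetime import datetime, timedelta

deployments = [  # illustrative records, not real data
    {"id": "d1", "failed": False},
    {"id": "d2", "failed": True},
    {"id": "d3", "failed": False},
    {"id": "d4", "failed": False},
]

incidents = [  # illustrative incidents with open/resolve timestamps
    {"opened": datetime(2024, 5, 1, 10, 0), "resolved": datetime(2024, 5, 1, 11, 30)},
    {"opened": datetime(2024, 5, 2, 9, 0),  "resolved": datetime(2024, 5, 2, 9, 30)},
]

def change_failure_rate(deploys):
    """Fraction of deployments that caused a failure in production."""
    return sum(d["failed"] for d in deploys) / len(deploys)

def mttr_minutes(incs):
    """Mean time to restore, in minutes, across resolved incidents."""
    total = sum((i["resolved"] - i["opened"] for i in incs), timedelta())
    return total.total_seconds() / 60 / len(incs)

print(change_failure_rate(deployments))  # 0.25
print(mttr_minutes(incidents))           # 60.0
```

In practice these figures would be pulled from a deployment pipeline and an incident tracker rather than hand-written lists.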
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
We’re looking for problem solvers, innovators, and dreamers who are searching for anything but business as usual. Like us, you’re a high performer who’s an expert at your craft, constantly challenging the status quo. You value inclusivity and want to join a culture that empowers you to show up as your authentic self. You know that success hinges on commitment, that our differences make us stronger, and that the finish line is always sweeter when the whole team crosses together.

The Senior Software Development Engineer in Test (Senior SDET) will join a team of highly passionate engineers responsible for delivering Data Analytics products as part of the Alteryx Analytics Cloud product suite.

Responsibilities
- Analyse requirements and technical documentation to plan, develop, and implement tests.
- Create, execute, track, and maintain manual and automated functional and regression test suites.
- Develop and maintain testing frameworks and utilities as needed to support testing activities.
- Lead the testing efforts within a project or team, often coordinating the activities of other SDETs and developers.
- Provide detailed problem analysis to the development teams and help them drive product quality.
- Participate in code reviews, design reviews, and other team activities to ensure the quality of the codebase.
- Continuously improve testing processes, methodologies, and practices.
- Provide guidance and mentorship to junior SDETs.

Required Skills
- 5+ years of experience as a Senior Software Development Engineer in Test or Senior Software Quality Engineer.
- Master's or Bachelor's degree in computer science, computer engineering, or a related field.
- Extensive experience with API/UI test automation, preferably with frameworks such as PyTest and TestCafe.
- Excellent design and programming skills in an object-oriented language such as Java, Python, or JavaScript/TypeScript.
- Experience with testing REST APIs.
- Experience with web-based application testing using multiple browsers on different platforms.
- Excellent communication skills and the ability to collaborate successfully with both local and remote team members.

Valued/Preferred Skills
- Experience using Git or an equivalent SCM.
- Experience with applications and microservices hosted in AWS/Azure/GCP.
- Experience with containerisation and orchestration technologies like Docker and Kubernetes.
- Experience with GitOps and package-manager tools like ArgoCD, Helm, etc.
- Experience with observability and application-monitoring tools like Datadog.
- Experience with Python and the PyTest framework.
- Knowledge of the big data ecosystem, data warehouses, ETL, etc.
- Experience with relational databases, Unix/Linux commands, and shell scripting.
- Experience working in an Agile/Scrum-driven development environment.

Find yourself checking a lot of these boxes but doubting whether you should apply? At Alteryx, we support a growth mindset for our associates through all stages of their careers. If you meet some of the requirements and you share our values, we encourage you to apply. As part of our ongoing commitment to a diverse, equitable, and inclusive workplace, we’re invested in building teams with a wide variety of backgrounds, identities, and experiences.

This position involves access to software/technology that is subject to U.S. export controls. Any job offer made will be contingent upon the applicant’s capacity to serve in compliance with U.S. export controls.
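As a rough illustration of the REST API test automation this role calls for, here is a self-contained sketch that stands up a throwaway HTTP server and asserts on a health endpoint. It uses only the Python standard library (PyTest, which the posting names, would normally supply the test runner), and the `/health` endpoint and its payload are invented for the example:

```python
# Hedged sketch of an automated REST API check. The service under test is
# simulated in-process; in a real suite you would point the client at a
# deployed service and use PyTest fixtures for setup/teardown.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Illustrative endpoint, not a real Alteryx API.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

# Start the server on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint and capture status + payload for assertions.
with urlopen(f"http://127.0.0.1:{port}/health") as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()

print(status, payload)  # 200 {'status': 'ok'}
```

A PyTest version would wrap the request in a test function and the server in a fixture; the assertion logic is the same.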
Posted 4 weeks ago
0 years
0 Lacs
Greater Lucknow Area
On-site
Kyndryl Software Engineering
Chennai, Tamil Nadu, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India; Pune, Maharashtra, India; Greater Noida, Uttar Pradesh, India
Posted on May 19, 2025

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Within our Networking DevOps engineering team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations.

Responsibilities include:

Requirement Gathering and Analysis: Collaborate with stakeholders to gather automation requirements, understanding business objectives and network infrastructure needs. Analyse existing network configurations and processes to identify areas for automation and optimization. Analyse existing automation for opportunities to reuse or redeploy it with the required modifications.

End-to-End Automation Development: Design, develop, and implement automation solutions for network provisioning, configuration management, monitoring, and troubleshooting. Use tools and languages such as Ansible, Terraform, Python, and PHP to automate network tasks and workflows. Ensure scalability, reliability, and security of automation solutions across diverse network environments.

Testing and Bug Fixing: Develop comprehensive test plans and procedures to validate the functionality and performance of automation scripts and frameworks. Identify and troubleshoot issues, conduct root cause analysis, and implement corrective actions to resolve bugs and enhance automation stability.

Collaborative Development: Work closely with cross-functional teams, including network engineers, software developers, and DevOps teams, to collaborate on automation projects and share best practices.

Reverse Engineering and Framework Design: Reverse engineer existing Ansible playbooks, Python scripts, and automation frameworks to understand functionality and optimize performance. Design and redesign automation frameworks, ensuring modularity, scalability, and maintainability for future enhancements and updates.

Network Design and Lab Deployment: Provide expertise in network design, architecting interconnected network topologies, and optimizing network performance. Set up and maintain network labs for testing and development purposes, deploying lab environments on demand and ensuring their proper maintenance and functionality.

Documentation and Knowledge Sharing: Create comprehensive documentation, including design documents, technical specifications, and user guides, to facilitate knowledge sharing and ensure continuity of operations.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience
Minimum of 5 years of relevant experience as a Network DevOps SME / Automation Engineer, with hands-on experience in the technologies below.

Data Network:
- Strong experience in configuring, managing, and troubleshooting Cisco, Juniper, HP, and Nokia routers and switches.
- Hands-on experience with SD-WAN and SDN technologies (e.g., Cisco Viptela, Versa, VMware NSX, Cisco ACI, DNAC).

Network Security:
- Experience in configuring, managing, and troubleshooting firewalls (Palo Alto, Check Point, Cisco ASA/FTD, Juniper SRX) and load balancers (F5 LTM/GTM, Citrix NetScaler, A10).
- Deep understanding of network security principles, firewall policies, NAT, VPN (IPsec/SSL), and IDS/IPS.

Programming and Automation:
- Proficiency in Ansible development and testing for network automation.
- Strong Python or shell scripting skills for automation.
- Experience with REST APIs, JSON, YAML, Jinja2 templates, and GitHub for version control.

Cloud and Linux Skills:
- Hands-on experience with Linux server administration (RHEL, CentOS, Ubuntu).
- Experience working with cloud platforms such as Azure, AWS, or GCP.

DevOps:
- Basic understanding of CI/CD pipelines, GitOps, and automation tools.
- Familiarity with Docker, Kubernetes, Jenkins, and Terraform in a DevOps environment.
- Experience working with Infrastructure as Code (IaC) and configuration management tools such as Ansible.

Architecture and Design:
- Ability to design, deploy, and recommend network setups or labs independently.
- Strong problem-solving skills in troubleshooting complex network and security issues.

Certifications Required: CCNP Security / CCNP Enterprise (Routing & Switching)

Preferred Technical and Professional Experience
- Bachelor’s degree or above.
- Terraform experience is a plus (for infrastructure as code).
- Zabbix template development experience is a plus.
- Certifications preferred: CCIE-level working experience (Enterprise, Security, or Data Center); PCNSE (Palo Alto); CCSA (Check Point); automation and cloud certifications (Python, Ansible, Terraform).

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
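The Jinja2-templating work this role describes boils down to rendering per-device configuration from structured inventory data. A dependency-free sketch of that idea, using the standard library's `string.Template` in place of Jinja2, with invented device values:

```python
# Hedged sketch of network config templating: render interface stanzas
# from a template plus inventory data. string.Template stands in for
# Jinja2 so the example has no third-party dependencies; the interface
# names and addresses are illustrative, not from any real device.
from string import Template

TEMPLATE = Template(
    "interface $interface\n"
    " description $description\n"
    " ip address $ip $mask\n"
    " no shutdown\n"
)

inventory = [
    {"interface": "GigabitEthernet0/1", "description": "uplink-to-core",
     "ip": "10.0.1.2", "mask": "255.255.255.252"},
    {"interface": "GigabitEthernet0/2", "description": "server-vlan",
     "ip": "10.0.2.1", "mask": "255.255.255.0"},
]

# One rendered config stanza per inventory entry.
configs = [TEMPLATE.substitute(entry) for entry in inventory]
print(configs[0])
```

An Ansible playbook would drive the same substitution through its `template` module, with the inventory held in YAML host/group vars.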
Posted 4 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About The Job
The Red Hat Solution Architecture team is seeking a Senior Specialist Solution Architect with a minimum of 9 years of experience to join our existing OpenShift SSA team in India, helping customers from across the country. In this role, you will guide customers through their Red Hat journey by creating opportunities, solving technical challenges, and building strong relationships with their engineering, development, and operations teams. You will collaborate closely with customers to understand their business needs and align Red Hat’s solutions to drive operational efficiency and innovation.

What Will You Do
As a Specialist Solution Architect, you will focus on the Red Hat OpenShift, Application Services, and OpenShift AI product portfolio. Your role will involve delivering presentations, demos, proofs of concept, and workshops to showcase Red Hat's solutions. Additionally, you will partner with sales, account architects, and the extended Red Hat teams to help customers make informed investments, ensuring their systems are scalable, flexible, and high-performing. Your ability to manage relationships and work with minimal supervision will be essential in helping customers achieve success.

- Collaborate with Red Hat account teams to present technical solutions and develop sales strategies.
- Gather requirements, analyze solution architecture and design, and present solutions to meet customer needs through workshops and other supporting activities.
- Research and respond to technical sections of RFIs and RFPs.
- Build strong relationships with customer teams, including technical influencers and executives.
- Serve as a technical advisor, leading discussions and guiding the implementation of cloud-native architectures.
- Stay updated on industry trends and continuously enhance your skills.
- Contribute to the team effort by sharing knowledge, documenting customer success stories, helping maintain the team lab environment, and participating in subject-matter-expert team initiatives.
- Willingness to travel up to 50%.

What Will You Bring
- Proficiency in Kubernetes and cloud-native architectures such as containers, service mesh, and GitOps.
- Experience with development tooling used to refactor, migrate, or develop applications, and an understanding of application development methodologies and frameworks.
- Understanding of virtualization technologies such as KVM and VMware.
- Experience with infrastructure activities like installation, configuration, and hardening, and components like networking, security, and high availability.
- Strong problem-solving abilities.
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.
- Minimum of 3 years in a customer-facing role; pre-sales experience is an added advantage.

This role offers the opportunity to work on exciting technologies and contribute to our customers' success.

About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone.
When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 4 weeks ago
3 - 5 years
0 Lacs
Delhi, India
Remote
About The Job
Red Hat's Services team is seeking an experienced and highly skilled support engineer or systems administrator with 3-5 years of overall experience to join us as a Technical Account Manager for our enterprise customers, covering Middleware and Red Hat OpenShift Container Platform. In this role, you'll provide personalized, proactive technology engagement and guidance, and cultivate high-value relationships with clients as you seek to understand and meet their needs with the complete Red Hat portfolio of products. As a Technical Account Manager, you will provide a level of premium, advisory-based support that builds, maintains, and grows long-lasting customer loyalty by tailoring support for each of our customers' environments, facilitating collaboration with their other vendors, and advocating on the customer's behalf. At the same time, you'll work closely with our Engineering, R&D, Product Management, Global Support, and Sales & Services teams to debug, test, and resolve issues.

What Will You Do
- Perform technical reviews and share knowledge to proactively identify and prevent issues.
- Understand your customers' technical infrastructures, hardware, processes, and offerings.
- Perform initial or secondary investigations and respond to online and phone support requests.
- Deliver key portfolio updates and assist customers with upgrades.
- Manage customer cases and maintain clear and concise case documentation.
- Create customer engagement plans and keep documentation on customer environments updated.
- Ensure a high level of customer satisfaction with each qualified engagement through the complete adoption life cycle of our offerings.
- Engage with Red Hat's field teams and customers to ensure a positive Red Hat product and technology experience and a successful outcome resulting in long-term success.
- Communicate how specific Red Hat product roadmaps align with customer use cases.
- Capture Red Hat product capabilities and identify gaps as related to customer use cases through a closed-loop process for each step of the engagement life cycle.
- Engage with Red Hat's product engineering teams to help develop solution patterns, based on customer engagements and personal experience, that guide platform adoption.
- Contribute internally to the Red Hat team: share knowledge and best practices with team members, contribute to internal projects and initiatives, and serve as a subject matter expert (SME) and mentor for specific technical and process areas.
- Travel to visit customers, partners, conferences, and other events as needed.

What Will You Bring
- Bachelor's degree in science or a technical field, ideally engineering or computer science.
- Competent reading and writing skills in English.
- Ability to effectively manage and grow existing enterprise customers by delivering proactive, relationship-based, best-in-class support.
- Upstream involvement in open source projects is a plus.
- Indian citizenship or authorization to work in India.

Middleware
- Java coding skills and a solid understanding of the JEE platform and Java programming APIs.
- Hands-on experience with Java application platforms such as JBoss, WebSphere, and WebLogic.
- Integration experience working with Red Hat 3scale API Management and SSO runtimes.
- Experience with microservices development using Spring Boot and with developing cloud-native applications.

RHOCP
- Experience in Red Hat OpenShift Container Platform (RHOCP), Kubernetes, or Docker cluster management.
- Understanding of RHOCP architecture and the different types of RHOCP installations.
- Experience in RHOCP troubleshooting and data/log collection.
- Strong working knowledge of Prometheus, Grafana, GitOps (ArgoCD), ACS, and ACM will be considered a plus.

About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone.
When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
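The Prometheus/Grafana knowledge this posting values centers on rate computations over monotonically increasing counters. A minimal sketch of the idea behind PromQL's `rate()` (increase over a window divided by the window length), with illustrative samples and ignoring counter resets:

```python
# Hedged sketch of a Prometheus-style rate computation over counter
# samples. The timestamps and counter values are invented; real PromQL
# rate() also extrapolates to window edges and handles counter resets,
# which this toy version deliberately omits.
samples = [  # (unix_timestamp, http_requests_total)
    (1000, 120),
    (1015, 150),
    (1030, 210),
    (1045, 240),
]

def simple_rate(points):
    """Average per-second rate across the window, assuming no resets."""
    (t0, v0), (t1, v1) = points[0], points[-1]
    return (v1 - v0) / (t1 - t0)

print(simple_rate(samples))  # 120 requests over 45 s ≈ 2.67/s
```

Grafana panels typically graph exactly this kind of per-second rate rather than the raw counter.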
Posted 4 weeks ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We are in search of an experienced Senior DevOps Engineer with specialized expertise in Kubernetes, GitOps, and cloud services. This individual will play a crucial role in the design and management of advanced CI/CD pipelines, guaranteeing seamless integration and deployment of software artifacts across varied environments in Kubernetes clusters.

Key Responsibilities:
- Pipeline Construction & Management: Build and maintain efficient build pipelines. Deploy artifacts to Kubernetes with advanced deployment strategies.
- Docker & Helm Expertise: Develop and manage Docker images and Helm charts. Handle Helm repositories and deploy charts to Kubernetes clusters.
- GitOps Proficiency: Employ GitOps tools like ArgoCD, Argo Events, and Argo Rollouts. Coordinate with development and QA teams in managing GitOps repositories.
- Kubernetes & Cloud Services: Administer Kubernetes clusters, including knowledge of CSI and CNI drivers and backup/restore solutions. Monitor clusters using New Relic, ensuring reliability and availability. Proficiency in AWS services such as EKS, IAM, VPC, RDS/Aurora, and Load Balancer configurations.
- Security & Compliance: Uphold security standards within Kubernetes clusters.
- IaC and Deployment Tracking: Manage Infrastructure as Code (IaC) and oversee deployment tracking, linking it with CI/CD pipelines.
- Collaboration & Coordination: Collaborate with development teams on artifact-generation pipelines. Coordinate with QA teams on environment setup (DEV, QA, Staging, UAT, Production).
- Technical Expertise: Skilled in both on-prem and cloud-managed Kubernetes clusters.

Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5-10 years of experience in DevOps with a focus on Kubernetes and GitOps.
- In-depth understanding of CI/CD principles, especially GitLab/Jenkins and ArgoCD.
- Advanced skills in AWS cloud services and Kubernetes security practices.
- Good knowledge of IaC, infrastructure provisioning, and configuration management tools like Ansible and Terraform.
- Proficiency in working with YAML files and shell scripting.
- Experience in programming (Python or other relevant languages).
- Strong automation skills with an ability to streamline processes.
- Excellent problem-solving abilities and teamwork skills.
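The advanced deployment strategies this role mentions (e.g., Argo Rollouts canaries) shift traffic to the new version in steps, pausing between increments. A toy sketch of such a step schedule, with an assumed fixed increment rather than a real Argo Rollouts manifest:

```python
# Hedged sketch of a canary rollout's traffic-weight schedule. In Argo
# Rollouts this would be declared as setWeight/pause steps in YAML; the
# fixed-increment schedule here is an illustrative stand-in.
def canary_steps(increment=20, final=100):
    """Yield the canary's traffic weight (percent) at each rollout step."""
    weight = 0
    while weight < final:
        weight = min(weight + increment, final)
        yield weight

print(list(canary_steps()))              # [20, 40, 60, 80, 100]
print(list(canary_steps(increment=30)))  # [30, 60, 90, 100]
```

Between each step a real controller would pause, evaluate analysis metrics, and abort (resetting weight to 0) if the canary's error rate regressed.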
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Be Part of Building the Future
Dremio is the unified lakehouse platform for self-service analytics and AI, serving hundreds of global enterprises, including Maersk, Amazon, Regeneron, NetApp, and S&P Global. Customers rely on Dremio for cloud, hybrid, and on-prem lakehouses to power their data mesh, data warehouse migration, data virtualization, and unified data access use cases. Based on open source technologies, including Apache Iceberg and Apache Arrow, Dremio provides an open lakehouse architecture enabling the fastest time to insight and platform flexibility at a fraction of the cost. Learn more at www.dremio.com.

About The Role
Dremio’s SREs ensure that internal and externally visible services have reliability and uptime appropriate to users' needs and a fast rate of improvement. You will be joining a small but mighty team of experienced SREs helping to deliver a world-class experience to Dremio Cloud customers. Our systems, like many, are joint-cognitive, made up of both people and software: complex and therefore intrinsically hazardous. We understand and expect that catastrophe is always just around the corner.

What You’ll Be Doing
- Drive continuous improvements to our usage of Kubernetes, our Operators, and the GitOps deployment paradigm.
- Extend our networking, service mesh, and Kubernetes systems to support connectivity between GCP, AWS, and Azure.
- Collaborate with Engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, monitoring/alerting, capacity planning, production readiness, and service reviews.
- Help define and instrument Service Level Indicators and Objectives (SLIs/SLOs) with service owners in the Engineering teams.
- Develop SLO-based on-call strategies for service owners and their teams.
- Collaborate within our virtual Observability team: develop and improve observability (tracing, events, metrics, profiling, logging, and exceptions) of the Dremio Cloud product.
- Debug and optimize code written by others and automate routine tasks. You recognize complexity and are familiar with multiple techniques to manage it, but recognize the folly in complete rewrites.
- Evangelize and advocate for resilience engineering and reliability practices across our organization.
- Scale systems sustainably through automation, and evolve systems by pushing for changes that improve reliability and velocity.
- Join an on-call rotation for systems and services that the SRE team owns.
- Practice sustainable incident response and post-incident investigation analysis.
- Drive the cultural, technical, and process changes needed to move towards a true continuous delivery model within the company.

What We’re Looking For
- 10+ years of relevant experience in the following areas: SRE, DevOps, distributed systems, cloud operations, software engineering.
- Expertise in Kubernetes, Istio, Terraform, Terragrunt, and ArgoCD/Flux.
- Expertise with software-defined networking infrastructure: dedicated and partner interconnects, VPNs, BGP.
- Excellent command of cloud services on GCP/AWS/Azure and CI/CD pipelines.
- Moderate-to-advanced experience in Python/Go, and at least reading knowledge of Java.
- Interest in designing, analyzing, and troubleshooting large-scale distributed systems.
- A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership, drive, and determination.
- A great ability to debug and optimize code and automate routine tasks.
- A solid background in software development and in architecting resilient and reliable applications.

Bonus points if you have:
- Hands-on experience with large-scale production Kubernetes clusters.
- Developed SLIs/SLOs for production systems.
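Defining SLOs as described above usually leads to error-budget math: the SLO target implies an allowed number of failures over the window, and burn is tracked against that. A minimal sketch, assuming a simple request-based availability SLI with invented request counts:

```python
# Hedged sketch of SLO error-budget accounting. The 99.9% target and the
# request/failure counts are illustrative; real SLO tooling would derive
# them from metrics over a rolling window and alert on burn rate.
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent (negative = SLO blown)."""
    budget = (1 - slo_target) * total_requests  # failures the SLO allows
    return (budget - failed_requests) / budget

# 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 400 failures so far leaves 60% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 400))  # 0.6
```

SLO-based on-call strategies then page on how fast this remaining fraction is shrinking, not on individual failures.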
Return to Office Philosophy
Workplace Wednesdays exist to break down silos, build relationships, and improve cross-team communication. Lunch catering / meal credits are provided in the office, and local socials are aligned with Workplace Wednesdays. In general, Dremio will remain a hybrid work environment. We will not be implementing a 100% (5 days a week) return-to-office policy for all roles.
What We Value
At Dremio, we hold ourselves to high standards when it comes to People, Thinking, and Action. Our Gnarlies (that's what we call our employees) communicate with clarity, drive accountability, and are respectful towards each other. We confront brutal facts and focus on results while operating with a sense of urgency and building a "flywheel". People who like to jump in and drive momentum will thrive in our #GnarlyLife. Dremio is an equal opportunity employer supporting workforce diversity. We do not discriminate on the basis of race, religion, color, national origin, gender identity, sexual orientation, age, marital status, protected veteran status, disability status, or any other unlawful factor. Dremio is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request accommodation due to a disability, please inform your recruiter. Dremio has policies in place to protect the personal information that employees and applicants disclose to us. Please click here to review the privacy notice.
Important Security Notice for Candidates
At Dremio, we uphold trust and transparency as paramount values in all our interactions with customers, partners, employees, and the general public. We have been targeted by individuals creating fake domains similar to ours to scam prospects and candidates. Please note that all official communications from us will be from an @dremio.com domain. If you suspect you've been targeted by a scam, it's imperative to report the incident to your local law enforcement agencies.
For more information about this type of scam, please refer to Dremio's official statement here. Dremio is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary:
We are looking for a DevOps Engineer with 5-8 years of experience to join our team. In this role, you will be responsible for designing, implementing, and managing the software delivery pipeline and infrastructure to ensure continuous delivery of high-quality software. The ideal candidate should have a strong background in software development, systems administration, cloud infrastructure, and automation.
Key Responsibilities:
Infrastructure Automation: Design and implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, Pulumi, or Ansible. Manage cloud environments (AWS, Azure, GCP) and automate resource provisioning, scaling, and monitoring. Create and manage Kubernetes clusters and containerized applications using Docker.
CI/CD Pipeline Management: Develop, maintain, and optimize Continuous Integration/Continuous Deployment (CI/CD) pipelines using tools like Jenkins, GitHub Actions, Azure DevOps, or TeamCity. Ensure seamless code integration, testing, and deployment processes. Integrate automated testing, code quality checks, and security scans into CI/CD workflows.
Monitoring & Performance Optimization: Implement and maintain monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, AppDynamics, or CloudWatch. Identify and troubleshoot performance bottlenecks in systems, applications, and infrastructure. Conduct regular audits to ensure optimal system health, uptime, and performance.
Security & Compliance: Implement best practices for cloud security, identity and access management (IAM), data protection, and network security. Perform vulnerability assessments, penetration testing, and remediation. Ensure compliance with industry standards and regulations such as GDPR, HIPAA, SOC 2, or ISO 27001.
Collaboration & Support: Work closely with development, QA, and operations teams to ensure smooth delivery of software. Provide technical guidance and mentorship to junior engineers and team members.
Participate in on-call rotations to provide 24/7 support for critical systems and infrastructure.
Configuration Management: Manage configuration and deployment of software and infrastructure using tools like Ansible or Chef. Create and manage scripts for automation and task orchestration (Bash, Python, PowerShell).
Skills & Qualifications:
Technical Skills:
Cloud Platforms: Expertise in AWS, Azure, or GCP, including compute, networking, storage, and security services.
Containerization: Strong experience with Docker and Kubernetes (including EKS and GKE), covering container orchestration and management.
CI/CD Tools: Deep knowledge of CI/CD tools like Jenkins, GitHub Actions, ArgoCD, or TeamCity.
Infrastructure as Code (IaC): Hands-on experience with Terraform, CloudFormation, Pulumi, or similar tools.
Programming & Scripting: Proficiency in one or more programming languages (Python, Go, Java) and scripting languages (Bash, PowerShell).
Version Control: Advanced skills in Git for version control and branching strategies.
Monitoring & Automation: Familiarity with monitoring tools like Prometheus, Grafana, Datadog, New Relic, or AppDynamics. Experience with logging, metrics collection, and observability best practices.
Security: Knowledge of security best practices for cloud environments. Experience with DevSecOps tools for automated security testing (SonarQube, OWASP ZAP, Snyk).
Soft Skills: Strong communication and interpersonal skills, with the ability to collaborate effectively across teams. Problem-solving skills with a proactive attitude towards finding solutions. Time management skills and the ability to handle multiple projects simultaneously. Leadership and mentoring skills for guiding junior engineers.
Preferred Qualifications: Certification in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Expert, GCP Professional Cloud DevOps Engineer). Familiarity with GitOps tools like Argo CD or Flux.
Knowledge of serverless architecture (AWS Lambda, Azure Functions, Google Cloud Functions).
Education & Experience: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in DevOps, cloud engineering, or a related field.
About Picarro: We are the world's leader in timely, trusted, and actionable data using enhanced optical spectroscopy. Our solutions are used in various applications, including natural gas leak detection, ethylene oxide emissions monitoring, semiconductor fabrication, pharmaceutical, petrochemical, atmospheric science, air quality, greenhouse gas measurements, food safety, hydrology, ecology, and more. Our software and hardware are designed and manufactured in Santa Clara, California. Our instruments are used in over 90 countries worldwide, are based on over 65 patents related to cavity ring-down spectroscopy (CRDS) technology, and are unparalleled in their precision, ease of use, and reliability. At Picarro, we are committed to fostering a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, sexual orientation, or disability. Posted positions are not open to third-party recruiters/agencies, and unsolicited resume submissions will be considered free referrals.
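Much of the automation work listed in this posting comes down to making flaky operations (cloud API calls, provisioning steps) resilient. A minimal, illustrative retry-with-exponential-backoff helper in Python; the function names and delays are hypothetical, not from any specific toolchain:

```python
import time

def retry(op, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run op(); on failure, wait base_delay, 2*base_delay, 4*base_delay, ... and retry."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the last error
            sleep(base_delay * (2 ** attempt))

# Example: an operation that succeeds on its third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, sleep=lambda _: None))  # → ok
```

Injecting the `sleep` function keeps the helper testable; production versions usually add jitter and retry only on specific exception types so that genuine bugs fail fast.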
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
Senior Software Engineer I - DevOps Engineer
Exceptional software engineering is challenging. Amplifying it to ensure that multiple teams can concurrently create and manage a vast, intricate product escalates the complexity. As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your task will be to develop and sustain automated tooling for the release processes of all our services. You will contribute significantly to establishing automated delivery pipelines, empowering autonomous teams to create independently deployable services. Your role is integral to our overarching strategy of enhancing software delivery and progressing Sumo Logic's internal Platform-as-a-Service.
What You Will Do
Own the delivery pipeline and release automation framework for all Sumo services. Educate and collaborate with teams during both design and development phases to ensure best practices. Mentor a team of engineers (junior to senior) and improve software development processes. Evaluate, test, and provide technology and design recommendations to executives. Write detailed design documents and documentation on system design and implementation. Ensure the engineering teams are set up to deliver quality software quickly and reliably. Enhance and maintain infrastructure and tooling for development, testing, and debugging.
What You Already Have
B.S. or M.S. in Computer Science or a related discipline. Ability to influence: understand people's values and motivations and influence them towards making good architectural choices. Collaborative working style: you can work with other engineers to come up with good decisions. Bias towards action: you need to make things happen; it is essential you don't become an inhibitor of progress, but an enabler. Flexibility: you are willing to learn and change, and to admit past approaches might not be the right ones now.
Technical Skills
4+ years of experience in the design, development, and use of release automation tooling, DevOps, CI/CD, etc. 2+ years of experience in software development in Java/Scala/Golang or similar. 3+ years of experience with software delivery technologies like Jenkins, including experience writing and developing CI/CD pipelines, and knowledge of build tools like make/gradle/npm. Experience with cloud technologies such as AWS/Azure/GCP. Experience with Infrastructure-as-Code and tools such as Terraform. Experience with scripting languages such as Groovy, Python, Bash, etc. Knowledge of monitoring tools such as Prometheus/Grafana or similar. Understanding of GitOps and ArgoCD concepts/workflows. Understanding of the security and compliance aspects of DevSecOps.
About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com. Sumo Logic Privacy Policy. Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.
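Release automation tooling of the kind this role describes frequently includes version management. A toy sketch of a semantic-version bump helper; this is an illustration only, not Sumo Logic's actual tooling:

```python
def bump(version: str, part: str) -> str:
    """Return the next semantic version; part is 'major', 'minor', or 'patch'."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("2.7.3", "minor"))  # → 2.8.0
```

A real pipeline step would typically derive `part` from commit messages (conventional commits) and tag the repository with the result.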
Posted 4 weeks ago
0 years
0 Lacs
Delhi, India
On-site
About AlphaSense The world’s most sophisticated companies rely on AlphaSense to remove uncertainty from decision-making. With market intelligence and search built on proven AI, AlphaSense delivers insights that matter from content you can trust. Our universe of public and private content includes equity research, company filings, event transcripts, expert calls, news, trade journals, and clients’ own research content. The acquisition of Tegus by AlphaSense in 2024 advances our shared mission to empower professionals to make smarter decisions through AI-driven market intelligence. Together, AlphaSense and Tegus will accelerate growth, innovation, and content expansion, with complementary product and content capabilities that enable users to unearth even more comprehensive insights from thousands of content sets. Our platform is trusted by over 4,000 enterprise customers, including a majority of the S&P 500. Founded in 2011, AlphaSense is headquartered in New York City with more than 2,000 employees across the globe and offices in the U.S., U.K., Finland, India, Singapore, Canada, and Ireland. Come join us! About The Role You will join our team of world-class experts who are developing the AlphaSense platform. The team is right at the very core of what we do and is responsible for implementing cutting-edge technology for scalable, distributed processing of millions of documents. We are seeking a highly skilled Software Engineer II to join our dynamic team responsible for building and maintaining data ingestion systems at scale. As a key member of our team, you will play a crucial role in designing, implementing, and optimizing robust solutions for ingesting millions of documents per month, including the addition of multimedia content such as audio and video from the public web. You'll play a key role in integrating cutting-edge AI models, enabling intelligent suggestions, and content synchronization. 
You are a good fit if you're a proactive problem-solver with a "go-getter" attitude, startup experience, and a readiness to learn whatever comes your way!
Responsibilities
Design, develop, and maintain high-performance, scalable applications using Python. Solve complex technical challenges with innovative solutions that enhance product features and operational efficiencies. Collaborate across teams to integrate applications, optimize system performance, and streamline data flows. Take full ownership of projects from inception to deployment, delivering high-quality solutions that improve user experience. Lead or support data ingestion processes, ensuring seamless data flow and management. Continuously learn and adapt to new tools, frameworks, and technologies as they arise, embracing a growth mindset. Mentor and guide junior developers, fostering a collaborative, innovative culture.
Requirements
2+ years of professional Python development experience, with a strong understanding of Python frameworks (Django, Flask, FastAPI, etc.). Proven success working in a startup environment, demonstrating adaptability and flexibility in fast-changing conditions. Proactive problem-solver with a keen eye for tackling challenging technical issues. A willingness to learn and adapt to new technologies and challenges as they arise. Strong team player with a go-getter attitude, comfortable working both independently and within cross-functional teams.
Nice-to-Have
Experience with media processing and live-streaming techniques is a major plus. Familiarity with Crossplane and/or ArgoCD for GitOps-based infrastructure management. Experience working with Docker and Kubernetes (K8s).
AlphaSense is an equal-opportunity employer. We are committed to a work environment that supports, inspires, and respects all individuals. All employees share in the responsibility for fulfilling AlphaSense's commitment to equal employment opportunity.
AlphaSense does not discriminate against any employee or applicant on the basis of race, color, sex (including pregnancy), national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any other non-merit factor. This policy applies to every aspect of employment at AlphaSense, including recruitment, hiring, training, advancement, and termination. In addition, it is the policy of AlphaSense to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.
Recruiting Scams and Fraud
We at AlphaSense have been made aware of fraudulent job postings and individuals impersonating AlphaSense recruiters. These scams may involve fake job offers, requests for sensitive personal information, or demands for payment. Please note: AlphaSense never asks candidates to pay for job applications, equipment, or training. All official communications will come from an @alpha-sense.com email address. If you're unsure about a job posting or recruiter, verify it on our Careers page. If you believe you've been targeted by a scam or have any doubts regarding the authenticity of any job listing purportedly from or on behalf of AlphaSense, please contact us. Your security and trust matter to us.
Posted 4 weeks ago
3 years
0 Lacs
Pune, Maharashtra, India
On-site
## Position Overview
We are seeking an experienced Platform Engineering Manager to lead and grow our platform engineering team. This role combines technical leadership in cloud-native technologies with people management skills to drive developer productivity and platform reliability at scale.
## Key Responsibilities
Lead and mentor a team of platform engineers, fostering a culture of innovation, collaboration, and continuous improvement. Oversee the development, operation, and evolution of our developer platforms, focusing on container orchestration with Kubernetes. Collaborate with architecture teams and product management to inform platform decisions that enhance developer experience. Partner with development teams to understand their needs and optimize the developer experience. Establish and track meaningful metrics to measure and improve developer productivity across the organization. Manage team priorities, resource allocation, and project delivery while maintaining high operational standards. Build strong relationships across engineering teams to ensure platform solutions meet organizational needs. Facilitate technical design reviews and contribute insights on platform utilization best practices. Help define and drive the platform engineering roadmap in collaboration with stakeholders.
## Required Qualifications
5+ years of experience in platform engineering or DevOps, with at least 3 years in a management role. Proven track record of building and operating production-grade Kubernetes platforms. Deep understanding of container technologies, CI/CD pipelines, and cloud-native architectures. Experience implementing and managing developer productivity tools and metrics. Strong background in automation, infrastructure as code, and site reliability engineering. Demonstrated success in leading technical teams and managing stakeholder relationships. Experience with agile methodologies and project management. Excellent problem-solving skills and the ability to balance technical debt with business needs.
## Leadership Competencies
Strong mentorship and coaching abilities to develop team members' technical and soft skills. Excellent communication skills with the ability to translate complex technical concepts to various audiences. Strategic thinking with a focus on long-term platform scalability and sustainability. Proven ability to influence and drive consensus across multiple teams. Experience in recruitment, performance management, and career development. Demonstrated ability to manage competing priorities and make data-driven decisions.
## Preferred Qualifications
Experience with major cloud providers (AWS, GCP, or Azure). Experience with GitOps practices and tools such as ArgoCD. Understanding of security best practices and compliance requirements. Contribution to open-source projects or developer tools. Experience with developer experience (DevX) optimization. Background in implementing platform observability and monitoring solutions. Knowledge of cost optimization strategies for cloud infrastructure.
## Impact & Scope
Lead a team responsible for critical developer infrastructure supporting multiple engineering teams. Partner with architecture and product teams to align platform implementation with technical strategy. Collaborate with engineering leadership to align platform capabilities with business objectives. Champion developer experience improvements through effective platform solutions. Play a key role in shaping the platform engineering roadmap and strategic initiatives.
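Developer-productivity metrics of the kind this posting mentions are often grounded in DORA-style measures such as change failure rate and lead time for changes. An illustrative Python sketch; the sample numbers are invented, not from any real team:

```python
from statistics import median

def change_failure_rate(deploys: int, failed: int) -> float:
    """Fraction of deployments that caused a failure in production."""
    return failed / deploys

def lead_time_hours(commit_to_deploy_hours: list) -> float:
    """Median lead time for changes (commit to production), in hours."""
    return median(commit_to_deploy_hours)

# Hypothetical month: 40 deploys, 3 rollbacks; five changes sampled.
cfr = change_failure_rate(deploys=40, failed=3)
print(round(cfr, 3))  # → 0.075
print(lead_time_hours([2.0, 5.0, 8.0, 26.0, 3.0]))  # → 5.0
```

Median (rather than mean) lead time is the usual choice because a few long-lived branches would otherwise dominate the figure.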
Posted 4 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro. We’re looking for a Backend Engineer (.NET) who expects more from their career. This role offers an opportunity to build scalable, high-performance solutions in the platform engineering team, enhance code quality and engineering excellence, and contribute to critical architecture decisions within a data-driven environment. What We Expect From You 6+ years of hands-on experience in backend development, focusing on performance, scalability, security, and maintainability. Strong proficiency in C# and .NET Core, with expertise in developing RESTful APIs and microservices. Drive code quality, ensuring adherence to best practices, design patterns, and SOLID principles. Experience with cloud platforms (Google Cloud Platform & Azure), implementing cloud-native best practices for scalability, security, and resilience. Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm). Strong focus on non-functional requirements (NFRs) such as performance tuning, observability (monitoring/logging/alerting), and security best practices. Experience implementing unit testing, integration testing, and automated testing frameworks. 
Proficiency in CI/CD automation, with experience in GitOps workflows and Infrastructure-as-Code (Terraform, Helm, or similar). Experience working in Agile methodologies (Scrum, Kanban) and DevOps best practices. Identify dependencies, risks, and bottlenecks early, working proactively with engineering leads to resolve them. Stay updated with emerging technologies and industry best practices to drive continuous improvement.
Key Technical Skills
Strong proficiency in C#, .NET Core, and RESTful API development. Experience with asynchronous programming, concurrency control, and event-driven architecture (Pub/Sub, Kafka, etc.). Deep understanding of object-oriented programming, data structures, and algorithms. Experience with unit testing frameworks and a TDD approach to development. Hands-on experience with Docker and Kubernetes (K8s) for containerized applications. Strong knowledge of performance tuning, security best practices, and observability (monitoring/logging/alerting). Experience with CI/CD pipelines, GitOps workflows, and infrastructure-as-code (Terraform, Helm, or similar). Exposure to multi-tenant architectures and high-scale distributed systems. Proficiency in relational databases (PostgreSQL preferred) and exposure to NoSQL solutions.
Preferred Skills
Exposure to and experience working with front-end technologies such as React.js. Knowledge of gRPC, GraphQL, event-driven, or other modern API technologies. Familiarity with feature flagging, blue-green deployments, and canary releases.
For further information about how we collect and use your personal information, please see our Privacy Notice, which can be found here.
What You Can Expect From Us
We won't just meet your expectations. We'll defy them. So you'll enjoy the comprehensive rewards package you'd expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off.
You'll also benefit from an investment in cutting-edge technology that reflects our global ambition, but with a nimble, small-business feel that gives you the freedom to play, experiment, and learn. And we don't just talk about diversity and inclusion. We live it every day, with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat, please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.
Our Approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process. For further information about how we collect and use your personal information, please see our Privacy Notice, which can be found here.
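Among the preferred skills in this posting, canary releases typically shift traffic to a new version in stages, pausing at each step to check SLIs before proceeding. A small illustrative helper that computes a doubling rollout schedule; the percentages are an assumption for the example, not a dunnhumby practice:

```python
def canary_steps(start: int = 5, factor: int = 2, cap: int = 100) -> list:
    """Traffic percentages for a staged canary rollout, doubling until full traffic."""
    steps, pct = [], start
    while pct < cap:
        steps.append(pct)
        pct *= factor
    steps.append(cap)  # final step always routes 100% to the new version
    return steps

print(canary_steps())  # → [5, 10, 20, 40, 80, 100]
```

Tools such as Argo Rollouts or Flagger express the same idea declaratively; the point of the sketch is only the shape of the schedule.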
Posted 4 weeks ago
6 - 8 years
16 - 20 Lacs
Bengaluru
Work from Office
Senior DevOps Engineer
Location: Bengaluru South, Karnataka, India
Experience: 6-8 Years
Compensation: 16-20 LPA
Industry: PropTech | AgriTech | Cloud Infrastructure | Platform Engineering
Employment Type: Full-Time | On-Site/Hybrid
Are you a DevOps Engineer passionate about building scalable and efficient infrastructure for innovative platforms? If you're excited by the challenge of automating and optimizing cloud infrastructure for a mission-driven PropTech platform, this opportunity is for you. We are seeking a seasoned DevOps Engineer to be a key player in scaling a pioneering property-tech ecosystem that reimagines how people discover, trust, and own their dream land or property. Our ideal candidate thrives in dynamic environments, embraces automation, and values security, performance, and reliability. You'll be working alongside a passionate and agile team that blends technology with sustainability, enabling seamless experiences for both property buyers and developers.
Key Responsibilities
Architect, deploy, and maintain highly available, scalable, and secure cloud infrastructure, preferably on AWS. Design, develop, and optimize CI/CD pipelines for automated software build, test, and deployment. Implement and manage Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools. Set up and manage robust monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, etc.). Proactively monitor and improve system performance, availability, and resilience. Ensure compliance, access control, and secrets management across environments using best-in-class DevSecOps practices. Collaborate closely with development, QA, and product teams to streamline software delivery lifecycles. Troubleshoot production issues, identify root causes, and implement long-term solutions. Optimize infrastructure costs while maintaining performance SLAs. Build and maintain internal tools and automation scripts to support development workflows.
Stay updated with the latest in DevOps practices, cloud technologies, and infrastructure design. Participate in on-call support rotation for critical incidents and infrastructure health. Preferred Qualifications Bachelor's degree in Computer Science, Engineering, or related field. 6–8 years of hands-on experience in DevOps, SRE, or Infrastructure roles. Strong proficiency in AWS (EC2, S3, RDS, Lambda, ECS/EKS). Expert-level scripting skills in Python, Bash, or Go. Solid experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc. Expertise in Docker, Kubernetes, and container orchestration at scale. Experience with configuration management tools like Ansible, Chef, or Puppet. Solid understanding of networking, DNS, SSL, firewalls, and load balancing. Familiarity with relational and non-relational databases (PostgreSQL, MySQL, etc.) is a plus. Excellent troubleshooting and analytical skills with a performance- and security-first mindset. Experience working in agile, fast-paced startup environments is a strong plus. Nice to Have Experience working in PropTech, AgriTech, or sustainability-focused platforms. Exposure to geospatial mapping systems, virtual land visualization, or real-time data platforms. Prior work with DevSecOps, service meshes like Istio, or secrets management with Vault. Passion for building tech that positively impacts people and the planet. Why Join Us? Join India’s first revolutionary PropTech platform, blending human-centric design with cutting-edge technology to empower property discovery and ownership. Be part of a company that doesn’t just build products—it builds ecosystems: for urban buyers, rural farmers, and the environment. Work with a forward-thinking leadership team from one of India’s most respected sustainability and land stewardship organizations. Collaborate across cross-disciplinary teams solving real-world challenges at the intersection of tech, land, and sustainability.
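Monitoring and alerting responsibilities like those in this posting often reduce to computing an error rate over recent requests and paging past a threshold. An illustrative, stdlib-only sketch; the `"METHOD PATH STATUS"` log format and the sample paths are assumptions made for the example:

```python
from collections import Counter

def error_rate(log_lines: list) -> float:
    """Fraction of requests with a 5xx status, given 'METHOD PATH STATUS' lines."""
    counts = Counter(
        "5xx" if line.rsplit(" ", 1)[1].startswith("5") else "ok"
        for line in log_lines
    )
    total = sum(counts.values())
    return counts["5xx"] / total if total else 0.0

lines = [
    "GET /plots 200",
    "GET /plots 200",
    "POST /bookings 502",
    "GET /health 200",
]
print(error_rate(lines))  # → 0.25
```

In practice this computation runs inside Prometheus (a `rate()` over a counter) or an ELK query rather than a script, but the alert condition, 5xx over total in a window, is the same.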
Posted 1 month ago
7 years
0 Lacs
Bengaluru, Karnataka
Work from Office
About the Job: DevOps + Python
Location: Bangalore
Mode: Hybrid (2-3 days/week)
Experience: 7+ Years
Tech skills (relevant development experience is a must):
Experience with Python scripting. Understanding of code management and release approaches (must have). Understanding of CI/CD pipelines, GitFlow, GitHub, and GitOps with Flux or ArgoCD (must have; Flux is good to have). Good understanding of functional programming (Python primary / Golang secondary, as used in the IaC platform). Understanding of ABAC / RBAC / JWT / SAML / AAD / OIDC authorization and authentication (hands-on and direction). NoSQL databases, e.g., DynamoDB (SCC heavy). Event-driven architecture: queues, streams, batches, pub/subs. Understanding of functional programming constructs: list / map / reduce / compose; familiarity with monads is needed. Fluent in operating Kubernetes clusters from a dev perspective: creating custom CRDs, operators, and controllers. Experience creating serverless workloads on AWS & Azure (both needed). Monorepo / multirepo: understanding of code management approaches. Understanding of scalability and concurrency. Understanding of networking, Direct Connect connectivity, and proxies. Deep knowledge of AWS cloud (org / networks / security / IAM) and a basic understanding of Azure cloud. Understanding of SDLC and development principles such as DRY, KISS, and SOLID.
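For a role touching JWT/OIDC, it helps to remember that a JWT is three base64url segments joined by dots, and the middle segment is the claim set. An illustrative stdlib-only sketch that inspects claims without verifying the signature; the token built here is a throwaway example, and production code must always verify signatures with a proper library (e.g. PyJWT):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's claim set WITHOUT verifying its signature (inspection only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj) -> str:
    """Encode a dict as an unpadded base64url JSON segment (for the demo token)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Build a throwaway header.payload.signature token to demonstrate:
token = ".".join([_b64({"alg": "none"}), _b64({"sub": "svc-deployer", "roles": ["admin"]}), ""])
print(jwt_claims(token)["sub"])  # → svc-deployer
```

This pattern is useful for debugging (e.g. checking which roles an AAD/OIDC token actually carries); trusting the claims without signature verification would defeat the point of the token.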
Posted 1 month ago
3 - 8 years
10 - 20 Lacs
Bengaluru
Remote
About the Team/Role
We are seeking a highly skilled DevOps Engineer with in-depth knowledge and hands-on experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker. The ideal candidate will be responsible for containerizing all technology applications and ensuring seamless integration and deployment across our infrastructure.
How You'll Make an Impact
Provide strong technical guidance and leadership in DevOps practices. Design, implement, and maintain Kubernetes clusters for scalable application deployment. Utilize GitOps methodologies for continuous delivery and operational efficiency. Develop and manage CI/CD pipelines using GitHub Actions and Argo CD. Implement a service mesh using Istio. Containerize applications using Docker to ensure consistency across different environments. Collaborate with development and operations teams to deliver services quickly and efficiently. Monitor and optimize the performance, scalability, and reliability of applications.
Experience You'll Bring
Proven experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker. Strong background in containerizing technology applications. Demonstrated ability to deliver services quickly without compromising quality. Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication skills and the ability to provide technical guidance to team members. Prior experience in a similar role is essential. A development background is essential. Monitoring and logging: implement and manage comprehensive monitoring and logging solutions to ensure proactive issue detection and resolution. Debugging and troubleshooting: utilize advanced debugging and troubleshooting skills to address complex issues across the infrastructure and application stack. Architecture and design: lead the architecture and design of scalable and reliable infrastructure solutions, ensuring alignment with organizational goals and industry best practices.
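In a GitOps workflow like the one this posting describes, the deployable state lives in Git as declarative manifests that Argo CD reconciles against the cluster. An illustrative Python sketch that builds a minimal Kubernetes Deployment manifest ready to serialize and commit; the service name and registry are hypothetical:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Minimal apps/v1 Deployment: the selector labels must match the pod template labels."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("billing-api", "registry.example.com/billing-api:1.4.2")
print(json.dumps(manifest, indent=2)[:60])
```

A rollout then becomes a one-line diff (the image tag) in a pull request, which gives the audit trail and easy rollback that motivate GitOps in the first place.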
Posted 1 month ago
0 - 10 years
0 Lacs
Noida, Uttar Pradesh
Work from Office
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
The Opportunity: At Adobe, we offer an outstanding opportunity to work on new and emerging technology that crafts the digital experiences of millions. As the Infrastructure Engineering team of Developer Platforms at Adobe, we provide industry-leading application hosting capabilities. Our solutions support high-traffic, highly visible applications with immense amounts of data, numerous third-party integrations, and exciting scalability and performance problems. As a platform engineer on the Ethos team, you will work closely with our senior engineers to develop, deploy, and maintain our Kubernetes-based infrastructure. This role offers an excellent opportunity to grow your skills in cloud-native technologies and DevOps practices.
What you'll do: Contribute to the design, development, and maintenance of the Kubernetes platform for container orchestration. Partner with other development teams across Adobe to ensure applications are crafted to be cloud-native and scalable.
Perform day-to-day operational tasks such as upgrades and patching of the Kubernetes platform. Develop and implement CI/CD pipelines for application deployment on Kubernetes. Handle tasks and projects with Agile methodologies such as Scrum. Monitor the health of the platform and applications using tools like Prometheus and Grafana. Resolve issues within the platform and collaborate with development teams to resolve application issues. Opportunities to contribute to upstream CNCF projects, including Cluster API, ACK, and Argo CD, among several others. Stay current with the latest industry trends and technologies in container orchestration and cloud-native development. Participate in an on-call rotation to resolve incidents and identify root cause as part of incident and problem management.
What you need to succeed: B.Tech degree in Computer Science or equivalent practical experience. Minimum of 5-10 years of experience working with Kubernetes. Certified Kubernetes Administrator and/or Developer/Security certifications encouraged. Strong software development skills in Python, Node.js, Go, Bash, or similar languages. Experience with AWS, Azure, or other cloud platforms (AWS/Azure certifications encouraged). Understanding of cloud network architectures (VNET/VPC/NAT Gateway/Envoy, etc.). A solid understanding of time-series monitoring tools such as Prometheus and Grafana. Familiarity with the 12-factor principles and the software development lifecycle. Knowledge of GitOps, Argo CD, and Helm, or equivalent experience, is an advantage.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users.
If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
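A small, hedged illustration of the monitoring math behind platform-health work like the above: with Prometheus-style SLOs, on-call teams often alert on error-budget burn rate rather than raw error counts. All numbers below are hypothetical:

```python
# Hedged sketch of SLO error-budget math: for a target availability SLO, the
# allowed error ratio is (1 - SLO); the burn rate says how many times faster
# than "allowed" the monthly error budget is being consumed. Multi-window
# burn-rate alerts build on exactly this quantity.

def burn_rate(error_ratio: float, slo_target: float) -> float:
    """Observed error ratio divided by the allowed error ratio (1 - SLO)."""
    allowed = 1.0 - slo_target
    return error_ratio / allowed

# A 99.9% SLO allows 0.1% errors; observing 1% errors burns budget 10x too fast.
print(round(burn_rate(error_ratio=0.01, slo_target=0.999), 6))  # 10.0
```

At burn rate 10, a 30-day error budget is gone in about 3 days, which is why fast-burn alerts page immediately while slow-burn alerts open tickets.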
Posted 1 month ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: GCP Enterprise Architect
Required Technical Skill Set: GCP Enterprise Architect
Desired Experience Range: 8-10 yrs
Location of Requirement: Kolkata/Delhi
Notice period: Immediately
Interview: We are currently planning to hold a virtual interview on 17 May 2025 (Saturday).
Job Description: Overview: We are seeking a Google Cloud Platform (GCP) Enterprise Architect to design, implement, and optimize cloud solutions that drive business transformation. The ideal candidate will have deep expertise in GCP services, cloud architecture best practices, and enterprise IT infrastructure. You will collaborate with cross-functional teams to ensure scalability, security, and cost-efficiency while aligning cloud strategies with business goals. Required Skills & Experience: Cloud Expertise: Deep understanding of GCP services, including Compute Engine, Kubernetes Engine (GKE), Cloud Functions, BigQuery, Cloud Storage, and IAM. Architecture Patterns: Strong knowledge of microservices, serverless, and containerization (Kubernetes, Docker). Security & Compliance: Familiarity with cloud security best practices, compliance frameworks (ISO 27001, SOC 2, HIPAA, etc.), and identity management. Networking & Connectivity: Experience with hybrid cloud networking, VPC design, VPN, Cloud Interconnect, and DNS configurations. DevOps & Automation: Hands-on experience with Terraform, Ansible, CI/CD pipelines, and GitOps methodologies. Data & Analytics: Understanding of data lake architectures, ETL pipelines, and analytics solutions on GCP. Programming & Scripting: Proficiency in Python, Go, Java, or similar languages for cloud automation and scripting. Business Acumen: Ability to translate business needs into technical solutions and articulate cloud ROI to stakeholders.
Posted 1 month ago
0 - 20 years
0 Lacs
Perungudi, Chennai, Tamil Nadu
Remote
Position: Principal Architect
Domain: Finance, Retail & Additional Verticals
Experience: 12+ years
Location: Chennai; applications from other locations are acceptable if willing to relocate (no remote).
Overview: GMIndia seeks a skilled Technical Project Manager with 5–20 years of experience to lead complex projects in BFSI, eCommerce, and IT domains. The ideal candidate is adept in Agile and Waterfall methodologies, with strong technical expertise in Java, FullStack, and MERN technologies, and a proven record of delivering $10M+ projects on time.
Key Responsibilities: Architectural Strategy: Design microservices, APIs, event-driven systems, and data models for high-throughput web and mobile applications. Establish standards for coding, CI/CD, infrastructure as code, observability, and automated testing. Solution Delivery: Lead development of core platform components, balancing cost, performance, and security trade-offs. Oversee PoCs for emerging tech (AI/ML, real-time streaming, Web 3.0), driving rapid innovation. Collaboration & Governance: Partner with Product, Engineering, Security, and Compliance to align roadmaps and review designs. Chair architecture review boards; maintain up-to-date documentation and run regular design audits. Scalability & Resilience: Define SLAs and SLOs; own capacity planning, load/stress testing, and disaster-recovery strategies. Domain Expertise: Apply deep knowledge of finance (trading platforms, payment gateways, regulatory compliance) and retail (omnichannel, inventory systems, loyalty engines) to architecture decisions.
Qualifications: 12+ years in software engineering, 5+ years in technical leadership/architecture, ideally in startups or high-growth firms.
Preferred: Startup exposure (Seed to Series A); knowledge of AI/ML, blockchain, IoT, Kafka, and green IT.
Certifications: Required: AWS/GCP/Azure Architect, CKA. Preferred: TOGAF 9, CISSP, CSP.
Designed and operated enterprise-scale web/mobile apps with deep expertise in AWS/GCP/Azure, microservices, containers (Docker/Kubernetes), and serverless architectures. Strong backend skills (Java, Go, Node.js); familiar with modern frontend/mobile frameworks (React, Angular, Flutter, Swift/Kotlin). Hands-on DevOps (Terraform, GitOps, CI/CD, Prometheus, ELK, Datadog); solid understanding of security protocols (OAuth, encryption, GDPR, SOC 2). Excellent communicator and mentor; experienced in stakeholder engagement.
Personal Attributes: Visionary thinker with attention to detail. Strong sense of ownership and bias for action. Collaborative leadership style and a passion for mentoring. Adaptable in fast-paced, ambiguous startup environments.
Job Types: Full-time, Permanent
Pay: Up to ₹2,302,920.20 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
Speak with the employer: +91 9966099006
Posted 1 month ago
4 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
This role is for one of Weekday's clients Salary range: Rs 500000 - Rs 2400000 (ie INR 5-24 LPA) Min Experience: 4 years Location: Hyderabad, Coimbatore, Bengaluru JobType: full-time Requirements We are looking for a talented and experienced DevOps Engineer with expertise in Google Cloud Platform (GCP) to join our infrastructure and operations team. This is a key technical role where you will be responsible for automating, scaling, and optimizing our cloud infrastructure, CI/CD pipelines, and deployment processes. As a DevOps Engineer - GCP, you will work closely with development, QA, and security teams to build and maintain robust infrastructure that supports continuous integration, automated deployments, and high availability for mission-critical applications. You will champion DevOps best practices and ensure operational excellence in cloud-based environments. Key Responsibilities: Design, build, and manage infrastructure on Google Cloud Platform (GCP) with a focus on scalability, reliability, and security. Develop and maintain Infrastructure as Code (IaC) using tools such as Terraform, Deployment Manager, or equivalent. Set up and manage CI/CD pipelines using tools like Jenkins, GitLab CI, Cloud Build, or Spinnaker to enable fast, safe, and consistent releases. Monitor system performance, availability, and security using tools such as Stackdriver, Prometheus, Grafana, and integrate alerts with incident management systems. Automate repetitive operational tasks and deployments using Python, Bash, or Shell scripting. Implement and maintain containerized applications using Docker and orchestration with Kubernetes (GKE preferred). Manage secrets, configuration, and access control using tools like HashiCorp Vault, Google Secret Manager, and IAM policies. Ensure compliance with security standards, perform system hardening, and assist with audits and vulnerability assessments. 
Collaborate with engineering teams to troubleshoot infrastructure issues and support development workflows. Document infrastructure, processes, and procedures to ensure transparency and knowledge sharing within the team. Required Skills & Experience: Minimum of 4 years of hands-on DevOps experience, preferably in cloud-native environments. Strong experience working with Google Cloud Platform (GCP) and familiarity with core services like Compute Engine, GKE, Cloud Functions, Cloud Storage, Pub/Sub, BigQuery, and Cloud SQL. Proficiency with Infrastructure as Code tools like Terraform, and scripting languages like Bash or Python. Hands-on experience with CI/CD pipeline development, version control systems (Git), and deployment automation. Solid understanding of Linux-based systems, networking, load balancing, DNS, firewalls, and system security. Experience deploying and managing Kubernetes clusters (GKE experience preferred). Knowledge of monitoring, logging, and alerting tools and their integration in a production environment. Strong troubleshooting skills and the ability to diagnose and resolve infrastructure and application-level issues. Excellent communication and documentation skills, with a proactive and collaborative mindset. Nice to Have: Google Cloud Professional certifications (e.g., Professional Cloud DevOps Engineer, Associate Cloud Engineer). Experience with multi-cloud or hybrid cloud environments. Exposure to GitOps workflows and tools like ArgoCD or FluxCD
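The posting's point about automating repetitive operational tasks in Python can be made concrete with a common building block: retrying a flaky operation with exponential backoff. A stdlib-only sketch, where `flaky()` is a hypothetical stand-in for a cloud-API or health-check call; the sleep function is injectable so tests do not actually wait:

```python
# Stdlib-only sketch of an ops-automation building block: retry with
# exponential backoff. flaky() below is a hypothetical stand-in for a call
# to a cloud API or health-check endpoint.
import time

def retry(op, attempts: int = 5, base_delay: float = 1.0, sleep=time.sleep):
    """Call op(); on exception wait base_delay * 2**i, then retry."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise                      # out of attempts: re-raise
            sleep(base_delay * (2 ** i))   # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, sleep=lambda s: None))  # ok (succeeds on the third try)
```

Production versions typically add jitter to the delay and retry only on error classes known to be transient.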
Posted 1 month ago
4 years
0 Lacs
Greater Kolkata Area
Job Summary: In this role, you will lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI.
Job Description
Company Overview: Outsourced is a leading ISO certified India & Philippines offshore outsourcing company that provides dedicated remote staff to some of the world's leading international companies. Outsourced is recognized as one of the Best Places to Work and has achieved Great Place to Work Certification. We are committed to providing a positive and supportive work environment where all staff can thrive. As an Outsourced staff member, you will enjoy a fun and friendly working environment, competitive salaries, opportunities for growth and development, work-life balance, and the chance to share your passion with a team of over 1000 talented professionals.
Job Responsibilities: Lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI, establishing best practices for scalability, reliability, and maintainability while actively contributing to relevant open source communities. Design and develop robust, production-grade features focused on AI trustworthiness, including model monitoring. Drive technical decision-making around system architecture, technology selection, and implementation strategies for key MLOps components, with a focus on open source technologies. Define and implement technical standards for model deployment, monitoring, and validation pipelines, while mentoring team members on MLOps best practices and engineering excellence. Collaborate with product management to translate customer requirements into technical specifications, architect solutions that address scalability and performance challenges, and provide technical leadership in customer-facing discussions. Lead code reviews, architectural reviews, and technical documentation efforts to ensure high code quality and maintainable systems across distributed engineering teams. Identify and resolve complex technical challenges in production environments, particularly around model serving, scaling, and reliability in enterprise Kubernetes deployments. Partner with cross-functional teams to establish technical roadmaps, evaluate build-vs-buy decisions, and ensure alignment between engineering capabilities and product vision. Provide technical mentorship to team members, including code review feedback, architecture guidance, and career development support while fostering a culture of engineering excellence.
Required Qualifications: 5+ years of software engineering experience, with at least 4 years focusing on ML/AI systems in production environments. Strong expertise in Python, with demonstrated experience building and deploying production ML systems. Deep understanding of Kubernetes and container orchestration, particularly in ML workload contexts. Extensive experience with MLOps tools and frameworks (e.g., KServe, Kubeflow, MLflow, or similar). Track record of technical leadership in open source projects, including significant contributions and community engagement. Proven experience architecting and implementing large-scale distributed systems. Strong background in software engineering best practices, including CI/CD, testing, and monitoring. Experience mentoring engineers and driving technical decisions in a team environment.
Preferred Qualifications: Experience with Red Hat OpenShift or similar enterprise Kubernetes platforms. Contributions to ML/AI open source projects, particularly in the MLOps/GitOps space. Background in implementing ML model monitoring. Experience with LLM operations and deployment at scale. Public speaking experience at technical conferences. Advanced degree in Computer Science, Machine Learning, or related field. Experience working with distributed engineering teams across multiple time zones.
What we Offer: Health Insurance: We provide medical coverage up to 20 lakh per annum, which covers you, your spouse, and a set of parents. This is available after one month of successful engagement. Professional Development: You'll have access to a monthly upskill allowance of ₹5000 for continued education and certifications to support your career growth. Leave Policy: Vacation Leave (VL): 10 days per year, available after probation; you can carry over or encash up to 5 unused days. Casual Leave (CL): 8 days per year for personal needs or emergencies, available from day one. Sick Leave: 12 days per year, available after probation. Flexible work hours or remote work opportunities, depending on the role and project. Outsourced benefits such as paternity leave, maternity leave, etc.
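The model-monitoring responsibility above often includes drift detection. One widely used check is the Population Stability Index (PSI); the sketch below is a generic illustration (the 0.2 threshold is a common rule of thumb, and the bins are hypothetical), not OpenShift AI code:

```python
# Generic drift-monitoring illustration: PSI compares binned proportions from
# training ("expected") against live traffic ("actual"). Larger values mean
# the live distribution has shifted away from what the model was trained on.
import math

def psi(expected, actual, eps: float = 1e-6) -> float:
    """PSI over pre-binned proportions of equal length."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)      # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

train = [0.25, 0.25, 0.25, 0.25]
print(round(psi(train, [0.24, 0.26, 0.25, 0.25]), 4))  # ~0.0008: stable
print(round(psi(train, [0.05, 0.10, 0.25, 0.60]), 4))  # well above 0.2: drift
```

In a serving pipeline this would run on a schedule, with an alert fired when the index crosses the chosen threshold.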
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
Madgaon, Goa
On-site
About the Role: We are seeking an experienced Senior DevOps Engineer with strong expertise in AWS to join our growing team. You will be responsible for designing, implementing, and managing scalable, secure, and reliable cloud infrastructure. This role demands a proactive, highly technical individual who can drive DevOps practices across the organization and work closely with development, security, and operations teams. Key Responsibilities: Design, build, and maintain highly available cloud infrastructure using AWS services. Implement and manage CI/CD pipelines for automated software delivery and deployment. Collaborate with software engineers to ensure applications are designed for scalability, reliability, and performance. Manage Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or similar. Optimize system performance, monitor production environments, and ensure system security and compliance. Develop and maintain system and application monitoring, alerting, and logging using tools like CloudWatch, Prometheus, Grafana, or ELK Stack. Manage containerized applications using Docker and orchestration platforms such as Kubernetes (EKS preferred). Conduct regular security assessments and audits, ensuring best practices are enforced. Mentor and guide junior DevOps team members. Continuously evaluate and recommend new tools, technologies, and best practices to improve infrastructure and deployment processes. Required Skills and Qualifications: 5+ years of professional experience as a DevOps Engineer, with a strong focus on AWS. Deep understanding of AWS core services (EC2, S3, RDS, IAM, Lambda, ECS, EKS, etc.). Expertise with Infrastructure as Code (IaC) – Terraform, CloudFormation, or similar. Strong experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Hands-on experience with containerization (Docker) and orchestration (Kubernetes, EKS). Proficiency in scripting languages (Python, Bash, Go, etc.). 
Solid understanding of networking concepts (VPC, VPN, DNS, Load Balancers, etc.). Experience implementing security best practices (IAM policies, KMS, WAF, etc.). Strong troubleshooting and problem-solving skills. Familiarity with monitoring and logging frameworks. Good understanding of Agile/Scrum methodologies. Preferred Qualifications: AWS Certified DevOps Engineer – Professional or other AWS certifications. Experience with serverless architectures and AWS Lambda functions. Exposure to GitOps practices and tools like ArgoCD or Flux. Experience with configuration management tools (Ansible, Chef, Puppet). Knowledge of cost optimization strategies in cloud environments.
Job Type: Full-time
Pay: ₹60,000.00 - ₹70,000.00 per month
Benefits: Provident Fund
Schedule: Day shift
Application Question(s): Are you based in Goa?
Experience: DevOps: 4 years (Required); AWS: 3 years (Required)
Work Location: In person
Speak with the employer: +91 8275022406
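CI/CD and monitoring, as required above, frequently meet in automated canary releases. A hedged sketch of the control logic only (the step weights and the 1% error threshold are hypothetical choices, not from this posting):

```python
# Illustrative canary-rollout control logic: shift traffic to the new version
# in steps, promoting only if the canary's observed error rate stays under a
# threshold at every step; otherwise roll all traffic back to stable.

def canary_rollout(steps, error_rates, threshold=0.01):
    """Return (final_canary_weight, 'promoted' | 'rolled_back').

    steps: canary traffic weights to walk through, e.g. [5, 25, 50, 100].
    error_rates: observed canary error rate at each step (hypothetical metrics).
    """
    for weight, err in zip(steps, error_rates):
        if err > threshold:
            return 0, "rolled_back"   # revert all traffic to the stable version
    return steps[-1], "promoted"

print(canary_rollout([5, 25, 50, 100], [0.001, 0.002, 0.002, 0.003]))
# (100, 'promoted')
print(canary_rollout([5, 25, 50, 100], [0.001, 0.05, 0.0, 0.0]))
# (0, 'rolled_back')
```

Tools in this space (e.g., deployment controllers driven by Prometheus metrics) automate exactly this loop, with the error rates pulled from the monitoring stack rather than passed in.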
Posted 1 month ago
9 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Java Full Stack Cloud Architect
Locations: Pune, Bangalore, Chennai, Hyderabad, Kochi, Trivandrum
Experience Level: 9+ Years
Key Responsibilities: We are seeking a highly skilled Java Full Stack Cloud Architect to design and implement scalable, secure, and high-performance cloud-native solutions. The ideal candidate will bring deep expertise in Java, microservices, modern front-end frameworks, cloud platforms (preferably AWS), and DevOps practices.
Your Role Will Involve: Architecting Cloud-Native Solutions: Design scalable, resilient cloud architectures using Java (Spring Boot). Cloud Strategy & Implementation: Deploy and optimize applications on AWS (preferred), GCP, or Azure. Microservices Development: Build RESTful APIs and event-driven microservices using Kafka or RabbitMQ. Containerization & Orchestration: Leverage Docker and Kubernetes for service deployment and scalability. Security & Compliance: Implement secure authentication and authorization using OAuth2, JWT, IAM; ensure compliance with standards like SOC 2, ISO 27001. DevOps Enablement: Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab; automate infrastructure via Terraform or CloudFormation. Database & Performance Optimization: Work with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases; implement caching (Redis) for performance. Front-End Collaboration: Collaborate with UI teams using React, JavaScript/TypeScript, and modern UI frameworks. Stakeholder Collaboration: Engage with cross-functional teams to align architectural goals with business needs.
Required Skills & Qualifications: Java Proficiency: Strong hands-on experience with Java 8+, Spring Boot, Jakarta EE. Cloud Expertise: Proficient in AWS (preferred), Azure, or GCP. DevOps Skills: Experience with Kubernetes, Helm, Jenkins, GitOps, and IaC tools like Terraform. Security & Identity: Practical knowledge of OAuth 2.0, SAML, and Zero Trust Architecture. Database Knowledge: Expertise in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB). Scalable System Design: Ability to design and implement low-latency, high-availability systems. UI/Front-End Understanding: Familiarity with React, TypeScript, and front-end integration in full-stack applications. Leadership & Communication: Excellent presentation skills, stakeholder management, and team mentoring ability.
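The caching requirement above ("implement caching (Redis) for performance") usually means the cache-aside pattern. A language-agnostic sketch in Python, with a plain dict standing in as a hypothetical substitute for Redis, and `load_from_db` as a made-up query function:

```python
# Cache-aside pattern sketch: try the cache first; on a miss, read through to
# the database and populate the cache. The dict and load_from_db below are
# hypothetical stand-ins for Redis and a real database query.

cache = {}
db_reads = {"count": 0}

def load_from_db(key):               # stand-in for a real database query
    db_reads["count"] += 1
    return f"value-for-{key}"

def get(key):
    if key in cache:                 # cache hit: no database round trip
        return cache[key]
    value = load_from_db(key)        # cache miss: read through to the DB
    cache[key] = value
    return value

get("user:42"); get("user:42")
print(db_reads["count"])  # 1 -- the second read was served from the cache
```

A production version adds expiry (TTL) and invalidation on write, which is where most cache-related bugs live.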
Posted 1 month ago
10 - 20 years
30 - 37 Lacs
Bengaluru
Remote
Direct link to apply : https://jobs.lever.co/pythian/2cec65bb-46eb-4ca0-8f94-1c9afd36bb38 Role & responsibilities Build and maintain client relationships, providing technical leadership and guidance for current projects. Collaborate with stakeholders to understand business requirements, assist in project planning, and document project plans for both small and medium-sized projects. Participate in and support sprint planning activities with the Project Manager, including story point estimation and ceremonies such as standups, backlog grooming, and retrospectives. Design and implement technical solutions for customer projects, ensuring scalability and efficiency. Create or contribute to building technical design documents and other necessary documentation for projects. Write testable, high-performance, reliable, and maintainable code for CI/CD pipelines and infrastructure-as-code frameworks (e.g., Terraform, CloudFormation). Design and implement security and network software components for multi-cloud solutions and architectures. Research, evaluate, and recommend third-party software and technology packages based on project requirements. Provide performance optimization recommendations and document best practices. Create cloud migration strategies and plans, following best practices and ensuring smooth transitions to cloud architectures. Develop automated provisioning solutions for servers, environments, containers, and data centers. Preferred candidate profile Experience with engineering solutions on major cloud provider platforms, preferably Google Cloud Platform (GCP), and one or both of Amazon Web Services (AWS) and Microsoft Azure. Hands-on experience with operating system platform configuration, tuning, and administration for Linux or Windows, with a preference for both. Strong understanding of application performance and design best practices to ensure applications and services are highly available, performant, scalable, and secure. 
High proficiency with open-source tools, including HashiCorp solutions such as Terraform, Packer, and Vault, along with other deployment frameworks like Pulumi. Proficiency in at least one popular programming language (e.g., Go, Java, Python, Ruby, Rust). Solid understanding of testing techniques and frameworks, including test- and behavior-driven development, with experience in writing test suites, mocks, and fixtures. Capability to write scripts for maintenance, automation, and data processing using scripting languages such as Bash, Groovy, JavaScript, Perl, PHP, PowerShell, or R. Experience with common configuration management tools (e.g., Ansible, Chef, Puppet). Strong knowledge of automating deployment, scaling, and management of containerized applications, ideally with hands-on experience using Kubernetes and tools like Helm. Exposure to Anthos is a plus. Skilled in common CI/CD tools, patterns, and techniques, with familiarity in pipeline enablement products such as ArgoCD, Azure DevOps, Cloud Build, GitLab, or Jenkins. Understanding of development methods, workflows, and patterns, particularly Agile and DevOps practices. Experience with stream-processing platforms and services, such as Kafka and Cloud Pub/Sub. Solid understanding of data security principles, including encryption, access control, and identity management, and their technical application to enforce data custodianship and compliance. Experience with on-premise architectures and virtualization applications such as vCenter. Experience in MLOps is a plus.
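For the testing requirement above (test suites, mocks, and fixtures), here is a minimal stdlib example using `unittest.mock`; `provision_server` and its `client` are hypothetical names for illustration, not Pythian APIs:

```python
# Minimal mock-based test sketch: exercise deployment logic without touching
# a real cloud API. provision_server and client.create_instance are
# hypothetical; Mock records the call so we can assert on it afterwards.
from unittest.mock import Mock

def provision_server(client, name: str) -> str:
    resp = client.create_instance(name=name)
    return resp["id"]

client = Mock()
client.create_instance.return_value = {"id": "i-123"}

assert provision_server(client, "web-1") == "i-123"
client.create_instance.assert_called_once_with(name="web-1")
print("mock-backed test passed")
```

The same shape scales up to fixtures (shared setup via `unittest.TestCase.setUp` or pytest fixtures) once there is more than one test.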
Posted 1 month ago
8.0 years
0 Lacs
Saidapet, Chennai, Tamil Nadu
On-site
Job Information: Department: Frameworks & Tools; Job Type: Full time; Date Opened: 07/05/2025; Industry: Software Development; Experience: 8–12 years; City: Saidapet; Province: Tamil Nadu; Country: India; Postal Code: 600089
About Us: MulticoreWare is a global software solutions & products company with its HQ in San Jose, CA, USA. With worldwide offices, it serves its clients and partners in North America, EMEA and APAC regions. Started by a group of researchers, MulticoreWare has grown to serve its clients and partners on HPC & Cloud computing, GPUs, Multicore & Multithread CPUs, DSPs, FPGAs and a variety of AI hardware accelerators. MulticoreWare was founded by a team of researchers that wanted a better way to program for heterogeneous architectures. With the advent of GPUs and the increasing prevalence of multi-core, multi-architecture platforms, our clients were struggling with the difficulties of using these platforms efficiently. We started as a boot-strapped services company and have since expanded our portfolio to span products and services related to compilers, machine learning, video codecs, image processing and augmented/virtual reality. Our hardware expertise has also expanded with our team; we now employ experts on HPC and Cloud Computing, GPUs, DSPs, FPGAs, and mobile and embedded platforms. We specialize in accelerating software and algorithms, so if your code targets a multi-core, heterogeneous platform, we can help.
Job Description: Key Responsibilities: Architect and implement container orchestration solutions using Kubernetes in production-grade environments. Lead the design and integration of OpenStack with Kubernetes-based platforms. Collaborate with infrastructure, DevOps, and software teams to design cloud-native applications and CI/CD pipelines. Define architectural standards, best practices, and governance models for Kubernetes-based workloads.
Assess current system architecture and recommend improvements or migrations to Kubernetes. Mentor and guide junior engineers and DevOps teams on Kubernetes and cloud-native tools. Troubleshoot complex infrastructure and containerization issues. Key Requirements: 8+ years of experience in IT architecture, with at least 4+ years working on Kubernetes. Deep understanding of Kubernetes architecture (control plane, kubelet, etcd, CNI plugins, etc.). Strong hands-on experience with containerization technologies like Docker and container runtimes. Proven experience working with OpenStack and integrating it with container platforms. Solid knowledge of cloud infrastructure, networking, and persistent storage in Kubernetes. Familiarity with Helm, Istio, service mesh, and other cloud-native tools is a plus. Experience with CI/CD pipelines, infrastructure as code (e.g., Terraform), and GitOps practices. Excellent problem-solving skills and ability to work in fast-paced environments. Preferred Qualifications: Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD). Experience with multiple cloud platforms (AWS, Azure, GCP, or private cloud). Background in networking or storage architecture is highly desirable.
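One concrete piece of the etcd knowledge called for above: etcd commits writes only when a quorum (majority) of members agrees, so an n-member cluster tolerates floor((n-1)/2) member failures. This is why odd cluster sizes are the standard recommendation:

```python
# Quorum arithmetic for an etcd (Raft-based) cluster: a write needs a
# majority of members, so fault tolerance is members - quorum.

def etcd_fault_tolerance(members: int):
    quorum = members // 2 + 1
    return quorum, members - quorum   # (quorum size, failures tolerated)

for n in (1, 3, 4, 5):
    print(n, etcd_fault_tolerance(n))
# 1 (1, 0)
# 3 (2, 1)
# 4 (3, 1)
# 5 (3, 2)
```

Note that going from 3 to 4 members raises the quorum from 2 to 3 without tolerating any additional failures, which is the arithmetic behind the odd-size rule.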
Posted 1 month ago