
93 Autoscaling Jobs - Page 4

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


We’re currently partnering with an industry-leading financial organization on an exciting journey of innovation and transformation — and we’d love for you to be part of it. They’re looking for a skilled VP, FinOps to join their dynamic team. This is a fantastic opportunity to work with cutting-edge technologies, contribute to high-impact projects, and collaborate with some of the sharpest minds in the industry. The role is central to enhancing operational excellence and delivering key solutions within the FIC Markets space.

Roles & Responsibilities

Cloud Cost Management & Optimisation: Own and drive cloud cost visibility, forecasting, and optimisation strategies. Analyse AWS cost and usage reports, Cloudability insights, and Apptio BI dashboards to identify cost-saving opportunities. Implement and track AWS Savings Plans, Reserved Instances (RIs), Convertible RIs, and cost-effective purchasing strategies. Collaborate with Cloud Engineering to define best practices for resource utilisation, rightsizing, and autoscaling. Establish a governance model for cloud cost management, ensuring teams take accountability for their cloud spend.

Chargeback/Showback & Financial Transparency: Develop and implement a robust chargeback model that aligns cloud spend with business units, applications, and cost centres. Work with Finance, Procurement, and Application Owners to ensure accurate financial allocations and cost recovery. Address complexities such as shared costs, cross-application Savings Plans, and AWS credits reconciliation. Provide standardised reporting for key personas including Finance, Procurement, and Business Unit leads.

Stakeholder Engagement & Collaboration: Act as the primary point of contact for cloud financial management across multiple stakeholders. Build strong relationships with Finance, Procurement, and Business Units to align cloud financial strategies with business objectives. Lead monthly FinOps forums to discuss cost trends, financial accountability, and optimisation initiatives. Support stakeholder requests via a structured intake process, ensuring requests are prioritised and actioned effectively.

FinOps Governance, Automation & Reporting: Establish FinOps best practices and governance frameworks for cloud budgeting, forecasting, and variance analysis. Leverage automation and FinOps tools to enhance cost tracking, anomaly detection, and reporting accuracy. Continuously refine dashboards and reports in Cloudability and Apptio BI to support real-time decision-making. Provide quarterly executive summaries on cloud financial performance, key savings initiatives, and future outlooks.

Essential Skills & Experience: Strong background in FinOps, Cloud Cost Management, or Cloud Financial Governance. Hands-on experience with AWS Cost Explorer, Cloudability, Apptio BI, and related FinOps tooling. Deep understanding of AWS pricing models, including Savings Plans, Reserved Instances, and Enterprise Discount Programs. Experience designing and implementing chargeback/showback models in a corporate environment. Strong stakeholder engagement skills, with experience collaborating across Finance, Procurement, and Cloud Engineering teams. Excellent data analysis skills, with the ability to interpret complex financial data and present actionable insights. Strong problem-solving skills, particularly in handling exceptions such as mid-month migrations, cross-application Savings Plans usage, and AWS credits misallocations.
Desirable Skills and Qualifications: A bachelor’s degree in computer science, information systems, or a related field, or equivalent work experience. 5+ years of experience with one or more public/private cloud platforms (e.g. AWS, Azure). AWS FinOps certification or equivalent cloud cost management qualifications. Experience in large-scale cloud migrations and financial planning for cloud adoption. Knowledge of multi-cloud FinOps strategies, although AWS is the primary focus. Experience working within a large corporate, regulated industry, or multi-business-unit environment.
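For a concrete sense of the cost-visibility work this role describes, here is a minimal Python sketch that pulls one month of AWS spend grouped by service through the Cost Explorer API. It assumes boto3 credentials are already configured; the date range and top-10 cut-off are illustrative.

```python
# Minimal sketch: pull one month of AWS spend grouped by service with the
# Cost Explorer API, then list the largest line items as optimisation candidates.
# Assumes boto3 credentials/region are configured; dates and cut-off are illustrative.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

groups = resp["ResultsByTime"][0]["Groups"]
spend_by_service = sorted(
    ((g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"])) for g in groups),
    key=lambda item: item[1],
    reverse=True,
)

# Print the ten most expensive services for review in the monthly FinOps forum.
for service, amount in spend_by_service[:10]:
    print(f"{service:<40} ${amount:,.2f}")
```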

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Cloud Security Engineer
Location: Hyderabad, Pune, Coimbatore
Experience: 6-8 Yrs.
Working Mode: 5 Days WFO

Job Summary: We are looking for a Cloud Security Engineer with a minimum of 6 years of experience in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a deep understanding of cloud infrastructure and architecture, coupled with expertise in deploying, managing, and optimizing AWS services. As a Cloud Platform Engineer, you will play a crucial role in designing, implementing, and maintaining our cloud-based solutions to meet the evolving needs of the organization and clients.

Responsibilities: The day-to-day work activities include: Using a broad range of AWS services (VPC, EC2, RDS, ELB, S3, AWS CLI, CloudWatch, CloudTrail, AWS Config, Kinesis, Route 53, DynamoDB, and SNS) to develop and maintain an Amazon AWS based cloud solution. Implementing identity and access management (IAM) controls to manage user and system access securely. Collaborating with cloud architects and developers to create security solutions for cloud environments (e.g., AWS, Azure, GCP) by designing security controls, ensuring they are integrated into cloud platforms, and ensuring that cloud infrastructure adheres to relevant compliance standards (e.g., GDPR, HIPAA, PCI-DSS). Monitoring cloud environments for suspicious activities and threats using tools like SIEM (Security Information and Event Management) systems. Implementing security governance policies and maintaining audit logs for regulatory requirements. Automating cloud security processes using tools such as CloudFormation, Terraform, or Ansible. Implementing infrastructure as code (IaC) to ensure secure deployment and configuration. Building custom Terraform modules to provision cloud infrastructure and maintaining them through enhancements to the latest versions. Collaborating with DevOps, network, and software development teams to promote secure cloud practices, and training and educating employees about cloud security best practices. Securing and encrypting data by providing secret management solutions with versioning enabled. Building backup solutions to cover application downtime, maintaining a parallel disaster recovery environment in the backend, and implementing disaster recovery strategies. Designing and delivering scalable and highly available solutions for applications migrating to the cloud using launch configurations, Auto Scaling groups, scaling policies, CloudWatch alarms, load balancers, and Route 53. Enabling an extra layer of security for cloud root accounts. Working with the data-based application migration teams to strategically and securely move data from on-premises data centers to cloud storage within an isolated environment. Working with source code management pipelines and debugging issues caused by failed IT development deployments. Remediating findings from the cybersecurity tools used for cloud-native application security, and implementing resource/cloud service tagging strategies by enforcing compliance standards. Experience performing AWS operations within these areas: Threat Detection, Threat Prevention, and Incident Management.
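As a small, hedged illustration of the automated guardrail checks listed above, the sketch below flags S3 buckets whose public-access block is not fully enforced. It assumes boto3 credentials are configured and is not a substitute for a full CSPM control.

```python
# Minimal sketch: report S3 buckets that do not enforce a full public-access
# block, the kind of automated guardrail check a cloud security engineer scripts.
# Assumes boto3 credentials are configured; illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        # No public-access-block configuration exists for this bucket at all.
        fully_blocked = False
    if not fully_blocked:
        print(f"Review bucket: {name} (public access not fully blocked)")
```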
Cloud-Specific Technologies: Control Tower and Service Control Policies; AWS security tools (AWS IAM, Detective, Inspector, Security Hub, etc.). General understanding: identity and least privilege, networking in AWS, IaaS, ITSM (ticketing systems/processes).

Requirements: Candidates are required to have these mandatory skills for their profile to be assessed. The must-have requirements are: Bachelor’s degree in computer science, engineering, or a related field (or equivalent work experience). Minimum of 6 years of hands-on experience as a Cloud Platform Engineer, with a strong focus on AWS. In-depth knowledge of AWS services such as EC2, S3, VPC, IAM, RDS, Lambda, ECS, and others. Proficiency in scripting and programming languages such as Python, Bash, or PowerShell. Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or AWS CDK. Strong understanding of networking concepts, security best practices, and compliance standards in cloud environments. Hands-on experience with containerization technologies (Docker, Kubernetes) and serverless computing. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Strong communication skills with the ability to collaborate effectively with cross-functional teams. AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer) are a plus.

About the Company: ValueMomentum is amongst the fastest growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital and data transformation initiatives. Having grown consistently by 24% every year, we have now grown to over 4,000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees.

Benefits: At ValueMomentum, we offer you the opportunity to grow by working alongside the experts. Some of the benefits you can avail are: Competitive compensation package comparable to the best in the industry. Career advancement: individual career development, coaching and mentoring programs for professional and leadership skill development. Comprehensive training and certification programs. Performance management: goal setting, continuous feedback and year-end appraisal. Reward and recognition for extraordinary performers. Benefits: comprehensive health benefits, wellness and fitness programs; paid time off and holidays. Culture: a highly transparent organization with an open-door policy and a vibrant culture.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site


Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Lead Software Engineer, Cloud Engineering (Lead Cloud Software Engineer)

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview: The Global Open Banking team is looking for a Lead Cloud Software Engineer to drive our customer experience strategy forward by consistently innovating and problem-solving. The ideal candidate is passionate about the customer experience journey, highly motivated, intellectually curious, and brings an analytical and agile mindset.

Role - In this role, you will: Participate in the implementation of new systems, develop strategic thinking, and develop processes, tools, and frameworks to help ensure effectiveness across teams. Be responsible for the analysis, design, development and delivery of software solutions. Define requirements for new applications and customizations, adhering to standards, processes and best practices. Plan and execute maintenance, upgrade, and migration activities in different environments such as Dev, Stage, and Prod. Design and develop resilient, scalable infrastructure that avoids downtime and service interruptions. Interface with external logging, monitoring, and security vendors. Build out infrastructure as code, standard system images, and application images in Docker. Oversee and provide technical support to junior team members. Provide on-call coverage on a rotation basis.

All About You - Skills and experience required to be successful in this role: Managing medium-sized projects/initiatives as an individual contributor with advanced knowledge within the discipline, leading a segment of several initiatives or a larger initiative, or formally supervising a small team and assigning day-to-day work. Designing and developing applications, system-to-system interfaces and complete software solutions, performing vendor-related activities, and creating documentation such as user guides and software development guides. Overseeing and providing technical support to junior team members. Significant advanced code development, code review and modest day-to-day support duties. Expertise in AWS Cloud, developing solution offerings for new and existing cloud projects. An advanced level of knowledge and understanding of various AWS service offerings. Kubernetes experience, managing large-scale Kubernetes clusters.
Experience with kops, Rancher, or an equivalent provided cluster, alternate CNIs, and managing scalability of the cluster. Infrastructure as Code experience: an advanced level of the programming methodologies used to augment Terraform with Python, Bash, or Go and related programming tools such as VS Code and Git. Expertise in Kubernetes cluster backup and restoration techniques and tools. Be creative with solutioning yet simple with implementation.

Technology and Tools Required: Kubernetes – Certified Kubernetes Administrator level expertise. AWS – Solution Architect Professional or Associate level expertise. kops/Rancher/EKS – cluster provisioning tool expert-level experience. Terraform – IaC expert-level experience. CI/CD – GitLab or equivalent continuous delivery tool expertise. Linux – Linux expertise and certifications. Cluster backup and restore tech stack. Autoscaling Kubernetes nodes – scaling solutions. RBAC configuration for cluster access. Service mesh – experience managing a large-scale production environment.

Corporate Security Responsibility: Every person working for, or on behalf of, Mastercard is responsible for information security. All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and therefore it is expected that the successful candidate for this position must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-241448
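To illustrate the day-to-day cluster oversight behind node autoscaling, here is a minimal sketch using the official Kubernetes Python client: it reports node count and Pending pods as a crude signal that the cluster autoscaler may need more capacity. It assumes kubeconfig access and is illustrative only.

```python
# Minimal sketch, assuming kubeconfig access and the official `kubernetes`
# Python client: report node count and Pending pods as a rough signal that
# node autoscaling limits or capacity need attention. Illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

nodes = v1.list_node().items
pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending").items

print(f"nodes={len(nodes)} pending_pods={len(pending)}")
if pending:
    print("Pending pods detected; check cluster-autoscaler logs and node group limits.")
```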

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Skill: Azure VDI
Experience: 8+ Years
Location: Hyderabad

Job Description: Good understanding of VDI technologies and Azure Virtual Desktop. Hands-on experience deploying AVD. In-depth knowledge of Azure services - AAD, AADS, RBAC, Storage, Policies, Backup, Recovery Services Vault, Azure Firewall, Private Link, UDR, Security, Azure File Share, AVD Autoscaling, and Azure Monitor. Knowledge of Group Policy, Active Directory and registry settings, and security concepts. Create, customize and manage AVD images; must also have knowledge of AIB (Azure Image Builder) and Azure Compute Gallery management. Good understanding of profile management with FSLogix. Good experience deploying and managing host pools and session hosts. Good hands-on experience with Microsoft MSIX packaging and App Masking. Must have knowledge of Azure services – Storage, Azure File Share, Backup, Policies, Entra ID, Azure VNet and NSG, and Azure Monitoring. Strong hands-on experience with DevOps, Azure DevOps YAML pipelines, and Infrastructure-as-Code (IaC) such as ARM and Bicep. Monitor the CI/CD pipelines, make basic improvements as needed, and provide support for infrastructure-related issues in the CI/CD process. Develop and maintain automation scripts using tools like PowerShell scripting, Azure Functions, Automation Accounts, CLI, and ARM templates. Troubleshooting skills required for AVD and Windows-related issues and BAU support. Use Azure Monitor, Log Analytics, and other tools to gain insights into system health. Respond to incidents and outages promptly to minimize downtime.
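As a rough sketch of the arithmetic behind AVD autoscaling plans, the snippet below decides how many session hosts should be powered on for a given number of active sessions. The session limit and headroom are assumptions; a real scaling plan would read these values from Azure Monitor or the AVD APIs.

```python
# Minimal sketch of the capacity arithmetic behind AVD session-host autoscaling:
# given active user sessions and a per-host session limit, decide how many hosts
# to keep powered on. All numbers are illustrative assumptions.
import math

def hosts_needed(active_sessions: int, sessions_per_host: int, headroom: float = 0.2) -> int:
    """Return the session-host count needed, with spare headroom for logon storms."""
    if active_sessions == 0:
        return 1  # keep at least one host running during working hours
    target = active_sessions * (1 + headroom) / sessions_per_host
    return math.ceil(target)

# 180 active sessions, 16 sessions per host, 20% headroom -> 14 hosts
print(hosts_needed(active_sessions=180, sessions_per_host=16))
```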

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description

Roles & Responsibilities

GitHub Actions & CI/CD Workflows (Primary Focus): Design, develop, and maintain scalable CI/CD pipelines using GitHub Actions. Create reusable and modular workflow templates using composite actions and reusable workflows. Manage and optimize GitHub self-hosted runners, including autoscaling and hardening. Monitor and enhance CI/CD performance with caching, parallelism, and proper dependency management. Review and analyze existing Azure DevOps pipeline templates. Migrate Azure DevOps YAML pipelines to GitHub Actions, adapting tasks to equivalent GitHub workflows.

Azure Kubernetes Service (AKS): Deploy and manage containerized workloads on AKS. Implement cluster and pod-level autoscaling, ensuring performance and cost-efficiency. Ensure high availability, security, and networking configurations for AKS clusters. Automate infrastructure provisioning using Terraform or other IaC tools.

Azure DevOps: Design and build scalable YAML-based Azure DevOps pipelines. Maintain and support Azure Pipelines for legacy or hybrid CI/CD environments.

ArgoCD & GitOps: Implement and manage GitOps workflows using ArgoCD. Configure and manage ArgoCD applications to sync AKS deployments from Git repositories. Enforce secure, auditable, and automated deployment strategies via GitOps.

Collaboration & Best Practices: Collaborate with developers and platform engineers to integrate DevOps best practices across teams. Document workflow standards, pipeline configurations, infrastructure setup, and runbooks. Promote observability, automation, and DevSecOps principles throughout the lifecycle.

Must-Have Skills: 8+ years of overall IT experience, with at least 5+ years in DevOps roles. 3+ years of hands-on experience with GitHub Actions (including reusable workflows, composite actions, and self-hosted runners). 2+ years of experience with AKS, including autoscaling, networking, and security. Strong proficiency in CI/CD pipeline design and automation. Experience with ArgoCD and GitOps workflows. Hands-on with Terraform, ARM, or Bicep for IaC. Working knowledge of Azure DevOps pipelines and YAML configurations. Proficient in Docker, Bash, and at least one scripting language (Python preferred). Experience in managing secure and auditable deployments in enterprise environments.

Good-to-Have Skills: Exposure to monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with service meshes like Istio or Linkerd. Experience with secrets management (e.g., HashiCorp Vault, Azure Key Vault). Understanding of RBAC, OIDC, and SSO integrations in Kubernetes environments. Knowledge of Helm and custom chart development. Certifications in Azure, Kubernetes, or DevOps practices.

Skills: GitHub Actions & CI/CD, Azure Kubernetes Service, ArgoCD & GitOps, DevOps
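For illustration of the self-hosted runner management mentioned above, here is a minimal sketch that lists a repository's runners via the GitHub REST API and flags offline ones. The owner/repo values and the token environment variable are placeholders.

```python
# Minimal sketch: list a repository's self-hosted GitHub Actions runners and
# flag offline ones, the kind of check used when operating an autoscaled runner
# fleet. OWNER/REPO and the GITHUB_TOKEN env var are placeholders.
import os
import requests

OWNER, REPO = "my-org", "my-repo"  # hypothetical repository
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runners",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()

for runner in resp.json().get("runners", []):
    status = runner["status"]  # "online" or "offline"
    print(f"{runner['name']:<30} {status}")
    if status != "online":
        print(f"  -> investigate or recycle runner {runner['name']}")
```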

Posted 3 weeks ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Description

Who we are: At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands - including Neutrogena, Aveeno, Tylenol, Listerine, Johnson’s and BAND-AID® Brand Adhesive Bandages that you already know and love. Science is our passion; care is our talent. Our global team is made up of ~22,000 diverse and brilliant people, passionate about insights and innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact the life of millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future–and yours.

Role reports to: Oracle Database Administrator
Location: Bangalore

What you will do - Key Responsibilities: Perform Oracle database binary installation and manual custom database builds as per Kenvue IaaS requirements. Automate various database reports, simplifying daily operations and reducing manual intervention. Perform Oracle client installation and configuration on Citrix and application background servers. Set up the database environment to support the PAS-X application installation. Set up standby databases and configure database high availability (Data Guard) with Fast-Start Failover (FSFO). Support disaster recovery testing for all the manufacturing production application databases. Upgrade and patch the manufacturing databases in a timely manner to ensure that databases comply with current ITS standards. Perform database-specific tasks like creating database restore points and enabling flashback to provide contingency support for application changes made as part of patch/hotfix installations. Work with the application team to improve DB performance, such as indexing the database, optimizing queries, caching, optimizing the hardware, tuning the database, and configuring backups. Provide database performance reports (AWR, ADDM & ASH) as and when requested by the PAS-X vendor to help solve application issues quickly and efficiently. Generate and provide yearly user reports to the application team for auditing purposes. Provide support during the quarterly Linux patching to maintain high availability, such as performing DB switch-overs to another node to reduce production downtime. Provide support to the application team by refreshing databases and schemas in lower environments with PROD data so they can test their solutions/fixes before implementing them in the Production environment. Perform on-demand ad-hoc database/schema backups prior to any application deployment to the database to support the backout plan. Review application deployment scripts prior to executing them in the database. Work closely with the application development team to prepare/design their SQL code and build views to enhance reporting. Perform proactive DB health checks, such as executing DB health scripts, thoroughly reviewing their reports/logs, and initiating necessary action to prevent possible upcoming issues. Perform capacity management reviews for the applications' database environments and configure capacity in the environment to meet technical, functional and business requirements. Perform database/schema decommissions where no longer required. Collaborate with vendors such as Oracle, Linux, and VMware for critical issues and implementing new service improvements.
Enhance database administration by automating administration tasks, for example: develop Unix shell scripts to automate database-specific activities such as database health checks, database growth statistics, schema refreshes, checking database job status and timelines, monitoring database jobs, and database performance assessment.

Qualifications - What we are looking for: Total 6 to 8 years of experience. Experience with OEM and Grid installation and maintenance. Experience in installations, configuration, cloning, security, patching, and upgrades. Experience with RAC and ASM storage, and excellent experience with database conversion from ASM to non-ASM and non-ASM to ASM. Proficient in the AWS Cloud platform and its features, which include EC2, RDS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Autoscaling, CloudFront, IAM, S3, and Route 53. Proficient in backup and recovery solutions.

What’s in it for you: Competitive Total Rewards Package*. Paid company holidays, paid vacation, volunteer time and more! Learning and development opportunities. Employee resource groups. *This list could vary based on location/region. Note: Total Rewards at Kenvue include salary, bonus (if applicable) and benefits. Your Talent Access Partner will be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the recruitment and hiring process.

Kenvue is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, protected veteran status, or any other legally protected characteristic, and will not be discriminated against on the basis of disability.

Primary Location: Asia Pacific-India-Karnataka-Bangalore
Job Function: Operations (IT)
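As a hedged example of the scripted health checks described above, here is a minimal Python sketch using the python-oracledb driver rather than a Unix shell script. The connection details are placeholders and the queries assume SELECT access on the V$ views.

```python
# Minimal sketch of a scripted Oracle health check using python-oracledb.
# Connection details are placeholders; queries assume SELECT access on V$ views.
import oracledb

conn = oracledb.connect(user="monitor", password="***", dsn="dbhost:1521/PASXPDB")
with conn.cursor() as cur:
    # Instance should report OPEN for a healthy database.
    cur.execute("SELECT instance_name, status FROM v$instance")
    name, status = cur.fetchone()
    print(f"instance={name} status={status}")

    # Confirm the database open mode (e.g. READ WRITE on primary).
    cur.execute("SELECT open_mode FROM v$database")
    print(f"open_mode={cur.fetchone()[0]}")
conn.close()
```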

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Challenge: Customer Success Engineers are responsible for the partnership between Adobe and our strategic clients, driving value realization and return on the client's investment. This team is made up of technology-savvy individuals who have experience in digital marketing and know its value in driving company strategies. You will work directly with our clients to understand business and technical requirements, and to develop solutions to ensure success. This position includes all of the following aspects: strategic client relationship management. You will be assigned as a designated technical consultant to 5 to 7 customers who are using Adobe Experience Manager. This includes implementing and supporting standard deployment methodologies, managing custom integrations, and bridging communication with clients, third-party providers, project management, internal engineering and automation engineers. You will have a strong focus on client retention, cultivate future projects and qualify new opportunities. There will be frequent interaction with clients including Directors, VPs, and C-level executives of Fortune 500 companies. The CSE role is equally client facing (developing long-term client relationships), keyboard facing (technical operations), and colleague facing (developing your own subject matter expertise and drawing on that of others in a collaborative environment).

What you'll do: Provide a great relationship experience for all assigned clients and assist clients to expand their usage and adoption of Adobe products. Be a trusted technical advisor, enabling clients to apply our tools to achieve their business objectives by providing resources to answer clients' questions, identifying needs for account customization and further implementation where applicable, and ensuring that every client contract is renewed. Work closely with the Sales Executive and consult with other team members (consulting/project management/engineering services/customer support) to be sure mutual objectives are met in support of client happiness. Communicate consistently with clients throughout the contract lifecycle, calling out meaningful issues where needed. You will maintain client contact and provide status updates for all outstanding issues while continuing to manage client expectations, keeping clients satisfied and expectations realistic. You will oversee customer support to ensure timely closure of quality issues and provide project management for professional services requests. Fully understand client requests, documenting and engaging appropriate resources.

You will ideally have: Bachelor's degree in business management or similar. Real passion for digital marketing and client success, and demonstrated exceptional customer skills from previous employment.
Strong and consistent track record of successfully managing client relationships and technical projects, with an excellent work ethic and leadership skills. Self-motivated, reciprocal, very responsible, and passionate about exceeding client expectations. You can understand enterprise internet business models and online processes, terminology, concepts and strategies. You can show excellent social, presentation, and interpersonal skills, both verbal and written. Demonstrated ability to deal with change and excel in high-stress situations, and be self-managed, responsive, and dedicated to client success.

Duties include: Work with Adobe's AEM, Connect, LiveCycle and other teams to assist in developing new AMIs and deployments of new software. Develop the procedures and routines needed to implement and improve autoscaling capabilities. Demonstrate Amazon and Azure cloud services and advanced Adobe Command/Control systems to use the next-generation cloud management solution. Help to develop and support our upgrade systems for enterprise customers as Adobe products develop over time. Collaborate with the teams that provision, customize, monitor, handle and upgrade our cloud-hosted enterprise offering, and drive continuous improvements into the management system to support these areas.

Skill Requirements: Strong experience with cloud hosting, including Microsoft Azure and AWS cloud infrastructure. Strong knowledge of Linux, Windows Server and Java systems; experience with Chef. Experience troubleshooting and operating Adobe AEM in an enterprise environment. Experience with long-term operation, monitoring and upgrade of enterprise software.

Special consideration given for: Master's degree or other advanced education. Prior account management and/or project management experience with Fortune 500 clients. Knowledge of and experience with digital marketing technologies. Prior experience with customer success in a SaaS or Managed Services company. Experience using digital marketing products and FSI vertical experience. Consulting and/or technical training experience.

Adobe is an equal opportunity employer. We support diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
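As a small, hedged illustration of the autoscaling routines mentioned in the duties above, the sketch below reads an Auto Scaling group's current size and bumps the desired capacity by one instance, respecting the cooldown. The group name is hypothetical and boto3 credentials are assumed.

```python
# Minimal sketch of a routine autoscaling operation: inspect an Auto Scaling
# group and increase desired capacity by one, honouring the cooldown.
# The group name is a hypothetical placeholder; boto3 credentials are assumed.
import boto3

asg = boto3.client("autoscaling")
GROUP = "aem-publish-asg"  # hypothetical group name

group = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])["AutoScalingGroups"][0]
desired, maximum = group["DesiredCapacity"], group["MaxSize"]
print(f"desired={desired} max={maximum}")

if desired < maximum:
    asg.set_desired_capacity(AutoScalingGroupName=GROUP, DesiredCapacity=desired + 1, HonorCooldown=True)
    print(f"scaled {GROUP} to {desired + 1}")
```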

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position Title: Infrastructure & SRE Lead
Location: Bangalore (On-site)

Role Overview: We’re hiring an Infrastructure & AIOps Lead to champion the reliability, scalability, and cost-efficiency of our AWS platform, observability stack, and data warehouse. In this role, you’ll work hand-in-hand with backend, AI, and analytics teams (including mentoring our DevOps and Data-Ops engineers) to build AI-infused automation assistants, define and maintain runbooks, enforce SLOs, and continuously optimize both application infrastructure and our Redshift/Metabase data platform. You’ll leverage AI-assisted coding tools to accelerate routine ops workflows, own Terraform-driven deployments, and partner with stakeholders across product and engineering to keep our systems robust at scale.

Key Responsibilities:

Cloud Infrastructure & Automation: Take an automation-first approach to building an AI DevOps agent that reduces MTTD and MTTR. Design and maintain Terraform-based IaC for AWS resources (ECS, VPCs, RDS, SageMaker) and manage MongoDB Atlas clusters. Optimize cost and performance through right-sizing, reserved instances, autoscaling, and continuous infrastructure reviews.

On-call Reliability & Incident Management: Serve as the primary PagerDuty escalation lead; refine alert rules and escalation policies. Develop and maintain runbooks and playbooks for common incidents (database failovers, service crashes, latency spikes). Conduct post-mortems, track error budgets, and drive reliability improvements.

Monitoring & Observability: Define SLIs/SLOs for critical services and build dashboards in New Relic and Coralogix. Instrument logging, tracing, and metrics pipelines; ensure high-fidelity alerts without noise.

CI/CD & Deployment: Design, implement, and maintain GitHub Actions CI/CD pipelines that automate unit testing and enable continuous releases. Collaborate on blue/green or canary release strategies to minimize user impact.

Data Platform & Analytics Support: Oversee our data-ops function (Redshift data warehouse + Metabase). Ensure query performance, cost optimization of the warehouse, and robust dashboard delivery for the analytics team.

Knowledge Sharing & Mentorship: Mentor team members on best practices in reliability, observability, and automation. Lead regular tech talks on infrastructure, security, and cost management. Maintain and evolve our central runbook repository and documentation.

Must-Have Qualifications: 5+ years of hands-on experience owning cloud infrastructure, preferably on AWS (ECS, RDS, S3, IAM, VPC). Proven track record in SRE or DevOps: on-call rotations, runbook development, incident response. Strong IaC skills (Terraform, CloudFormation, or similar) and automation of CI/CD pipelines (GitHub Actions). Deep expertise in monitoring & observability (New Relic, Coralogix) and alerting (PagerDuty). Solid understanding of container orchestration (ECS), networking, and security best practices. Proficient programmer (Python or Go) capable of writing automation scripts and small tools. Familiarity with AI-assisted coding workflows (e.g., GitHub Copilot, Cursor) and comfortable using AI to accelerate routine tasks. Excellent communicator who thrives in a flat, high-ownership environment.

Nice-to-Have: Experience building or integrating AI-powered automation assistants to streamline infra/data-ops workflows. Hands-on practice with LLMs or AI frameworks for operational tooling (prompt engineering, embeddings, etc.).
Prior involvement in ML/AI infrastructure (SageMaker, model-serving frameworks). Experience with large-scale database operations (Redshift, MongoDB Atlas) and caching (Redis). Familiarity with message queues and task runners (Celery, RabbitMQ, or similar). Contributions to open-source DevOps/SRE tooling.
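For a concrete sense of the error-budget tracking this role owns, here is a minimal sketch that computes how much of a 30-day error budget has been burned against a 99.9% availability SLO. The request counts and the 80% freeze threshold are illustrative.

```python
# Minimal sketch of the SLO arithmetic behind error budgets: given a 99.9%
# availability SLO and observed request counts over the 30-day window, report
# how much of the error budget has been burned. Numbers are illustrative.
SLO = 0.999

total_requests = 42_000_000
failed_requests = 30_000

allowed_failures = total_requests * (1 - SLO)
budget_burned = failed_requests / allowed_failures

print(f"error budget burned: {budget_burned:.1%}")
if budget_burned > 0.8:
    print("freeze risky releases and prioritise reliability work")
```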

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site


Senior Java Developer

PROFESSIONAL SUMMARY:
• 8+ years of software experience in design, development and deployment of web-based client-server business applications using OOP and Java/J2EE technologies.
• Experience in Agile software development processes, Test-Driven Development and Scrum methodologies.
• Proficient in applying design patterns like MVC, Singleton, Session Facade, Service Locator, Decorator, Front Controller, and Data Access Object.
• Experienced in developing web applications by implementing Model View Controller (MVC) architecture using JSP, Servlets, J2EE design patterns, Struts, and the Spring Framework (Spring MVC/IOC/ORM/AOP/Security/Boot).
• Worked on creating dashboards and reports using Backbone.js.
• Extensively used the following frameworks: Spring MVC, Struts, JSF, Spring and Hibernate.
• IOC and Dependency Injection in various aspects of the Spring Framework (Core, Web, JDBC, MVC and DAO).
• Deployed production-ready Java/J2EE applications using Elastic Beanstalk, which auto-configures capacity provisioning through autoscaling, load balancing and application health monitoring; proficient in using Amazon Web Services (AWS).
• Performed various ETL transformations in source-to-target data mapping workshops.
• Extensive experience focusing on services like EC2, VPC, CloudWatch, CloudFront, Cl

TECHNICAL SKILLS:
Java Technologies: JDBC, Servlets, JSP, JSTL, Struts, Spring 2.5/4.0, Hibernate, Web Services (SOAP, REST), JSF, JMS, JAXB, Applets, AWT
Frameworks: Apache Struts 1.3/2.0, Spring 2.5/4.0, Spring MVC, Hibernate, jQuery 1.6/1.8, JSF, JUnit, Log4j, Spring Boot, Spring Security, AOP, ANT, Maven, IBM MQ Series 5.3
Application Servers: WebLogic 8.1/10.3, Tomcat, JBoss, WebSphere 6/7
IDE & Tools: Eclipse 3.3+, IntelliJ, NetBeans 5.5+, RAD 7.0, Rally, Quality Center 8.0, Visio, AQT, SQL Developer, TOAD, SOAP UI, Rational Rose, JBuilder, Console, Jenkins, Sonar, Gradle
Reporting Tools: SQL Server Reporting Services
Databases: Oracle 10g/11g, MySQL 5.1, MS SQL Server 2008/12/16, DB2
Version Control: Git, SVN, CM Synergy, Rational ClearCase, CVS, VSS
Software Process/Methodologies: Agile, Waterfall, Test-Driven Development
Operating Systems: Unix, Linux, Windows, MS-DOS
Architectures: J2EE, Layered, Service-Oriented Architecture (SOA), MVC1, MVC2
Programming Languages: Java, Java 8, J2EE, Scala 2.12.1, SQL, PL/SQL, JavaScript
DevOps/Cloud Tools: Jenkins, Git, Docker, Kubernetes, AWS

PROFESSIONAL EXPERIENCE:

Cigna - Bloomfield, CT, Nov 2023 – Till Date
Full Stack Developer
• Developed RESTful web services and microservices with Java, Spring Boot, Groovy, and Groovy on Grails.
• Implemented Java 8+ features such as lambda expressions, filters, and parallel operations on collections for effective sorting mechanisms.
• Built interactions of multiple services through REST and Apache Kafka message brokers.
• Developed POCs and solutions for various system components using Microsoft Azure.
• Created Azure Logic Apps to integrate services in the organization.
• Utilized Grafana, Swagger, and Splunk to inspect and analyze the performance of services.
• Implemented unit tests using Spock, feature tests using Selenium, and performance tests using Gatling to achieve service accuracy.
• Implemented Java EE components using Spring MVC, Spring IOC, Spring Transactions, and Spring Security modules.
• Developed RESTful web services and microservices with Java, Spring Boot, Groovy, and Groovy on Grails, integrating .NET APIs for seamless data exchange and interoperability between Java and .NET components.
• Translated functional requirements into technical design specifications.
• Implemented POJOs and DAOs and used Spring Data JPA and Hibernate to create an object-relational mapping of the database using annotations and reduce boilerplate code.
• Utilized Camunda to implement business decision and workflow automation.
• Maintained the CI/CD process with GitHub, Jira, Docker, OpenShift, and Jenkins to speed up the path from code base to production, and used Maven to build the application.
Environment: Java 8+, Spring Boot, Groovy, Kafka, RESTful web services, .NET, Microservices, Grafana, Swagger, Jenkins, Git, Azure, Jira, Spock, Selenium, pair programming, Gatling, Maven, MySQL, JSON, IntelliJ, DB-Visualizer.

Client: Kyndryl – IBM, Jun 2023 – Oct 2023
Role: Senior Full Stack / Java Developer
Responsibilities:
• Collaborated with cross-functional teams to identify and remediate vulnerabilities by analyzing Mend scan results.
• Utilized the Maven repository to source non-vulnerable library versions, enhancing code security.
• Successfully deployed code to VT (Validation and Testing) and QA (Quality Assurance) environments, ensuring rigorous validation.
• Managed and tracked project changes efficiently using Jira Software and Agile methodologies.
• Leveraged tools like Eclipse, Postman, and Visual Studio Code for development tasks, ensuring code quality and efficiency.
• Proficiently used Jira, WhiteSource, Git, Jenkins, and JFrog for project management, version control, and streamlined development and deployment processes.

Client: Ulta Beauty - Bolingbrook, IL, Aug 2022 – Mar 2023
Role: Java Developer
Responsibilities:
• Deployed Spring Boot based microservices in Docker containers using AWS EC2 container services and the AWS admin console.
• Designed website user interfaces, interaction scenarios and navigation based on analyst interpretations of requirements and use cases.
• Worked one-on-one with the client to develop the layout and color scheme for his website and implemented it into a final interface design with HTML5/CSS3 and JavaScript.
• Migrated the server to a cloud environment using AWS services.
• Experienced in interfacing and e-learning layouts for web/desktop/mobile using HTML.
• Worked on developing the server-side code of the application using Node JS and Express JS.
• Involved in writing application-level code to interact with APIs and web services using AJAX, JSON and XML.
• Experience working with AWS: EC2, S3, and the CloudWatch platform. Created multiple VPCs and subnets in AWS as per requirements.
• Strong expertise in producing APIs using RESTful web services for web-based applications, consuming RESTful web services using AJAX and jQuery, and rendering JSON responses on the UI.
• Designed and developed user interface web forms using Flash, CSS, and JavaScript.
• Created dynamic integration of the jQuery Tab and other jQuery components with Ajax.
• Worked with TypeScript decorators, interfaces, type restrictions and ES6 features.

Client: AT&T, Atlanta, GA, Mar 2019 – Jul 2022
Role: Full Stack Developer
Responsibilities:
• The application is designed using J2EE design patterns and technologies based on SOA architecture.
• Used Java 8 features including parallel streams, lambdas, functional interfaces and filters.
• Worked on REST APIs and Elasticsearch to efficiently handle and search JSON data.
• Worked with the container service Docker, with build, port and other utilities, to deploy web applications.
• Interacted with users, customers and business users for requirements and training on new features.
• Developed various helper classes following core Java multithreaded programming and collection classes.
• Developed web-based UI using the frameworks jQuery, Bootstrap, JavaScript and AJAX for client-side validations.
• Implemented the circuit breaker pattern and integrated the Hystrix dashboard to monitor Spring microservices.
• Secured the REST APIs by implementing an OAuth2 token-based authorization scheme using Spring Security.
• Installed, secured, and configured AWS cloud servers and Amazon AWS virtual servers (Linux).
• Deployed Spring Boot based microservices in Docker containers using AWS EC2 container services and the AWS admin console.
• Worked on spinning up AWS EC2 instances, creating IAM users and roles, creating Auto Scaling groups and load balancers, and monitoring the applications, S3 buckets, VPCs etc. through CloudWatch.
• Created and configured continuous delivery pipelines for deploying microservices and Lambda functions using a CI/CD Jenkins server.
• Used Apache Kafka for reliable and asynchronous exchange of information between business applications.
• Worked on Swagger API and auto-generated documentation for all REST calls.
• Implemented and exposed the microservice architecture with Spring Boot based services interacting through a combination of REST and Apache Kafka/ZooKeeper message brokers.
• Extensively used Hibernate 4.2 concepts such as inheritance, lazy loading, dirty checking and transactions.
• Used Jenkins and pipelines to drive all microservice builds out to the Docker registry and then deployed to Kubernetes; created Pods and managed them using Kubernetes.
Environment: Java, Java 8, HTML5, CSS3, Bootstrap, Python, ReactJS, Node JS, JavaScript, Ajax, Maven, Spring 4.x, Hibernate 4.x, Docker, AWS S3, VPC, REST, WebLogic Server, Swagger API, Kafka, Kubernetes, Jenkins, Git, JUnit, Mockito, Oracle, MongoDB, Agile Scrum.

Client: CenturyLink, Denver, CO, Jun 2018 – Feb 2019
Role: Full Stack Developer
Responsibilities:
• Designed and developed business components using Spring AOP, Spring IOC, and Spring Batch.
• Involved in requirements gathering and analysis from the existing system.
• Worked with Agile software development.
• Implemented the DAO layer using Hibernate and AOP, and the service layer using Spring MVC design.
• Developed Java server components using Spring, Spring MVC, Hibernate, and web services technologies.
• Used Java 1.7 with generics, for loops, static imports, annotations etc., J2EE, Servlet, JSP, JDBC, Spring 3.1 RC1, Hibernate, web services (Axis, JAX-WS, JAXP, JAXB), and JavaScript frameworks (DOJO, jQuery, AJAX, XML, Schema).
• Used Hibernate as the persistence framework for the DAO layer to access the database.
• Used GitHub and Jenkins for building the CI/CD pipeline, and day-to-day builds and deployments using Gradle.
• Designed and developed RESTful APIs for different modules in the project as per requirements.
• Developed JSP pages using custom tags and the Tiles framework.
• Developed a RESTful service interface using Spring Boot to the underlying Agent Services API.
• Developed the persistence layer (DAL) and the presentation layer.
• Used Maven as the build framework and Jenkins for the continuous build system.
• Extensive experience in developing Representational State Transfer (REST) based services and Simple Object Access Protocol (SOAP) based services.
• Developed GUIs using front-end technologies JSP, JSTL, AJAX, HTML, CSS and JavaScript.
• Developed code for web services using XML and SOAP, and used the SOAP UI tool for testing the services; proficient in testing web page functionality and raising defects.
• Configured and deployed the application using Tomcat and WebLogic.
• Used Log4J to print info, warning and error data onto the logs.
• Prepared auto-deployment scripts for WebLogic in a UNIX environment.
• Used Java messaging artifacts with JMS for sending out automated notification emails to respective users of the application.
Environment: Java, J2EE, Spring Core, Spring Data, Spring MVC, Spring AOP, Jenkins, Spring Batch, Spring Scheduler, RESTful Web Services, SOAP Web Services, Hibernate, Eclipse IDE, Angular JS, JSP, JSTL, HTML5, CSS, JavaScript, WebLogic, Tomcat, XML, XSD, Unix, Linux, UML, Oracle, Maven, SVN, SOA, design patterns, JMS, JUnit, Log4J, WSDL, JSON, JNDI.

Client: Verizon, Irving, TX, Apr 2017 – May 2018
Role: Java Developer
Responsibilities:
• Identified the business requirements of the project and was involved in preparing system requirements for the project.
• Used XML/XSLT for transforming a common XML format and SAML for Single Sign-On.
• Designed the configuration XML schema for the application.
• Used JavaScript for client-side validation.
• Extensively used Git for version control and regularly pushed the code to GitHub.
• Used the Spring Boot framework to create properties for various environments and for configuration.
• Developed both RESTful and SOAP web services depending on the design needs of the project.
• Used the XMLHttpRequest object to provide asynchronous communication as part of the AJAX implementation.
• Used Redux to create a store containing all the states of the application, fetched data from the back end, and used the Redux-promise middleware efficiently. Used SAML to implement authentication and authorization scenarios.
• Implemented and exposed microservices based on RESTful APIs utilizing Spring Boot.
• Used Rest Controllers in the Spring framework to create RESTful web services and JSON objects for communication.
• Extensively used the MVC, Factory, Delegate and Singleton design patterns.
• Developed a server-side application to interact with the database using Spring Boot and Hibernate.
• Deployed Spring Boot based microservices in Docker containers using Amazon EC2 container services and the AWS admin console.
• Used the Spring Framework AOP module to implement logging in the application to know the application status.
Environment: Core Java/J2EE, React JS, Microservices, CSS, JDBC, Ajax, Spring AOP Module, Ant scripts, JavaScript, Eclipse, UML, RESTful, Rational Rose, Tomcat, Git, JUnit, Ant.

Client: Capital Group, San Antonio, TX, Jan 2016 – Mar 2017
Role: Java Developer
Responsibilities:
• Involved in requirements gathering and analysis from the existing system.
• Worked with Agile software development.
• Designed and developed business components using Spring AOP, Spring IOC, and Spring Batch.
• Implemented the DAO layer using Hibernate and AOP, and the service layer using Spring MVC design.
• Developed Java server components using Spring, Spring MVC, Hibernate, and web services technologies.
• Used Java 1.7 with generics, for loops, static imports, annotations etc., J2EE, Servlet, JSP, JDBC, Spring 3.1 RC1, Hibernate, web services (Axis, JAX-WS, JAXP, JAXB), and JavaScript frameworks (DOJO, jQuery, AJAX, XML, Schema).
• Designed and developed RESTful APIs for different modules in the project as per requirements.
• Developed JSP pages using custom tags and the Tiles framework.
• Developed the user interface screens for presentation logic using JSP and HTML.
• Developed SQL queries to interact with the SQL Server database and was involved in writing PL/SQL code for procedures and functions.
• Used Maven as the build framework and Jenkins for the continuous build system.
• Developed GUIs using front-end technologies JSP, JSTL, AJAX, HTML, CSS and JavaScript.
• Developed code for web services using XML and SOAP, and used the SOAP UI tool for testing the services; proficient in testing web page functionality and raising defects.
• Configured and deployed the application using Tomcat and WebLogic.
• Used design patterns such as Business Object (BO), Service Locator, Session Façade, Model View Controller, DAO and DTO.
• Used Log4J to print info, warning and error data onto the logs.
• Prepared auto-deployment scripts for WebLogic in a UNIX environment.
• Used Java messaging artifacts with JMS for sending out automated notification emails to respective users of the application.
Environment: Java 1.6, Spring-Hibernate integration framework, Angular JS, JSP, Spring, HTML, Oracle 10g, SQL, PL/SQL, XML, WebLogic, Eclipse, Ajax, jQuery.

EDUCATION: Osmania University - Bachelor of Science

CERTIFICATION: AWS Certified Solutions Architect - Associate from Amazon Web Services

Interview notes: Ramkumar Sundarajan Meenakshi - Lead Architect (www.linkedin.com/in/ramkumar-sm), and possibly team members like Ankur and/or Viswajith. What to expect from this morning's interview: spoke with Ankur and Viswajith; a 1-hour, very technical call. What did they talk about: Java concepts; Java and Spring Boot; a scenario based on designing a microservice. Be ready to talk in detail about your past projects, answer technical and situational questions, and explain the how's and why's of your projects.

Posted 4 weeks ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Head of Engineering (Founding Team)
Compensation: Equity-Only (Founders’ Pool: 2–4%, vesting 4 yrs / 1-yr cliff)
Location: Dubai HQ (hybrid) or remote within ±4 hrs GMT+4
Reports to: CTO / Cofounder

Why Clyra — and Why Now: Clyra turns students’ PDFs, slide decks, videos, and audio lectures into adaptive flashcards, mock exams, and personalized study plans powered by LLM agents. We’ve shipped an MVP, onboarded our first 1,000+ users, and see strong retention. We are looking for a technical leader who can scale the platform from prototype to a global, AI-first learning OS. If you want high ownership and meaningful equity instead of cash, read on.

What You’ll Own & First-90-Day Wins:
Technical Vision & Architecture (40%): Design the backend, data, and DevOps blueprint for content ingestion, vector search, LLM orchestration, and a real-time quiz engine. 90-day markers: event-driven microservices diagram complete · IaC repo bootstrapped · quiz latency ≤ 250 ms.
Team Building & Mentorship (25%): Recruit, onboard, and coach 3–5 full-stack / ML engineers; establish a high-velocity culture. 90-day markers: first 2 senior hires closed · weekly velocity ≥ 15 story points.
AI / ML Pipeline (15%): Productionize prompt chains, RAG pipelines, the embeddings store (PGVector / Elastic), and adaptive scoring algorithms.
Security & Compliance (10%): Lead SOC 2, GDPR, DIFC ROPA, and CCPA readiness practices, data-privacy design, and multi-region GCP hosting.
Founder-Level Strategic Input (10%): Shape the product roadmap, fundraising decks, and investor tech deep-dives.

You’ll Thrive Here If You: Have built and scaled consumer SaaS or EdTech backends to 100K+ MAU. Are fluent in TypeScript / Next.js, Python, and cloud (GCP or AWS). Know your way around LLM tooling (OpenAI / Anthropic APIs, LangChain or LlamaIndex, vector DBs, fine-tuning, cost optimisation). Are comfortable with equity-only comp for ~6–12 months until the seed round closes. Enjoy both coding and setting architectural guardrails; you still merge PRs. Communicate crisply with product and growth.

What Success Looks Like in 12 Months: The platform supports 500K registered users at 99.9% uptime with sub-second responses. Fully automated CI/CD, SLO dashboards, and a humane on-call rotation in place. Compute + token spend < 8% of revenue. Engineering culture docs, onboarding playbooks, and a high-trust team solidified.

Equity & Founder-Level Perks: 2–4% equity, with performance-based top-ups possible at Series A. Autonomy over tooling, frameworks, and hiring. Visa sponsorship & relocation stipend to Dubai (optional). Annual learning budget unlocked post-seed.

How to Apply: Email careers@getclyra.com with the subject “Founding Engineer – [Your Name]” and include: GitHub / LinkedIn or résumé. One product you architected end-to-end (bullets). A ≤ 3-min Loom explaining how you’d cut Clyra’s GPT token cost per user by 30%. OR Latency & Scale Drill: draft a 1-page architecture brief (max 400 words) showing how you would keep our real-time quiz response p95 latency ≤ 300 ms while supporting 100K concurrent users. Highlight: key tech choices (e.g., event queue, caching layers, edge functions), autoscaling strategy & cost envelope, and the one metric dashboard you’d watch to catch regressions early.
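As a rough, assumption-laden sketch of the capacity arithmetic behind the Latency & Scale Drill above, the snippet below estimates how many quiz-service replicas 100K concurrent users would need. Think time, per-replica throughput, and headroom are placeholders to be replaced with load-test numbers.

```python
# Back-of-envelope capacity estimate for the Latency & Scale Drill: how many
# quiz-service replicas are needed for 100K concurrent users. All inputs are
# assumptions to be replaced with real load-test measurements.
import math

concurrent_users = 100_000
think_time_s = 10          # assumed average seconds between a user's quiz interactions
rps_per_replica = 250      # assumed sustained RPS per replica at p95 <= 300 ms
headroom = 0.30            # spare capacity for spikes and rolling deploys

expected_rps = concurrent_users / think_time_s
replicas = math.ceil(expected_rps * (1 + headroom) / rps_per_replica)

print(f"expected_rps={expected_rps:.0f} replicas_needed={replicas}")  # 10000 rps, 52 replicas
```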

Posted 4 weeks ago

Apply

3 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role: SRE Manager - TechBlocks India. Location: Hyderabad & Ahmedabad. Full-time, 3 days from office.

The SRE Manager at TechBlocks India will lead the reliability engineering function, ensuring infrastructure resiliency and optimal operational performance. This hybrid role blends technical leadership with team mentorship and cross-functional coordination.

Requirements:
10+ years total experience, with 3+ years in a leadership role in SRE or Cloud Operations.
Deep understanding of Kubernetes, GKE, Prometheus, Terraform.
Cloud: advanced GCP administration.
CI/CD: Jenkins, Argo CD, GitHub Actions.
Incident management: full lifecycle, tools like OpsGenie.
Knowledge of service mesh and observability stacks.
Strong scripting skills (Python, Bash).
BigQuery/Dataflow exposure for telemetry.

Responsibilities:
Build and lead a team of SREs; standardize practices for reliability, alerting, and response; engage with Engineering and Product leaders.
Establish and lead the implementation of organizational reliability strategies, aligning SLAs, SLOs, and error budgets with business goals and customer expectations.
Develop and institutionalize incident response frameworks, including escalation policies, on-call scheduling, service ownership mapping, and RCA process governance.
Lead technical reviews for infrastructure reliability design, high-availability architectures, and resiliency patterns across distributed cloud services.
Champion an observability and monitoring culture by standardizing tooling, alert definitions, dashboard templates, and telemetry data schemas across all product teams.
Drive continuous improvement through operational maturity assessments, toil elimination initiatives, and SRE OKRs aligned with product objectives.
Collaborate with cloud engineering and platform teams to introduce self-healing systems, capacity-aware autoscaling, and latency-optimized service mesh patterns.
Act as the principal escalation point for reliability-related concerns and ensure incident retrospectives lead to measurable improvements in uptime and MTTR.
Own runbook standardization, capacity planning, failure mode analysis, and production readiness reviews for new feature launches.
Mentor and develop a high-performing SRE team, fostering a proactive ownership culture, encouraging cross-functional knowledge sharing, and establishing technical career pathways.
Collaborate with leadership, delivery, and customer stakeholders to define reliability goals, track performance, and demonstrate ROI on SRE initiatives.

About TechBlocks: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. At TechBlocks, we believe technology is only as powerful as the people behind it. We foster a culture of collaboration, creativity, and continuous learning, where big ideas turn into real impact. Whether you're building seamless digital experiences, optimizing enterprise platforms, or tackling complex integrations, you'll be part of a dynamic, fast-moving team that values innovation and ownership. Join us and shape the future of digital transformation. (ref:hirist.tech)
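The role centres on SLOs, error budgets, and MTTR. A minimal sketch of the arithmetic behind an availability error budget; the 99.9% target, 30-day window, and observed downtime are illustrative assumptions.

```python
from datetime import timedelta

def error_budget(slo_target: float, window: timedelta) -> timedelta:
    """Allowed downtime for a given availability SLO over a window (e.g. 99.9% over 30 days)."""
    return window * (1.0 - slo_target)

def budget_burned(observed_downtime: timedelta, slo_target: float, window: timedelta) -> float:
    """Fraction of the error budget consumed so far."""
    return observed_downtime / error_budget(slo_target, window)

window = timedelta(days=30)
print(error_budget(0.999, window))                                      # ~43 minutes of allowed downtime
print(f"{budget_burned(timedelta(minutes=10), 0.999, window):.0%} of the budget burned")
```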

Posted 4 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Summary: We are looking for a skilled AWS and DevOps Engineer to join our team. The ideal candidate will have experience in cloud computing, infrastructure automation, and CI/CD pipelines.

Experience: 4+ years
Timings: 12:00am to 9:00pm (rotational)

Required Skills:
Strong proficiency in scripting languages such as Python, Ruby, or Bash.
Experience deploying and managing infrastructure on the AWS cloud.
Experience with AWS services such as EC2, EFS, Elastic Beanstalk, CloudWatch, CloudTrail, Config, VPC, Route 53, IAM, Load Balancer, Auto Scaling groups, S3, CloudFront, Lambda, SNS, SQS, RDS, Flow Logs, Systems Manager, AWS Backup, AWS Organizations, Identity Center, Billing, Athena, Inspector, Security Hub, GuardDuty, etc.
Familiarity with infrastructure automation tools such as Terraform, CloudFormation, or Ansible.
Familiarity with DevOps tools such as Git and Maven.
Experience building and maintaining CI/CD pipelines using tools such as Jenkins, Travis CI, or CircleCI.
Strong understanding of networking, security, and Linux/Unix systems administration.
Experience with containerization technologies such as Docker and Kubernetes.
Familiarity with monitoring and logging tools such as CloudWatch, Prometheus, or the ELK stack.
Monitor and troubleshoot production systems, and implement solutions to ensure high availability, cost optimization, scalability, and reliability.

If you are a skilled DevOps Engineer looking for a challenging and rewarding career, please submit your application today.
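Several of the AWS services listed here (EC2, CloudWatch, Auto Scaling groups) come together in a typical target-tracking scaling policy. A minimal boto3 sketch, assuming a hypothetical Auto Scaling group named web-asg, an assumed region, and credentials with the relevant IAM permissions.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-south-1")  # region is an assumption

# Keep average CPU across the group near 50%; the ASG name is hypothetical.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)

# Inspect the current capacity settings for the same group.
groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=["web-asg"])
for g in groups["AutoScalingGroups"]:
    print(g["AutoScalingGroupName"], g["MinSize"], g["DesiredCapacity"], g["MaxSize"])
```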

Posted 4 weeks ago

Apply

0 years

3 - 13 Lacs

Perungudi, Chennai, Tamil Nadu

Work from Office

Indeed logo

DevOps & AIOps Expert
Location: Chennai / Ahmedabad

About Us
GMIndia.tech is a fast-growing tech startup (est. 2022) delivering AI/ML, cybersecurity, and agile solutions to industries like finance, retail, and healthcare.

Role Summary
We're hiring a DevOps & AIOps expert to lead infrastructure automation, CI/CD, Kubernetes operations, and AI-driven observability for scalable cloud platforms.

Key Responsibilities
Automate infrastructure with Terraform/IaC across AWS/Azure/GCP.
Build and optimize CI/CD pipelines (Jenkins, GitLab, GitHub Actions).
Manage Kubernetes, autoscaling, and disaster recovery.
Implement AIOps tools for intelligent monitoring and self-healing.
Embed security into pipelines and ensure regulatory compliance.
Mentor teams and drive DevOps best practices.

Requirements
7+ yrs in DevOps/SRE/platform roles.
Hands-on with Terraform, Kubernetes, Docker, and scripting (Python/Bash).
Strong CI/CD, cloud, and networking experience.
Excellent troubleshooting and automation skills.
Must-have certs: Terraform Associate, CKA
Preferred: AWS/Azure/Google DevOps, MLOps certs
Tools: Terraform, Kubernetes, Jenkins, GitLab CI, Prometheus, Moogsoft, Vault, Snyk, Python, AWS/GCP/Azure

Perks
Competitive salary, learning budget, and a culture focused on innovation and automation.

Job Type: Full-time
Pay: ₹338,750.25 - ₹1,378,843.45 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
Speak with the employer: +91 9966099006
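The AIOps responsibility (intelligent monitoring and self-healing) usually starts with simple anomaly detection on metric streams. A minimal rolling z-score sketch in Python; the window size, threshold, and latency series are illustrative assumptions, not the behaviour of any specific AIOps tool.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Illustrative latency series (ms) with one obvious spike.
latencies = [110, 112, 108, 115, 111, 109, 113, 110, 114, 112, 420, 111]
print(detect_anomalies(latencies))  # [(10, 420)]
```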

Posted 1 month ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Linkedin logo

Role Description
Role Proficiency: Able to manage multiple SDLC/PDLC programs from a DevOps architectural perspective by streamlining DevOps practices and translating them into reference architecture for DevOps (CI/CD) and automation components.

Outcomes
Deliver technically sound projects across one or multiple customers within the guidelines of the customer and UST standards and norms.
Deliver technically complex DevSecOps solutions.
Define architecture for large engagements and act as their design authority.
Design solutions involving multiple tech components using the DevOps tech stack after scoping client requirements for large engagements.
Devise a mechanism by assessing system architecture and implementing a DevOps roadmap.
Identify and institutionalize DevOps best practices across multiple accounts; manage multiple customers and architects.
Guide and review technical delivery by internal teams at the program level.
Understand the existing customer landscape, assess the DevOps maturity level, and come up with a roadmap.
Provide support to various Project Managers to identify training needs of the team.
Conduct trainings/certifications with the help of Gama and mentor on technical skills for projects.
Support technical evaluation of external and internal candidates to meet project requirements.
Perform career guidance and performance management for team members.
Develop collateral for proposals.
Conduct workshops at the client site to assist the sales team in sales support as required.
Compare various designs and propose appropriate technology solutions based on the understanding of the RFP and input from Architects.
Review estimations and resource plans.
Review risk and mitigation plans.
Anchor proposal development with cross-linkages across multiple competencies to arrive at a coherent solution, unique value propositions, and clear differentiators.
Participate in client presentations and client visits.

Measures of Outcomes
Business development (# of proposals contributed to; # of new leads generated)
Stakeholder satisfaction survey results
# of design patterns / components reused / created
Feedback from team
Quality of service measures
# of technically complex solutions delivered
# of consulting assignments led/participated in
# of technology trainings conducted
Technology certifications
# of white papers / document assets
Breadth of technology knowledge (no. of technologies)
# of reviews and audits

Outputs Expected
Project Control and Review: Perform architecture design reviews. Identify opportunities for optimization of cost/time/asset utilization in complex projects and advise relevant teams accordingly where possible. Provide advice to teams facing complex technical issues in the course of project delivery. Conduct planned and unplanned technical audits for complex projects as applicable. Define and measure project/program-specific architectural and technology quality metrics. Review outputs to ensure NFRs are met.

Knowledge Management & Capability Development: Provide input to teams for training. Identify training needs and conduct internal sessions to meet them. Partner with UST Gamma to create curricula, assessments, training programs, and courseware based on new service offerings/solutions.
Update collateral in the knowledge management repository. Gain and cultivate domain expertise to provide the best and most optimized solution to the customer.

Alliance Management: Identify alliance partners based on the understanding of service offerings and client requirements. Identify areas for joint GTM with the partner. Develop internal capabilities/complementary toolsets to support the GTM strategy. Maintain the relationship with partners. Act as the UST technical POC for the specific technology/solution area.

Technology Consulting: Define the problem statement for the customer. Analyse the application/technology landscape, processes, and tools to arrive at the solution options that best fit the client. Analyse cost vs. benefits of solution options. Define the technology/architecture roadmap for the client. Articulate the cost vs. benefits of options to key stakeholders in the customer organization.

Innovation and Thought Leadership: Participate in external forums (seminars, paper presentations, etc.) to showcase UST capabilities and insights. Interact and engage with customers/partners around new innovative ideas, concepts, and assets as well as industry trends and implications. Participate in beta testing of products / joint lab setups with customers to explore the application of new technologies/products. Identify areas where components/accelerators or design patterns could be reused across different accounts. Create documents, reports, and white papers (international/national) on research findings.

Project Estimation: Calculate and present estimates based on high-level designs to management to support go/no-go decisions.

Architecture Solution Definition & Design: Develop/enhance the architecture (application/technical/infrastructure as applicable) meeting functional and non-functional requirements aligned to industry best practices; own program design (including technology stack, infrastructure design, team structure, and working model) and capacity sizing. Work with the Program Release Train Engineer to meet the requirements and SLAs of the target state and in transition as applicable. Identify Proof of Concept (POC) testing needs and conduct POCs as applicable. Identify the need for developing accelerators or frameworks and develop them as applicable, specific to the engagement. Identify key technical metrics to measure SLA/requirements compliance. Define, adopt, and create required documentation on standards and guidelines.

Stakeholder Management: Build credibility with the client as a technical go-to person. Work to expand your professional network in the client organization.

Skill Examples
Use domain/industry knowledge to understand business requirements, create POCs, and design systems/platforms to meet business requirements. Identify opportunities for automation and efficiency improvement and suggest approaches to achieve them. Use technology knowledge to build solutions that interface multiple products/technologies under guidance, design the technology roadmap for the client and POC specifics, provide technical guidance to teams to create the same, create assets independently, provide technical guidance to practitioners, and identify and evaluate new technology. Create white papers on enterprise architecture; conduct demos to the client to showcase the features of the solution. Review and audit solutions independently. Use knowledge of architecture concepts and principles to evaluate the readiness and relevance of architecture solutions, evaluate existing client implementations for performance bottlenecks and suggest areas for improvement, and support value proposition presentations and demos.
Provide thought leadership within UST, including training on best practices in architecture and technical guidance to teams during system architecture work. Define enterprise architecture frameworks, validating application architecture solutions independently. Define system architecture for complex applications within the boundaries of the enterprise architecture.

Use knowledge of design patterns, tools, and principles to identify optimized patterns within the given requirements; review and suggest the applicability of designs/patterns for business needs; define design best practices at the project level, providing technical guidance to create high-level designs.

Use knowledge of software development processes, tools, and techniques to identify and assess incremental improvements to the software development process, methodology, and tools; take technical responsibility for all stages of the software development process; conduct optimal coding with a clear understanding of memory leakage and its impact; implement global standards and guidelines relevant to programming and development; and come up with points of view and new technological ideas.

Use knowledge of project management tools and techniques to plan and manage simple small or medium-size projects/modules as defined within UST; identify risks and mitigation strategies and implement them.

Use knowledge of the project governance framework to support the development of communication protocols, escalation matrices, and reporting mechanisms for small/medium projects/modules as defined within UST.

Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders.

Use knowledge of knowledge management tools and techniques to leverage existing material and reusable assets in the knowledge repository; independently create and update knowledge artefacts; create and track project-specific KT plans; provide training to others; write white papers/blogs internally; and write technical documents/user understanding documents at the end of the project.

Use knowledge of technical standards, documentation, and templates to create documentation appropriate for the project's needs, including documentation for reusable assets, best practices, and case studies.

Use knowledge of solution structuring to carve out complex solutions/POCs for a customer based on their needs, and recommend technology-specific accelerators/tools for the overall solution along with optimal features (e.g. time savings, cost benefits).

Knowledge Examples
Industry knowledge: working knowledge of standard business processes within the relevant industry vertical and customer business domain.
Technology knowledge: broad knowledge of multiple technologies (Terraform/PowerShell etc.), exposure to multiple cloud platforms (Azure/AWS/GCP etc.), and specialized knowledge in at least one.
Technology trends: broad knowledge of technology trends related to multiple inter-related technologies.
Architecture concepts and principles: specialized understanding of standard architectural principles, models, patterns (e.g. microservices, containerization), and perspectives (e.g. TOGAF, Zachman).
Software development process, tools & techniques: thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.).
Project management: working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific timesheets, capacity planning tools, UST0, etc.); working knowledge of the project governance framework and RACI matrix; basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, and quality metrics.
Estimation and resource planning: specialized knowledge of estimation and resource planning techniques (e.g. TCP estimation model, case-based and scenario-based estimation, work-breakdown-structure estimation).
Knowledge management tools & techniques: working knowledge of industry knowledge management tools (such as portals and wikis), UST and customer knowledge management tools, and techniques such as classroom training, self-study, application walkthroughs, and reverse KT.
Technical standards, documentation & templates: working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications).
Solution structuring: specialized knowledge of service offerings and products; knowledge of build/release tools and processes; knowledge of software security and audit compliance (GDPR/OWASP) and tools (Black Duck, Veracode, Checkmarx).

Additional Comments
UST® is looking for a DevOps Engineer to work with one of the leading financial services organizations in the US. The ideal candidate must possess a strong background in DevOps in a cloud (AWS) environment, excellent leadership skills, strong written and verbal communication skills, and the ability to collaborate effectively with domain experts and technical experts in the team.

Responsibilities:
Create plans and higher-level design documents for the given automation tasks.
Automate repetitive tasks through Ansible playbooks and through Ansible and Jenkins pipeline creation.
Work on GitHub.
Clearly document and diagram the automation tasks worked on.
Collaborate with other DevOps team members, and coordinate with development and business teams.
Troubleshoot issues in production and other environments, applying debugging and problem-solving techniques and working closely with Development and Quality Engineering teams.
Promote a DevOps culture, including building relationships with other technical and business teams.

Requirements:
8+ years of experience in DevOps roles.
Strong drive and ability to automate repetitive tasks.
Experience with configuration management tools like Ansible and Chef.
Extensive experience building, maintaining, and enhancing CI/CD pipelines.
Experience with build/deploy tools such as Jenkins and Jenkins Pipeline.
Strong experience with shell scripting and Groovy scripts.
Excellent understanding of Git.
AWS administration experience, including provisioning EC2 instances, autoscaling, S3 storage, IAM security, ECS containers, and CloudWatch metrics & logs.
Knowledge of Docker and Kubernetes.
Responsible for releases, doing builds, and managing the build system.
Able to work effectively in a team with minimal supervision.
You will hold an edge if you are an AWS Certified DevOps Engineer.

Skills: DevOps, GCP, AWS, Terraform
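The AWS administration requirement above spans EC2 provisioning, autoscaling, and CloudWatch metrics. A minimal boto3 sketch that pulls average CPU for an Auto Scaling group over the last hour to inform capacity decisions; the group name and region are hypothetical, and credentials with CloudWatch read access are assumed.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption
now = datetime.now(timezone.utc)

# Average CPU for a hypothetical Auto Scaling group over the last hour, in 5-minute buckets.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "app-asg"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```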

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Company Overview
Bandgi Technologies is a SaaS product development company that provides niche technology skills and helps organizations innovate. We specialize in Industry 4.0/IIoT and provide solutions to clients in the US, Canada, and Europe. We are innovation enablers with offices in India (Hyderabad) and the UK (Maidenhead).

We are looking for an experienced DevOps Engineer with a background in AWS to join a growing enterprise organization. You will work within a growing AWS cloud team looking to build on and maintain their cloud infrastructure. The cloud engineer will split their time between supporting the transition of code through pipelines from software development into a live state, and evolving and maintaining cloud infrastructure and project/service introduction activities.

Skill Sets - Must Have
Solid experience (5+ years) with Terraform, shell scripting, VPC creation, DevSecOps, sst.dev, and GitHub Actions is mandatory.
Working knowledge of and experience with Linux operating systems, and experience building CI/CD pipelines using the following:
AWS: VPC, Security Groups, IAM, S3, RDS, Lambda, EC2 (Auto Scaling groups, Elastic Beanstalk), CloudFormation and AWS stacks.
Containers: Docker, Kubernetes, Helm, Terraform.
CI/CD pipelines: GitHub Actions (mandatory).
Databases: SQL & NoSQL (MySQL, Postgres, DynamoDB).
Observability best practices (Prometheus, Grafana, Jaeger, Elasticsearch).

Good to Have
Learning attitude.
API gateways.
Microservices best practices (including design patterns).
Authentication and authorization (OIDC/SAML/OAuth 2.0).

Your Responsibilities Will Include
Operation and control of cloud infrastructure (Docker platform services, network services, and data storage).
Preparation of new or changed services.
Operation of the change/release process.
Application of cloud management tools to automate the provisioning, testing, deployment, and monitoring of cloud components.
Designing cloud services and capabilities using appropriate modelling techniques. (ref:hirist.tech)
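The observability stack called out here (Prometheus, Grafana) is typically fed by application-level metrics. A minimal sketch using the prometheus_client Python library; the metric names, port, and simulated workload are illustrative assumptions, not part of the listing.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics; Grafana would chart these once Prometheus scrapes the /metrics endpoint.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():                       # records elapsed time into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)                    # exposes metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```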

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Nexbuzz is a leading e-commerce services company focused on digital retail transformation. We specialize in crafting tailored solutions for large enterprises and work seamlessly with industry giants like Shopify, Fynd, and BigCommerce. Our state-of-the-art technology and commitment to customer success make us the ideal partner for enhancing digital storefronts and driving growth. Join us to unlock the full potential of e-commerce through renowned platforms with unmatched flexibility and customization.

We are looking for a Full-stack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.

What will you do at Nexbuzz?
Build scalable and loosely coupled services to extend our platform.
Build bulletproof API integrations with third-party APIs for various use cases.
Evolve our infrastructure and add a few more nines to our overall availability.
Have full autonomy and own your code; decide on the technologies and tools to deliver as well as operate large-scale applications on AWS.
Give back to the open-source community through contributions of code and blog posts.
This is a startup, so everything can change as we experiment with more product improvements.

Some specific requirements:
At least 3+ years of development experience.
Prior experience developing and working on consumer-facing web/app products.
Hands-on experience in JavaScript. Exceptions can be made if you're really good at any other language with experience in building web/app-based tech products.
Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/).
Good knowledge of async programming using callbacks, promises, and async/await.
Hands-on experience with frontend codebases using HTML, CSS, and AJAX.
Working knowledge of MongoDB, Redis, MySQL.
Good understanding of data structures, algorithms, and operating systems.
Experience with AWS services, including EC2, ELB, Auto Scaling, CloudFront, S3.
Experience with the frontend stack would be an added advantage (HTML, CSS).
You might not have experience with all the tools that we use, but you can learn them given guidance and resources.
Experience in Vue.js would be a plus.

Location: Mumbai, Bengaluru

Posted 2 months ago

Apply

4 - 6 years

7 - 8 Lacs

Noida

Work from Office

Naukri logo

Notice Period: Immediate to 15 days
Work Mode: Onsite (Client Office)

Primary Skills:
Backend development: Node.js, Express.js/Koa.js/Socket.io
Frontend: JavaScript, HTML, CSS, AJAX
Databases: MongoDB (expert level), PostgreSQL, Redis, MySQL
Async programming: callbacks, promises, async/await
Cloud services: AWS (EC2, ELB, Auto Scaling, CloudFront, S3)
Queue systems: Kafka
Job scheduler: Bull
Infrastructure: Docker, Kubernetes (K8s)
Logging & monitoring: expertise in logging, tracing, and application monitoring

Secondary Skills:
Experience working with large datasets
Familiarity with frontend frameworks like Vue.js (preferred)
Good understanding of data structures, algorithms, and operating systems

Job Responsibilities:
Develop and optimize consumer-facing web and app products.
Work with Node.js and at least one backend framework (Express.js, Koa.js, or Socket.io).
Implement efficient async programming techniques using callbacks, promises, and async/await.
Manage databases including MongoDB, PostgreSQL, Redis, and MySQL.
Utilize AWS services for cloud-based application development.
Work with Kafka for queue management and Bull for job scheduling.
Implement frontend functionality using JavaScript, HTML, CSS, and AJAX.
Monitor and optimize application performance using logging, tracing, and monitoring tools.
Collaborate with cross-functional teams to ensure smooth project execution.

To Apply: Share resumes with your current CTC, expected CTC, and location preference.
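The role leans heavily on async/await-style concurrency. The listing is Node.js-focused, but the pattern is the same in Python's asyncio, shown here as a language-neutral sketch; the fetch function and delays are stand-ins, not a real API.

```python
import asyncio

async def fetch_product(product_id: int) -> dict:
    """Stand-in for an async I/O call (database query, HTTP request, etc.)."""
    await asyncio.sleep(0.1)  # simulated network latency
    return {"id": product_id, "status": "ok"}

async def main():
    # Fire the three calls concurrently instead of awaiting them one by one.
    results = await asyncio.gather(*(fetch_product(i) for i in range(3)))
    print(results)

asyncio.run(main())
```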

Posted 3 months ago

Apply

8 - 13 years

10 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

Senior - AWS - Platform Engineering - Skills Breakdown

Strong knowledge of AWS services, including but not limited to:
Hands-on AWS networking skills (e.g. VPC, subnets, NACLs, Transit Gateway, route tables, Load Balancer, Direct Connect gateway, Route 53, etc.).
Thorough understanding of networking concepts, especially TCP/IP, IP addressing, and subnet calculation.
Solid experience with AWS security services: IAM (identity, resource, and service control policies, permission boundaries, roles, federation, etc.), security groups, KMS, ACM/ACM-PCA, Network Firewall, Config/GuardDuty/CloudTrail, Secrets Manager, Systems Manager (SSM), etc.
Good knowledge of various AWS integration patterns, Lambda with Amazon EventBridge, and SNS.
Any workload-related experience is a bonus, e.g. EKS, ECS, Autoscaling, etc.
Containerisation experience with Docker and EKS (preferred).

Infrastructure as Code and scripting:
Solid hands-on experience with declarative languages, Terraform (and Terragrunt preferred) and their capabilities.
Comfortable with bash scripting, and at least one programming language (Python or Golang preferred).
Sound knowledge of secure coding practices and configuration/secrets management.
Knowledge of writing unit and integration tests.
Experience writing infrastructure unit tests; Terratest preferred.
Solid understanding of CI/CD.
Solid understanding of zero-downtime deployment patterns.
Experience with automated continuous-integration testing, including security testing using SAST tools.
Experience with automated CI/CD pipeline tooling; Codefresh preferred.
Experience creating runners and docker images.
Experience using version control systems such as git.
Exposed to, and comfortable working on, large source code repositories in a team environment.
Solid expertise with Git and Git workflows, working within mid-to-large (infra) product development teams.

General / Infrastructure Experience:
Experience with cloud ops (DNS, backups, cost optimisation, capacity management, monitoring/alerting, patch management, etc.).
Exposure to complex application environments, including containerised as well as serverless applications.
Windows and/or Linux systems administration experience (preferred).
Experience with Active Directory (preferred).
Exposure to multi-cloud and hybrid infrastructure.
Exposure to large-scale on-premise to cloud infrastructure migrations.
Solid experience working with mission-critical production systems.
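Subnet calculation appears early in the listed networking skills. A minimal sketch using Python's standard ipaddress module; the VPC CIDR and target prefix length are illustrative assumptions.

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC CIDR into /20 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

print(f"{len(subnets)} subnets, {subnets[0].num_addresses} addresses each")
for subnet in subnets[:4]:  # print the first few, e.g. one per availability zone
    print(subnet)
```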

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies