8.0 years
0 Lacs
India
On-site
Job Summary: We are looking for an experienced Cloud Platform Lead to spearhead the design, implementation, and governance of scalable, secure, and resilient cloud-native platforms on Azure. This role requires deep technical expertise in Azure services, Kubernetes (AKS), containers, Application Gateway, Front Door, WAF, and API Management, along with the ability to lead cross-functional initiatives and define cloud platform strategy and best practices.
Key Responsibilities:
• Lead the architecture, development, and operations of Azure-based cloud platforms across environments (dev, staging, production).
• Design and manage Azure Front Door, Application Gateway, and WAF to ensure global performance, availability, and security.
• Design and implement the Kubernetes platform (AKS), ensuring reliability, observability, and governance of containerized workloads.
• Drive adoption and standardization of Azure API Management for secure and scalable API delivery.
• Collaborate with security and DevOps teams to implement secure-by-design cloud practices, including WAF rules, RBAC, and network isolation.
• Guide and mentor engineers in Kubernetes, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC).
• Define and implement monitoring, logging, and alerting best practices using tools like Azure Monitor, ELK, and SigNoz.
• Evaluate and introduce tools, frameworks, and standards to continuously evolve the cloud platform.
• Participate in cost optimization and performance tuning initiatives for cloud services.
Required Skills & Qualifications:
• 8+ years of experience in cloud infrastructure or platform engineering, including at least 4 years in a leadership or ownership role.
• Deep hands-on expertise with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management.
• Strong experience with Kubernetes and Azure Kubernetes Service (AKS), including networking, autoscaling, and security.
• Proficiency with Docker and container orchestration principles.
• Infrastructure-as-Code experience with Terraform, ARM Templates, or Bicep.
• Excellent understanding of cloud security, identity (AAD, RBAC), and compliance.
• Experience building and guiding CI/CD workflows using tools like Azure DevOps, Bitbucket CI/CD, or similar.
Education: B.Tech / BE / M.Tech / MCA
Job Type: Full-time
Schedule: Day shift
Application Question(s):
• What is your total years of experience?
• What is your relevant years of experience?
• What is your current CTC?
• What is your expected CTC?
• How long is your notice period?
• How many years of experience do you have in Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management?
• How many years of experience do you have in Terraform, ARM Templates, or Bicep?
• How many years of experience do you have in Kubernetes and Azure Kubernetes Service (AKS)?
• How many years of experience do you have in designing and implementing Azure architecture for production-grade applications on Kubernetes?
• How many years of experience do you have in Docker and container orchestration principles?
Work Location: In person
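As a rough illustration of the Front Door / Application Gateway / WAF work this role covers, a minimal smoke-test sketch might look like the following; the endpoint URL, latency threshold, and header check are illustrative assumptions, not details from the posting.

```python
# Minimal smoke test for an endpoint published via Azure Front Door / App Gateway.
# The URL, latency budget, and expected header are illustrative assumptions.
import time
import requests

ENDPOINT = "https://api.example.com/health"  # hypothetical Front Door endpoint


def probe(url: str, timeout: float = 5.0) -> None:
    start = time.monotonic()
    resp = requests.get(url, timeout=timeout)
    elapsed_ms = (time.monotonic() - start) * 1000

    # Basic availability and latency checks a platform team might gate on.
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    assert elapsed_ms < 1000, f"slow response: {elapsed_ms:.0f} ms"

    # Front Door typically adds an X-Azure-Ref header; treat its absence as a warning only.
    if "X-Azure-Ref" not in resp.headers:
        print("warning: X-Azure-Ref header missing (is traffic really routed via Front Door?)")


if __name__ == "__main__":
    probe(ENDPOINT)
```

A check like this could run as a scheduled job or as a post-deployment gate alongside the Azure Monitor alerting mentioned above.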
Posted 11 hours ago
3.0 - 6.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Experience: 3 to 6 years
Location: Mumbai (Onsite)
Openings: 2
About the Role: We are looking for hands-on and automation-driven Associate Cloud Engineers to join our DevOps team at Gray Matrix. You will be responsible for managing cloud infrastructure, CI/CD pipelines, and containerized deployments, and for ensuring platform stability and scalability across environments.
Key Responsibilities:
• Design, build, and maintain secure and scalable infrastructure on AWS, Azure, or GCP.
• Set up and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
• Manage Dockerized environments and ECS, EKS, or Kubernetes clusters for microservice-based deployments.
• Monitor and troubleshoot production and staging environments, ensuring uptime and performance.
• Work closely with developers to streamline release cycles and automate testing, deployments, and rollback procedures.
• Maintain infrastructure as code using Terraform or CloudFormation.
What We're Looking For:
• 3-6 years of experience in DevOps or cloud engineering roles.
• Strong knowledge of Linux system administration, networking, and cloud infrastructure (preferably AWS).
• Experience with Docker, Kubernetes, Nginx, and monitoring tools like Prometheus, Grafana, or CloudWatch.
• Familiarity with Git, scripting (Shell/Python), and secrets management tools.
• Ability to debug infrastructure issues, logs, and deployments across cloud-native stacks.
Bonus Points:
• Certification in AWS, GCP, or Azure DevOps or SysOps.
• Exposure to security, cost optimization, and autoscaling setups.
Work Mode: Onsite (Mumbai)
Reporting To: Senior Cloud Engineer / Lead Cloud Engineer
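As a small sketch of the monitoring side of a role like this, the script below probes a hypothetical staging endpoint and records the result as a CloudWatch custom metric; the URL, namespace, and dimension values are assumptions, and valid AWS credentials are required for the boto3 call.

```python
# Illustrative uptime probe that records availability as a CloudWatch custom metric.
# The endpoint URL, namespace, and dimension values are made-up examples.
import boto3
import requests

ENDPOINT = "https://staging.example.com/healthz"  # hypothetical service endpoint


def check_and_report() -> bool:
    try:
        up = requests.get(ENDPOINT, timeout=5).status_code == 200
    except requests.RequestException:
        up = False

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="Custom/Uptime",
        MetricData=[{
            "MetricName": "ServiceUp",
            "Dimensions": [{"Name": "Service", "Value": "staging-api"}],
            "Value": 1.0 if up else 0.0,
            "Unit": "Count",
        }],
    )
    return up


if __name__ == "__main__":
    print("healthy" if check_and_report() else "unhealthy")
```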
Posted 12 hours ago
0 years
0 Lacs
India
Remote
Design, provision, and document a production-grade AWS micro-service platform for an Apache-powered ERP implementation, hitting our 90-day go-live target while embedding DevSecOps guard-rails the team can run without you.
Key Responsibilities:
• Cloud Architecture & IaC: Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, and Route 53. Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.
• CI/CD & GitOps: Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan). Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).
• DevSecOps Controls: Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, and ECR image scanning. Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email.
• Automation & DR: Define backup plans (RDS PITR, EBS, S3 Standard-IA to Glacier). Document the cross-Region fail-over run-book (Route 53 health checks).
• Standard Operating Procedures: Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.
• Knowledge Transfer (KT): Run three 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill). Produce a "Day-2" wiki: diagrams (Mermaid), run-books, FAQ.
Required Skill Set:
• 8+ years designing AWS micro-service / Kubernetes architectures (ideally EKS on Graviton).
• Expert in Terraform, Helm, GitHub Actions, and Argo CD.
• Hands-on with RDS MariaDB, Kafka (MSK), Redis, and SageMaker endpoints.
• Proven DevSecOps background: OPA, IAM least-privilege, vulnerability scanning.
• Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff.
• Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.
How Success Is Measured:
• Go-live readiness: the production cluster passes load, fail-over, and security tests by Day 75.
• Zero critical CVEs exposed in the final Trivy scan.
• 99% IaC coverage; manual console changes are not permitted.
• Team self-sufficiency: internal staff can recreate the stack from scratch using the docs and KT alone.
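For flavour, the "wire alerting to Slack/email" item could be bootstrapped with a CloudWatch alarm publishing to an SNS topic, roughly as in the boto3 sketch below; the alarm name, load-balancer dimension, threshold, and topic ARN are all hypothetical placeholders.

```python
# Illustrative CloudWatch alarm wired to an SNS topic (which could fan out to
# Slack or email). Names, thresholds, and the topic ARN are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="erp-api-5xx-rate",                       # hypothetical alarm name
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/erp-alb/abc123"}],  # placeholder
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:platform-alerts"],  # placeholder ARN
)
```

In practice this kind of alarm would more likely be declared in the Terraform modules mentioned above; the boto3 form is just a compact way to show the same configuration.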
Posted 14 hours ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
We are looking for a skilled and proactive Full Stack .NET Developer with 4+ years of experience to join our engineering team at ELLKAY Software Pvt. Ltd. Your position as a Software Engineer will be instrumental in coding, testing, and maintaining software applications that power our organisation's product(s). As a crucial part of our development team, you will also be responsible for developing and integrating cloud services and deployments, as well as building scalable and performant solutions on cloud platforms such as AWS or Azure.
Key Responsibilities:
• Develop efficient C# client applications (.NET Framework) and robust APIs and services using .NET Core that interact seamlessly with backend services.
• Develop and maintain cloud-native applications using services offered by AWS or Azure.
• Design clean architecture that enables easy maintenance and scalability, and identify/manage performance bottlenecks of the application and APIs.
• Implement monitoring solutions using Prometheus and Grafana, and centralized logging like the ELK stack, to gain insight into application performance and health.
• Follow best practices for security measures like HTTPS, JWT authentication, and secure storage of sensitive information.
• Participate in code reviews, architecture discussions, and design.
Skills and Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 4+ years of experience as a Full Stack .NET Developer on Microsoft applications and working experience on cloud-based applications.
• Efficient communication skills and the ability to work collaboratively within a team and closely with cross-functional teams.
• Strong experience with .NET Core / .NET 8 and C# development.
• Solid understanding of DevOps principles and hands-on experience with CI/CD pipelines (e.g., Azure DevOps, Jenkins, GitHub Actions).
• Familiarity with containerization (Docker) and orchestration (Kubernetes) deployment to managed services (e.g., ConfigMaps, Secrets, Horizontal Pod Autoscaling).
• Experience with either Azure or AWS cloud.
• Experience with monitoring and logging tools like Prometheus, Grafana, Azure Monitor, or CloudWatch.
• Experience with databases such as PostgreSQL.
• Experience with event-driven architectures or microservices.
• Cloud certification is a plus (e.g., AWS Developer Associate, Azure Developer Associate).
(ref:hirist.tech)
Posted 1 day ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Profile
Our client is a global IT services company with offices in India and the United States. It helps businesses with digital transformation, provides IT collaborations, and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in the fields of Data, IoT, AI, Cloud Infrastructure and SAP, it helps accelerate digital transformation through key practice areas: IT staffing on demand, and innovation and growth by focusing on cost and problem solving.
Location & Work: New Delhi (On-site), WFO
Employment Type: Full Time
Profile: Platform Engineer
Preferred Experience: 3-5 years
The Role: We are looking for a highly skilled Platform Engineer to join our infrastructure and data platform team. This role will focus on the integration and support of Posit for data science workloads, managing R language environments, and leveraging Kubernetes to build scalable, reliable, and secure data science infrastructure.
Responsibilities:
• Integrate and manage the Posit suite (Workbench, Connect, Package Manager) within containerized environments.
• Design and maintain scalable R environment integration (including versioning, dependency management, and environment isolation) for reproducible data science workflows.
• Deploy and orchestrate services using Kubernetes, including Helm-based Posit deployments.
• Automate provisioning, configuration, and scaling of infrastructure using IaC tools (Terraform, Ansible).
• Collaborate with data scientists to optimize R runtimes and streamline access to compute resources.
• Implement monitoring, alerting, and logging for Posit components and Kubernetes workloads.
• Ensure platform security and compliance, including authentication (e.g., LDAP, SSO), role-based access control (RBAC), and network policies.
• Support continuous improvement of DevOps pipelines for platform services.
Must-Have Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Minimum 3+ years of experience in platform, DevOps, or infrastructure engineering.
• Hands-on experience with Posit (RStudio) products including deployment, configuration, and user management.
• Proficiency in R integration practices in enterprise environments (e.g., dependency management, version control, reproducibility).
• Strong knowledge of Kubernetes, including Helm, pod security, and autoscaling.
• Experience with containerization tools (Docker, OCI images) and CI/CD pipelines.
• Familiarity with monitoring tools (Prometheus, Grafana) and centralized logging (ELK, Loki).
• Scripting experience in Bash, Python, or similar.
Preferred Qualifications:
• Experience with cloud-native Posit deployments on AWS, GCP, or Azure.
• Familiarity with Shiny apps, RMarkdown, and their deployment through Posit Connect.
• Background in data science infrastructure, enabling reproducible workflows across R and Python.
• Exposure to JupyterHub or similar multi-user notebook environments.
• Knowledge of enterprise security controls, such as SSO, OAuth2, and network segmentation.
Application Method: Apply online on this portal or by email at careers@speedmart.co.in
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our Client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.
Job Title: Performance Tester
Key Skills: AWS, JMeter, AppDynamics, New Relic, Splunk, DataDog
Job Locations: Chennai, Pune
Experience: 6.5-7 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate
Job Description - Experience, Skills and Qualifications:
• Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g. AWS)
• Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS and AWS key services, and DevOps tools like CloudFormation and Terraform
• Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
• Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus, Lighthouse
• Monitoring/logging tools (such as AppDynamics, New Relic, Splunk, DataDog)
• Windows / UNIX / Linux / web / database / network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
• Docker, Kubernetes and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods & nodes, vertical/horizontal pod autoscaling concepts, high availability
• Performance testing and engineering activity planning, estimating, designing, executing and analysing output from performance tests
• Working in an agile environment, "DevOps" team or a similar multi-skilled team in a technically demanding function
• Jenkins and CI/CD pipelines, including pipeline scripting
• Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
• Programming and scripting language skills in Java, Shell, Scala, Groovy, Python and knowledge of security mechanisms such as OAuth, etc.
• Tools like GitHub, Jira & Confluence
• Assisting with Resiliency/Production support teams and Performance Incident Root Cause Analysis
• Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
• High levels of judgment and decision making, being able to rationalize and present the background and reasoning for direction taken
• Strong stakeholder management and excellent communication skills
• Extensive knowledge of risk management and mitigation
• Strong analytical and problem-solving skills
Posted 2 days ago
10.0 years
0 Lacs
Delhi
On-site
Purpose of role: A mid-level leadership role in Service Management. Maintain excellent service uptime levels for both external client-facing services and internal high-impact tools. Manage engineers and architects within the team and provide higher management with a clear, high-level overview of the team's activities and progress. Seniority is based on years of experience, knowledge and skill-set. This role is also hands-on in the day-to-day operations of the team.
Experience: 10+ years for Senior Manager (12+ years for Director position)
Role: Technical, Sr. Manager / Director (MC)
Knowledge and Skill-set:
• Degree in Computer Science, Software Engineering, IT or related discipline
• 10+ years' professional experience in infrastructure (on-premise/cloud) / Linux administration / networking / client project implementations, and experience in leading an infrastructure team
• Must have a strong background in cloud infrastructure, from serverless up to containerization
• Must have a general idea about (but not limited to): cloud infrastructure, Continuous Integration/Continuous Deployment
• Must be an expert in infrastructure best practices and practise them where applicable
• Must have in-depth knowledge of AWS (or similar), including: AutoScaling, S3, CloudFront, Route53, IAM, Certificate Manager, DynamoDB/MongoDB and RDS
• Must have in-depth knowledge of Jenkins or other CI/CD environments
• Must be familiar with cost optimisation both for clients' and internal projects
• Must have the ability to develop and manage a budget
• Must have, at least, the following certification(s): AWS Certified Solutions Architect - Associate (Professional will be preferred)
• Must have an understanding of software development processes, tools, and skill in at least two languages (back-end/front-end/scripting/JS)
• Strong written and verbal communication skills in English. Must also be able to explain solutions in simple terms to other team members and clients, who don't necessarily have to be technical
• Experience with containerisation and orchestration
Responsibilities:
• Lead the Infrastructure Operations team
• Act as an escalation point for the Infrastructure Operations team
• Act as mentor and escalation point for the Support Engineering team
• Analyse system requirements
• Recommend alternative technologies where applicable
• Work closely with higher management and provide high-level reporting of the team's activities
• Document his/her work in a clear and concise manner
• Always be on the lookout for gaps in the general day-to-day operations
• Provide suggestions for where things can be automated
• Work closely with internal stakeholders (Delivery Managers, Engineers, Support, Products, and QA) to implement the best solutions for clients and define clear roadmaps and milestones
• Work closely with the Engineering Directors to define architecture standards, policies and processes, and governing methodologies on aspects of (but not limited to) infrastructure, efficiency, security, and reliability
• Draft, review and manage proposals and commercial contracts
• Carry out management tasks such as resourcing, budgeting, proposal and commercial contract preparation/review, mentoring, etc.
Country: India
Posted 3 days ago
8.0 years
0 Lacs
India
Remote
Apply at https://www.gravityer.com/jobs/full-time/lead-devops-engineer
The Lead DevOps Engineer will assume a pivotal role in propelling the growth and prosperity of our organization. We are seeking a skilled and proactive DevOps Engineer to join our team. In this role, you will develop and maintain GCP infrastructure, automate deployment and scaling using Kubernetes, and collaborate with the software development team. This position offers an exciting opportunity to monitor system performance, implement Infrastructure as Code practices, ensure high levels of performance and security, and operate effectively in an Agile, start-up environment.
Responsibilities:
• Design and maintain highly available, fault-tolerant systems on GCP using SRE best practices.
• Implement SLIs/SLOs, monitor error budgets, and lead post-incident reviews with RCA documentation.
• Automate infrastructure provisioning (Terraform/Deployment Manager) and CI/CD workflows.
• Operate and optimize Kubernetes (GKE) clusters, including autoscaling, resource tuning, and HPA policies.
• Integrate observability across microservices using Prometheus, Grafana, Stackdriver, and OpenTelemetry.
• Manage and fine-tune databases (MySQL/Postgres/BigQuery/Firestore) for performance and cost.
• Improve API reliability and performance through Apigee (proxy tuning, quota/policy handling, caching).
• Drive container best practices, including image optimization, vulnerability scanning, and registry hygiene.
• Participate in on-call rotations, capacity planning, and infrastructure cost reviews.
Qualifications:
• Minimum 8 years of total experience, with at least 3 years in SRE, DevOps, or Platform Engineering roles.
• Strong expertise in GCP services (GKE, IAM, Cloud Run, Cloud Functions, Pub/Sub, VPC, Monitoring).
• Advanced Kubernetes knowledge: pod orchestration, secrets management, liveness/readiness probes.
• Experience in writing automation tools/scripts in Python, Bash, or Go.
• Solid understanding of incident response frameworks and runbook development.
• CI/CD expertise with GitHub Actions, Cloud Build, or similar tools.
Skills: MySQL, Go, Kubernetes, Postgres, GCP, Ansible, Grafana, Terraform, monitoring tools, OpenTelemetry, Prometheus, CI/CD, Apigee, database, Bash, scripting language, Stackdriver, Firestore, BigQuery, DevOps, cloud, senior reliability engineer, Python, Docker
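To make the SLI/SLO and error-budget responsibility concrete, here is a tiny self-contained sketch; the 99.9% target and the request counts are made-up numbers used only to show the arithmetic.

```python
# Toy error-budget calculation for an availability SLO.
# The SLO target and request counts below are illustrative assumptions.

SLO_TARGET = 0.999          # 99.9% of requests should succeed over the window
total_requests = 1_250_000  # served in the 30-day window (made up)
failed_requests = 980       # 5xx or timed-out responses (made up)

sli = 1 - failed_requests / total_requests            # measured availability
error_budget = 1 - SLO_TARGET                         # allowed failure ratio
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"SLI: {sli:.5f}")
print(f"Error budget consumed: {budget_consumed:.1%}")
if budget_consumed > 1:
    print("SLO breached: freeze risky releases, prioritise reliability work")
elif budget_consumed > 0.75:
    print("Budget burning fast: review recent changes and alerting thresholds")
```

With the numbers above the SLI is 0.99922 and about 78% of the monthly error budget is consumed, which is the kind of signal that would feed the post-incident reviews and release decisions described in the posting.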
Posted 3 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Lead Software Engineer - Performance Engineering, you will drive the strategy, design, and execution of performance engineering initiatives across highly distributed systems. You will lead technical efforts to ensure reliability, scalability, and responsiveness of business-critical applications. This role requires deep technical expertise, hands-on performance testing experience, and the ability to mentor engineers while collaborating cross-functionally with architecture, SRE, and development teams.
Responsibilities:
• Define, implement, and enforce SLAs, SLOs, and performance benchmarks for large-scale systems.
• Lead performance testing initiatives including load, stress, soak, chaos, and scalability testing.
• Design and build performance testing frameworks integrated into CI/CD pipelines.
• Analyze application, infrastructure, and database metrics to identify bottlenecks and recommend optimizations.
• Collaborate with cross-functional teams to influence system architecture and improve end-to-end performance.
• Guide the implementation of observability strategies using monitoring and APM tools.
• Optimize cloud infrastructure (e.g., autoscaling, caching, network tuning) for cost-efficiency and speed.
• Tune databases and messaging systems (e.g., PostgreSQL, Kafka, Redis) for high throughput and low latency.
• Mentor engineers and foster a performance-first culture across teams.
• Lead incident response and postmortem processes related to performance issues.
• Drive continuous improvement initiatives using data-driven insights and operational feedback.
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 8+ years of experience in software/performance engineering, with 2+ years in a technical leadership role.
• Expertise in performance testing tools such as JMeter, k6, Gatling, or Locust.
• Strong knowledge of distributed systems, cloud-native architecture, and microservices.
• Proficiency in scripting and automation using Python, Go, or Shell.
• Experience with observability and APM tools (e.g., Datadog, Prometheus, New Relic, AppDynamics).
• Deep understanding of SQL performance, caching strategies, and tuning for systems like PostgreSQL and Redis.
• Familiarity with CI/CD pipelines, container orchestration, and IaC tools (e.g., Kubernetes, Terraform).
• Strong communication skills and experience mentoring and leading technical teams.
• Ability to work cross-functionally and make informed decisions in high-scale, production environments.
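To give a flavour of the load-testing tools named above, below is a minimal Locust scenario; the host, endpoints, request weights, and pacing are placeholders rather than details from the posting.

```python
# Minimal Locust load-test sketch (locustfile.py). Endpoints and pacing are
# placeholders; run with e.g. `locust -f locustfile.py --host https://staging.example.com`.
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between tasks

    @task(3)
    def browse_catalog(self):
        # Weighted 3x: the hot read path we care most about under load.
        self.client.get("/api/products", name="GET /api/products")

    @task(1)
    def health(self):
        self.client.get("/health", name="GET /health")
```

A scenario like this can be wired into a CI/CD pipeline in headless mode so that load, soak, or scalability runs produce comparable latency and throughput numbers for every build.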
Posted 3 days ago
6.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
About Sopra Steria: Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.
Job Description - The world is how we shape it.
Role: AWS DevOps Engineer
Skillset: AWS, Terraform, CI/CD Pipeline & Kubernetes (AKS)
Experience: 6-8 years
Certification: AWS Certified
Location: Noida and Chennai
Professional experience in:
• Cloud-native software architecture (microservices, patterns, DDD, etc.)
• Working with internal developer platforms such as Backstage
• AWS platform (infrastructure, services, administration, provisioning, monitoring, etc.)
• Managing Kubernetes (AKS) operations: networking, autoscaling, high availability
• GitOps-based deployments (Argo CD)
• Working with basic web technologies (DNS, HTTPS, TLS, JWT, OAuth 2.0, OIDC, etc.)
• Container technology such as Docker
• Infrastructure as Code via Terraform
Experience in:
• Cloud monitoring, alerting, and observability mechanisms, e.g. via the Elastic Stack
• Defining, implementing and maintaining build and release pipelines (continuous deployment)
• Working with Git version control systems
• Code collaboration platforms (AWS DevOps, GitHub)
• Cloud security patterns
• Developing software based on Spring & Spring Boot
• Preparation of release and deployment scripts (in test and production environments)
Qualifications: BE, MCA, BTech
Additional Information: At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 3 days ago
2.0 years
0 Lacs
Desuri, Rajasthan, India
On-site
DevOps Engineer (AWS Solution Architect)
Job Category: Infrastructure Engineering
Job Type: Full Time
Job Location: India & Nepal
Reports To: Director of Cloud Infrastructure
We are looking for a DevOps Engineer to be responsible for our infrastructure and deployments in our multi-cloud environments. As a member of our engineering team, you will be involved in all things DevOps/SysOps/MLOps. You'll be responsible for planning and building tools for system configuration and provisioning. This role will also be responsible for maintaining any required infrastructure SLAs, both internal and external to the business. Our team is extremely collaborative. Interested candidates must be self-motivated, willing to learn, and willing to share new ideas to improve our team and process.
Responsibilities:
• Perform technical maintenance of the configuration management tools and release engineering practices to ensure technical changes are documented, comply with standard configurations, and are sustainable.
• Design, develop, automate, and maintain tools with an automate-first mindset to improve the quality and repeatability of software and infrastructure configuration development and deployment.
• Train software developers and system administrators in the use of pipeline tools and the implementation of quality standards.
• Oversee integration work and provide automated solutions in support of multiple products.
• Provide technical leadership, lead code reviews and mentor other developers.
• Build systems that dynamically scale.
• Plan deployments.
Requirements:
• Experience with AWS and GCP.
• Hands-on experience in Kubernetes (at least 2 years of K8s experience).
• Minimum 3+ years of experience with Unix-based systems.
• Working knowledge of Ansible or other configuration management tools.
• Experience with leading scripting tools (Python/Ruby/Bash etc.).
• Experience with Jenkins or cloud-native CI/CD.
• Strong scripting and automation skills.
• Solid understanding of web applications.
• Experience in Windows and Linux automation using Ansible or similar.
• Excellent hands-on skills in Terraform and CloudFormation.
Great To Have:
• Experience with Terraform
• Experience with Azure
• AWS Solution Architect (Pro) or DevOps Engineer (Pro) certification
• Experience with continuous deployments (CD)
• Experience with cloud-based autoscaling and elastic sizing
• Experience with relational database administration and SQL
• Experience with Redis, MongoDB, Memcached, Cassandra, or other non-relational storage
Posted 4 days ago
5.0 years
0 Lacs
India
Remote
eTip
eTip is the leading digital tipping platform for the hospitality and service industry, empowering businesses with tools to attract, retain, and motivate their hardworking staff. Trusted by thousands of leading hotels, restaurants, and management companies, eTip stands out due to its commitment to customer security, product customization, dedication to customer service, and its numerous partnerships, including with Visa.
Your Calling
As a Senior DevOps Engineer, you will own and drive the DevOps strategy for our cloud-native tech stack built on top of AWS, Kubernetes, and Karpenter. You'll design, implement, and optimize scalable, secure, and highly available infrastructure that processes millions of dollars, while fostering a culture of automation, observability, and CI/CD excellence.
What You'll Do
• Infrastructure & Cloud Leadership: Architect, deploy, and manage AWS cloud infrastructure (EKS, EC2, VPC, IAM, RDS, S3, Lambda, etc.). Lead Kubernetes (EKS) cluster design, scaling, and optimization using Karpenter for cost-efficient autoscaling. Optimize cloud costs while ensuring performance and reliability.
• CI/CD & Automation: Develop and maintain GitHub Actions CI/CD pipeline workflows for backend, web frontend, and Android/iOS.
• Observability & Reliability: Develop and maintain logging (Loki), monitoring (Prometheus, Grafana), and alerting to ensure system health.
• Security & Compliance: Harden Kubernetes clusters (RBAC, network policies, OPA/Gatekeeper). Ensure compliance with SOC 2, ISO 27001, or other security frameworks.
• Application Development: When infrastructure work is done, develop application features on the backend/frontend depending on where your strengths/interests fit.
Skills You Bring
• 5+ years of DevOps/SRE experience for SaaS companies.
• Deep expertise in AWS & Kubernetes.
• Proficiency in Karpenter, Helm and other Kubernetes operators.
• Strong development skills (Kotlin, Python, Go, or Bash).
• Experience with observability tools (Prometheus, Grafana, OpenTelemetry).
• Security-first mindset with knowledge of networking and cost optimization.
Why You'll Love Working Here
• Own and shape DevOps for a cutting-edge cloud-native stack.
• Work alongside very passionate and talented engineers.
• Work on a very high-impact product that processes millions of dollars.
• Remote-first, flexible work environment.
• Growth opportunities in a small, collaborative, high-impact team.
• Participate in yearly off-sites that take place all around the world.
Eager to build the future of tipping with us? Apply today!
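For context on the Prometheus/Grafana observability stack listed above, a minimal instrumentation sketch with the official Python client could look like the following; the metric names, labels, simulated workload, and scrape port are all assumptions, not eTip's actual implementation.

```python
# Minimal Prometheus instrumentation sketch using the official Python client.
# Metric names, labels, the fake workload, and the scrape port are assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

TIPS_PROCESSED = Counter(
    "tips_processed_total", "Tip payments processed", ["status"]
)
REQUEST_LATENCY = Histogram(
    "tip_request_latency_seconds", "Latency of tip processing requests"
)


def handle_tip() -> None:
    with REQUEST_LATENCY.time():               # observe how long processing takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        status = "ok" if random.random() > 0.02 else "failed"
    TIPS_PROCESSED.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes /metrics on this port
    while True:
        handle_tip()
```

Metrics exposed this way can then be scraped by Prometheus, graphed in Grafana, and used to drive the alerting mentioned in the posting.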
Posted 4 days ago
3.0 years
0 Lacs
India
On-site
About Us: Waltcorp is at the forefront of cloud engineering, helping businesses transform their operations by leveraging the power of Google Cloud Platform (GCP). We are seeking a skilled and visionary GCP DevOps Solutions Architect (ML/AI Focus) to design and implement cloud solutions that address our clients' complex business challenges.
Key Responsibilities:
• Solution Design: Collaborate with stakeholders to understand business requirements and design scalable, secure, and high-performing GCP cloud architectures.
• Technical Leadership: Serve as a technical advisor, guiding teams on GCP best practices, services, and tools to optimize performance, security, and cost efficiency.
• Infrastructure Development: Architect and oversee the deployment of cloud solutions using GCP services such as Compute Engine, Cloud Storage, Cloud Functions, Cloud SQL, and more.
• Infrastructure as Code (IaC) & Cloud Automation: Design, implement, and manage infrastructure using Terraform, Google Cloud Deployment Manager, or Pulumi. Automate provisioning of compute, storage, and networking resources using GCP services like Compute Engine, Cloud Storage, VPC, IAM, GKE (Google Kubernetes Engine), and Cloud Run. Implement and maintain CI/CD pipelines (using Cloud Build, Jenkins, GitHub Actions, or GitLab CI).
• ML Model Deployment & Automation (MLOps): Build and optimize end-to-end ML pipelines using Vertex AI Pipelines, Kubeflow, or MLflow. Automate training, testing, validation, and deployment of ML models in staging and production environments. Support model versioning, reproducibility, and lineage tracking using tools like DVC, Vertex AI Model Registry, or MLflow.
• Monitoring & Logging: Implement monitoring for both infrastructure and ML workflows using Cloud Monitoring, Prometheus, Grafana, and Vertex AI Model Monitoring. Set up alerting for anomalies in ML model performance (data drift, concept drift). Ensure application logs, model outputs, and system metrics are centralized and accessible.
• Containerization & Orchestration: Containerize ML workloads using Docker and orchestrate them using GKE or Cloud Run. Optimize resource usage through autoscaling and right-sizing of ML workloads in containers.
• Data & Experiment Management: Integrate with data versioning tools (e.g., DVC or LakeFS) to track datasets used in model training. Enable experiment tracking using MLflow, Weights & Biases, or Vertex AI Experiments. Support reproducible research and automated experimentation pipelines.
• Client Engagement: Communicate complex technical solutions to non-technical stakeholders and deliver high-level architectural designs, presentations, and proposals.
• Integration and Migration: Plan and execute cloud migration strategies, integrating existing on-premises systems with GCP infrastructure.
• Security and Compliance: Implement robust security measures, including IAM policies, encryption, and monitoring, to ensure compliance with industry standards and regulations.
• Documentation: Develop and maintain detailed technical documentation for architecture designs, deployment processes, and configurations.
• Continuous Improvement: Stay current with GCP advancements and emerging trends, recommending updates to architecture strategies and tools.
Qualifications:
• Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
• Experience: 3+ years of experience in cloud architecture, with a focus on GCP.
Technical Expertise:
• Strong knowledge of GCP core services, including compute, storage, networking, and database solutions.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform, Deployment Manager, or Pulumi.
• Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, GKE, or Cloud Run).
• Understanding of DevOps practices, CI/CD pipelines, and automation.
• Strong command of networking concepts such as VPCs, load balancing, and firewall rules.
• Familiarity with scripting languages like Python or Bash.
Preferred Qualifications:
• Google Cloud Certified - Professional Cloud Architect or Professional DevOps Engineer.
• Expertise in engineering and maintaining MLOps and AI applications.
• Experience in hybrid cloud or multi-cloud environments.
• Familiarity with monitoring and logging tools such as Cloud Monitoring, ELK Stack, or Datadog.
[CLOUD-GCDEPS-J25]
Posted 4 days ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Lead - Cloud Platform
Job Summary: We are looking for an experienced Cloud Platform Lead to spearhead the design, implementation, and governance of scalable, secure, and resilient cloud-native platforms on Azure. This role requires deep technical expertise in Azure services, Kubernetes (AKS), containers, Application Gateway, Front Door, WAF, and API Management, along with the ability to lead cross-functional initiatives and define cloud platform strategy and best practices.
Key Responsibilities:
• Lead the architecture, development, and operations of Azure-based cloud platforms across environments (dev, staging, production).
• Design and manage Azure Front Door, Application Gateway, and WAF to ensure global performance, availability, and security.
• Design and implement the Kubernetes platform (AKS), ensuring reliability, observability, and governance of containerized workloads.
• Drive adoption and standardization of Azure API Management for secure and scalable API delivery.
• Collaborate with security and DevOps teams to implement secure-by-design cloud practices, including WAF rules, RBAC, and network isolation.
• Guide and mentor engineers in Kubernetes, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC).
• Define and implement monitoring, logging, and alerting best practices using tools like Azure Monitor, ELK, and SigNoz.
• Evaluate and introduce tools, frameworks, and standards to continuously evolve the cloud platform.
• Participate in cost optimization and performance tuning initiatives for cloud services.
Required Skills & Qualifications:
• 8+ years of experience in cloud infrastructure or platform engineering, including at least 4 years in a leadership or ownership role.
• Deep hands-on expertise with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management.
• Strong experience with Kubernetes and Azure Kubernetes Service (AKS), including networking, autoscaling, and security.
• Proficiency with Docker and container orchestration principles.
• Infrastructure-as-Code experience with Terraform, ARM Templates, or Bicep.
• Excellent understanding of cloud security, identity (AAD, RBAC), and compliance.
• Experience building and guiding CI/CD workflows using tools like Azure DevOps, Bitbucket CI/CD, or similar.
Education: B.Tech / BE / M.Tech / MCA
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
Bhubaneshwar, Odisha, India
On-site
AVD Administrator (SME)
Experience: Minimum 6-8 years in cloud infrastructure
Location: BBSR/KOL/Delhi/Pune
• Good understanding of VDI technologies: Azure Virtual Desktop & Windows 365
• Operations and deployment knowledge of Azure Virtual Desktop
• Hands-on experience in implementing and supporting AVD with DR
• Good skills in Windows client and server OS and Azure Image Gallery; creating and customizing images
• Configure and deploy FSLogix and the required infrastructure in AVD
• Configure optimization & security features on host pools
• In-depth knowledge of Azure services: AAD, AADS, RBAC, Storage, Policies, Backup, Recovery Services Vault, Azure Firewall, Private Link, UDR, Security, Azure File Share, AVD autoscaling, & Azure Monitor
• User profile management in AVD
• Troubleshooting AVD-related issues and BAU support
• Knowledge related to AD, networking and Azure services
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Hyderabad
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions.
Responsibilities:
• Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
• Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
• Utilize the AWS Java SDK to interact with various AWS services effectively
• Drive deployment automation through AWS Java CDK, CloudFormation, or Terraform
• Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
• Apply advanced microservices concepts and adhere to best practices during development
• Build, test, and debug code while addressing technical setbacks effectively
• Expose application functionalities via APIs using Lambda and Spring Boot
• Manage data formatting (JSON, YAML) and handle diverse data types (String, Numbers, Arrays)
• Implement robust unit test cases with JUnit or equivalent testing frameworks
• Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
• Ensure efficient application builds using Maven or Gradle
• Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements:
• 5 to 12 years of experience in Java development and AWS services
• Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
• Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
• Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
• Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
• Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
• Understanding of AWS orchestration tools for automation and data processing
• Capability to handle production workloads, automate tasks, and manage logs effectively
• Experience in writing scalable applications employing microservices principles
Nice to have:
• Proficiency with AWS core services such as Auto Scaling, Load Balancers, Route 53, and IAM
• Skills in scripting with Linux/Shell/Python/Windows PowerShell or using Ansible/Chef/Puppet
• Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
• Familiarity with collaborative tools like Jira and Confluence
• Knowledge of in-place deployment strategies, including Blue-Green or Canary Deployment
• Showcase of experience in ELK (Elasticsearch, Logstash, Kibana) stack development
We offer:
• Opportunity to work on technical challenges that may impact across geographies
• Vast opportunities for self-development: online university, knowledge-sharing opportunities
globally, learning opportunities through external certifications
• Opportunity to share your ideas on international platforms
• Sponsored Tech Talks & Hackathons
• Unlimited access to LinkedIn learning solutions
• Possibility to relocate to any EPAM office for short- and long-term projects
• Focused individual development
• Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
• Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
We are looking for a skilled and proactive Full Stack .NET Developer with 4+ years of experience to join our engineering team at ELLKAY Software Pvt. Ltd. Your position as a Software Engineer will be instrumental in coding, testing, and maintaining software applications that power our organisation's product(s). As a crucial part of our development team, you will also be responsible for developing and integrating cloud services and deployments, as well as building scalable and performant solutions on cloud platforms such as AWS or Azure.
Key Responsibilities:
• Develop efficient C# client applications (.NET Framework) and robust APIs and services using .NET Core that interact seamlessly with backend services.
• Develop and maintain cloud-native applications using services offered by AWS or Azure.
• Design clean architecture that enables easy maintenance and scalability, and identify/manage performance bottlenecks of the application and APIs.
• Implement monitoring solutions using Prometheus and Grafana, and centralized logging like the ELK stack, to gain insight into application performance and health.
• Follow best practices for security measures like HTTPS, JWT authentication, and secure storage of sensitive information.
• Participate in code reviews, architecture discussions, and design.
Skills and Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 4+ years of experience as a Full Stack .NET Developer on Microsoft applications and working experience on cloud-based applications.
• Efficient communication skills and the ability to work collaboratively within a team and closely with cross-functional teams.
• Strong experience with .NET Core / .NET 8 and C# development.
• Solid understanding of DevOps principles and hands-on experience with CI/CD pipelines (e.g., Azure DevOps, Jenkins, GitHub Actions).
• Familiarity with containerization (Docker) and orchestration (Kubernetes) deployment to managed services (e.g., ConfigMaps, Secrets, Horizontal Pod Autoscaling).
• Experience with either Azure or AWS cloud.
• Experience with monitoring and logging tools like Prometheus, Grafana, Azure Monitor, or CloudWatch.
• Experience with databases such as PostgreSQL.
• Experience with event-driven architectures or microservices.
• Cloud certification is a plus (e.g., AWS Developer Associate, Azure Developer Associate).
(ref:hirist.tech)
Posted 5 days ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Build scalable and loosely coupled services to extend our platform. Build bulletproof API integrations with third-party APIs for various use cases. Evolve our infrastructure and add a few more nines to our overall availability. Have full autonomy and own your code, and decide on the technologies and tools to deliver as well as operate large-scale applications on AWS. Give back to the open-source community through contributions on code and blog posts. This is a startup, so everything can change as we experiment with more product specifics.
Requirements:
• At least 2+ years of development experience
• You have prior experience developing and working on consumer-facing web/app products
• Hands-on experience in JavaScript. Exceptions can be made if you're really good at any other language with experience in building web/app-based tech products
• Expertise in Node.js and experience in at least one of the following frameworks - Express.js, Koa.js, Socket.io (http://socket.io/)
• Good knowledge of async programming using Callbacks, Promises, and Async/Await
• Hands-on experience with frontend codebases using HTML, CSS, and AJAX
• Working knowledge of MongoDB, Redis, MySQL
• Good understanding of Data Structures, Algorithms, and Operating Systems
• You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
• Experience with the frontend stack would be an added advantage (HTML, CSS)
• You might not have experience with all the tools that we use but you can learn those given the guidance and resources
• Experience in Vue.js would be a plus
(ref:hirist.tech)
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview
Bandgi Technologies is a SaaS product development company which provides niche technology skills and helps organizations with innovation. Our specialization is in Industry 4.0/IIoT and we provide solutions to clients in the US, Canada and Europe. We are innovation enablers and have offices in India (Hyderabad) and in the UK (Maidenhead).
We are looking for an experienced DevOps Engineer with a background in AWS to join a growing enterprise organization. You will work within a growing AWS cloud team looking to build on and maintain their cloud infrastructure. The cloud engineer will split their time between supporting the transition of code through pipelines into a live state from software development, and evolving and maintaining cloud infrastructure and project/service introduction activities.
Skill Sets - Must Have:
• Solid experience (5+ years) in Terraform, shell scripting, VPC creation, DevSecOps, sst.dev, and GitHub Actions is mandatory.
• Working knowledge of and experience with Linux operating systems, and experience building CI/CD pipelines using the following:
• AWS: VPC, Security Groups, IAM, S3, RDS, Lambda, EC2 (Auto Scaling Groups, Elastic Beanstalk), CloudFormation and AWS stacks.
• Containers: Docker, Kubernetes, Helm, Terraform.
• CI/CD pipelines: GitHub Actions (mandatory).
• Databases: SQL & NoSQL (MySQL, Postgres, DynamoDB).
• Observability best practices (Prometheus, Grafana, Jaeger, Elasticsearch).
Good To Have:
• Learning attitude
• API Gateways
• Microservices best practices (including design patterns)
• Authentication and Authorization (OIDC/SAML/OAuth 2.0)
Your Responsibilities Will Include:
• Operation and control of cloud infrastructure (Docker platform services, network services and data storage).
• Preparation of new or changed services.
• Operation of the change/release process.
• Application of cloud management tools to automate the provisioning, testing, deployment and monitoring of cloud components.
• Designing cloud services and capabilities using appropriate modelling techniques.
(ref:hirist.tech)
Posted 5 days ago
5.0 years
0 Lacs
Delhi, India
Remote
About HighLevel: HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers users across industries to streamline operations, drive growth, and crush their goals. HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages 470 terabytes of data distributed across five databases, operates with a network of over 250 micro-services, and supports over 1 million domain names.
Our People: With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.
Our Impact: Every month, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen. Learn more about us on our YouTube Channel or Blog Posts.
About the Role: We're looking for a Team Lead - Full Stack (Node.js & Vue.js) to drive technical excellence, lead a team of high-caliber engineers, and build next-generation CRM marketing solutions.
Responsibilities:
• Mentor: Guide a team of developers, ensuring best practices in software development, clean architecture, and performance optimization
• Architect & Scale: Design and build highly scalable and reliable backend services using Node.js, MongoDB, and ElasticSearch, ensuring optimal indexing, sharding, and query performance
• Frontend Development: Develop and optimize user interfaces using Vue.js (or React/Angular) for an exceptional customer experience
• Event-Driven Systems: Design and implement real-time data processing pipelines using Kafka, RabbitMQ, or ActiveMQ
• Optimize Performance: Work on autoscaling, database sharding, and indexing strategies to handle millions of transactions efficiently
• Cross-Functional Collaboration: Work closely with Product Managers, Data Engineers, and DevOps teams to align on vision, execution, and business goals
• Quality & Security: Implement secure, maintainable, and scalable codebases while adhering to industry best practices
• Code Reviews & Standards: Drive high engineering standards, perform code reviews, and enforce best practices across the development team
• Ownership & Delivery: Manage timelines, oversee deployments, and ensure smooth product releases with minimal downtime
Requirements:
• 5+ years of hands-on software development experience
• Strong proficiency in Node.js, Vue.js (or React/Angular), MongoDB, and Elasticsearch
• Experience in real-time data processing, message queues (Kafka, RabbitMQ, or ActiveMQ), and event-driven architectures
• Scalability expertise: proven track record of scaling services to 200k+ MAUs and handling high-throughput systems
• Strong understanding of database sharding, indexing, and performance optimization
Experience with distributed systems, microservices, and cloud infrastructures (AWS, GCP, or Azure) Proficiency in CI/CD pipelines, Git version control, and automated testing Strong problem-solving, analytical, and debugging skills. Excellent communication and leadership abilitiesβable to guide engineers while collaborating with stakeholders Good to Have: Experience with big data technologies like Apache Flink, Spark, Hadoop, or Data Lakes Familiarity with real-time analytics, data ingestion pipelines, and large-scale event processing systems Previous experience in CRM or SaaS environments EEO Statement: The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision. Show more Show less
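As a rough illustration of the event-driven pipeline work described above, here is a minimal consumer sketch assuming the kafkajs client; the broker address, topic, and consumer group are placeholders, and the downstream write (e.g., into MongoDB or Elasticsearch) is left as a comment.

```typescript
import { Kafka, logLevel } from "kafkajs";

const kafka = new Kafka({
  clientId: "lead-event-consumer",
  brokers: (process.env.KAFKA_BROKERS ?? "localhost:9092").split(","),
  logLevel: logLevel.WARN,
});

const consumer = kafka.consumer({ groupId: "lead-indexer" }); // hypothetical group

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "lead.events", fromBeginning: false }); // hypothetical topic

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      const payload = JSON.parse(message.value?.toString() ?? "{}");
      // In a CRM pipeline this is where the event would be upserted into MongoDB
      // and/or indexed into Elasticsearch.
      console.log({ topic, partition, key: message.key?.toString(), payload });
    },
  });
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Because offsets are tracked per consumer group, scaling this out is mostly a matter of adding consumer instances, up to the topic's partition count.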
Posted 1 week ago
3.0 years
7 - 10 Lacs
Mumbai
On-site
Fynd is India's largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We are looking for an SDE 3 - Full Stack responsible for leading and mentoring a team of full-stack developers to build scalable, high-performance applications. Your primary focus will be on developing robust, maintainable, and efficient software solutions that power our platform. You will be responsible for designing and optimizing frontend and backend systems, ensuring seamless integration and high availability of services.
What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform.
Build bulletproof API integrations with third-party APIs for various use cases.
Evolve our infrastructure and enhance availability and performance.
Have full autonomy to own your code, decide on technologies, and operate large-scale applications on AWS.
Mentor and lead a team of full-stack engineers, fostering a culture of innovation and collaboration.
Optimize frontend and backend performance, caching mechanisms, and API integrations.
Implement and enforce security best practices to safeguard applications and user data.
Stay up to date with emerging full-stack technologies and evaluate their potential impact on our tech stack.
Contribute to the open-source community through code contributions and blog posts.
Some Specific Requirements:
3+ years of development experience in full-stack development with expertise in JavaScript, React.js, Node.js, and Python.
Prior experience developing and working on consumer-facing web/app products.
Expertise in backend frameworks such as Express.js, Koa.js, or Socket.io.
Strong understanding of async programming using callbacks, promises, and async/await (a minimal sketch follows this posting).
Hands-on experience with frontend technologies: HTML, CSS, AJAX, and React.js.
Working knowledge of MongoDB, Redis, and MySQL.
Solid understanding of data structures, algorithms, and operating systems.
Experience with AWS services such as EC2, ELB, Auto Scaling, CloudFront, and S3.
Experience with CI/CD pipelines, containerization (Docker, Kubernetes), and DevOps practices.
Ability to troubleshoot complex full-stack issues and drive continuous improvements.
Good understanding of GraphQL, WebSockets, and real-time applications is a plus.
Experience with Vue.js would be an added advantage.
What do we offer?
Growth: Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture: Community and team-building activities. We host weekly, quarterly, and annual events/parties.
Wellness: Mediclaim policy for you, your parents, spouse, and kids. Experienced therapist for better mental health, improved productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
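The async-programming requirement above is easiest to see in a small route handler. A minimal sketch, assuming Express and the official MongoDB Node.js driver; the route, database, and collection names are hypothetical.

```typescript
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");

// async/await keeps the I/O non-blocking while the handler reads top-down; errors are caught explicitly.
app.get("/orders/:id", async (req, res) => {
  try {
    const order = await mongo.db("shop").collection("orders").findOne({ orderId: req.params.id });
    if (!order) {
      res.status(404).json({ error: "order not found" });
      return;
    }
    res.json(order);
  } catch (err) {
    res.status(500).json({ error: "internal error" });
  }
});

async function main(): Promise<void> {
  await mongo.connect();
  app.listen(3000, () => console.log("listening on :3000"));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The same lookup written with raw callbacks quickly nests; promises and async/await flatten that control flow, which is the distinction the requirement points at.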
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS is hiring!
Role: Azure Virtual Desktop
Location: Hyderabad
Experience: 8-12 years
Must Have:
Good understanding of VDI technologies and Azure Virtual Desktop (AVD).
Hands-on experience deploying AVD.
In-depth knowledge of Azure services - AAD, AADS, RBAC, Storage, Policies, Backup, Recovery Services Vault, Azure Firewall, Private Link, UDR, Security, Azure File Share, AVD autoscaling, and Azure Monitor.
Knowledge of Group Policy, Active Directory, Registry settings, and security concepts.
Ability to create, customize, and manage AVD images, including knowledge of Azure Image Builder (AIB) and Azure Compute Gallery management.
Good understanding of profile management with FSLogix.
Good experience deploying and managing host pools and session hosts.
Good hands-on experience with Microsoft MSIX packaging and app masking.
Knowledge of Azure services - Storage, Azure File Share, Backup, Policies, Entra ID, Azure VNet and NSGs, and Azure Monitoring.
Strong hands-on DevOps skills: Azure DevOps YAML pipelines and Infrastructure-as-Code (IaC) such as ARM and Bicep.
Monitor CI/CD pipelines, make basic improvements as needed, and provide support for infrastructure-related issues in the CI/CD process.
Develop and maintain automation scripts using tools like PowerShell, Azure Functions, Automation Accounts, the CLI, and ARM templates (a minimal sketch follows this posting).
Troubleshooting skills for AVD and Windows-related issues, plus BAU support.
Use Azure Monitor, Log Analytics, and other tools to gain insights into system health.
Respond to incidents and outages promptly to minimize downtime.
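As a hedged illustration of the "automation scripts with Azure Functions" item, here is a minimal HTTP-triggered function sketch assuming the Azure Functions v4 Node.js programming model; the function name, payload shape, and the drain action itself are hypothetical, and the real AVD management call is left as a comment.

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Hypothetical automation stub: accept a request to drain an AVD session host.
export async function drainSessionHost(
  req: HttpRequest,
  ctx: InvocationContext
): Promise<HttpResponseInit> {
  const body = (await req.json().catch(() => ({}))) as {
    hostPool?: string;
    sessionHost?: string;
  };

  if (!body.hostPool || !body.sessionHost) {
    return { status: 400, jsonBody: { error: "hostPool and sessionHost are required" } };
  }

  ctx.log(`Drain requested for ${body.sessionHost} in ${body.hostPool}`);
  // A real implementation would set allowNewSession=false on the session host via the
  // Desktop Virtualization management API (e.g., an ARM call using a managed identity).
  return { status: 202, jsonBody: { accepted: true } };
}

app.http("drainSessionHost", {
  methods: ["POST"],
  authLevel: "function",
  handler: drainSessionHost,
});
```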
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Category: Infrastructure/Cloud
Main location: India, Karnataka, Bangalore
Position ID: J0425-1242
Employment Type: Full Time
Position Description:
Company Profile: At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.
This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.
Job Title: Google Cloud Engineer (DevOps + GKE) - SSE
Position: Senior Systems Engineer
Experience: 5+ years
Category: GCP + GKE
Main location: Bangalore/Chennai/Hyderabad/Pune/Mumbai
Position ID: J0425-1242
Employment Type: Full Time
Job Description:
We are seeking a skilled and proactive Google Cloud Engineer with strong DevOps experience and hands-on expertise in Google Kubernetes Engine (GKE) to design, implement, and manage cloud-native infrastructure. You will play a key role in automating deployments, maintaining scalable systems, and ensuring the availability and performance of our cloud services on Google Cloud Platform (GCP).
Key Responsibilities and Required Skills:
5+ years of experience in DevOps / Cloud Engineering roles.
Design and manage cloud infrastructure using Google Cloud services such as Compute Engine, Cloud Storage, VPC, IAM, Cloud SQL, GKE, and more.
Proficient in writing Infrastructure-as-Code using Terraform, Deployment Manager, or similar tools.
Automate CI/CD pipelines using tools like Cloud Build, Jenkins, GitHub Actions, etc.
Manage and optimize Kubernetes clusters for high availability, performance, and security.
Collaborate with developers to containerize applications and streamline their deployment.
Monitor cloud environments and troubleshoot performance, availability, or security issues.
Implement best practices for cloud governance, security, cost management, and compliance.
Participate in cloud migration and modernization projects.
Ensure system reliability and high availability through redundancy, backup strategies, and proactive monitoring.
Contribute to cost optimization and cloud governance practices.
Strong hands-on experience with core GCP services including Compute, Networking, IAM, Storage, and optionally Kubernetes (GKE).
Proven expertise in Kubernetes (GKE) - managing clusters, deployments, services, autoscaling, etc.
Experience configuring Kubernetes resources (Deployments, Services, Ingress, Helm charts, etc.) to support application lifecycles (a minimal app-side sketch follows this posting).
Solid scripting knowledge (e.g., Python, Bash, Go).
Familiarity with GitOps and deployment tools like Argo CD and Helm.
Experience with CI/CD tools and setting up automated deployment pipelines.
Should have Google Cloud certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect, or Cloud Engineer).
Behavioural Competencies:
Proven experience of delivering process efficiencies and improvements.
Clear and fluent English (both verbal and written).
Ability to build and maintain efficient working relationships with remote teams.
Demonstrated ability to take ownership of and accountability for relevant products and services.
Ability to plan, prioritise and complete your own work, whilst remaining a team player.
Willingness to engage with and work in other technologies.
Note: This job description is a general outline of the responsibilities and qualifications typically associated with this role. Actual duties and qualifications may vary based on the specific needs of the organization.
CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.
Required qualifications to be successful in this role:
Skills: DevOps, Google Cloud Platform, Kubernetes, Terraform, Helm.
What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because... You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.
Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team - one of the largest IT and business consulting services firms in the world.
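One app-side counterpart to the Kubernetes lifecycle item above is worth sketching: a tiny Node/TypeScript server exposing the health endpoints a GKE Deployment's probes would hit, and draining on the SIGTERM Kubernetes sends before it stops the pod. The port and paths follow common convention and are not details from the posting.

```typescript
import * as http from "node:http";

let ready = false;

const server = http.createServer((req, res) => {
  if (req.url === "/healthz") {
    // liveness probe target
    res.writeHead(200).end("ok");
    return;
  }
  if (req.url === "/readyz") {
    // readiness probe target
    res.writeHead(ready ? 200 : 503).end();
    return;
  }
  res.writeHead(200, { "content-type": "text/plain" }).end("hello from the cluster\n");
});

server.listen(8080, () => {
  ready = true;
  console.log("listening on :8080");
});

// Kubernetes sends SIGTERM before removing the pod: stop advertising readiness,
// let in-flight requests finish, then exit so the rollout proceeds cleanly.
process.on("SIGTERM", () => {
  ready = false;
  server.close(() => process.exit(0));
});
```

The matching Deployment would point its livenessProbe and readinessProbe at these paths and allow a termination grace period long enough for the drain.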
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a Senior Manager - DevOps Engineer to join our technology team at Clarivate. You will be responsible for providing strategic leadership for DevOps: shaping technical and operational strategies, overseeing project execution, collaborating with cross-functional teams, and mentoring team members for professional growth.
About You - Experience, Education, Skills, And Accomplishments
7+ years of leadership experience working with cross-functional teams (business and technology teams) in a dynamic environment.
At least 10 years of professional experience, with a minimum of 6 years as a DevOps Engineer or in a similar role, with experience on various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Azure DevOps, GitLab, TeamCity, AWS CodePipeline, Packer, CloudFormation, Terraform, or similar CI/CD orchestrator tool(s).
Hands-on experience with Docker and Kubernetes, including building Dockerfiles and images, establishing Docker image repositories, and creating, managing, and orchestrating a Kubernetes-based infrastructure in the cloud or on-prem.
Comfortable writing scripts/services that pull and manipulate data from heterogeneous data sources (a minimal sketch follows this posting).
It would be great if you also had:
Strong understanding of data pipelines, ETL/ELT processes, and cloud data platforms (e.g., AWS, Azure, GCP).
Familiarity with modern data tools (e.g., Airflow, dbt, Snowflake, Databricks, Kafka).
Knowledge of cloud-native software architectures based on microservices, e.g., API management, autoscaling, service discovery, service mesh, service gateways.
What will you be doing in this role?
Provide leadership and technical guidance to coach, motivate, and lead team members to their optimum performance levels and career development.
Communicate technical information to non-technical stakeholders.
Develop strong architecture and design using best practices, patterns, and business acumen.
Drive analysis, design, and delivery of quality technical solutions and projects in line with product roadmaps, customer expectations, and internal priorities, as well as developing infrastructure-as-code and automated scripts for building or deploying workloads in various environments through CI/CD pipelines.
Develop and support quarterly plans for the IP Product Segment.
Collaborate with cross-functional teams to analyze, design, and develop software solutions.
Stay up to date with emerging trends and technologies in DevOps and cloud computing.
Participate in the testing and deployment of software solutions.
Keep up with industry best practices, trends, and standards; identify automation opportunities; and design and develop automation solutions that improve operations, efficiency, security, and visibility.
About The Team
Cloud Architecture and DevOps Engineering is part of Product Engineering in the Clarivate IP Business Unit. This team is responsible for driving cloud-native initiatives, supporting CI/CD standardization, improving DevOps engineering practices, and building future-proof cloud solutions.
Working Hours
This is a full-time opportunity with Clarivate: 9 hours per day including a lunch break. You should be flexible with working hours to align with globally distributed teams and stakeholders.
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
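To make the "scripts that pull and manipulate data from heterogeneous sources" line concrete, here is a minimal TypeScript sketch assuming the node-postgres (pg) client and Node 18+'s built-in fetch; the table, endpoint, and field names are hypothetical.

```typescript
import { Client } from "pg";

async function main(): Promise<void> {
  // Source 1: a relational database.
  const pg = new Client({ connectionString: process.env.PG_URL });
  await pg.connect();
  const { rows } = await pg.query("SELECT id, name FROM customers LIMIT 10"); // hypothetical table

  // Source 2: a JSON HTTP API.
  const resp = await fetch(process.env.USAGE_API ?? "https://example.invalid/usage"); // hypothetical endpoint
  const usage = (await resp.json()) as Array<{ customerId: number; events: number }>;

  // Join the two sources in memory and report.
  const merged = rows.map((row) => ({
    ...row,
    events: usage.find((u) => u.customerId === row.id)?.events ?? 0,
  }));
  console.table(merged);

  await pg.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

In practice a script like this would run on a schedule from a CI/CD or orchestration tool and write its merged output back to a warehouse rather than the console.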
Posted 1 week ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.