5.0 years
4 - 9 Lacs
Noida
On-site
Requirements:
- Bachelor's/Master's degree in Computer Science, Information Technology, or a related field
- 5-7 years of experience in a DevOps role
- Strong understanding of the SDLC and experience working on fully Agile teams
- Proven coding and scripting experience: Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts
- Working experience with IaC tools such as Terraform, CloudFormation, or ARM templates
- Strong experience with cloud computing platforms (e.g. Oracle Cloud (OCI), AWS, Azure, Google Cloud)
- Experience with containerization technologies (e.g. Docker, Kubernetes/EKS/AKS)
- Experience with continuous integration and delivery tools (e.g. Jenkins, GitLab CI/CD)
- Kubernetes: experience managing Kubernetes clusters and using kubectl for Helm chart deployments, ingress services, and troubleshooting pods
- OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs)
- Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, plus Slack/PagerDuty/Sentry integrations
- Strong know-how of modern distributed version control systems (e.g. Git, GitHub, GitLab)
- Strong troubleshooting and problem-solving skills, and the ability to work well under pressure
- Excellent communication and collaboration skills, and the ability to lead and mentor junior team members

Career Level - IC3

Responsibilities:
- Design, implement, and maintain automated build, deployment, and testing systems
- Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud
- Containerize applications, i.e. create Docker containers and push them to an artifact repository for deployment on containerization solutions with OKE (Oracle Container Engine for Kubernetes) using Helm charts
- Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems
- Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues
- Support and troubleshoot cloud deployment and environment issues
- Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD
- Continuously improve the scalability and security of our systems, and lead efforts to implement best practices
- Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations
- Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats
- Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement
- Lead and mentor junior DevOps engineers and collaborate with cross-functional teams to ensure successful delivery of projects
- Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications

As a member of the software engineering division, you will analyze and integrate external customer specifications, specify, design, and implement modest changes to existing software architecture, build new products and development tools, build and execute unit tests and unit test plans, review integration and regression test plans created by QA, and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area; 6+ years of software engineering or related experience.
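The "fully automated pipelines to build, test and deploy" responsibility boils down to running stages in order and stopping on the first failure. A minimal sketch of that fail-fast orchestration, in Python with purely hypothetical stage names (not tied to Jenkins or GitLab CI):

```python
# Minimal fail-fast build -> test -> deploy orchestration sketch.
# Stage names and actions are illustrative, not any CI tool's API.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (name, action returning success)

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages in order; stop at the first failure.

    Returns the names of the stages that actually executed.
    """
    executed = []
    for name, action in stages:
        executed.append(name)
        if not action():
            break  # fail fast: later stages (e.g. deploy) never run
    return executed

# Simulated run where the test stage fails, so deploy is skipped.
ran = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
```

Real CI systems add fan-out, retries, and artifact passing on top, but the fail-fast ordering above is the core contract.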
Posted 3 days ago
5.0 years
0 Lacs
West Bengal
On-site
Job Information
- Date Opened: 30/07/2025
- Job Type: Full time
- Industry: IT Services
- Work Experience: 5+ Years
- City: Kolkata
- Province: West Bengal
- Country: India
- Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud, and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, plus healthcare-specific FHIR database configurations.
  - Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply strong knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly Bonus
- Gratuity
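The "automate file transfers to S3-backed Glacier" duty is usually done declaratively with an S3 lifecycle rule rather than by moving objects by hand. A sketch of a helper that builds one such rule as a plain dict (the dict shape follows the S3 lifecycle configuration format; the prefix and day count are made-up examples, and no AWS call is made here):

```python
def glacier_lifecycle_rule(prefix: str, days: int = 30) -> dict:
    """Build one S3 lifecycle rule that transitions objects under
    `prefix` to the GLACIER storage class after `days` days.

    The returned dict mirrors the S3 lifecycle configuration shape;
    it would normally be wrapped in {"Rules": [...]} and applied to
    a bucket via the AWS API or Terraform.
    """
    return {
        "ID": f"archive-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }

# Hypothetical example: archive SFTP uploads after a week.
config = {"Rules": [glacier_lifecycle_rule("sftp-uploads/", days=7)]}
```

Keeping the archival policy in configuration like this makes it reviewable and versionable alongside the rest of the IaC.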
Posted 3 days ago
0.0 - 2.0 years
0 - 0 Lacs
Turbhe Khurd, Navi Mumbai, Maharashtra
On-site
JD: DevOps
Experience required: 2-3 years
Max salary/month: ₹23,000

The ideal candidate will:
- Design and implement scalable, reliable AWS infrastructure.
- Develop and maintain automation tools and CI/CD pipelines using Jenkins or GitHub Actions.
- Build, operate, and maintain Kubernetes clusters for container orchestration.
- Leverage Infrastructure as Code tools like Terraform or CloudFormation for consistent environment provisioning.
- Automate system tasks using Python or Golang scripting.
- Collaborate closely with developers and SREs to ensure systems are resilient, scalable, and efficient.
- Monitor and troubleshoot system performance using observability tools like Datadog, Prometheus, and Grafana.

Primary Skills
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 6+ years of experience as a DevOps Engineer with a strong focus on AWS
- Hands-on experience with containerization tools like Docker
- Expertise in managing Kubernetes clusters in production
- Creating Helm charts for applications
- Proficiency in creating and managing CI/CD workflows with Jenkins or GitHub Actions
- Strong background in Infrastructure as Code (Terraform or CloudFormation)
- Automation and scripting skills using Python or Golang
- Strong analytical and problem-solving abilities
- Excellent communication and collaboration skills
- Certifications in Kubernetes or Terraform are a plus

Good to Have Skills
- Configuration management using Ansible
- Basic understanding of AI & ML concepts

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹26,000.00 per month
Benefits: Paid sick time, paid time off, Provident Fund
Schedule: Day shift, fixed shift, Monday to Friday, morning shift, weekend availability
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Turbhe Khurd, Navi Mumbai, Maharashtra: reliably commute or plan to relocate before starting work (Required)
Application Question(s):
- Open to negotiating relative to your current salary?
- Willing to sign an 18-month service bond if selected?
Experience: DevOps: 2 years (Required)
Work Location: In person
Speak with the employer: +91 7087738773
Expected Start Date: 04/08/2025
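The "Create Helm charts" skill above relies on one core mechanic: layered values files, where later overrides merge key-wise into earlier defaults. A sketch of that merge behavior in Python (the dict contents are hypothetical chart values, and this mimics, rather than calls, Helm's own merging):

```python
def merge_values(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`, the way layered Helm
    values files combine: later values win, and nested maps merge
    key by key instead of being replaced wholesale.

    Returns a new dict; neither input is mutated.
    """
    out = dict(base)
    for key, val in override.items():
        if isinstance(out.get(key), dict) and isinstance(val, dict):
            out[key] = merge_values(out[key], val)  # recurse into maps
        else:
            out[key] = val  # scalars and lists are replaced outright
    return out

# Hypothetical chart defaults plus a production override file.
base = {"image": {"repository": "app", "tag": "1.0"}, "replicas": 2}
prod = {"image": {"tag": "1.4"}, "replicas": 5}
merged = merge_values(base, prod)
```

Note that only maps merge; a list in an override replaces the whole list, which matches how Helm treats array values.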
Posted 3 days ago
3.0 years
0 Lacs
India
On-site
We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:

Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup
- Implement state management and version control for IaC

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows
- Create automation scripts (Bash/Python) for deployment orchestration
- Deploy containerized Node.js applications to GKE using Helm charts
- Configure automated SSL certificate provisioning via Certificate Manager

Security & Access Control
- Implement IAM policies and RBAC for customer isolation
- Configure secure service accounts with minimal required permissions
- Set up audit logging and monitoring for all provisioned resources

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend
- Implement provisioning status tracking and error handling
- Document deployment procedures and troubleshooting guides
- Ensure a 5-10 minute provisioning time SLA

Required Skills & Certifications:

MANDATORY Certification (must have one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical Skills (Must Have):
- 3+ years of hands-on experience with Google Cloud Platform
- Strong Terraform expertise with a proven track record
- GKE/Kubernetes deployment and management experience
- Proficiency in Bash and Python scripting
- Experience with CI/CD pipelines (Cloud Build preferred)
- Knowledge of GCP IAM and security best practices
- Ability to work independently with minimal supervision

Nice to Have:
- Experience developing RESTful APIs for service integration
- Experience with multi-tenant architectures
- Node.js/Docker containerization experience
- Helm chart creation and management

Deliverables (2-Month Timeline)

Month 1:
- Complete Terraform modules for all GCP resources
- Working prototype of the automated provisioning flow
- Basic IAM and security implementation
- Integration with webhook triggers

Month 2:
- Production-ready deployment with error handling
- Performance optimization (achieve <10 min provisioning)
- Complete documentation and runbooks
- Handover and knowledge transfer

Technical Environment
- Primary tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
- Languages: Bash, Python (automation scripts)
- Orchestration: Cloud Build, Cloud Functions
- Containerization: Docker, Kubernetes, Helm

Ideal Candidate
- Self-starter who can own the entire DevOps scope independently
- Strong problem-solver comfortable with ambiguity
- Excellent time management skills to meet tight deadlines
- Clear communicator who documents their work thoroughly

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of valid Google Cloud certification
- Examples of similar GCP automation projects
- GitHub/GitLab links to relevant Terraform modules (if available)
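The "provisioning status tracking and error handling" bullet implies each customer request moves through a small set of states, with retries allowed only from failure. A sketch of that state machine in Python (the state names and transition table are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical lifecycle for one provisioning request:
# requested -> provisioning -> ready, with failed allowing a retry.
ALLOWED = {
    "requested": {"provisioning"},
    "provisioning": {"ready", "failed"},
    "failed": {"provisioning"},  # retry after an error
    "ready": set(),              # terminal state
}

class ProvisioningRequest:
    """Tracks one customer environment request and rejects
    transitions the lifecycle above does not permit."""

    def __init__(self) -> None:
        self.state = "requested"

    def advance(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

Rejecting illegal transitions up front means a webhook that reports out-of-order events surfaces as a clear error instead of silently corrupting the tracked status.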
Posted 3 days ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

🔧 Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
- Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response.
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.
🧰 Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site Reliability Engineering practices
- Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
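The SLIs/SLOs responsibility above usually comes down to error-budget math: an availability SLO allows a fixed fraction of failures per window, and alerting tracks how much of that allowance is left. A small sketch of the arithmetic (the SLO target and request counts are made-up examples):

```python
def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Fraction of the error budget still unspent for an availability SLO.

    slo    -- target success ratio for the window, e.g. 0.999
    total  -- total requests observed in the window
    failed -- failed requests observed in the window
    """
    budget = (1.0 - slo) * total  # failures the SLO permits this window
    if budget == 0:
        return 0.0 if failed else 1.0  # a 100% SLO has no budget at all
    return max(0.0, 1.0 - failed / budget)

# A 99.9% SLO over 100,000 requests permits 100 failures;
# with 25 failures observed, three quarters of the budget remains.
remaining = error_budget_remaining(0.999, 100_000, 25)
```

Burn-rate alerts are then just thresholds on how fast this number drops, which is less noisy than paging on individual failures.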
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role
Mandatory skills: Java, Spring, Spring Boot, Microservices, and cloud.

Responsibilities
- Expertise in Identity MFA concepts (OKTA, Daon, Ping, etc.)
- Helm / Kubernetes / Docker / containerization
- OpenShift Platform

Required Skills
- Java, Spring, Spring Boot
- Microservices, cloud
- Identity MFA concepts (OKTA, Daon, Ping, etc.)
- Helm, Kubernetes, Docker, containerization
- OpenShift Platform
Posted 3 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, plus healthcare-specific FHIR database configurations.
  - Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply strong knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly Bonus
- Gratuity
Posted 3 days ago
6.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
Remote
Job Title: Java/J2EE & Microservices Tech Lead (Development & Support) with DevOps Expertise
Location: India (Remote)
Skills: Java (6+ years), Angular (4+ years), SQL (4+ years)

Job Description: Experienced Java/J2EE and Microservices Tech Lead with 8+ years of experience to lead the development and support activities of enterprise-level applications. The candidate will lead a team of developers, ensure high-quality deliverables, and implement DevOps practices to streamline deployment and operational processes. The role involves technical leadership, stakeholder communication, and hands-on involvement in coding and designing scalable, resilient microservices architectures.

Roles and Responsibilities:

Technical Leadership:
• Lead, mentor, and manage a team of Java/J2EE developers in designing, developing, and maintaining microservices-based applications.
• Provide technical guidance on architecture, best practices, coding standards, and code reviews.
• Design and develop microservices and define APIs for seamless integration.
• Ensure proper data management, fault tolerance, and scalability in microservices design.
• Implement CI/CD pipelines and automate deployment processes.

Development Activities:
• Design and develop scalable, efficient, and secure microservices using Java, J2EE, Spring Boot, and related frameworks.
• Experience in front-end technologies; Angular required.
• Collaborate with product managers and business analysts to translate requirements into technical solutions.
• Guide the team on, and review, work packet estimation.

DevOps & Automation:
• Implement DevOps best practices including version control, automated builds, automated testing, continuous integration, and deployment.
• Manage and monitor application performance, availability, and security.
• Use tools like Jenkins, Docker, Kubernetes, Helm, and OpenShift to streamline development, testing, and deployment workflows.
Support Activities:
• Lead application support, troubleshooting, and issue resolution to ensure high system reliability and availability.
• Maintain and improve existing legacy applications and participate in refactoring efforts.
• Establish and oversee monitoring solutions for batch job executions and application health and performance.
• Proactively identify, diagnose, and resolve issues related to batch processing and application performance.
• Implement alerting and reporting mechanisms to ensure timely responses to failures or performance degradation.

Continuous Improvement Initiatives:
• Promote a culture of continuous improvement by analyzing system metrics, incident reports, and user feedback.
• Identify opportunities for process automation, performance optimization, and quality enhancements.
• Lead or contribute to technical workshops, training sessions, and updated documentation to foster ongoing skill development.

Stakeholder Collaboration:
• Communicate effectively with technical and non-technical stakeholders about project progress, technical challenges, and risk mitigation.
• Participate in project planning, estimation, and retrospectives.

Quality and Compliance:
• Enforce code quality and security standards through automated testing and code reviews.
• Ensure compliance with organizational and regulatory policies.

Required Skills and Qualifications:
• Proven experience in Java and J2EE technologies.
• Experience in Spring Boot, Kubernetes and Docker, OpenShift, and Angular.
• Strong knowledge of microservices architecture, REST APIs, and containerization (Docker, Kubernetes).
• Hands-on experience with the OpenShift platform.
• Solid SQL skills and experience working with relational databases.
• Extensive DevOps experience including CI/CD pipelines, automation, and monitoring tools.
• Experience in application and batch job monitoring practices.
• Leadership and team management skills.
• Excellent problem-solving and communication skills.
• Experience working in projects that follow Agile methodology.
• Experience handling production support projects.
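The batch-job monitoring and alerting duties described above often start with a simple statistical threshold: flag a run whose duration is far above the historical mean. A sketch of that rule in Python (the k=3 cutoff and sample runtimes are illustrative assumptions, not a product API):

```python
from statistics import mean, stdev

def is_anomalous(history: list, runtime: float, k: float = 3.0) -> bool:
    """Flag a batch-job runtime more than k standard deviations above
    the historical mean -- a basic thresholding rule for alerting on
    batch executions. `history` holds past runtimes in the same unit.
    """
    if len(history) < 2:
        return False  # stdev needs >= 2 samples; too little data to judge
    mu, sigma = mean(history), stdev(history)
    return runtime > mu + k * sigma

# Hypothetical nightly-job runtimes in minutes: stable around 10.
nightly = [10, 11, 9, 10, 10, 11, 9, 10]
```

In practice a percentile-based or seasonally adjusted threshold is more robust, but the mean-plus-k-sigma rule is the usual first cut and is cheap to compute per run.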
Posted 3 days ago
5.0 years
0 Lacs
India
On-site
Role Summary
Join our product team as a Back End Engineer building end-to-end features, including .NET Core microservices running on Kubernetes. You'll work in short, iterative cycles, shipping to production multiple times per week.

Key Responsibilities
- Develop REST/GraphQL endpoints, background jobs, and integrations in .NET Core / C#.
- Write automated tests (unit, integration, e2e) to maintain a high quality bar.
- Containerize services with Docker; deploy and monitor in Kubernetes (AKS).
- Optimize data access across SQL Server and NoSQL stores (Redis, Cosmos DB, Azure Search).
- Collaborate closely with designers, product managers, and DevOps to deliver production-ready increments.

Required Qualifications
- 5+ years of professional software development.
- Solid backend proficiency in .NET Core / C# (web APIs, background services, EF Core).
- Hands-on experience deploying containerized applications to Linux-based environments.
- Familiarity with Kubernetes concepts (pods, services, Helm, CI/CD).
- Working knowledge of caching/NoSQL solutions (Redis, Cosmos DB, etc.).

Nice to Have
- Exposure to workflow engines (Temporal, Camunda, Durable Functions).
- Azure PaaS experience (Functions, App Service, API Management).
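The "optimize data access across SQL Server and NoSQL stores" bullet typically means the cache-aside pattern: read the cache first, fall back to the primary store on a miss, then populate the cache. Although this role is .NET-based, the pattern is language-agnostic; here is a minimal Python sketch where plain dicts stand in for Redis and the database:

```python
def get_user(user_id: str, cache: dict, db: dict) -> str:
    """Cache-aside read. `cache` and `db` are plain dicts standing in
    for Redis and SQL Server purely for illustration; a real version
    would also set a TTL and handle misses in the primary store.
    """
    if user_id in cache:
        return cache[user_id]   # cache hit: skip the database entirely
    value = db[user_id]         # cache miss: read the source of truth
    cache[user_id] = value      # populate so the next read is a hit
    return value
```

One consequence worth knowing: after the first read, updates to the store are invisible until the cached entry is evicted or invalidated, which is why writes usually delete or overwrite the cache key as well.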
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for a talented Python QA Automation Engineer with expertise in cloud technologies, specifically Google Cloud Platform (GCP). As a Python QA Automation Engineer, you will be responsible for designing, implementing, and maintaining automated testing frameworks to ensure the quality and reliability of software applications deployed on GCP. This role requires a strong background in Python programming, QA automation, and cloud-based environments. You will collaborate with internal teams to solve complex problems in quality and development, while gaining a deep understanding of networking and access technologies in the Cloud. Your responsibilities will include leading or contributing to engineering efforts, from planning to execution, to address engineering challenges effectively. To be successful in this role, you should have 4 to 8 years of experience in test development and automation tools development. You will design and build advanced automated testing frameworks, tools, and test suites. Proficiency in GoLang programming, experience with Google Cloud Platform, Kubernetes, Docker, Helm, Ansible, and building internal tools are essential. Additionally, you should have expertise in backend testing, creating test cases and test plans, and defining optimal test suites for various testing scenarios. Experience in CI/CD pipelines, Python programming, Linux environments, PaaS and/or SaaS platforms, and the Hadoop ecosystem is advantageous. A solid understanding of computer science fundamentals and data structures is required. Excellent communication and collaboration skills are necessary for effective teamwork. Benefits of joining our team include a competitive salary and benefits package, talent development opportunities, exposure to cutting-edge technologies, and various employee engagement initiatives. 
We are committed to fostering diversity and inclusion in the workplace, offering hybrid work options, flexible hours, and accessible facilities for employees with disabilities. If you are ready to accelerate your growth professionally and personally, impact the world with innovative technologies, and thrive in a diverse and inclusive environment, join us at Persistent. Unlock your full potential and embark on a rewarding career journey with us.
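The responsibility of "defining optimal test suites for various testing scenarios" can be illustrated with tag-based selection: run only the test cases whose tags overlap the areas a change touches. A small sketch (test names and tags are invented for the example):

```python
def select_suite(cases: dict, changed_tags: set) -> list:
    """Pick the test cases whose tags intersect the areas touched by a
    change -- a simple form of scenario-driven test-suite selection.
    `cases` maps test name -> set of feature tags it covers.
    """
    return sorted(name for name, tags in cases.items() if tags & changed_tags)

# Hypothetical test inventory for a small service.
cases = {
    "test_login": {"auth"},
    "test_upload": {"storage", "api"},
    "test_billing": {"billing"},
}
suite = select_suite(cases, {"api", "auth"})
```

Real selection frameworks derive the tag-to-change mapping from coverage data or build graphs, but the intersection step above is the core of the idea.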
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As an Ignition Application Administrator at EY, you will be a key member of the Enterprise Services Data team. Your role will involve collaborating closely with peer platform administrators, developers, Product/Project Seniors, and Customers to administer the existing analytics platforms. While focusing primarily on Ignition, you will also be cross-trained on other tools such as Qlik Sense, Tableau, PowerBI, SAP Business Objects, and more. Your willingness to tackle complex problems and find innovative solutions will be crucial in this role. In this position, you will have the opportunity to work in a start-up-like environment within a Fortune 50 company, driving digital transformation and leveraging insights to enhance products and services. Your responsibilities will include installing and configuring Ignition, monitoring the platform, troubleshooting issues, managing data source connections, and contributing to the overall data platform architecture and strategy. You will also be involved in integrating Ignition with other ES Data platforms and Business Unit installations. To succeed in this role, you should have at least 3 years of experience in customer success or a customer-facing engineering capacity, along with expertise in large-scale implementations and complex solutions environments. Experience with Linux command line, cloud operations, Kubernetes application deployment, and cloud platform architecture is essential. Strong communication skills, both interpersonal and written, are also key for this position. Ideally, you should hold a BA/BS Degree in technology, computing, or a related field, although relevant work experience may be considered in place of formal education. The position may require flexibility in working hours, including weekends, to meet deadlines and fulfill application administration obligations. 
Join us at EY and contribute to building a better working world by leveraging data, technology, and your unique skills to drive innovation and growth for our clients and society.
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As an SMO OSS Integration Consultant with 4 to 8 years of experience based in Bangalore, you will be expected to demonstrate the following expertise:

Technical Expertise:
- Strong knowledge of SMO platforms and their integration with OSS systems.
- Familiarity with OSS functions such as inventory management, fault management, and performance monitoring.
- Hands-on experience with O-RAN interfaces such as A1, E2, and O1.

Protocols and Standards:
- In-depth knowledge of 5G standards, including 3GPP, O-RAN, and TM Forum.
- Familiarity with protocols such as HTTP/REST APIs, NETCONF, and YANG.

Programming and Scripting:
- Proficiency in Python, Bash, or similar languages for scripting and automation.
- Experience with AI/ML frameworks and their application in network optimization.

Tools and Platforms:
- Experience with tools such as Prometheus, Grafana, Kubernetes, Helm, and Ansible for monitoring and deployment.
- Familiarity with cloud-native deployments, including OpenShift, AWS, and Azure.

If you meet the requirements above and are prepared to contribute effectively in a fast-paced environment, please contact Bala at bala@cssrecruit.com for further details.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
Job Description: As a Senior DevOps Engineer at LUMIQ, you will be a crucial part of our team, responsible for designing and implementing DevOps solutions for both internal and external projects. Your primary focus will be on creating robust DevOps processes that improve efficiency, scalability, and automation across the project landscape. Working at the intersection of automation, system reliability, and CI/CD pipelines, you will ensure secure, scalable, and efficient deployment processes, drive continuous improvement in operational workflows, and contribute to delivering high-performance solutions for our customers.

You should have sound knowledge of Linux system and networking internals and experience with container runtimes. You will work with cloud providers such as AWS, GCP, or Azure, and need hands-on experience with Infrastructure as Code (IaC) tools like Terraform. A solid understanding of running applications in container environments, Dockerizing applications, and orchestrating them with Kubernetes, Helm, and service mesh deployments is essential. Experience with open-source tooling, configuration management, CI/CD deployment on cloud platforms or Jenkins, and server-side scripting in Bash, Python, or similar languages is required. Strong analytical, troubleshooting, and problem-solving skills, along with effective communication, collaboration, and documentation skills, are necessary for success in this role.

Qualifications for this position include a Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred), and at least 3 years of experience in infrastructure, DevOps, or platform roles, with a thorough understanding of the software development lifecycle and infrastructure engineering best practices.
While not mandatory, preferred skills include exposure to databases, data analytics, and warehousing services on the cloud; familiarity with architectural and systems design best practices; experience in Professional Services; knowledge of the BFSI domain; and an understanding of Data, Snowflake, AI, and ML concepts. Joining LUMIQ gives you the opportunity to work in an entrepreneurial culture and experience a startup environment. You will be part of a collaborative and innovative workplace and receive a competitive salary package, group medical policies, equal employment opportunities, maternity leave, and opportunities for upskilling and exposure to the latest technologies. In addition, 100% sponsorship for certifications is provided to support your professional growth.
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Platform Support Engineer in Pune, India, your primary responsibility will be to ensure the successful design, implementation, and resolution of issues within the system. You should possess expertise in AWS, Azure, Kubernetes, and Terraform, along with proficiency in managing tools like Docker, RabbitMQ, and PostgreSQL and scripting in Bash and Python.

Your role will involve working closely with AWS and Azure environments, as well as supporting technologies such as Terraform, Helm, and Kubernetes. You will be responsible for both the design and implementation of systems to effectively address product issues and resolve cloud- or hardware-related problems. Furthermore, you will be involved in the development of platform and installation tools using a combination of Bash and Python.

The platform you will be working on utilizes services like Kubernetes, Docker, RabbitMQ, PostgreSQL, and various cloud-specific services. This position offers you the opportunity to gain in-depth knowledge and experience in managing these technologies. With a minimum of 4 years of experience, you will contribute to the seamless operation and continuous improvement of the platform while staying up to date on emerging trends and best practices in platform support engineering.
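Postings like this one call for building platform tooling in Bash and Python around monitoring and troubleshooting. As an illustrative sketch (not taken from any employer's codebase), the building block such tooling often starts from is a health-check helper with retries and exponential backoff; the probe callable and timing values here are hypothetical:

```python
import time

def check_with_retries(probe, attempts=3, delay=0.1):
    """Return True if probe() succeeds within the given number of attempts.

    probe is any zero-argument callable that returns True on a healthy
    response and False (or raises) on failure. Waits delay * 2**attempt
    between tries (exponential backoff).
    """
    for attempt in range(attempts):
        try:
            if probe():
                return True
        except Exception:
            pass  # treat an exception as a failed probe
        if attempt < attempts - 1:
            time.sleep(delay * (2 ** attempt))
    return False

if __name__ == "__main__":
    # A simulated flaky service that succeeds on the third call.
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        return calls["n"] >= 3
    print(check_with_retries(flaky, attempts=5, delay=0))  # True
```

In real use the probe would wrap an HTTP request or a database ping; the retry loop stays the same.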
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry. WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product- and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap.

As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at effective communication with senior management while also being hands-on with the code to deliver effective solutions. The technical environment you will work in includes C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elastic Search, Grafana Dashboard, and Docker.

Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers, who will handle it confidentially in compliance with privacy and data protection laws.
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Zerodha Fund House, you'll have the incredible opportunity to join a team of world-class engineers, designers, and finance professionals with diverse backgrounds and skills. As a DevOps team member, you'll play a vital role in building the next generation of investment products for millennials. We're looking for a passionate and proactive undergraduate, preferably graduating in 2024, with experience in DevOps. You'll work on cutting-edge technologies to deliver a delightful investing experience to our users. If you're eager to learn and grow, and excited to be part of a team that's changing the way people invest, we encourage you to apply.

Responsibilities
- Help the team research and evaluate new DevOps tools and technologies to improve our development and deployment process.
- Monitor and troubleshoot infrastructure and applications.
- Document and share DevOps knowledge with the team.
- Contribute to the DevOps community by writing blog posts, giving presentations, and participating in open-source projects.

Requirements
- Strong Linux and networking fundamentals.
- Web development concepts: server architecture and cloud computing.
- Basic knowledge of DevOps tools and technologies, such as GitOps and CI/CD.

Good to have
- Interest (and/or experience) in the financial/stock market space.
- Familiarity with tools like ArgoCD, ArgoWorkflow, AWS CDK, CDK8s, Terraform, Helm, and Cloudformation.
- Familiarity with AWS Infra (EC2, ELB, ALB, S3, VPC, Cloudfront, Route53).
- Familiarity with databases (MongoDB, Postgres, Redis) and scripting (bash, python).
Posted 4 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description

Requirements:
- Bachelor's/Master's degree in Computer Science, Information Technology, or a related field
- 5-7 years of experience in a DevOps role
- Strong understanding of the SDLC and experience working on fully Agile teams
- Proven coding and scripting experience: Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts
- Working experience with IaC tools such as Terraform, CloudFormation, or ARM templates
- Strong experience with cloud computing platforms (e.g. Oracle Cloud (OCI), AWS, Azure, Google Cloud)
- Experience with containerization technologies (e.g. Docker, Kubernetes/EKS/AKS)
- Experience with continuous integration and delivery tools (e.g. Jenkins, GitLab CI/CD)
- Kubernetes: experience managing Kubernetes clusters and using kubectl for Helm chart deployments, ingress services, and troubleshooting pods
- OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating systems, storage (block and object), and networking (VPCs, proxies, and CDNs)
- Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, plus Slack/PagerDuty/Sentry integrations
- Strong knowledge of modern distributed version control systems (e.g. Git, GitHub, GitLab)
- Strong troubleshooting and problem-solving skills, and the ability to work well under pressure
- Excellent communication and collaboration skills, and the ability to lead and mentor junior team members

Career Level - IC3

Responsibilities:
- Design, implement, and maintain automated build, deployment, and testing systems
- Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud
- Containerize applications: create Docker containers and push them to an artifact repository for deployment on OKE (Oracle Container Engine for Kubernetes) using Helm charts
- Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems
- Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues
- Support and troubleshoot cloud deployment and environment issues
- Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD
- Continuously improve the scalability and security of our systems, and lead efforts to implement best practices
- Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations
- Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats
- Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement
- Lead and mentor junior DevOps engineers, and collaborate with cross-functional teams to ensure successful delivery of projects
- Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications

As a member of the software engineering division, you will analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools; build and execute unit tests and unit test plans; review integration and regression test plans created by QA; and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical and business skills in the area of specialization. You will be a leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area and 6+ years of software engineering or related experience are required.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
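The containerize-and-deploy responsibility described in this posting (build a Docker image, push it to an artifact registry, deploy to OKE via a Helm chart) can be sketched as a single pipeline step. The sketch below only assembles the commands such a step would run, using standard `docker` and `helm` flags; the registry path, chart layout, and values keys (`image.repository`, `image.tag`) are illustrative assumptions, not Oracle's actual pipeline:

```python
import shlex

def build_pipeline_commands(app, version, registry, chart_dir, namespace):
    """Return the command sequence a build-and-deploy stage would run:
    build the image, push it to the artifact registry, then upgrade the
    Helm release so the cluster pulls the new tag."""
    image = f"{registry}/{app}:{version}"
    return [
        ["docker", "build", "-t", image, "."],
        ["docker", "push", image],
        ["helm", "upgrade", "--install", app, chart_dir,
         "--namespace", namespace,
         "--set", f"image.repository={registry}/{app}",
         "--set", f"image.tag={version}"],
    ]

if __name__ == "__main__":
    # Hypothetical OCI registry path and chart directory.
    for cmd in build_pipeline_commands(
            "orders-api", "1.4.2", "iad.ocir.io/mytenancy", "./chart", "prod"):
        print(shlex.join(cmd))
```

In a real pipeline these lists would be passed to `subprocess.run(cmd, check=True)` (or their shell equivalents in a Jenkinsfile or GitLab CI job); building them as argument lists avoids shell-quoting bugs.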
Posted 4 days ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
Working at Tech Holding provides you with an opportunity to be part of a full-service consulting firm dedicated to delivering high-quality solutions and predictable outcomes to clients. Our team, comprising industry veterans with experience in both emerging startups and Fortune 50 firms, has developed a unique approach based on deep expertise, integrity, transparency, and dependability.

We are currently seeking a Cloud Architect with at least 9 years of experience to assist in building functional systems that enhance customer experience. Your responsibilities will include:

Monitoring & Observability:
- Setting up and configuring Datadog and Grafana for comprehensive system metric monitoring and visualization.
- Developing alerting systems to proactively identify and resolve potential issues.
- Integrating monitoring tools with applications and infrastructure to ensure high observability.

CI/CD:
- Implementing and managing CI/CD pipelines using GitHub Actions, EKS, and Helm to automate build, test, and deployment processes.
- Optimizing build times and deployment frequency to expedite development cycles.
- Ensuring adherence to best practices for code quality, security, and compliance.

Cloud Infrastructure:
- Designing and overseeing the migration of Azure infrastructure to AWS with a focus on leveraging best practices and cloud-native technologies.
- Managing and optimizing AWS and Azure environments, including cost management, resource allocation, and security.
- Implementing and maintaining infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation.

Incident Management:
- Implementing and managing incident response processes for efficient detection, response, and resolution of incidents.
- Collaborating with development, operations, and security teams to identify root causes and implement preventative measures.
- Maintaining incident response documentation and conducting regular drills to enhance readiness.
Migration:
- Leading the migration of ECS services to EKS while ensuring minimal downtime and data integrity.
- Optimizing EKS clusters for performance and scalability.
- Implementing best practices for container security and management.

CDN Management:
- Managing and optimizing the Akamai CDN solution to deliver content efficiently.
- Configuring CDN settings for caching, compression, and security.
- Monitoring CDN performance and troubleshooting issues.

Technology Stack:
- Proficiency in Python or Go for scripting and automation.
- Experience with Mux Enterprise for reporting, monitoring, and alerting.
- Familiarity with relevant technologies and tools such as Kubernetes, Docker, Ansible, and Jenkins.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Minimum of 7 years of experience in DevOps or a similar role.
- Strong understanding of cloud platforms (AWS and Azure) and their services.
- Expertise in Python or Go and monitoring/observability tools (Datadog, Grafana).
- Proficiency in CI/CD pipelines and tools (GitHub Actions, EKS, Helm).
- Experience with infrastructure as code (IaC) tools (Terraform, AWS CloudFormation).
- Knowledge of containerization technologies (Docker, Kubernetes).
- Excellent problem-solving, troubleshooting, and communication skills.
- Ability to work independently and collaboratively within a team.

Employee benefits include flexible work timings, work-from-home options as needed, a family insurance policy, various leave benefits, and opportunities for learning and development.
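The alerting work this posting describes (Datadog/Grafana alerts that surface issues proactively) usually reduces to a rule evaluated over recent metric samples. A minimal "M of N" sketch, which avoids paging on a single noisy spike; the latency numbers and thresholds are made up for illustration:

```python
from collections import deque

def breached(samples, threshold, window, min_count):
    """Return True when at least min_count of the last `window` samples
    exceed threshold -- an "M of N" rule that tolerates isolated spikes
    but fires on a sustained breach."""
    recent = deque(samples, maxlen=window)  # keep only the newest window
    return sum(1 for s in recent if s > threshold) >= min_count

if __name__ == "__main__":
    latencies_ms = [120, 130, 580, 610, 595, 640]
    # 4 of the last 5 samples exceed 500 ms, so the rule fires.
    print(breached(latencies_ms, threshold=500, window=5, min_count=3))  # True
```

Monitoring platforms express the same idea declaratively (e.g. "alert when p95 latency > 500 ms for 3 of the last 5 evaluation periods"); having the rule as code is mainly useful for testing alert logic before wiring it into the platform.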
Posted 4 days ago
0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Are you the DevOps engineer who believes in building systems that are not only resilient but also future-proof? If you relish the challenge of transforming chaotic legacy infrastructures into sleek, automated ecosystems, we want you. We're on the hunt for an AWS infrastructure virtuoso who thrives under pressure, ensuring our acquired products run like clockwork with 99.9% uptime. Your mission? To migrate diverse product stacks into a unified, scalable AWS environment, complete with robust monitoring and automation to minimize incidents. While most roles focus on maintaining a singular tech stack, we're looking for someone eager to consolidate and refine multiple stacks, enhancing efficiency without missing a beat. If you're not interested in merely sustaining existing systems but are passionate about redesigning fragmented environments into cohesive ones, this is your calling. You'll be at the helm of infrastructure transformations, from AI-driven automation and performance tuning to database migrations and cost optimization. This includes troubleshooting, executing seamless cloud migrations with minimal downtime, and automating half your tasks using AI/ML workflows. You'll wield genuine decision-making power, free from bureaucratic delays. If you're a proactive problem-solver who thrives on refining complex systems to achieve impeccable performance, this role offers the autonomy and challenge you crave. But if you prefer predictable projects or require constant guidance, this might not be the right fit. Ready to own a high-impact infrastructure role with opportunities for large-scale optimization and automation? Apply today! 
What You Will Be Doing
- Orchestrating intricate infrastructure transformations, including migrating legacy systems to AWS cloud and executing lift-and-shift migrations
- Crafting comprehensive monitoring strategies and automating deployments and operational workflows
- Engaging in system monitoring, backups, incident response, database migrations, configurations, and cost optimization

What You Won’t Be Doing
- Being bogged down by Jira or endless status meetings: we prioritize solution-driven individuals over mere problem trackers
- Prolonging the life of obsolete systems: you'll have the mandate to enact substantial enhancements
- Getting tangled in bureaucratic approval processes: you'll have the autonomy to implement immediate fixes
- Limiting yourself to narrow technical specialties: this role demands wide-ranging expertise
- Struggling for budget for essential upgrades: we recognize the critical nature of infrastructure investments

DevOps Engineer Key Responsibilities
Enhance the reliability and standardization of our cloud infrastructure across a diverse product portfolio by implementing effective monitoring and automation and adhering to AWS best practices.

Basic Requirements
- Extensive expertise in AWS infrastructure (our primary platform; experience in other clouds won't suffice)
- Proficient programming skills in Python or JavaScript for automation and tool development
- Proven experience in managing and migrating production databases with various engines (including MySQL, Postgres, Oracle, MS SQL)
- Advanced skills in Docker/Kubernetes
- Proficiency in infrastructure automation tools (Terraform, Ansible, or CloudFormation)
- Expertise in Linux systems administration

About Trilogy
Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for three things: relentlessly seeking top talent, innovating new technology, and incubating new businesses.
Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5236-IN-Jaipur-DevOpsEngineer.004
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Lead / Staff Software Engineer on the Black Duck SRE team, you will play a key role in transforming our R&D products through the adoption of advanced cloud, containerization, microservices, modern software delivery, and other cutting-edge technologies. You will be a key member of the team, working independently to develop tools and scripts and to automate provisioning, deployment, and monitoring. The position is based in Bangalore (near Dairy Circle Flyover) with a hybrid work mode.

Key Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-7 years of experience in Site Reliability Engineering / DevOps Engineering.
- Strong hands-on experience with containerization and orchestration using Docker, Kubernetes (K8s), and Helm; able to secure, optimize, and scale K8s.
- Deep understanding of cloud platforms and services on AWS / GCP / Azure (preferably GCP); able to optimize cost, security, and performance.
- Solid experience with Infrastructure as Code (IaC) using Terraform / CloudFormation / Pulumi (preferably Terraform); able to write modules and manage state.
- Proficient in scripting and automation using Bash, Python, or Golang, including task automation and error handling.
- Experienced in CI/CD pipelines and GitOps using Git / GitHub / GitLab / Bitbucket / ArgoCD / Harness.io; able to implement GitOps for deployments.
- Strong background in monitoring and observability using Prometheus / Grafana / ELK Stack / Datadog / New Relic; able to configure alerts and analyze trends.
- Good understanding of networking and security (firewalls, VPN, IAM, RBAC, TLS, SSO, Zero Trust); able to implement IAM, TLS, and logging.
- Experience with backup and disaster recovery using Velero, snapshots, and DR planning; able to implement backup solutions.
- Basic understanding of messaging concepts using RabbitMQ / Kafka / Pub/Sub / SQS.
- Familiarity with configuration management using Ansible / Chef / Puppet / SaltStack; able to run existing playbooks.
Key Responsibilities:
- Design and develop scalable, modular solutions that promote reuse and are easily integrated into our diverse product suite.
- Collaborate with cross-functional teams to understand their needs and incorporate user feedback into development.
- Establish best practices for modern software architecture, including microservices, serverless computing, and API-first strategies.
- Drive the strategy for containerization and orchestration using Docker, Kubernetes, or equivalent technologies.
- Ensure the platform's infrastructure is robust, secure, and compliant with industry standards.

What We Offer:
- An opportunity to be part of a dynamic and innovative team committed to making a difference in the technology landscape.
- Competitive compensation package, including benefits and flexible work arrangements.
- A collaborative, inclusive, and diverse work environment where creativity and innovation are valued.
- Continuous learning and professional development opportunities to grow your expertise within the industry.
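The backup and disaster recovery qualification above (Velero, snapshots, DR planning) typically involves a retention policy that decides which snapshots to keep and which to prune. A minimal daily-plus-weekly rotation sketch; the counts and dates are illustrative, not any specific tool's policy:

```python
import datetime as dt

def snapshots_to_keep(snapshot_dates, daily=7, weekly=4):
    """Given snapshot dates, return the set to retain: the newest `daily`
    days of snapshots, plus the newest snapshot in each of the most recent
    `weekly` ISO weeks -- a simple daily/weekly rotation scheme."""
    dates = sorted(set(snapshot_dates), reverse=True)
    keep = set(dates[:daily])
    weeks_seen = []
    for d in dates:
        wk = d.isocalendar()[:2]  # (ISO year, ISO week)
        if wk not in weeks_seen:
            weeks_seen.append(wk)
            if len(weeks_seen) <= weekly:
                keep.add(d)  # newest snapshot of that week
    return keep

if __name__ == "__main__":
    today = dt.date(2024, 3, 1)
    dates = [today - dt.timedelta(days=i) for i in range(30)]
    kept = snapshots_to_keep(dates)
    print(len(kept))  # 9: 7 daily + 2 extra weekly representatives
```

Everything outside the returned set is a pruning candidate; a real DR setup would also verify that retained snapshots are actually restorable before deleting anything.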
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a DevOps Operational Support engineer on this project, you will have the opportunity to contribute to the data management architecture of industry-leading software. You will work closely with cross-functional teams and regional experts to design, implement, and support solutions with a focus on data security and global availability, enabling data-driven decisions for our customers. This is your chance to work on a stable, long-term project with a global client, focusing on digital transformation and change management.

Exciting opportunities await as you work in squads under our customer's direction, using Agile methodologies and Scrum. You will contribute to an innovative application that guides and documents the sales order process, aids in market analysis, and ensures competitive pricing, and be part of a team that integrates digital and human approvals while ensuring seamless integration with a broader ecosystem of applications. You will collaborate with reputed global clients, delivering top-notch solutions, and join high-caliber project teams with front-end, back-end, and database developers that offer ample opportunities to learn, grow, and advance your career. If you possess strong technical skills, effective communication abilities, and a commitment to security, we want you on our team! Ready to make an impact? Apply now and be part of our journey to success!

Responsibilities:
- Solve operational challenges: work with global teams to find creative solutions for customers across our software catalog.
- Customer deployments: plan, provision, and configure enterprise-level solutions for customers on a global scale.
- Monitoring and troubleshooting: monitor customer environments to proactively identify and resolve issues, and provide support for incidents.
- Automation: leverage and maintain automation pipelines that handle all stages of the software lifecycle.
- Documentation: write and maintain documentation for processes, configurations, and procedures.
- SRE & MTTR goals: lead the team in troubleshooting environment failures within SRE MTTR goals.
- Collaborate closely with stakeholders to define project requirements and deliverables and understand their needs and challenges.
- Implement best practices: ensure the highest standards in coding and security, with a strong emphasis on protecting systems and data.
- Strategize and plan: take an active role in defect triage, strategy, and architecture planning.
- Maintain performance: ensure database performance and resolve development problems.
- Deliver quality: translate requirements into high-quality solutions, adhering to Agile methodologies.
- Review and validate: conduct detailed design reviews to ensure alignment with approved architecture.
- Collaborate: work with application development teams throughout development, deployment, and support phases.
Mandatory Skills:

Technical Skills:
- Database technologies: RDBMS (Postgres preferred), NoSQL (Cassandra preferred)
- Software languages: Java, Python, NodeJS, Angular
- Cloud platforms: AWS
- Cloud managed services: messaging, serverless computing, blob storage
- Provisioning (Terraform, Helm)
- Containerization (Docker; Kubernetes preferred)
- Version control: Git

Qualifications and Soft Skills:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Customer-driven and result-oriented focus
- Excellent problem-solving and troubleshooting skills
- Ability to work independently and as part of a team
- Strong communication and collaboration skills
- Strong desire to stay up to date with the latest trends and technologies in the field

Nice-to-Have Skills:
- Cloud technologies: RDS, Azure
- Knowledge of the E&P domain (Geology, Geophysics, Well, Seismic, or Production data types)
- GIS experience is desirable

Languages: English: C2 Proficient
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You will be responsible for applying DevOps best practices to optimize the software development process. This includes system administration and the design, construction, and operation of container platforms such as Kubernetes, as well as expertise in container technologies like Docker and their management systems. Your role will also involve working with cloud-based monitoring, alerting, and observability solutions, and requires in-depth knowledge of developer workflows with Git. Additionally, you will be expected to document processes, procedures, and best practices, and to demonstrate strong troubleshooting and problem-solving skills.

Your proficiency in network fundamentals, firewalls, and ingress/egress patterns, as well as experience in security configuration management and DevSecOps, will be crucial for this position. You should have hands-on experience with Linux, CI/CD tools (pipelines, GitHub, GitHub Actions/Jenkins), Configuration Management/Infrastructure as Code tools like CloudFormation and Terraform, and cloud technologies such as VMware, AWS, and Azure.

Your responsibilities will also include build automation, deployment configuration, and enabling product automation scripts to run in CI. You will design, develop, integrate, and deploy CI/CD pipelines, and collaborate closely with developers, project managers, and other teams to analyze requirements and resolve software issues. Moreover, your ability to lead the development of infrastructure using open-source technologies like Elasticsearch and Grafana, and in-house tools built with React and Python, will be highly valued.
Minimum Qualifications:
- Graduate/Master's degree in Computer Science, Engineering, or a related discipline
- 5 to 10 years of overall DevOps-related experience
- Good written and verbal communication skills
- Ability to manage and prioritize multiple tasks while working both independently and within a team
- Knowledge of software test practices, software engineering, and cloud technologies
- Knowledge of or working experience with static code analysis, license check tools, and other development process improvement tools

Desired Qualifications:
- Minimum 4 years of working experience with AWS, Kubernetes, Helm, and Docker-related technologies
- Providing visibility into cloud spending and usage across the organization
- Generating and interpreting reports on cloud expenditure, resource utilization, and usage optimization
- Network fundamentals: AWS VPC, AWS VPN, firewalls, and ingress/egress patterns
- Knowledge of or experience with embedded Linux and RTOS (e.g. ThreadX, FreeRTOS) development on ARM-based projects
- Domain knowledge of cellular wireless and WiFi is an asset
- Knowledge of distributed systems, networking, AMQP/MQTT, Linux, cloud security, and Python
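The cloud-spend visibility items above (reports on expenditure and resource utilization) start with rolling raw billing line items up by service. A minimal sketch, assuming line items arrive as (service, cost) pairs; the service names and amounts are made up:

```python
from collections import defaultdict

def cost_by_service(line_items):
    """Roll up billing line items -- (service, cost) pairs, as a cloud cost
    export might provide -- into per-service totals, sorted descending so
    the biggest spenders surface first."""
    totals = defaultdict(float)
    for service, cost in line_items:
        totals[service] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    items = [("EC2", 120.0), ("S3", 14.5), ("EC2", 30.0), ("EKS", 75.25)]
    for service, total in cost_by_service(items):
        print(f"{service:8} ${total:,.2f}")
```

Real cost exports carry many more dimensions (account, region, tags), but the same group-and-sort shape extends to any of them by changing the grouping key.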
Posted 4 days ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior Sales Manager (Future Sales Director)
Location: Remote (Australia-based hours / day shift)
Salary Range: 25 to 30 Lakhs INR (+ UNCAPPED MONTHLY SALES INCENTIVE)
Employment Type: Full-Time

About Us
GetmyCourse (GMC) isn't just another education company: we're one of the AUSTRALIAN FINANCIAL REVIEW's fastest-growing education-based companies in Australia! 🏆 Read all about it: GMC AFR Feature. We're led by Peter Cox, founder and CEO of Leadership Dynamics, a Forbes-featured leader with multiple TED Talks under his belt. 🏆 Check him out here: Peter Cox in Forbes. Joining him at the helm are two powerhouse names in the Australian RTO business: Darshan Chavan and Rejin Rajan, both TEDx speakers with a knack for inspiring success. 🏆 Get inspired: Darshan on Happiness, Rejin on Crushing Worry. In just the 5 years since the pandemic, we've skyrocketed with 200% growth, and we're not stopping anytime soon! 🏆 With over six years of hiring hundreds of staff in the Philippines and further expansion into other markets and regions, GMC is a name you can count on. Want a sneak peek at our culture? We're almost 5-star rated on Glassdoor! ⭐⭐⭐⭐⭐ See what our team says: Glassdoor Reviews

The Role
We're seeking a Senior Sales Manager (with a clear pathway to Sales Director) who has a solid track record of building, developing, and leading high-performing sales teams. This role starts as a hands-on leadership position and is expected to evolve into overseeing Sales Managers and developing strategic sales initiatives at a director level.

Key Responsibilities
- Lead, coach, and drive performance of the current sales team to exceed targets and KPIs.
- Recruit, train, and mentor new sales members for rapid performance ramp-up.
- Develop and implement effective sales strategies and scalable processes.
- Monitor and analyze sales metrics to identify areas for improvement and optimization.
- Collaborate with leadership on sales forecasting, goal setting, and strategic planning.
- Drive a culture of accountability, motivation, and continuous improvement.
- Build a strong sales pipeline through effective team management and performance visibility.
- Lead innovative, game-changing sales development initiatives and prepare to step into a Sales Director role, leading Sales Managers and overseeing the entire sales division.

Job Requirements / Qualifications
- Proven experience (5+ years) in senior-level sales leadership roles (Senior Sales Manager, Head of Sales, or Sales Director), preferably in a BPO, B2B, or B2C sales environment.
- Prior experience in B2B/B2C sales and working with Australian clients and/or other international markets required.
- Demonstrated ability to build, scale, and lead a high-performing sales team.
- Strong coaching and mentoring skills, especially for early-stage sales talent.
- Data-driven decision-maker with experience using CRM tools and reporting dashboards.
- Excellent communication, strategic thinking, and leadership presence.
- Ease and effectiveness working in a remote setup.

Compensation & Perks
- 25 to 30 Lakhs INR
- Uncapped sales incentive structure (average ₹302,871+/month)
- Upward path to a Sales Director role
- Leadership autonomy and a high-impact role in business growth
- Strong company culture focused on results, innovation, and performance
- Permanent work-from-home / WFA flexibility
- AU morning shift ☀️ (Goodbye, graveyard shifts!)
- Exclusive trainings and seminars with top industry leaders
- HMO in your first year of tenure
- Quarterly 5-star hotel incentive
- Yearly international travel incentives! ✈️ (Watch our Thailand and Vietnam trips here!)

Join Us
If you're a strategic sales leader ready to take your career to the next level and help shape the future of a scaling business, we want to meet you. Your next big leadership challenge starts here. 🏆 Ready to lead? Apply now! 🏆 Send your CV to marygrace.p@getmycourse.com.au
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Network Operations Center (NOC) Analyst at Inspire Brands, you will oversee all technology aspects of the organization. Your primary role is to act as the main technology expert for the NOC team, detecting and resolving production issues before they impact large-scale operations. You will ensure that the services provided by the Inspire Digital Platform (IDP) meet user needs for reliability, uptime, and continuous improvement, and you will play a crucial role in delivering an outstanding customer experience by establishing service level agreements that align with the business model.

On the technical side, you will develop and monitor dashboards to identify problems related to applications, infrastructure, and potential security incidents, and provide operational support for multiple large, distributed software applications. Deep troubleshooting skills are essential for improving availability, performance, and security to ensure 24/7 operational readiness. You will conduct thorough postmortems on production incidents to evaluate business impact and help the Engineering team learn from them.

You will also create dashboards and alerts for monitoring the platform, define key metrics and service level indicators, and ensure relevant metric data is collected so that actionable alerts reach the responsible teams. Participation in the 24/7 on-call rotation and automation of tasks to streamline application deployment and third-party tool integration will be crucial. Finally, you will analyze major incidents, collaborate with other teams to find permanent solutions, and establish and publish regular KPIs and metrics measuring performance, stability, and customer satisfaction.
In terms of qualifications, you should hold a 4-year degree in Computer Science, Information Technology, or a related field, and have at least 5 years of experience in a production support role, specifically supporting large-scale SaaS B2C or B2B cloud platforms, with a strong background in problem-solving and troubleshooting. You should also possess knowledge and skills across technologies such as Java, TypeScript, Python, Azure cloud services, monitoring tools like Splunk and Prometheus, containers, Kubernetes, Helm, cloud networking, and firewalls. Overall, this role requires strong technical expertise, effective communication skills, and a proactive approach to ensuring the smooth operation of Inspire Brands' technology infrastructure.
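To make the alerting work described above concrete, an availability-style alert in Prometheus might be sketched as follows. This is a minimal sketch under assumptions: the metric name, `service` label, threshold, and durations are illustrative, not taken from the posting.

```yaml
# Hypothetical Prometheus alerting rule: page when the 5-minute error
# ratio of an illustrative "checkout" service exceeds 5% for 10 minutes.
groups:
  - name: availability-sli
    rules:
      - alert: HighErrorRatio
        expr: |
          sum(rate(http_requests_total{service="checkout", status=~"5.."}[5m]))
            /
          sum(rate(http_requests_total{service="checkout"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "checkout error ratio above 5% for 10 minutes"
```

Ratio-based rules like this one tend to produce more actionable alerts than raw error counts, because they stay meaningful as traffic volume changes.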
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greenlight is the leading family fintech company on a mission to help parents raise financially smart kids. We proudly serve more than 6 million parents and kids with our award-winning banking app for families. With Greenlight, parents can automate allowance, manage chores, set flexible spend controls, and invest for their family’s future. Kids and teens learn to earn, save, spend wisely, and invest. At Greenlight, we believe every child should have the opportunity to become financially healthy and happy. It’s no small task, and that’s why we leap out of bed every morning to come to work. Because creating a better, brighter future for the next generation depends on it. Greenlight is looking for a Staff Engineer, Production Operations to join our growing team! As a Staff Engineer, you will be a technical leader and individual contributor within our production operations function. You will be responsible for designing, building, and maintaining highly reliable, scalable, and performant cloud infrastructure and systems. You will play a critical role in driving technical excellence, mentoring junior engineers, and solving our most complex scalability and reliability challenges. 
What you will be doing:
- Lead the design, implementation, and evolution of Greenlight's core cloud infrastructure and SRE practices to ensure high availability, scalability, and performance
- Act as a technical authority for complex SRE and cloud engineering challenges, providing expert guidance and solutions
- Drive significant architectural improvements to enhance system reliability, resilience, and operational efficiency
- Develop, maintain, and optimize our cloud infrastructure using Infrastructure as Code (primarily Terraform) and automation tools
- Collaborate closely with development and security teams to embed SRE principles into the software development lifecycle, promoting secure and reliable coding practices
- Design and implement robust monitoring, logging, and alerting solutions to provide comprehensive visibility into system health
- Participate in and lead incident response, performing deep-dive root cause analysis and driving actionable blameless postmortems to prevent recurrence
- Mentor and provide technical guidance to other SRE and Cloud Engineers, contributing to their growth and the team's overall technical capabilities
- Research, evaluate, and advocate for new technologies and tools that can improve our operational posture and efficiency
- Contribute to the strategic planning and roadmap development for the SRE and Cloud Engineering functions
- Enhance existing services and applications to increase availability, reliability, and scalability in a microservices environment
- Build and improve engineering tooling, processes, and standards to enable faster, more consistent, more reliable, and highly repeatable application delivery

What you should bring:
- Technical Leadership: Lead complex technical projects and mentor engineers
- Communication: Articulate complex technical concepts clearly
- SRE Expertise: Apply SRE principles (SLIs, SLOs, error budgets) in production
- Distributed Systems: Understand and troubleshoot complex issues in distributed systems
- Monitoring & Alerting: Design and optimize monitoring, logging, and alerting systems (e.g., Datadog, Prometheus)
- Cloud Mastery (AWS): Expert-level knowledge of AWS services (e.g., EC2, S3, EKS)
- Infrastructure as Code (Terraform): Master IaC for cloud infrastructure management
- Containerization: Strong experience with Docker and Kubernetes in production
- Automation: Bias for automation and building self-healing systems
- Problem Solving: Exceptional analytical and problem-solving skills, proactively identifying bottlenecks

Technologies we use:
- AWS
- MySQL, DynamoDB, Redis
- GitHub Actions for CI pipelines
- Kubernetes (specifically EKS)
- Ambassador, Helm, Argo CD, Linkerd
- REST, gRPC, GraphQL
- React, Redux, Swift, Node.js, Kotlin, Java, Go, Python
- Datadog, Prometheus

Who we are:
It takes a special team to aim for a never-been-done-before mission like ours. We're looking for people who love working together because they know it makes us stronger, people who look to others and ask, "How can I help?" and then "How can we make this even better?" If you're ready to roll up your sleeves and help parents raise a financially smart generation, apply to join our team.

Greenlight is an equal opportunity employer and will not discriminate against any employee or applicant based on age, race, color, national origin, gender, gender identity or expression, sexual orientation, religion, physical or mental disability, medical condition (including pregnancy, childbirth, or a medical condition related to pregnancy or childbirth), genetic information, marital status, veteran status, or any other characteristic protected by federal, state or local law. Greenlight is committed to an inclusive work environment and interview experience. If you require reasonable accommodations to participate in our hiring process, please reach out to your recruiter directly or email recruiting@greenlight.me.
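As a quick illustration of the SLO and error-budget concepts the role mentions, here is a minimal sketch of how an error budget is computed. The SLO target and request counts are made-up numbers for illustration, not Greenlight figures.

```python
def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo_target: availability objective, e.g. 0.999 for "three nines".
    The error budget is the allowed failure fraction (1 - slo_target);
    serving failed requests spends it.
    """
    if total_requests == 0:
        return 1.0  # no traffic yet, nothing spent
    budget = 1.0 - slo_target                 # allowed failure ratio
    observed_failure_ratio = failed_requests / total_requests
    spent = observed_failure_ratio / budget   # fraction of budget consumed
    return max(0.0, 1.0 - spent)

# With a 99.9% SLO, 1,000,000 requests allow 1,000 failures.
# 250 failures spend a quarter of the budget, leaving 75%.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
print(f"{remaining:.2%}")  # 75.00%
```

Teams typically alert on the rate at which the budget is being spent (burn rate) rather than on individual failures, which keeps paging proportional to user impact.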
Posted 4 days ago