
43 Dockerfile Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Position Description

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: DevOps Engineer
Position: Senior Software Engineer / DevOps Engineer
Experience: 4-8 years
Category: Software Development / Engineering
Shift: Should be flexible to work night shifts
Main location: Mumbai
Position ID: J0625-1137
Employment Type: Full Time
Education Qualification: Any graduate degree in a related field or higher, with a minimum of 3 years of relevant experience.

Position Description: We are seeking a skilled and motivated DevOps Engineer to join our team. The ideal candidate will have hands-on experience in continuous integration, continuous deployment (CI/CD), infrastructure automation, containerization, and cloud-native technologies. You will be responsible for designing, implementing, and maintaining efficient DevOps toolchains and pipelines to support development and operations teams across multiple environments.

Your future duties and responsibilities:
Install, configure, and maintain Jenkins to support continuous integration and end-to-end automation of build and deployment pipelines. Design, develop, and optimize Jenkins CI pipelines to streamline software delivery. Create and manage Docker containers and custom Docker images to ensure consistent application environments across development, QA, and production. Deploy builds and releases across multiple environments including QA, Integration, UAT, and Production. Lead the design, implementation, and maintenance of DevOps infrastructure and toolchains. Implement Infrastructure as Code (IaC) solutions using Terraform, Ansible, or CloudFormation for automated environment provisioning. Support critical application deployments via Rancher on Kubernetes clusters (both on-premises and Azure cloud). Monitor system performance, reliability, and scalability, and optimize as needed. Collaborate with development and operations teams to troubleshoot issues and promote continuous improvement. Ensure security, compliance, and disaster recovery best practices are followed. Mentor and provide technical leadership to junior team members. Stay current with emerging trends and technologies to innovate and improve DevOps processes.

Required qualifications to be successful in this role (preferred qualifications):
Proven experience in Jenkins installation, configuration, and pipeline creation. Strong skills in Docker containerization, Dockerfile creation, and Docker management. Proficiency with Infrastructure as Code tools such as Terraform, Ansible, or CloudFormation. Experience managing Kubernetes clusters and deploying applications using Rancher. Familiarity with scripting languages including Python, Shell/Bash, Groovy, and batch scripts. Experience with version control tools like Git and project management tools like JIRA. Solid understanding of CI/CD best practices, security standards, and compliance requirements. Ability to work independently, solve problems analytically, and lead technical initiatives. Excellent communication skills and experience mentoring junior engineers. Cloud experience with Azure or similar platforms is a plus.

Primary skills: Docker, Kubernetes, Jenkins, Git, Ansible. Web servers: JBoss, Tomcat, etc. Scripting languages: Shell, Python, batch scripts, etc.
Secondary skills: Azure, Terraform, Oracle, SQL Server, MySQL, Groovy, Ansible, etc.

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value: you'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last, supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
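
To give a concrete, hedged illustration of the Jenkins automation this role centres on, the sketch below triggers a parameterised Jenkins job through the Jenkins remote-access API. The server URL, job name, credential environment variables, and parameter names are assumptions for illustration only; depending on the Jenkins configuration, CSRF-crumb handling may also be required.

    # Minimal sketch: trigger a parameterised Jenkins build over the REST API.
    # JENKINS_URL, JOB_NAME, the user/API-token pair, and the parameter names
    # are placeholders for illustration only.
    import os
    import requests

    JENKINS_URL = "https://jenkins.example.com"   # assumed server address
    JOB_NAME = "myapp-build"                      # assumed job name
    AUTH = (os.environ["JENKINS_USER"], os.environ["JENKINS_API_TOKEN"])

    def trigger_build(branch: str, image_tag: str) -> None:
        """Queue a build with two assumed job parameters: BRANCH and IMAGE_TAG."""
        url = f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters"
        resp = requests.post(
            url,
            auth=AUTH,
            params={"BRANCH": branch, "IMAGE_TAG": image_tag},
            timeout=30,
        )
        resp.raise_for_status()
        # Jenkins answers 201 and points at the queue item it created.
        print("Build queued:", resp.headers.get("Location", "<no Location header>"))

    if __name__ == "__main__":
        trigger_build("main", "1.4.2")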

Posted 6 days ago


0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description – DevOps Engineer
We are seeking a highly skilled DevOps Engineer to join our dynamic team. The ideal candidate will have strong expertise in automation, cloud infrastructure (with a focus on AWS and GenAI services), CI/CD, and containerization, along with a deep understanding of security best practices, monitoring, and system optimization. This role requires a balance of technical proficiency, problem-solving, and collaboration skills to ensure smooth deployment, scalability, and reliability of applications and infrastructure.

Key Responsibilities
Design, automate, and manage scalable, secure, and high-availability cloud infrastructure. Implement Infrastructure as Code (IaC) using tools like Terraform or CloudFormation. Develop and maintain CI/CD pipelines with Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Automate routine tasks using Python and shell scripting. Monitor and optimize system performance using Prometheus, Grafana, ELK stack, or AWS CloudWatch. Manage databases (MySQL, PostgreSQL, MongoDB, DynamoDB), including backup, recovery, and performance tuning. Deploy and manage web applications on production environments with Nginx, Apache, or similar servers. Ensure cloud, networking, and server security using IAM, VPC, security groups, and firewalls. Manage source control and team collaboration using Git and branching strategies. Work with containerization and orchestration technologies (Docker, Kubernetes, ECS). Implement disaster recovery, backup, and high-availability strategies. Troubleshoot incidents, perform root cause analysis, and implement preventive measures. Collaborate with cross-functional teams, ensuring effective communication and documentation.

Required Skills & Experience
1. Automation & Infrastructure as Code (IaC): Hands-on experience with Terraform, AWS CloudFormation, or similar. Proficient in automating infrastructure deployment and management tasks. Knowledge of configuration management tools (Ansible, Chef, Puppet).
2. Monitoring & Logging: Experience with monitoring tools (Prometheus, Grafana, ELK Stack, AWS CloudWatch). Ability to set up alerts, dashboards, and audit logs for system health and performance.
3. Cloud Platforms (AWS – must have GenAI services experience): Strong knowledge of AWS services: EC2, S3, RDS, Lambda, Bedrock, OpenSearch, Knowledgebase, IAM, VPC, CodeDeploy, CodePipeline, SQS, etc. Familiar with cloud-native architectures and multi-cloud environments (a plus).
4. Scripting & Automation: Python – scripting, automation, Boto3 for AWS, Flask/Django familiarity (bonus). Shell scripting – strong skills in Bash or similar for deployment and system automation.
5. Database Management: Experience with MySQL, PostgreSQL, MongoDB, DynamoDB. Backup, recovery, performance tuning, and database security best practices.
6. Web Application Deployment & Server Management: Experience with production deployments, web/application servers (Nginx, Apache). Knowledge of reverse proxies, SSL/TLS setup, and security hardening.
7. Security & Networking: Cloud security best practices, IAM management, firewalls, and VPC configurations. Strong understanding of TCP/IP, DNS, HTTP/HTTPS, and load balancer setups.
8. CI/CD & Version Control: Proficient in Git workflows (GitFlow, trunk-based) for multi-team management. Experience with CI/CD pipelines (Jenkins, GitLab CI, AWS CodePipeline). Knowledge of containerization (Docker) and orchestration (Kubernetes, ECS).
9. High Availability & Scaling: Load balancing strategies (AWS ELB, HAProxy), failover planning. Auto-scaling in cloud platforms and performance optimization.
10. Backup, Recovery & Incident Response: Implementation of disaster recovery, redundancy strategies, and system resilience. Troubleshooting, root cause analysis, and preventive measures.
11. Collaboration & Project Management: Strong communication and documentation skills. Ability to collaborate across teams and explain technical concepts to non-technical stakeholders. Familiarity with Agile methodologies (Scrum, Kanban) and tools (Jira, Trello) is a plus.

Preferred/Optional Skills
Dockerfile and Docker Compose creation for multi-container applications. Serverless architecture with AWS Lambda, SQS, SNS. Project management and task prioritization.
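
As a rough sketch of the Python/Boto3 automation and CloudWatch monitoring mentioned above, the snippet below counts EC2 instances failing status checks and publishes the count as a custom CloudWatch metric. The region, namespace, and metric name are assumptions, not taken from the listing.

    # Sketch: check EC2 instance status with Boto3 and publish a custom
    # CloudWatch metric counting instances that are not passing status checks.
    # Region, namespace, and metric name are illustrative assumptions.
    import boto3

    REGION = "ap-south-1"  # assumed region

    def unhealthy_instance_count() -> int:
        ec2 = boto3.client("ec2", region_name=REGION)
        statuses = ec2.describe_instance_status(IncludeAllInstances=True)["InstanceStatuses"]
        return sum(
            1 for s in statuses
            if s["InstanceStatus"]["Status"] != "ok" or s["SystemStatus"]["Status"] != "ok"
        )

    def publish_metric(value: int) -> None:
        cloudwatch = boto3.client("cloudwatch", region_name=REGION)
        cloudwatch.put_metric_data(
            Namespace="Custom/Infra",  # assumed namespace
            MetricData=[{"MetricName": "UnhealthyInstances",
                         "Value": float(value),
                         "Unit": "Count"}],
        )

    if __name__ == "__main__":
        publish_metric(unhealthy_instance_count())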

Posted 1 week ago


1.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job description for an SDE-1 (Software Development Engineer - 1)
Location: Noida
Mode: Work from Office

Key Skills & Technologies
Python: Strong proficiency in Python for backend development. Knowledge of Python frameworks like Django/FastAPI for building high-performance APIs. Knowledge of gRPC server and client implementation. Experience with Django/FastAPI for creating RESTful APIs with high performance and speed. Understanding of asynchronous programming (async/await) for improving scalability and response time.
Docker & Kubernetes: Familiarity with containerization using Docker. Ability to create Docker images, configure a Dockerfile, and deploy applications inside containers. Knowledge of container orchestration using Kubernetes. Understanding of Kubernetes concepts like Pods, Deployments, Services, and ConfigMaps.
Coding Standards: Strong understanding of coding best practices, code reviews, and maintaining high-quality code. Knowledge of tools like linters, formatters, and version control systems (e.g., Git). Ensuring modular, maintainable, and testable code is a priority.
Additional Skills (may vary based on the role): Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MongoDB). Understanding of CI/CD pipelines and automated testing practices. Familiarity with cloud platforms (AWS, GCP, Azure).

Responsibilities
Backend Development: Develop, maintain, and optimize APIs using Python and FastAPI/Django.
Deployment & Scalability: Utilize Docker for containerization and Kubernetes for orchestration, ensuring scalability and high availability of applications.
Collaboration: Work closely with cross-functional teams (e.g., frontend, DevOps) to ensure smooth integration and delivery.
Best Practices: Enforce and adhere to coding standards, write unit tests, and contribute to code reviews.

Level Expectations
For this SDE-1 role, you would be expected to have 1-3 years of experience, demonstrating leadership, problem-solving, and independent development capability.
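
Since the role asks for FastAPI-style REST APIs with asynchronous handlers, here is a minimal, self-contained sketch; the resource name and fields are invented for illustration and are not part of the listing.

    # Minimal FastAPI sketch: an async REST API backed by an in-memory store.
    # Run with: uvicorn main:app --reload   (module name assumed to be main.py)
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        name: str
        price: float

    ITEMS: dict[int, Item] = {}   # stand-in for a real database layer

    @app.post("/items/{item_id}")
    async def create_item(item_id: int, item: Item) -> Item:
        ITEMS[item_id] = item
        return item

    @app.get("/items/{item_id}")
    async def read_item(item_id: int) -> Item:
        if item_id not in ITEMS:
            raise HTTPException(status_code=404, detail="Item not found")
        return ITEMS[item_id]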

Posted 1 week ago


1.5 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job description for an SDE-1 (Software Development Engineer - 1)
Location: Noida
Work Mode: On-site

Key Skills & Technologies
Python: Strong proficiency in Python for backend development. Knowledge of Python frameworks like Django/FastAPI for building high-performance APIs. Knowledge of gRPC server and client implementation. Experience with Django/FastAPI for creating RESTful APIs with high performance and speed. Understanding of asynchronous programming (async/await) for improving scalability and response time.
Docker & Kubernetes: Familiarity with containerization using Docker. Ability to create Docker images, configure a Dockerfile, and deploy applications inside containers. Knowledge of container orchestration using Kubernetes. Understanding of Kubernetes concepts like Pods, Deployments, Services, and ConfigMaps.
Coding Standards: Strong understanding of coding best practices, code reviews, and maintaining high-quality code. Knowledge of tools like linters, formatters, and version control systems (e.g., Git). Ensuring modular, maintainable, and testable code is a priority.
Additional Skills (may vary based on the role): Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MongoDB). Understanding of CI/CD pipelines and automated testing practices. Familiarity with cloud platforms (AWS, GCP, Azure).

Responsibilities
Backend Development: Develop, maintain, and optimize APIs using Python and FastAPI/Django.
Deployment & Scalability: Utilize Docker for containerization and Kubernetes for orchestration, ensuring scalability and high availability of applications.
Collaboration: Work closely with cross-functional teams (e.g., frontend, DevOps) to ensure smooth integration and delivery.
Best Practices: Enforce and adhere to coding standards, write unit tests, and contribute to code reviews.

Level Expectations
For this SDE-1 role, you would be expected to have 1.5-5 years of experience, demonstrating leadership, problem-solving, and independent development capability.
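
For the Docker portion of this listing (building images from a Dockerfile and running containers), a small sketch using the Docker SDK for Python might look like the following; the image tag, port mapping, and the assumption that a Dockerfile sits in the current directory are all illustrative.

    # Sketch: build an image from ./Dockerfile and run it, using the Docker SDK
    # for Python (the "docker" package). Tag and port mapping are assumptions.
    import docker

    def build_and_run(tag: str = "myapp:dev", host_port: int = 8000):
        client = docker.from_env()
        image, build_logs = client.images.build(path=".", tag=tag, rm=True)
        for chunk in build_logs:               # stream the build output
            if "stream" in chunk:
                print(chunk["stream"], end="")
        container = client.containers.run(
            tag,
            detach=True,
            ports={"8000/tcp": host_port},     # container port assumed to be 8000
        )
        print("Started container:", container.short_id)
        return container

    if __name__ == "__main__":
        build_and_run()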

Posted 1 week ago


0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description – Role: DevOps Engineer
We are seeking a highly skilled DevOps Engineer to join our dynamic team. The ideal candidate will have strong expertise in automation, cloud infrastructure (with a focus on AWS and GenAI services), CI/CD, and containerization, along with a deep understanding of security best practices, monitoring, and system optimization. This role requires a balance of technical proficiency, problem-solving, and collaboration skills to ensure smooth deployment, scalability, and reliability of applications and infrastructure.

Key Responsibilities
Design, automate, and manage scalable, secure, and high-availability cloud infrastructure. Implement Infrastructure as Code (IaC) using tools like Terraform or CloudFormation. Develop and maintain CI/CD pipelines with Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Automate routine tasks using Python and shell scripting. Monitor and optimize system performance using Prometheus, Grafana, ELK stack, or AWS CloudWatch. Manage databases (MySQL, PostgreSQL, MongoDB, DynamoDB), including backup, recovery, and performance tuning. Deploy and manage web applications on production environments with Nginx, Apache, or similar servers. Ensure cloud, networking, and server security using IAM, VPC, security groups, and firewalls. Manage source control and team collaboration using Git and branching strategies. Work with containerization and orchestration technologies (Docker, Kubernetes, ECS). Implement disaster recovery, backup, and high-availability strategies. Troubleshoot incidents, perform root cause analysis, and implement preventive measures. Collaborate with cross-functional teams, ensuring effective communication and documentation.

Required Skills & Experience
Automation & Infrastructure as Code (IaC): Hands-on experience with Terraform, AWS CloudFormation, or similar. Proficient in automating infrastructure deployment and management tasks. Knowledge of configuration management tools (Ansible, Chef, Puppet).
Monitoring & Logging: Experience with monitoring tools (Prometheus, Grafana, ELK Stack, AWS CloudWatch). Ability to set up alerts, dashboards, and audit logs for system health and performance.
Cloud Platforms (AWS – must have GenAI services experience): Strong knowledge of AWS services: EC2, S3, RDS, Lambda, Bedrock, OpenSearch, Knowledgebase, IAM, VPC, CodeDeploy, CodePipeline, SQS, etc. Familiar with cloud-native architectures and multi-cloud environments (a plus).
Scripting & Automation: Python – scripting, automation, Boto3 for AWS, Flask/Django familiarity (bonus). Shell scripting – strong skills in Bash or similar for deployment and system automation.
Database Management: Experience with MySQL, PostgreSQL, MongoDB, DynamoDB. Backup, recovery, performance tuning, and database security best practices.
Web Application Deployment & Server Management: Experience with production deployments, web/application servers (Nginx, Apache). Knowledge of reverse proxies, SSL/TLS setup, and security hardening.
Security & Networking: Cloud security best practices, IAM management, firewalls, and VPC configurations. Strong understanding of TCP/IP, DNS, HTTP/HTTPS, and load balancer setups.
CI/CD & Version Control: Proficient in Git workflows (GitFlow, trunk-based) for multi-team management. Experience with CI/CD pipelines (Jenkins, GitLab CI, AWS CodePipeline). Knowledge of containerization (Docker) and orchestration (Kubernetes, ECS).
High Availability & Scaling: Load balancing strategies (AWS ELB, HAProxy), failover planning. Auto-scaling in cloud platforms and performance optimization.
Backup, Recovery & Incident Response: Implementation of disaster recovery, redundancy strategies, and system resilience. Troubleshooting, root cause analysis, and preventive measures.
Collaboration & Project Management: Strong communication and documentation skills. Ability to collaborate across teams and explain technical concepts to non-technical stakeholders. Familiarity with Agile methodologies (Scrum, Kanban) and tools (Jira, Trello) is a plus.

Preferred/Optional Skills
Dockerfile and Docker Compose creation for multi-container applications. Serverless architecture with AWS Lambda, SQS, SNS. Project management and task prioritization.

(ref: hirist.tech)
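
For the serverless items in the preferred skills (AWS Lambda with SQS), a minimal Python handler could look like the sketch below; the message payload shape is an invented example.

    # Sketch of an AWS Lambda handler consuming messages from an SQS trigger.
    # The payload shape ({"order_id": ...}) is an invented example.
    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def lambda_handler(event, context):
        processed = 0
        for record in event.get("Records", []):
            body = json.loads(record["body"])
            logger.info("Processing order %s", body.get("order_id"))
            # ... business logic would go here ...
            processed += 1
        return {"statusCode": 200, "body": json.dumps({"processed": processed})}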

Posted 2 weeks ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site

CI/CD (Continuous Integration/Delivery/Deployment)
The core requirements for the job include the following.
Tools: Jenkins, GitHub Actions, GitLab CI, CircleCI, ArgoCD, Spinnaker. Concepts: pipeline design (build, test, deploy), blue-green/canary deployments, rollbacks and artifact versioning, GitOps practices.

Infrastructure as Code (IaC)
Tools: Terraform, Pulumi, AWS CloudFormation, Ansible, Helm. Skills: writing modular IaC code, secret and state management, policy enforcement (OPA, Sentinel), DRY patterns and IaC testing (e.g., Terratest).

Cloud Platforms
Platforms: AWS, Azure, GCP, OCI. Skills: VPC/networking setup, IAM policies, managed services (EKS, GKE, AKS, RDS, Lambda), billing, cost control, tagging governance, cloud automation with CLI/SDKs.

Containerization and Orchestration
Tools: Docker, Podman, Kubernetes, OpenShift. Skills: Dockerfile optimization, multi-stage builds, Helm charts, Kustomize, K8s RBAC, admission controllers, pod security policies, service mesh (Istio, Linkerd).

Security and Compliance
Tools: HashiCorp Vault, AWS Secrets Manager, Aqua, Snyk. Practices: image scanning and runtime protection, least-privilege access models, network policies, TLS enforcement, audit logging, and compliance automation.

Observability and Monitoring
Tools: Prometheus, Grafana, ELK stack, Datadog, New Relic. Skills: metrics, tracing, log aggregation, alerting thresholds and SLOs, distributed tracing (Jaeger, OpenTelemetry).

Reliability and Resilience Engineering
Concepts and tools: SRE practices, error budgets, chaos engineering (Gremlin, LitmusChaos), auto-scaling, self-healing infrastructure, Service Level Objectives (SLO/SLI).

Platform Engineering (DevEx Focused)
Tools: Backstage, internal developer portals, Terraform Cloud. Practices: golden paths and reusable blueprints, self-service pipelines, developer onboarding automation, a Platform as a Product mindset.

Source Control and Collaboration
Tools: Git, Bitbucket, GitHub, GitLab. Practices: branching strategies (Git Flow, trunk-based), code reviews, merge policies, commit signing, and DCO enforcement.

Scripting and Automation
Languages: Bash, Python, Go, PowerShell. Skills: writing CLI tools, cron jobs and job runners, ChatOps and automation bots (Slack, MS Teams).

This job was posted by Bhavya Chauhan from CloudTechner.
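
As one small illustration of the Kubernetes-facing scripting this profile describes, the hedged sketch below uses the official Kubernetes Python client to flag deployments whose ready replica count has fallen below the desired count; the namespace and kubeconfig handling are assumptions.

    # Sketch: report deployments with fewer ready replicas than desired, using
    # the official Kubernetes Python client. The namespace is an assumption;
    # the script expects a local kubeconfig (as kubectl would use).
    from kubernetes import client, config

    def degraded_deployments(namespace: str = "default"):
        config.load_kube_config()          # or config.load_incluster_config()
        apps = client.AppsV1Api()
        degraded = []
        for dep in apps.list_namespaced_deployment(namespace).items:
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0
            if ready < desired:
                degraded.append((dep.metadata.name, ready, desired))
        return degraded

    if __name__ == "__main__":
        for name, ready, desired in degraded_deployments():
            print(f"{name}: {ready}/{desired} replicas ready")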

Posted 2 weeks ago


8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Barco
Barco designs technology to enable bright outcomes around the world. Seeing beyond the image, we develop visualization and collaboration solutions to help you work together, share insights, and wow audiences. Our focus is on three core markets: Enterprise (from meeting and control rooms to corporate spaces), Healthcare (from the radiology department to the operating room), and Entertainment (from movie theaters to live events and attractions). We have a team of 3,600 employees, located in 90 countries, whose passion for technology is captured in 400 granted patents. As part of the GEAX organization, the software product development group at Barco, our vision is to be a world-class software team partnering with our businesses to offer successful software solutions and outcomes that delight our customers and set the trend in our dynamic markets.

About the Image Processing Group @ Barco
The demand for extremely high-resolution, video-based installations that impress audiences with an exceptional visual experience is growing. That's why digital image processing, screen management, and show control are key. Barco provides event professionals and themed venue owners with a very complete range of image processing software and hardware solutions. Unbridled creativity and ultimate ease of use: that's what our processors are all about. The power and flexibility can be used in different applications, from live shows to meeting environments and from auditoriums to television studios. The only limit is your imagination.

About the Role
As a Senior Engineer (QA), you will work in a fast-paced, collaborative environment responsible for architecting, designing, developing, and testing cutting-edge on-premise and connected cloud applications. You will be responsible for authoring test plans and for designing, automating, and executing different functional and non-functional test scenarios. You will need to collaborate, as needed, with the entire geographically distributed team. You will be the overall owner of product quality.

Key Responsibilities
Perform system, regression, and performance testing to ensure delivery of a high-quality system. Develop effective test strategies and test plans using tools like PTC Integrity. Coach colleagues for effectiveness in test strategy and processes. Collaborate with developers in a geographically distributed environment. Collaborate with the global test team to ensure timely, high-quality system releases. Create test strategies and execute manual tests. Execute test cases in local/QA environments and document results. Raise, track, and validate bugs in JIRA. Write test cases in PractiTest based on requirement analysis. Estimate efforts for testing tasks and prepare test/release plans. Develop and maintain automated scripts (Playwright, TypeScript). Manage CI/CD pipelines (Jenkins, GitHub Actions) for test execution. Analyze automation reports and proactively fix failing tests. Share test progress, risks, and blockers with stakeholders.

Preferred Skills and Experience
B.Tech./B.E./M.Tech in computer science or an equivalent stream. 8-15 years of experience working in an R&D environment. Excellent interpersonal and communication skills. Excellent team player. Willingness to learn new skills and work on stretch goals.

Must-Have Skills
Proficiency in test automation using Playwright and TypeScript/JavaScript. Hands-on experience with CI/CD tools (Jenkins, GitHub Actions). Expertise in test case design, execution, and defect tracking (PractiTest, Testuff, JIRA). Strong requirement analysis skills for epics/stories to derive test scenarios. Ability to test in local and QA environments (cross-browser/cross-platform). Experience maintaining automation frameworks and fixing flaky tests (e.g., outdated locators). Skill in effort estimation for test case creation and automation tasks. Strong analytical, communication, and documentation skills for reports/release plans. Collaboration experience with product teams for requirement refinement. Expertise in testing REST APIs. Ability to learn new languages and technologies. Experience with Linux concepts. Good understanding of release processes. Good team player who can work with cross-cultural, geographically distributed teams. Critical thinking and problem-solving skills. Strong mentoring and coaching skills. Self-motivated, result-oriented, autonomous worker. Working experience with geographically distributed teams, preferably in Europe and the US. Experience with Agile development methodology (Scrum). Continuous integration through Jenkins. Open to travel for short durations.

Nice-to-Have Skills
Domain knowledge in the entertainment industry (e.g., image processing, projection mapping). Knowledge of Jenkins master-slave configurations. Familiarity with additional frameworks (Cypress, Selenium) or cloud platforms (AWS, Azure). Basic understanding of performance/security testing. Strong proficiency in Python, Selenium and OOP concepts, JMeter, JavaScript. Good experience in testing microservices. Very good experience in testing SaaS-based products deployed on AWS/Azure cloud platforms. Experience with how Docker works and creating images using a Dockerfile / Docker Compose. Node.js, Angular.js. Knowledge of networking concepts. Embedded domain experience.

D&I Statement
At Barco, innovation drives everything we do. We believe that diversity fuels creativity, bringing us closer to our colleagues and customers. Inclusion and equity aren't just values; they're core capabilities that propel us toward our shared goals and mission.
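
The role's automation stack is Playwright with TypeScript; purely as an illustrative analogue, and in Python (the one language used for sketches on this page), a comparable browser smoke test with Playwright's Python API is shown below. The URL and expected title fragment are placeholders.

    # Sketch: a browser smoke test with Playwright's Python API (the role itself
    # asks for Playwright + TypeScript; this is only an illustrative analogue).
    # URL and expected title text are placeholders.
    from playwright.sync_api import sync_playwright

    def smoke_test(url: str = "https://example.com", expected: str = "Example"):
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(url)
            title = page.title()
            browser.close()
        assert expected in title, f"unexpected title: {title!r}"
        print("Smoke test passed:", title)

    if __name__ == "__main__":
        smoke_test()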

Posted 2 weeks ago


0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Support Coverage: 24x7. Location: Noida.

Scope of Work
The scope includes full lifecycle management and operations of OpenShift infrastructure (Kubernetes): container lifecycle management (creation, deployment, health checks, updates); CI/CD pipeline management; Dockerfile and image management; incident/service/change/problem management; OS patching and node administration; PV/PVC backup and restore; IAM and container registry management.

Client/OEM responsibilities include: application deployment and container image development; network design (HLD/LLD) and certificate procurement; cluster provisioning, registration, and CMDB onboarding; backup configuration and monitoring setup (Prometheus, Zabbix); RBAC, CRD, and LDAP integration; routine patching, update validation, and vulnerability remediation; cluster scaling, BIOS/firmware updates, and CMDB maintenance; CPU/memory/disk/IO health tracking; cluster/operator/service log analysis; alerts and automated remediation; OEM case logging and escalation; SLA-compliant incident resolution and RCA reporting; scheduled patching, cluster backups, and vulnerability fixes; capacity planning dashboards; DR documentation, SOPs, and RPO/RTO assurance; admin access compliance (RBAC, syslog, NTP, ILO, etc.); server resource release and secure OS image deletion; rebuilds from backup if needed; audit participation, IDR data reporting, and NC closure tracking. (ref: hirist.tech)
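
Day-to-day cluster operations such as the log analysis and automated remediation listed above often start with small scripts; the hedged sketch below shells out to kubectl (oc behaves similarly on OpenShift) and flags pods that are crash-looping or restarting heavily. The namespace and restart threshold are assumptions.

    # Sketch: flag restarting / crash-looping pods by parsing
    # `kubectl get pods -o json`. Namespace and restart threshold are
    # illustrative assumptions; on OpenShift, `oc` works the same way.
    import json
    import subprocess

    def problem_pods(namespace: str = "production", max_restarts: int = 5):
        out = subprocess.run(
            ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        flagged = []
        for pod in json.loads(out)["items"]:
            for cs in pod.get("status", {}).get("containerStatuses", []):
                waiting = cs.get("state", {}).get("waiting", {}).get("reason")
                if cs.get("restartCount", 0) > max_restarts or waiting == "CrashLoopBackOff":
                    flagged.append((pod["metadata"]["name"], cs["name"],
                                    cs.get("restartCount", 0), waiting))
        return flagged

    if __name__ == "__main__":
        for pod, container, restarts, reason in problem_pods():
            print(f"{pod}/{container}: restarts={restarts}, waiting={reason}")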

Posted 2 weeks ago


4.0 years

8 - 12 Lacs

India

On-site

Mode: Work from Office, 5 days working, Noida Sector 62. Interview Mode: 2 virtual rounds. Experience: 4+ years. Budget: up to 12 LPA, hike on last salary. Qualification: B.Tech/BE/MCA/MBA only.

Job Title: MySQL/PostgreSQL DBA – 4-6 Years' Experience (Linux Environment)

Role Overview
We're seeking an experienced Database Administrator with 4+ years of hands-on expertise in MySQL and PostgreSQL on Linux, who can design, deploy, secure, monitor, and maintain complex database landscapes, including Redis and containerized setups, to ensure high availability, performance, and disaster-recovery readiness.

Key Responsibilities
Install, configure, and harden MySQL and PostgreSQL instances on Linux (CentOS/Ubuntu). Configure PostgreSQL replication (streaming, logical) and MySQL replication (master-slave, group replication, MySQL Cluster). Design and implement backup strategies (mysqldump, pg_basebackup, WAL shipping) and automate recovery drills. Deploy and tune monitoring tools (Grafana, Nagios) to track performance and trigger alerts. Configure and manage PgPool-II for connection pooling, failover, and high availability. Enforce password-rotation policies for DB users and implement access controls, SSL/TLS, audit logging, and firewall rules. Install and configure Redis (on Linux and Docker), set up Sentinel for failover, and configure Redis Cluster (sharding). Implement backup, log rotation, and restore procedures for Redis, PostgreSQL, and MySQL. Manage log rotation using logrotate or custom scripts. Develop and document disaster recovery plans with RPO/RTO targets and conduct failover drills. Build and maintain Docker images for MySQL, PostgreSQL, Redis, and PgPool; integrate DB services into CI/CD pipelines.

Required Skills & Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of DBA experience with MySQL (5.7/8.0+) and PostgreSQL (10+). Strong Linux administration skills (Bash scripting, system tuning, package management). Expertise in replication technologies for MySQL and PostgreSQL. Proficiency with backup tools (pg_basebackup, WAL archiving). Experience with monitoring solutions like Grafana and Nagios. Knowledge of high-availability tools (PgPool-II, ProxySQL). Hands-on experience with Redis Sentinel, Redis Cluster, AOF/RDB persistence, and log management. Experience implementing security best practices, including encryption, access control, and auditing. Containerization skills (Dockerfile creation, Docker Compose). Disaster recovery planning and execution, including RPO/RTO definition and drills. Strong analytical, troubleshooting, communication, and collaboration skills.

Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Benefits: Health insurance, paid sick time, paid time off, Provident Fund
Application Questions: What is your last working day (LWD)? What is your expected CTC?
Experience: MySQL/PostgreSQL: 3 years (preferred). Linux: 1 year (preferred). Redis, MariaDB: 1 year (preferred).
Work Location: In person
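
Backup automation of the kind described above usually ends up as a thin wrapper around the native tools; here is a hedged Python sketch around mysqldump with date-stamped files and simple retention. The database name, backup directory, and use of the MYSQL_PWD environment variable are assumptions (an option file such as ~/.my.cnf is the more usual way to supply credentials), and pg_dump could be wrapped the same way.

    # Sketch: nightly logical backup of one MySQL database with mysqldump,
    # date-stamped files, and N-day retention. DB name, paths, and credential
    # handling are assumptions.
    import datetime
    import os
    import subprocess
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/mysql")    # assumed location
    DATABASE = "appdb"                         # assumed database name
    RETENTION_DAYS = 7

    def run_backup() -> Path:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        target = BACKUP_DIR / f"{DATABASE}-{stamp}.sql"
        env = {**os.environ, "MYSQL_PWD": os.environ["DB_PASSWORD"]}  # assumed env var
        with target.open("w") as fh:
            subprocess.run(
                ["mysqldump", "--single-transaction", "-u", "backup_user", DATABASE],
                stdout=fh, env=env, check=True,
            )
        return target

    def prune_old_backups() -> None:
        cutoff = datetime.date.today() - datetime.timedelta(days=RETENTION_DAYS)
        for dump in BACKUP_DIR.glob(f"{DATABASE}-*.sql"):
            day = datetime.date.fromisoformat(dump.stem[len(DATABASE) + 1:])
            if day < cutoff:
                dump.unlink()

    if __name__ == "__main__":
        print("Wrote", run_backup())
        prune_old_backups()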

Posted 2 weeks ago


4.0 years

12 Lacs

Noida

On-site

Job Title: MySQL/PostgreSQL DBA – 4-6 Years' Experience (Linux Environment)
Work from office, 5 days a week. Job location: Noida Sector 62 (on-site).

Role Overview
We're seeking an experienced Database Administrator with 4+ years of hands-on expertise in MySQL and PostgreSQL on Linux, who can design, deploy, secure, monitor, and maintain complex database landscapes, including Redis and containerized setups, to ensure high availability, performance, and disaster-recovery readiness.

Key Responsibilities
Install, configure, and harden MySQL and PostgreSQL instances on Linux (CentOS/Ubuntu). Configure PostgreSQL replication (streaming, logical) and MySQL replication (master-slave, group replication, MySQL Cluster). Design and implement backup strategies (mysqldump, pg_basebackup, WAL shipping) and automate recovery drills. Deploy and tune monitoring tools (Grafana, Nagios) to track performance and trigger alerts. Configure and manage PgPool-II for connection pooling, failover, and high availability. Enforce password-rotation policies for DB users and implement access controls, SSL/TLS, audit logging, and firewall rules. Install and configure Redis (on Linux and Docker), set up Sentinel for failover, and configure Redis Cluster (sharding). Implement backup, log rotation, and restore procedures for Redis, PostgreSQL, and MySQL. Manage log rotation using logrotate or custom scripts. Develop and document disaster recovery plans with RPO/RTO targets and conduct failover drills. Build and maintain Docker images for MySQL, PostgreSQL, Redis, and PgPool; integrate DB services into CI/CD pipelines.

Required Skills & Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of DBA experience with MySQL (5.7/8.0+) and PostgreSQL (10+). Strong Linux administration skills (Bash scripting, system tuning, package management). Expertise in replication technologies for MySQL and PostgreSQL. Proficiency with backup tools (pg_basebackup, WAL archiving). Experience with monitoring solutions like Grafana and Nagios. Knowledge of high-availability tools (PgPool-II, ProxySQL). Hands-on experience with Redis Sentinel, Redis Cluster, AOF/RDB persistence, and log management. Experience implementing security best practices, including encryption, access control, and auditing. Containerization skills (Dockerfile creation, Docker Compose). Disaster recovery planning and execution, including RPO/RTO definition and drills. Strong analytical, troubleshooting, communication, and collaboration skills.

Job Types: Full-time, Permanent
Pay: Up to ₹1,200,000.00 per year
Benefits: Health insurance, paid sick time, paid time off, Provident Fund
Application Questions: Mention your LWD (Last Working Day). What is your expected CTC?
Experience: PostgreSQL/MySQL: 3 years (preferred). Linux: 2 years (preferred). Redis, MongoDB: 1 year (preferred).
Work Location: In person

Posted 2 weeks ago


5.0 - 9.0 years

12 - 22 Lacs

Noida, Kolkata, Pune

Hybrid

Please find the JD below: AWS, EKS, OpenSearch, DynamoDB, Terraform, Dockerfile, CI/CD, Datadog, Helm.
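
The stack above includes DynamoDB; as a hedged illustration, a minimal Boto3 write-then-read against a table might look like this. The table name, key, and attributes are invented.

    # Sketch: put and get a single DynamoDB item with Boto3. Table name
    # ("orders"), key ("order_id"), and attributes are illustrative assumptions.
    import boto3

    def put_and_get(order_id: str) -> dict:
        table = boto3.resource("dynamodb").Table("orders")
        table.put_item(Item={"order_id": order_id, "status": "NEW", "qty": 3})
        return table.get_item(Key={"order_id": order_id}).get("Item", {})

    if __name__ == "__main__":
        print(put_and_get("ord-1001"))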

Posted 3 weeks ago


7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a DevOps Engineer cum Linux System Administrator on Amazon AWS who can design, build, deploy, maintain, monitor, and cost-effectively scale the LAMP environments of our web applications as needed. The person should also be able to maintain and extend our AWS, Puppet, Git, and Jenkins CI environment and our containerized local dev and QA environments. The person must be a team player, working closely with the DevOps team members and the developers.

Experience: 7 years as a Linux System Administrator, with the most recent 3 years in Amazon AWS, Apache HTTP Server, Puppet, Jenkins, Docker, and building automation scripts using Bash and the AWS CLI.

Responsibilities
You will be responsible for designing, configuring, deploying, administering, monitoring, analysing, and supporting cloud-based (IaaS/PaaS) application services and systems. You will work alongside developers to deploy our software and systems in the QA and production environments. You will be continuously improving and automating the environment setup using Puppet code. You will be managing and extending our Jenkins-based CI/CD environment and maintaining our Git repository. You will be continuously monitoring the servers for load assessment and security risks, suggesting appropriate and timely recourse and rectification. You will be supporting the developers in running Docker on their Ubuntu workstations, building and managing images for them to use. Analysing logs such as alert and trace files, syslog, auth.log, mail.log, Apache logs, and AWS logs (ELB, ALB, CloudTrail logs, RDS logs, VPC Flow Logs, etc.). Linux updates and upgrades, security patches, etc. Backup and restore, log rotation and purging, snapshots and purging, etc. Automating and documenting server maintenance tasks. Strong knowledge of monitoring tools (Nagios, in particular), including experience in designing and implementing new monitoring checks. Strong troubleshooting and analytical skills, with the ability to comprehend, review, and analyse application logs.

Requirements
7 years of experience working as a Linux system administrator, managing the LAMP stack, with the ability to configure and maintain networks with subnets, load balancers, mail service, users, groups, sudoers, file and directory permissions, port access, firewalls, log files for all services, Secure Sockets Layer (SSL), Secure Shell (SSH) and key-based access, role-based access, crontab, Apache/vhosts configuration, etc. 3 years of experience designing and building LAMP web application environments on AWS services. 2 years of experience with Puppet; experience with other open-source configuration management utilities such as Chef, Salt, etc. will be a plus, and Puppet certification is preferred. Design, develop, and maintain a DevOps process comprising several stages: plan, code, build, test, release, deploy, operate, and monitor. Experience with setting up and maintaining a Git and Jenkins CI environment. Experience with Linux/Unix OS system administration, configuration, troubleshooting, performance tuning, preventative maintenance, and security procedures. Hands-on experience in building VMs and containers using Docker and Docker Compose. Experience with New Relic setup and administration. Experience with MySQL database backup and restore. Experience with setting up OPcache, Varnish, Memcache, and AWS ElastiCache. Experience with Bash scripting for system maintenance tasks. Must have a flair for automation and continuous performance improvement. Must have strong oral and written communication skills and presentation skills, and the ability to self-prioritise tasks. Must be able to maintain a balanced composure in high-stress situations. Must possess the ability to anticipate and mitigate problems proactively. Cloud migration experience will be a plus. Experience with Terraform will be a plus. Knowledge of basic Windows PC maintenance will be a plus.

Expected AWS services setup and configuration skills: Well versed with AWS CLI commands, EC2, VPC, VPC Peering, NAT Gateway, RDS, Route 53, ALB, ELB, Security Groups, IAM permission policies, S3, S3 Lifecycle, Glacier, SNS, SES, SQS, EFS, CloudFront, ElastiCache (Memcached), CloudWatch, CloudTrail, CloudFormation, Auto Scaling, Athena, ECS, Trusted Advisor, Certificate Manager. Experience with or knowledge of additional AWS services is a plus. Certification preferred.

Expected software and services installation and configuration skills: LAMP, Puppet, HAProxy, Docker (set up a Dockerfile, build Docker images, set up docker-compose, YAML, etc.), Jenkins, MySQL 5.5/5.6/5.7, MySQL backup, Apache 2.4, PHP 5.5, 5.6, 7.2, 7.4, 8.1, PHP-FPM, AWS CLI, NFS, Varnish, SOLR, ZooKeeper, Linux system crons, Postfix, pfSense, s6-svscan, OpenSSL, NetBeans, Eclipse, DokuWiki, New Relic, Nagios, OpenVPN, PHP OPcache, Memcache, Node.js, npm, JS frameworks, dnsmasq, Git.

Experience with Linux flavours: Ubuntu/Lubuntu 14.04, 16.04, 18.04 and 20.04. Experience with other Linux flavours will be a plus.

CM tools and programming experience: Bash, Puppet, ERB templates, YAML, Ruby (basic knowledge), SQL (basic knowledge), PHP (basic knowledge), JSON.

Minimum education: Graduate in Computer Science, MCA, or equivalent.
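
Log analysis of the kind required above (Apache access logs, error spotting) is often done with short standard-library scripts; this sketch counts HTTP 5xx responses per request path in a combined-format access log. The log path is a placeholder.

    # Sketch: count HTTP 5xx responses per request path in an Apache
    # combined-format access log. The log file path is a placeholder.
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'"\S+ (?P<path>\S+) \S+" (?P<status>\d{3}) ')

    def count_5xx(logfile: str = "/var/log/apache2/access.log") -> Counter:
        errors = Counter()
        with open(logfile, errors="replace") as fh:
            for line in fh:
                m = LOG_LINE.search(line)
                if m and m.group("status").startswith("5"):
                    errors[m.group("path")] += 1
        return errors

    if __name__ == "__main__":
        for path, hits in count_5xx().most_common(10):
            print(f"{hits:6d}  {path}")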

Posted 3 weeks ago


0 years

0 Lacs

Hyderabad

On-site

Platform Development | Full Time | Hyderabad, India
Number of positions: 2

About BOS
BOS Framework is a cloud infrastructure and DevOps automation platform that enables tech teams to provision, configure, and orchestrate their application and data environments in AWS/Azure with built-in observability, resilience, and compliance, without having to learn IaC or DevOps on the job.

Creating Massive Impact
With BOS, tech-enabled businesses greatly reduce technical debt, assure ongoing 99.99% uptime, gain release cycle efficiencies, and save 30 to 80% of the cost and time that goes into building, migrating, and maintaining cloud environments with fewer tools and resources.

Role & Requirements
Can write container-friendly, scalable, memory-efficient microservice code in .NET Core, with and without a persistence layer, and with and without exposing API endpoints. Excellent database design skills, with knowledge of designing and extending a relational DB schema supporting Microsoft's recommended multi-tenancy architecture. Practical knowledge of using AWS native services in microservice code. Practical knowledge of using a time-series database. Design, review, and extend a scalable, highly dynamic relational database model. Query execution plans, indexing, sharding. Can write a secure .NET API. .NET Core Minimal API. Hands-on experience with Redis, Dapr, EF Core, IDataAccess, database connection pools, API caching, multi-stage Dockerfiles, Docker Compose, and Docker Desktop. Securing API endpoints, JWT, API versioning, EF Core code-first and DB-first. Garbage collector: impact on performance, workstation vs. server configuration. Multiple app settings. Secrets: GitHub Secrets, Key Vault (Azure), Secrets Manager (AWS). Cron jobs, schedulers, shared compute with APIs. Scaling APIs with a distributed cache. Linux (shell). Certificates: SSL, self-signed, Let's Encrypt certs, auto-renewal. Open-source licenses and their liabilities.

Benefits
100% company-paid comprehensive medical insurance for you, your spouse, and children. Paid time off. Market-competitive total compensation package. Paid maternity and parental leave. Your voice is heard; no matter your level, we're a team, all going in the same direction.

Core Values
Customer First – Putting customers at the heart: We place our clients at the forefront, responding to their needs with respect and efficiency. Our growth is intertwined with our customers' success.
Walk the Talk – Integrity in action: Our words and actions align, fostering trust through transparency and long-term commitment. We embrace courage and honesty for the greater good.
Team Spirit – Unity in diversity: We champion collaboration across departments and locations, creating win-win situations and extending our team spirit to include our clients. Together, we find strength in unity.
Excellence – Pursuit of perfection: Our journey is marked by a relentless drive to surpass our achievements, embracing each day as an opportunity to excel further.
Drive Innovation – Innovative mindset: We stay ahead of global tech trends, challenging the status quo with audacity and delivering cutting-edge solutions that drive growth.
Outcome-Focused – Results-driven approach: We prioritize impactful solutions and maintain a balance between visionary objectives and immediate achievements, ensuring practicality in our pursuit of excellence.
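
The role's stack is .NET Core, but the API-caching idea it lists (Redis as a distributed cache in front of the data layer) is language-agnostic; below is a hedged cache-aside sketch using redis-py in Python, the language used for examples on this page. The host, key naming, and TTL are assumptions.

    # Cache-aside sketch with redis-py: try the cache first, fall back to the
    # (stubbed) database, then populate the cache with a TTL. Host, key prefix,
    # and TTL are illustrative assumptions; the role itself targets .NET Core.
    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
    TTL_SECONDS = 300

    def load_from_db(user_id: int) -> dict:
        # Stand-in for a real query against the persistence layer.
        return {"id": user_id, "name": f"user-{user_id}"}

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        user = load_from_db(user_id)
        cache.setex(key, TTL_SECONDS, json.dumps(user))
        return user

    if __name__ == "__main__":
        print(get_user(42))   # first call hits the "db", second call hits Redis
        print(get_user(42))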

Posted 3 weeks ago


0 years

3 - 15 Lacs

India

On-site

Job Summary: We are looking for a skilled DevOps Engineer to join our team. The ideal candidate will be responsible for automation, infrastructure management, continuous integration and delivery (CI/CD), and monitoring across various platforms.

Key Responsibilities:
Ability to build custom Docker images using a Dockerfile, debug running Docker containers, and integrate Docker with CI/CD pipelines.
Ability to set up and manage a Kubernetes cluster, deploy applications on a Kubernetes cluster, and apply best practices.
Work with Ingress controllers, service discovery, and load balancers, and automate deployments using Helm charts.
Ability to write and debug automation code/scripts and integrate third-party libraries into the code/script.
Ability to write and debug complex infrastructure configurations and integrate them with CI/CD pipelines.
Experience with large-scale deployments using infrastructure-as-code, with knowledge of best practices.
Implement and manage AWS Elastic Load Balancer (ELB), Route 53 for traffic management, AWS Site-to-Site VPN, and AWS Direct Connect.
Experience in configuring/managing EKS, including deploying applications on an EKS cluster.
Working experience in agile methodologies (e.g., Scrum, Kanban, SAFe).
Ability to do rapid troubleshooting of issues along with root cause analysis.
Team player with a positive attitude and good communication skills.

Job Type: Full-time
Pay: ₹388,770.44 - ₹1,557,783.89 per year
Work Location: In person
Speak with the employer: +91 9100036902
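
Deployment automation with Helm charts, as asked for above, is frequently wrapped in small scripts inside CI jobs; the hedged sketch below shells out to helm and kubectl. The release name, chart path, namespace, image-tag value, and the assumption that the chart creates a deployment named after the release are all placeholders.

    # Sketch: roll out a Helm release and wait for the deployment to become
    # ready. Release name, chart path, namespace, and image tag are placeholders.
    import subprocess

    def deploy(release: str = "myapp", chart: str = "./charts/myapp",
               namespace: str = "staging", image_tag: str = "v1.2.3") -> None:
        subprocess.run(
            ["helm", "upgrade", "--install", release, chart,
             "--namespace", namespace, "--create-namespace",
             "--set", f"image.tag={image_tag}", "--wait"],
            check=True,
        )
        # Double-check rollout status of the deployment the chart is assumed
        # to create with the same name as the release.
        subprocess.run(
            ["kubectl", "rollout", "status", f"deployment/{release}",
             "-n", namespace, "--timeout=120s"],
            check=True,
        )

    if __name__ == "__main__":
        deploy()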

Posted 4 weeks ago


8.0 - 12.0 years

22 - 25 Lacs

Noida

Work from Office

We are hiring an L3 OpenShift Engineer to manage and support enterprise-grade OpenShift/Kubernetes infrastructure. The role covers lifecycle management, CI/CD, monitoring, incident resolution, backup/restore, and 24x7 cluster operations. Required candidate profile: an experienced OpenShift/Kubernetes administrator with 6-10 years of expertise in container lifecycle, CI/CD, monitoring, troubleshooting, and 24x7 infra support, with strong skills in patching, RBAC, and cluster operations.

Posted 4 weeks ago


0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Support Coverage: 24x7
Location: Noida

Scope of Work
The scope includes full lifecycle management and operations of OpenShift infrastructure (Kubernetes).

3.1. OpenShift Container Platform Management
Container lifecycle management (creation, deployment, health checks, updates). CI/CD pipeline management. Dockerfile and image management. Incident/service/change/problem management. OS patching and node administration. PV/PVC backup and restore. IAM and container registry management. Client/OEM responsibilities include: application deployment and container image development; network design (HLD/LLD); certificate procurement.

3.2. Cluster Lifecycle Support
Cluster provisioning, registration, and CMDB onboarding. Backup configuration and monitoring setup (Prometheus, Zabbix). RBAC, CRD, and LDAP integration. Routine patching, update validation, and vulnerability remediation. Cluster scaling, BIOS/firmware updates, and CMDB maintenance.

3.3. Monitoring & Troubleshooting
CPU/memory/disk/IO health tracking. Cluster/operator/service log analysis. Alerts and automated remediation. OEM case logging and escalation. SLA-compliant incident resolution and RCA reporting.

3.4. Maintenance & Administration
Scheduled patching, cluster backups, and vulnerability fixes. Capacity planning dashboards. DR documentation, SOPs, and RPO/RTO assurance. Admin access compliance (RBAC, syslog, NTP, ILO, etc.).

3.5. Decommissioning and Audit Support
Server resource release and secure OS image deletion. Rebuilds from backup if needed. Audit participation, IDR data reporting, and NC closure tracking.

Posted 4 weeks ago


1.0 years

2 - 3 Lacs

Chennai, Tamil Nadu, India

On-site

Experience: Fresher
Salary: INR 200,000 - 300,000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Chennai)
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - IppoPay)

Must-have skills required: AWS, CI/CD, Jenkins, Kubernetes

IppoPay is looking for:
Position: Junior DevOps Engineer (Fresher)
Experience: 0-1 year
Location: IppoPay (on-site)
Employment Type: Full-time

About the Role
We are looking for a passionate and motivated DevOps Engineer (Fresher) to join our technology team. You will work closely with developers and system administrators to help automate deployment processes, manage cloud infrastructure, and ensure smooth operations of our applications.

Key Responsibilities
Assist in setting up and managing AWS infrastructure (primarily EC2, RDS, S3). Work with Linux systems for server-side configurations, troubleshooting, and monitoring. Use Docker to build, package, and deploy microservices and web applications. Support deployment and orchestration using Kubernetes. Configure and monitor MongoDB Atlas and MySQL databases (open-source and RDS). Manage and automate log collection, analysis, and storage. Help maintain firewall and security group rules across environments. Assist in CI/CD workflows using tools like Jenkins. Use basic FTP/SFTP for file transfers between systems when required.

Required Skills
Strong fundamentals of Linux and hands-on experience with basic commands. Basic understanding of AWS services like EC2, S3, RDS, VPC. Good knowledge of Docker: containers, images, Dockerfile, docker-compose. Awareness of Kubernetes: Pods, Services, basic deployments. Familiarity with relational (MySQL) and NoSQL (MongoDB) databases. Basic networking knowledge: IPs, ports, firewalls, FTP. Ability to debug logs and system issues using standard Linux tools.

Good to Have
Exposure to CI/CD tools like Jenkins or GitHub Actions. Knowledge of monitoring tools like Prometheus, Grafana, EFK, ELK, or AWS CloudWatch.

Educational Qualification
B.E./B.Tech in Computer Science or IT, B.Sc., or any equivalent diploma/degree with proven hands-on practice or training in DevOps.

Why Join Us?
Hands-on learning in real-world cloud infrastructure and DevOps practices. Opportunity to work on scalable systems and a modern tech stack. Friendly, growth-driven team culture.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
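
For the AWS and log-management basics this junior role lists, a first script often looks like the hedged sketch below: compress a log file and push it to S3 with Boto3. The bucket name, log path, and key layout are placeholders.

    # Sketch: gzip a log file and upload it to S3 under a date-based key.
    # Bucket name, log path, and key prefix are illustrative placeholders.
    import datetime
    import gzip
    import shutil
    from pathlib import Path

    import boto3

    BUCKET = "my-app-logs"                     # assumed bucket
    LOG_FILE = Path("/var/log/myapp/app.log")  # assumed log path

    def archive_log() -> str:
        gz_path = LOG_FILE.parent / (LOG_FILE.name + ".gz")
        with LOG_FILE.open("rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        key = f"logs/{datetime.date.today():%Y/%m/%d}/{gz_path.name}"
        boto3.client("s3").upload_file(str(gz_path), BUCKET, key)
        return key

    if __name__ == "__main__":
        print("Uploaded to", archive_log())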

Posted 4 weeks ago


0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Must have:
Git/Bitbucket/GitHub – branching strategies, tagging/merging, Git command line (basic to advanced), pull requests, integration with Jenkins, creating repos/repo templates, Bitbucket APIs, SSH key setup, keytool, etc.
Linux command line – profile, bashrc, nc, package managers, top, editing files, sed, grep, curl, wget, expect, etc.
Jenkins/CloudBees CI – administration, access controls, upgrades, multibranch pipelines, folders, CI/CD setup, credential management, plugins, managed files, node labelling, etc.
SonarQube – setting up quality gates, custom profile creation, upgrades, maintenance, etc.
Nexus – creating repos for multiple technologies, upgrades, cleanup policies, creating access controls, configuring proxy repositories, image registry setup.
Maven – configuring pom.xml files, configuring local vs. remote Maven repos, lifecycle, goals/targets, profiles, Nexus integration/authentication.
npm – should be able to understand and build Node/ReactJS projects and debug any issues, know what package.json is, and understand how dependencies are configured and integrated with tools like Nexus.
Docker/Kubernetes – Dockerfile creation, replicas, Docker commands, conceptual understanding, ingress.
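
Several items in this list (SonarQube quality gates, CI integration) are commonly glued together with small scripts; as a hedged sketch, the snippet below asks a SonarQube server for a project's quality-gate status over its web API and exits non-zero on a failing gate. The server URL, project key, and token handling are assumptions.

    # Sketch: query a SonarQube server for a project's quality-gate status.
    # Server URL, project key, and token are placeholders; SonarQube accepts
    # the token as the basic-auth username with an empty password.
    import os
    import sys
    import requests

    SONAR_URL = "https://sonarqube.example.com"   # assumed server
    PROJECT_KEY = "my-service"                    # assumed project key

    def quality_gate_status() -> str:
        resp = requests.get(
            f"{SONAR_URL}/api/qualitygates/project_status",
            params={"projectKey": PROJECT_KEY},
            auth=(os.environ["SONAR_TOKEN"], ""),
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["projectStatus"]["status"]   # e.g. "OK" or "ERROR"

    if __name__ == "__main__":
        status = quality_gate_status()
        print("Quality gate:", status)
        sys.exit(0 if status == "OK" else 1)   # fail the CI step on a red gate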

Posted 1 month ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad
Work Model: Hybrid (3 days from office)
Experience Required: 9+ years

Role Summary
We are hiring a hands-on Java Backend Developer with 9+ years of experience to support engineering delivery for a global U.S.-based banking client. This is a pure individual contributor role, focused on clean, modular Java backend code, database logic, and service integration. Candidates must demonstrate solid grounding in Java 8+, SQL, Spring, REST APIs, and version control workflows. The selected professional will contribute to backend system components, work with service and DAO layers, and handle REST API logic in a layered architecture. Familiarity with cloud platforms, Docker, and CI/CD pipelines is preferred but not required.

Must-Have Skills & Required Depth
Core Java 8+ - Strong grasp of OOP principles, exception handling, the Collections Framework, and functional programming (lambdas, streams). Must have implemented service/business logic layers using these constructs in production-grade backend systems.
Java Stream API - Demonstrated ability to build and explain stream flows using map, filter, groupingBy, and collect. Expected to have used nested streams for data enrichment or transformation tasks in business applications.
SQL (Intermediate-Advanced) - Able to write efficient queries involving JOIN, GROUP BY, subqueries, and aggregate functions (SUM, COUNT, etc.). Experience with the HAVING clause preferred. Must have applied SQL logic in solving real data segmentation or filtering scenarios.
Spring Core + JDBC - Practical experience building modules using Spring IoC, annotated beans, and JDBC templates. Must understand transaction demarcation, bean lifecycle, and database integration patterns.
REST APIs (Spring MVC) - Proven experience in building RESTful APIs using Spring MVC. Should know HTTP method semantics, error codes, controller-service mapping, and payload handling via JSON.
Git - Must have used Git extensively in team setups, with branching strategies, PR handling, and conflict resolution. Knowledge of rebasing and tagging is a plus.
Maven - Hands-on experience managing project dependencies, plugins, and multi-module structures using Maven. Must be comfortable resolving dependency conflicts and customizing build behavior.

Nice-to-Have Skills & Required Depth
Gradle - Familiarity with Gradle build scripts is preferred. Not mandatory if Maven experience is strong.
Spring Boot - Experience working with @RestController, embedded server configuration, and actuator endpoints. At least one module built using Spring Boot preferred.
Microservices Concepts - Conceptual understanding of service registration/discovery, fault isolation, and stateless service design. No design ownership required.
CI/CD (Jenkins, GitHub Actions) - Exposure to pipeline design, build triggers, and deployment automation. Should understand stages of artifact movement and build failures.
Docker - Ability to containerize and run Java applications using Docker. Familiarity with Dockerfile, volume mounts, and container logs is desirable (a minimal sketch follows this listing).
Kafka or Messaging Systems - Understanding of the producer/consumer model and event-driven workflows. Hands-on experience with Kafka or similar systems is a plus, not mandatory.
Cloud (AWS, GCP, Azure) - Awareness of Java app deployment practices on cloud platforms. Should understand environment configs, logging, and deployment topologies.
BFSI Domain Knowledge - Exposure to transaction workflows, regulatory compliance systems, or financial data handling is beneficial.
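As a sketch of the Docker nice-to-have above (containerizing a Java application), here is a minimal multi-stage Dockerfile that builds with Maven and runs the resulting jar. The Java version, artifact path, and port are illustrative assumptions, not requirements from the posting.

```dockerfile
# Stage 1: build the jar inside a Maven image.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /workspace
COPY pom.xml .
# Pre-fetch dependencies so they are cached across source-only changes.
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: run the jar on a slim JRE image.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /workspace/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```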

Posted 1 month ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: Pune, Bengaluru, Chennai, Hyderabad, Gurugram
Work Model: Hybrid (3 days from office)
Experience Required: 10+ years
Role Type: Individual Contributor
Client: US-based multinational banking institution
Notice Period: Immediate to 21 days

Role Summary
We are hiring a Senior PHP Developer (Individual Contributor) with over 10 years of experience in designing, building, and scaling enterprise-grade PHP applications. The role demands strong backend expertise with PHP (Laravel or CodeIgniter), advanced OOP, API integration, MySQL tuning, and architectural familiarity with modular services. Candidates should be hands-on, quality-driven, and capable of leading module delivery in hybrid enterprise environments.

Must-Have Skills & Required Depth
PHP (Laravel, CodeIgniter) - Must have led backend module development using Laravel or CodeIgniter. Should be able to independently build, extend, and maintain enterprise-grade applications.
Modern PHP (PHP 8) - Should have used PHP 8 features like union types, named arguments, and attributes in at least one production project. Demonstrated familiarity with PHP 8 syntax.
Object-Oriented Programming (OOP) - Should have strong command over inheritance, traits, method overriding, and abstract classes. Able to design reusable, modular, and extensible classes.
jQuery - Must have used jQuery for DOM manipulation, AJAX handling, form validation, and interactive frontend features within PHP applications.
HTML/CSS - Should be able to manage layout styling, apply responsive designs, and collaborate with frontend teams for a consistent UI.
MySQL - Must be proficient in writing optimized queries, applying indexes, and resolving performance issues in relational databases. Experience with large data sets is expected.
ORM (Eloquent / Doctrine) - Should have used Eloquent or Doctrine for basic CRUD, relationships, and query abstraction. Deep optimization or customization is not mandatory.
RESTful API Development - Should have experience designing and consuming APIs with JSON. Must be able to implement authentication, handle errors, and manage data formatting.
Microservices (API-first) - Should understand modular service concepts and how services communicate over REST. Prior experience integrating PHP modules via APIs is required.
SQL Concepts - Must understand the role and differences of primary keys, unique keys, joins, and normalization. Should apply these effectively in schema design and debugging.

Nice-to-Have Skills & Required Depth
TDD / PHPUnit - Should be able to write unit tests using PHPUnit. Exposure to test case structure, assertions, and basic mocking is expected. TDD expertise is not mandatory.
Coding Standards (PSR-12) - Should be aware of PSR-12 or equivalent standards. Past adherence or enforcement during code reviews is preferred but not mandatory.
Docker - Exposure to running PHP applications using Docker. Should understand container basics; experience with Dockerfile authoring is not required (a minimal sketch follows this listing).
AWS / Cloud Deployment - Familiarity with deploying applications on EC2, configuring S3, or using RDS. Should understand hosting flows in cloud environments.
SQL Server - Should be able to write and troubleshoot queries in SQL Server. Used in projects with mixed-database environments.
Python - Should be able to read or update Python scripts for automation or integration; not expected to write standalone services.
React JS - Should have collaborated with frontend teams using React. Basic understanding of component structures and integration points is sufficient.
Event-Driven Architecture - Should be aware of concepts like message queues, triggers, or pub-sub. Hands-on experience in at least one real-time or notification-based module is preferred. (ref:hirist.tech)
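For the Docker nice-to-have above (running a PHP application in a container), a minimal Dockerfile might look like the following. The php:8.2-apache base image, Composer usage, and pdo_mysql extension are assumptions chosen to match the PHP 8 / MySQL stack described, not requirements from the posting.

```dockerfile
# Minimal illustrative Dockerfile for a PHP 8 application served by Apache.
FROM php:8.2-apache

# Install the MySQL PDO extension commonly used by Laravel/CodeIgniter apps.
RUN docker-php-ext-install pdo_mysql

# Bring in Composer from its official image.
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

# Copy the application and install PHP dependencies.
WORKDIR /var/www/html
COPY . .
RUN composer install --no-dev --no-interaction --prefer-dist

EXPOSE 80
```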

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Python programming expertise: data structures, OOP, recursion, generators, iterators, decorators, and familiarity with regular expressions.
Working knowledge of and experience with a deep learning framework (PyTorch or TensorFlow); embedding representations.
Strong experience in Python and production system development.
Familiarity with SQL database interactions.
Familiarity with Elasticsearch document indexing and querying.
Familiarity with Docker and Dockerfile (a minimal sketch follows this listing).
Familiarity with REST APIs and JSON structure; Python packages like FastAPI.
Familiarity with Git operations.
Familiarity with shell scripting.
Familiarity with PyCharm for development, debugging, and profiling.
Experience with Kubernetes.
Experience with LLMs and Gen AI.

Desired Skills
NLP toolkits like NLTK, spaCy, Gensim, and scikit-learn.
Familiarity with basic natural language concepts and handling: tokenization, lemmatization, stemming, edit distances, named entity recognition, syntactic parsing, etc.
Good knowledge of and experience with a deep learning framework (PyTorch or TensorFlow).
More complex operations with Elasticsearch: creating indices, indexable fields, etc.
Good experience with Kubernetes.
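Since the role combines FastAPI, Docker, and Kubernetes, a minimal Dockerfile for packaging such a service is sketched below. The app.main:app module path, uvicorn server, and port are assumptions for illustration, not details from the posting.

```dockerfile
# Illustrative Dockerfile for a FastAPI-based service.
FROM python:3.11-slim

WORKDIR /srv

# Install pinned dependencies first for better layer caching
# (FastAPI, uvicorn, and any ML/NLP libraries would be listed here).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code and any model assets.
COPY app ./app

EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```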

Posted 1 month ago

Apply

4.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 4 to 14 years
Support Coverage: 24x7
Role: L2/L3 OpenShift Infra Admin - telecom domain
Job Location: Mumbai (Onsite)

Scope of work: The scope includes full lifecycle management and operations of OpenShift infrastructure (Kubernetes) as detailed in the subsections below.

3.1. OpenShift Container Platform Management
Prodevans will take ownership of the following operations:
Container lifecycle management (creation, deployment, health checks, updates)
CI/CD pipeline management
Dockerfile and image management (a minimal OpenShift-friendly Dockerfile sketch follows this listing)
Incident/Service/Change/Problem management
OS patching, node administration
PV/PVC backup and restore
IAM and container registry management
Client/OEM responsibilities include:
Application deployment & container image development
Network design (HLD/LLD), certificate procurement

3.2. Cluster Lifecycle Support
Cluster provisioning, registration, and CMDB onboarding
Backup configuration, monitoring setup (Prometheus, Zabbix)
RBAC, CRD, and LDAP integration
Routine patching, update validation, vulnerability remediation
Cluster scaling, BIOS/firmware updates, and CMDB maintenance

3.3. Monitoring & Troubleshooting
CPU/Memory/Disk/IO health tracking
Cluster/operator/service log analysis
Alerts & automated remediation
OEM case logging & escalation
SLA-compliant incident resolution and RCA reporting

3.4. Maintenance & Administration
Scheduled patching, cluster backups, and vulnerability fixes
Capacity planning dashboards
DR documentation, SOPs, RPO/RTO assurance
Admin access compliance (RBAC, syslog, NTP, ILO, etc.)

3.5. Decommissioning and Audit Support
Server resource release and secure OS image deletion
Rebuilds from backup if needed
Audit participation, IDR data reporting, NC closure tracking

4. Current Tools and Platforms
Monitoring: Zabbix, Grafana
Cluster Management: Red Hat ACM, Trident (Storage), Quay
Security: Prisma Defender
Automation & Backups: Custom SOPs, automation pipelines

Contact: Kazim Bilgrami, 7387207869
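For the "Dockerfile and image management" responsibility above, images intended for OpenShift are usually written to run as a non-root, arbitrarily assigned UID. The sketch below shows that pattern; the UBI Python base image and entry point are assumptions for illustration, not details from the posting.

```dockerfile
# Illustrative Dockerfile following OpenShift's non-root conventions:
# the platform may assign an arbitrary UID, so writable paths are made
# group-writable for the root group (GID 0).
FROM registry.access.redhat.com/ubi9/python-311

WORKDIR /opt/app-root/src
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Switch to root only to fix permissions, then drop back to a non-root UID.
USER 0
RUN chgrp -R 0 /opt/app-root/src && chmod -R g=u /opt/app-root/src
USER 1001

EXPOSE 8080
CMD ["python", "app.py"]
```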

Posted 2 months ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role: OpenShift Engineer
Experience: 6+ years
Location: Noida (Onsite)
Notice Period: Immediate only
Support Coverage: 24x7

Scope of work: The scope includes full lifecycle management and operations of OpenShift infrastructure (Kubernetes) as detailed in the subsections below.

3.1. OpenShift Container Platform Management
Container lifecycle management (creation, deployment, health checks, updates)
CI/CD pipeline management
Dockerfile and image management
Incident/Service/Change/Problem management
OS patching, node administration
PV/PVC backup and restore
IAM and container registry management
Client/OEM responsibilities include:
Application deployment & container image development
Network design (HLD/LLD), certificate procurement

3.2. Cluster Lifecycle Support
Cluster provisioning, registration, and CMDB onboarding
Backup configuration, monitoring setup (Prometheus, Zabbix)
RBAC, CRD, and LDAP integration
Routine patching, update validation, vulnerability remediation
Cluster scaling, BIOS/firmware updates, and CMDB maintenance

3.3. Monitoring & Troubleshooting
CPU/Memory/Disk/IO health tracking
Cluster/operator/service log analysis
Alerts & automated remediation
OEM case logging & escalation
SLA-compliant incident resolution and RCA reporting

3.4. Maintenance & Administration
Scheduled patching, cluster backups, and vulnerability fixes
Capacity planning dashboards
DR documentation, SOPs, RPO/RTO assurance
Admin access compliance (RBAC, syslog, NTP, ILO, etc.)

3.5. Decommissioning and Audit Support
Server resource release and secure OS image deletion
Rebuilds from backup if needed
Audit participation, IDR data reporting, NC closure tracking

4. Current Tools and Platforms
Monitoring: Zabbix, Grafana
Cluster Management: Red Hat ACM, Trident (Storage), Quay
Security: Prisma Defender
Automation & Backups: Custom SOPs, automation pipelines

Posted 2 months ago

Apply