9.0 - 12.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Roles and Responsibilities:
1. Architect and design scalable, maintainable, and high-performance backend systems using Python.
2. Lead the development of clean, modular, and reusable code components and services across various domains.
3. Own the technical roadmap for Python-based services, including refactoring strategies, modernization efforts, and integration patterns.
4. Provide expert guidance on system design, code quality, performance tuning, and observability.
5. Collaborate with DevOps teams to build CI/CD pipelines, containerization strategies, and robust cloud-native deployment patterns.
6. Mentor and support software engineers by enforcing strong engineering principles, design best practices, and performance debugging techniques.
7. Evaluate and recommend new technologies or frameworks where appropriate, particularly in the areas of AI/ML and GenAI integration.

Preferred Qualifications (Desirable / Good-to-Have): GenAI / AI/ML Skills
1. Exposure to Large Language Models (LLMs) and prompt engineering.
2. Basic familiarity with Retrieval-Augmented Generation (RAG) and vector databases (FAISS, Pinecone, Weaviate).
3. Understanding of model fine-tuning concepts (LoRA, QLoRA, PEFT).
4. Experience using or integrating LangChain, LlamaIndex, or Hugging Face Transformers.
5. Familiarity with AWS AI/ML services like Bedrock and SageMaker.

Technology Stack:
- Languages: Python (primary), Bash, YAML/JSON
- Web Frameworks: FastAPI, Flask, gRPC
- Databases: PostgreSQL, Redis, MongoDB, DynamoDB
- Cloud Platform: AWS (must), GCP/Azure (bonus)
- DevOps & Deployment: Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Observability: OpenTelemetry, Prometheus, Grafana, Loki
- GenAI Tools (optional): Bedrock, SageMaker, LangChain, Hugging Face

Preferred Profile:
- 8+ years of hands-on software development experience
- 3+ years in a solution/technical architect or lead engineer role
- Strong problem-solving skills and architectural thinking
- Experience collaborating across teams and mentoring engineers
- Passion for building clean, scalable systems and openness to learning emerging technologies like GenAI

Skills and Experience Required: Core Python & Architectural Skills
Strong hands-on experience in advanced Python programming (7+ years), including:
1. Language internals (e.g., decorators, metaclasses, descriptors)
2. Concurrency (asyncio, multiprocessing, threading)
3. Performance optimization and profiling (e.g., cProfile, py-spy)
4. Strong testing discipline (pytest, mocking, coverage analysis)
Proven track record in designing scalable, distributed systems:
1. Event-driven architectures, service-oriented and microservice-based systems.
2. Experience with REST/gRPC APIs, async queues, caching strategies, and database modeling.
Proficiency in building and deploying cloud-native applications:
1. Strong AWS exposure (EC2, Lambda, S3, IAM, etc.) and Infrastructure-as-Code (Terraform/CDK)
2. CI/CD pipelines, Docker, Kubernetes, GitOps
Deep understanding of software architecture patterns (e.g., hexagonal, layered, DDD).
Excellent debugging, tracing, and observability skills with tools like OpenTelemetry, Prometheus, and Grafana.
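The asyncio concurrency skill listed above is easy to illustrate. The sketch below is not part of the posting and the service names are hypothetical; it runs three simulated I/O-bound calls concurrently, so total wall time tracks the slowest call rather than the sum, which is the core win of asyncio in backend services.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate an I/O-bound call (e.g., a downstream REST/gRPC request).
    await asyncio.sleep(delay)
    return f"{name}:done"

async def gather_all() -> list[str]:
    # Run the three calls concurrently; wall time is roughly max(delays),
    # not the sum of them.
    return await asyncio.gather(
        fetch("users", 0.1),
        fetch("orders", 0.1),
        fetch("billing", 0.1),
    )

results = asyncio.run(gather_all())
```

In a real FastAPI or gRPC service, the `asyncio.sleep` calls would be awaited database queries or downstream API calls.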
Posted 1 week ago
3.0 - 8.0 years
10 - 15 Lacs
Gurugram
Work from Office
As an L2 AWS Support Engineer, you will be responsible for providing advanced technical support for AWS-based solutions. You will troubleshoot and resolve complex technical issues, including those related to networking, security, and automation.

Key Responsibilities:
- Develop, manage, and optimize CI/CD pipelines using tools like Jenkins and Opsera.
- Automate infrastructure provisioning using Terraform and CloudFormation.
- Administer and optimize key AWS services, including EC2, S3, RDS, Lambda, and IAM.
- Strengthen security by implementing best practices for IAM, encryption, and network security (VPC, Security Groups, WAF, NACLs, etc.).
- Design, configure, and maintain AWS networking components such as VPCs, Subnets, Route53, Transit Gateway, and Security Groups.
- Advanced Troubleshooting: Investigate and resolve issues related to networking (VPC, subnets, security groups) and storage; analyze and fix application performance issues on AWS infrastructure.
- Cluster Management: Create and manage EKS clusters using the AWS Management Console, AWS CLI, or Terraform; manage Kubernetes resources such as namespaces, deployments, and services.
- Backup & Recovery: Configure and verify backups, snapshots, and disaster recovery plans; perform DR drills as per defined procedures.
- Optimization: Monitor and optimize AWS resource utilization and costs; suggest improvements for operational efficiency.
- Migration and Modernization: Assist with migrating workloads to AWS and modernizing existing infrastructure.
- Performance Optimization: Analyze AWS resource usage and identify optimization opportunities.
- Cost Optimization: Implement cost-saving measures, such as rightsizing instances and using reserved instances.

Technical Skills:
- Advanced understanding of AWS core services (EC2, S3, VPC, IAM, Lambda, etc.).
- Strong knowledge of AWS automation, scripting (Bash, Python, PowerShell), and the CLI.
- Experience with AWS CloudFormation and Terraform.
- Understanding of AWS security best practices and identity and access management.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Customer-focused approach.

Certifications (Preferred):
- AWS Certified Solutions Architect - Associate
- AWS Certified DevOps Engineer - Professional
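The rightsizing task mentioned above boils down to flagging instances whose utilization stays below a threshold. As an illustrative sketch only (the instance IDs and threshold are made up, and in practice the datapoints would come from CloudWatch rather than hard-coded lists):

```python
def rightsizing_candidates(utilization: dict[str, list[float]],
                           threshold: float = 20.0) -> list[str]:
    """Return instance IDs whose average CPU stays below `threshold` percent.

    Real data would come from CloudWatch metrics; plain lists stand in
    here so the selection logic is testable offline.
    """
    flagged = []
    for instance_id, datapoints in utilization.items():
        if datapoints and sum(datapoints) / len(datapoints) < threshold:
            flagged.append(instance_id)
    return sorted(flagged)

metrics = {
    "i-0abc": [5.0, 8.0, 11.0],    # idle most of the day
    "i-0def": [55.0, 60.0, 72.0],  # healthy utilization
}
candidates = rightsizing_candidates(metrics)  # → ["i-0abc"]
```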
Posted 1 week ago
5.0 - 9.0 years
8 - 12 Lacs
Ahmedabad, Vadodara
Work from Office
Job Summary:
We are seeking an experienced and highly motivated Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the design, implementation, performance tuning, and maintenance of relational (MSSQL, PostgreSQL) and NoSQL (MongoDB) databases, both on-premises and in cloud environments (AWS, Azure, GCP). You will ensure data integrity, security, availability, and optimal performance across all platforms.

Key Responsibilities:
Database Management & Optimization:
- Install, configure, and upgrade database servers (MSSQL, PostgreSQL, MongoDB).
- Monitor performance, optimize queries, and tune databases for efficiency.
- Implement and manage database clustering, replication, sharding, and high availability.
Cloud Database Administration:
- Manage cloud-based database services (e.g., Amazon RDS, Azure SQL Database, GCP Cloud SQL, MongoDB Atlas).
- Automate backup, failover, patching, and scaling in the cloud environment.
- Ensure secure access, encryption, and compliance in the cloud.
- ETL and DevOps experience is desirable.
Backup, Recovery & Security:
- Design and implement robust backup and disaster recovery plans.
- Regularly test recovery processes to ensure minimal downtime.
- Apply database security best practices (roles, permissions, auditing, encryption).
Scripting & Automation:
- Develop scripts for automation (using PowerShell, Bash, Python, etc.).
- Automate repetitive DBA tasks using DevOps/CI-CD tools (Terraform, Ansible, etc.).
Collaboration & Support:
- Work closely with developers, DevOps, and system admins to support application development.
- Assist with database design, indexing strategy, schema changes, and query optimization.
- Provide 24/7 support for critical production issues (on-call rotation may apply).

Key Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience as a DBA with production experience in:
  - MSSQL Server (SQL Server 2016 and above)
  - PostgreSQL (including PostGIS, logical/physical replication)
  - MongoDB (including MongoDB Atlas, replica sets, sharding)
- Experience with cloud database services (AWS RDS, Azure SQL, GCP Cloud SQL).
- Strong understanding of performance tuning, indexing, and query optimization.
- Solid grasp of backup and restore strategies, disaster recovery, and HA setups.
- Familiarity with monitoring tools (e.g., Prometheus, Datadog, New Relic, Zabbix).
- Knowledge of scripting languages (PowerShell, Bash, or Python).
- Understanding of DevOps principles, version control (Git), and CI/CD pipelines.

Preferred Qualifications:
- Certification in any cloud platform (AWS/Azure/GCP).
- Microsoft Certified: Azure Database Administrator Associate.
- Experience with Kubernetes Operators for databases (e.g., Crunchy Postgres Operator).
- Experience with Infrastructure as Code (Terraform, CloudFormation).
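The indexing and query-optimization skills above can be demonstrated offline with SQLite, whose `EXPLAIN QUERY PLAN` shows whether a query scans the whole table or uses an index. This is an illustrative sketch, not from the posting; the table and index names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN reports how SQLite will execute the query.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now searches via idx_orders_customer
```

The same before/after comparison applies to `EXPLAIN` in PostgreSQL or MSSQL execution plans, just with richer output.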
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Site Reliability Engineer: Keep Planet-Scale Systems Reliable, Secure, and Fast (On-site only)

At Ajmera Infotech, we build planet-scale platforms for NYSE-listed clients, from HIPAA-compliant health systems to FDA-regulated software that simply cannot fail. Our 120+ elite engineers design, deploy, and safeguard mission-critical infrastructure trusted by millions.

Why You'll Love It:
- Dev-first SRE culture: automation, CI/CD, and a zero-toil mindset
- TDD, monitoring, and observability baked in, not bolted on
- Code-first reliability: script, ship, and scale with real ownership
- Mentorship-driven growth with exposure to regulated industries (HIPAA, FDA, SOC2)
- End-to-end impact: own infra across Dev and Ops

Requirements
Key Responsibilities:
- Architect and manage scalable, secure Kubernetes clusters (k8s/k3s) in production
- Develop scripts in Python, PowerShell, and Bash to automate infrastructure operations
- Optimize performance, availability, and cost across cloud environments
- Design and enforce CI/CD pipelines using Jenkins, Bamboo, and GitHub Actions
- Implement log monitoring and proactive alerting systems
- Integrate and tune observability tools like Prometheus and Grafana
- Support both development and operations pipelines for continuous delivery
- Manage infrastructure components including Artifactory, Nginx, Apache, and IIS
- Drive compliance-readiness across HIPAA, FDA, ISO, and SOC2

Must-Have Skills:
- 3-8 years in SRE or infrastructure engineering roles
- Kubernetes (k8s/k3s) production experience
- Scripting: Python, PowerShell, Bash
- CI/CD tools: Jenkins, Bamboo, GitHub Actions
- Experience with log monitoring, alerting, and observability stacks
- Cross-functional pipeline support (Dev + Ops)
- Tooling: Artifactory, Nginx, Apache, IIS
- Performance, availability, and cost-efficiency tuning

Nice-to-Have Skills:
- Background in regulated environments (HIPAA, FDA, ISO, SOC2)
- Multi-OS platform experience
- Integration of Prometheus, Grafana, or similar observability platforms

Benefits: What We Offer
- Competitive salary package with performance-based bonuses
- Comprehensive health insurance for you and your family
- Flexible working hours and generous paid leave
- High-end workstations and access to our in-house device lab
- Sponsored learning: certifications, workshops, and tech conferences
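SRE roles like this one usually frame reliability targets as SLOs with error budgets. As a hedged illustration only (the 99.9% figure is a common example, not a number from the posting), the arithmetic looks like this:

```python
def error_budget(slo: float, window_minutes: int) -> float:
    """Allowed downtime (minutes) in the window for a given availability SLO."""
    return window_minutes * (1.0 - slo)

def budget_remaining(slo: float, window_minutes: int,
                     downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
minutes_30d = 30 * 24 * 60
allowed = error_budget(0.999, minutes_30d)                               # ~43.2
remaining = budget_remaining(0.999, minutes_30d, downtime_minutes=10.8)  # 0.75
```

Teams typically gate risky changes on `remaining` dropping below some threshold, which is one concrete form the "zero-toil mindset" takes.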
Posted 1 week ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Cloud Infrastructure Engineer: Build the Backbone of Mission-Critical Software (On-site only)

Ajmera Infotech is a planet-scale engineering firm powering NYSE-listed clients with a 120+ strong team of elite developers. We build fail-safe, compliant software systems that cannot go down, and now we're hiring a senior cloud engineer to help scale our infrastructure to the next level.

Why You'll Love It:
- Terraform everything: zero-click, GitOps-driven provisioning pipelines
- Hardcore compliance: build infrastructure aligned with HIPAA, FDA, and SOC2
- Infra across OSes: automate for Linux, macOS, and Windows environments
- Own secrets & state: use Vault, Packer, and Consul like a HashiCorp champion
- Team of pros: collaborate with engineers who write tests before code
- Dev-first culture: code reviews, mentorship, and CI/CD by default
- Real-world scale: Azure-first systems powering critical applications

Requirements
Key Responsibilities:
- Design and automate infrastructure as code using Terraform, Ansible, and GitOps
- Implement secure secret management via HashiCorp Vault
- Build CI/CD-integrated infra automation across hybrid environments
- Develop scripts and tooling in PowerShell, Bash, and Python
- Manage cloud infrastructure primarily on Azure, with exposure to AWS
- Optimize for performance, cost, and compliance at every layer
- Support infrastructure deployments using containerization tools (e.g., Docker, Kubernetes)

Must-Have Skills:
- 3-8 years in infrastructure automation and cloud engineering
- Deep expertise in Terraform (provisioning, state management)
- Hands-on with HashiCorp Vault, Packer, and Consul
- Strong Azure experience
- Proficiency with Ansible and GitOps workflows
- Cross-platform automation: Linux, macOS, Windows
- CI/CD knowledge for infra pipelines
- REST API usage for automation tasks
- PowerShell, Python, and Bash scripting

Nice-to-Have Skills:
- AWS exposure
- Cost-performance optimization experience in cloud environments
- Containerization for infra deployments (Docker, Kubernetes)
Benefits: What We Offer
- Competitive salary package with performance-based bonuses
- Comprehensive health insurance for you and your family
- Flexible working hours and generous paid leave
- High-end workstations and access to our in-house device lab
- Sponsored learning: certifications, workshops, and tech conferences
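The GitOps-driven provisioning described in this posting rests on one idea: diff desired state against actual state and plan create/update/delete actions, the way a Terraform plan or reconciler does. A minimal sketch, with hypothetical resource names and values compared by simple equality:

```python
def diff_state(desired: dict, actual: dict) -> dict:
    """Classify resource keys into create/update/delete actions.

    Nested values are treated as opaque and compared by equality,
    a simplification of what real IaC tools do.
    """
    return {
        "create": sorted(k for k in desired if k not in actual),
        "update": sorted(k for k in desired
                         if k in actual and desired[k] != actual[k]),
        "delete": sorted(k for k in actual if k not in desired),
    }

desired = {"vm-web": {"size": "B2s"}, "vm-db": {"size": "D4s"}}
actual  = {"vm-web": {"size": "B1s"}, "vm-old": {"size": "B1s"}}
changes = diff_state(desired, actual)
# → {"create": ["vm-db"], "update": ["vm-web"], "delete": ["vm-old"]}
```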
Posted 1 week ago
5.0 - 8.0 years
14 - 16 Lacs
Mohali
Work from Office
About the Role:
We are seeking a highly skilled and motivated Senior Java Developer with 5-8 years of experience to join our engineering team. The ideal candidate will have strong backend development expertise, a deep understanding of microservices, and a solid grasp of agile methodologies. This is a hands-on role focused on designing, developing, and maintaining scalable applications in a collaborative, fast-paced environment.

Key Responsibilities:
- Design, develop, test, and maintain scalable Java-based applications using Java 8 or higher and Spring Boot.
- Build RESTful APIs and microservices with clean, maintainable code.
- Work with SQL and NoSQL databases to manage data storage and retrieval effectively.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Write unit and integration tests using JUnit and Mockito, and apply Test-Driven Development (TDD) practices.
- Manage source code with Git and build applications using Maven.
- Create and manage Docker containers for development and deployment.
- Troubleshoot and debug production issues in Unix/Linux environments.
- Participate in code reviews and ensure adherence to best practices.

Must-Have Qualifications:
- 5-8 years of hands-on experience with Java 8 or higher.
- Strong experience with Spring Boot and microservices architecture.
- Proficiency in Git, Maven, and Unix/Linux.
- Solid understanding of SQL and NoSQL databases.
- Experience working in Agile/Scrum teams.
- Hands-on experience with JUnit, Mockito, and TDD.
- Working knowledge of Docker and containerized deployments.

Good to Have:
- Experience with Apache Kafka for event-driven architecture.
- Familiarity with Ansible and/or Terraform for infrastructure automation.
- Knowledge of Docker Swarm or container orchestration tools.
- Exposure to Jenkins or other CI/CD tools.
- Proficiency in Bash scripting for automation and environment setup.
Posted 1 week ago
4.0 - 9.0 years
7 - 17 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hello Folks,
We are looking for an Azure DevOps Engineer to work with one of the MNCs based in Bangalore/Pune/Chennai/Gurugram/Hyderabad. Please find the job description below:
- 6+ months of hands-on experience with Databricks CI/CD implementation.
- Strong proficiency with CI/CD tools: Azure DevOps, GitHub Actions, Jenkins, or similar.
- Familiarity with the Databricks CLI, Databricks REST APIs, or the Databricks Terraform provider.
- Experience with Git, GitOps practices, and version control in collaborative environments.
- Proficient in scripting languages such as Bash, Python, or PowerShell.
Posted 1 week ago
3.0 - 5.0 years
5 - 11 Lacs
Chennai
Hybrid
Job Role: DevOps Engineer
Position Type: Full Time
Base Location: Guindy, Chennai
Mode of Work: Hybrid model
Work Experience Level: 3 to 5 Years

About Us
Access is a global leader in information management solutions, offering comprehensive services across the entire document and records management lifecycle. From secure offsite storage for paper and digital records to advanced software for privacy, retention, and document management, Access enables organizations to protect, govern, and maximize the value of their information. Our end-to-end services include backfile imaging and digital delivery, scanning and digitization, business process automation, and secure destruction, purges, shredding, and data archiving.

About the Role
As a DevOps Engineer at Access Information Management, you will play a critical role in building the tools and processes that ensure software and infrastructure are delivered smoothly, quickly, safely, and continuously. You will help deliver the DevOps methodology and ensure the right technology is used to solve each problem. The DevOps Engineer is expected to communicate and collaborate clearly with other team members, work independently on assigned items, and complete work items in a reasonable amount of time.

Roles & Responsibilities:
CI/CD Pipeline Development:
- Set up and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Implement security tools via CI/CD pipelines for DAST, SAST, container vulnerabilities, and SCA.
- Implement automated testing and quality assurance processes within CI/CD pipelines.
- Provide solutions for various deployment strategies.
Cloud Infrastructure Management:
- Design, deploy, and maintain scalable and highly available cloud infrastructure.
- Implement and manage Azure services such as VMs, SQL Databases, Storage Accounts, App Services, and others as needed.
- Implement and manage AWS services such as EC2, RDS, S3, Lambda, and others as needed.
- Monitor and optimize cloud resources for cost efficiency.
- Ensure proper cloud governance is followed: create organization-level policies and reports, with assurance that security best practices are followed and enforced.
- Create the capabilities to patch resources, with testing and reporting.
Automation and Scripting:
- Develop and maintain infrastructure as code (IaC) scripts using tools like Terraform, AWS CloudFormation, or others.
- Create and maintain automation scripts for deployment, scaling, and orchestration.
- Manage system configuration through configuration management tools such as Ansible or Puppet.
Collaboration and Documentation:
- Collaborate with development teams to understand application requirements and assist with infrastructure design.
- Create and maintain comprehensive documentation for infrastructure and processes.
Platform Monitoring:
- Monitor system performance and implement optimizations to ensure optimal resource utilization.
- Troubleshoot and resolve infrastructure and application-related issues.

Skills:
- AWS services
- Azure DevOps (ADO)
- Azure (nice to have)
- CI/CD pipelines, Terraform, PowerShell, Bash, Ansible

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Strong proficiency with cloud computing platforms, particularly AWS.
- Proficiency in scripting and automation using languages such as Python, Shell, or PowerShell.
- Experience with containerization (Docker).
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
- Knowledge of infrastructure as code (IaC) tools like Terraform or AWS CloudFormation.
- Understanding of security best practices for cloud environments.
- Excellent problem-solving and communication skills.
- Ability to work collaboratively in a team-oriented environment.
Competencies: Problem-Solving, Analytical Thinking, Team Collaboration, Communication Skills, Process Orientation, Adaptability & Continuous Learning
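The "various deployment strategies" this role must provide solutions for include progressive canary rollouts. As an illustrative sketch only (the starting percentage and doubling factor are arbitrary choices, not anything the posting specifies), the traffic-shifting schedule can be computed like this:

```python
def canary_steps(start_pct: int = 5, factor: int = 2, cap: int = 100) -> list[int]:
    """Traffic percentages for a progressive canary rollout.

    The canary's share of traffic is multiplied by `factor` at each
    step until all traffic has shifted to the new version.
    """
    steps, pct = [], start_pct
    while pct < cap:
        steps.append(pct)
        pct *= factor
    steps.append(cap)
    return steps

schedule = canary_steps()  # → [5, 10, 20, 40, 80, 100]
```

A pipeline would pause between steps, check error rates and latency, and roll back if the canary misbehaves.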
Posted 1 week ago
6.0 - 11.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Hi, greetings from Thales India Pvt Ltd!
We are hiring a Technical Lead - DevOps Engineer for our Engineering Competency Center at the Bangalore location.
Experience: 8 to 12 years.
Notice Period: Immediate to max 30 days.
Location: Thales India Private Limited, Richmond Town, Bengaluru, Karnataka 560025.

About Thales:
Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter, and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information, and encrypt data to make the connected world more secure. Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India's growth story by sharing its technologies and expertise in the Defense, Transport, Aerospace, and Digital Identity and Security markets.

Additional: Imperva, a Thales company, is a cybersecurity leader. Together, we provide innovative platforms designed to reduce the complexity and risks of managing and protecting more applications, data, and identities than any other company can. Our solutions enable over 35,000 organizations to deliver trusted digital services to billions of consumers around the world every day.

Job Summary:
We're building a first-of-its-kind AI Firewall to protect applications using Large Language Models (LLMs).
As one of the first DevOps Engineers on the team, you'll build and maintain the CI/CD pipelines, observability stack, and deployment infrastructure for a cutting-edge AI Firewall. Your work ensures our services are secure, fast, and always available.

Job Knowledge, Skills, and Qualifications:
- BE or M.Sc. in Computer Science, or equivalent
- 8+ years of experience in DevOps, SRE, or infrastructure engineering
- Proficient with Kubernetes, Docker, and cloud platforms (AWS/GCP/Azure)
- Experience in developing performance-oriented applications
- Strong scripting skills (Bash, Python, or Groovy)
- Background in AI/ML and networking concepts such as TCP/UDP, HTTP, TLS, etc.
- Bonus: experience with security tooling, API gateways, or LLM-related infrastructure
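A firewall or API gateway in front of LLM-backed services typically enforces per-client rate limits. The posting does not describe its implementation; as a generic illustration, a token bucket is one common approach (the rate and capacity numbers below are arbitrary):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind a gateway might apply per client."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=0.1, capacity=5.0)
burst = [bucket.allow() for _ in range(6)]  # first 5 pass, the 6th is throttled
```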
Posted 1 week ago
1.0 - 2.0 years
1 - 3 Lacs
Bengaluru
Work from Office
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 1-2 years of hands-on experience in a DevOps or Systems Engineer role.
- Experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
- Basic understanding of cloud platforms (AWS/Azure/GCP).
- Familiarity with container technologies (Docker) and orchestration tools (Kubernetes is a plus).
- Strong scripting skills in Bash, Python, or similar.
- Exposure to Infrastructure as Code tools like Terraform or Ansible is a plus.
- Good problem-solving skills and attention to detail.
Posted 1 week ago
2.0 - 6.0 years
8 - 12 Lacs
Kochi
Work from Office
Develop and enhance automated network configuration tools using 100% open-source software (e.g., Ansible, Python). Plan, design, and maintain an intercontinental network across six global locations.

Key Requirement: Experience in data center operations.

Required Candidate Profile:
- Programming experience in at least one scripting language, ideally Bash, Perl, or Python.
- Strong knowledge of Linux and networking, with proficiency in command-line operations.
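Automated network configuration tooling of the kind described above usually means validating addresses and rendering device config from data, much as an Ansible template would. A minimal sketch with hypothetical hostnames and addresses, using only the standard library:

```python
import ipaddress

def router_config(hostname: str, interfaces: dict[str, str]) -> str:
    """Render a minimal device config snippet from interface data.

    Each CIDR string is validated with the stdlib ipaddress module,
    which raises ValueError on malformed input.
    """
    lines = [f"hostname {hostname}"]
    for name, cidr in sorted(interfaces.items()):
        iface = ipaddress.ip_interface(cidr)
        lines += [f"interface {name}",
                  f" ip address {iface.ip} {iface.netmask}"]
    return "\n".join(lines)

cfg = router_config("edge-kochi-1", {"eth0": "10.20.30.1/24"})
```

In a real pipeline the interface data would come from an inventory source and the rendered text would be pushed by Ansible rather than printed.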
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be responsible for solution design, architecture blueprints, cost estimates of components, and detailed documentation. Proactively identifying data-driven cost optimization opportunities for customers and supporting their team to achieve the same will be a key part of your role. You will also need to perform proof of concept on new services/features launched by AWS and integrate them with existing systems for improved performance and cost savings. Independently reviewing client infrastructure, conducting cost optimization audits, and well-architected reviews to identify cost inefficiencies like underutilized resources, architectural pitfalls, and pricing options will be crucial. Implementing governance standards such as resource tagging, account structure, provisioning, permissions, and access is also part of the job. Building a cost-aware ecosystem and enhancing cost visibility through alerting and reporting will be essential tasks. To be successful in this role, you should have a B.E/B.Tech/MCA degree with a minimum of 4+ years of experience working on the AWS cloud. A deep understanding of AWS cloud offerings and consumption models is required, along with proficiency in scripting languages like Python and Bash. Experience in DevOps practices and effective communication skills to engage stakeholders ranging from entry-level to C-suite is necessary. It would be advantageous if you have experience with third-party cost optimization tools like CloudCheckr, CloudAbility, CloudHealth, etc. Additionally, familiarity with AWS billing constructs including pricing options like On-demand, Reserved/Savings Plan, Spot, Cost and Usage Reports, and AWS Cost Management Tools would be beneficial. Possessing certifications such as AWS Certified SysOps Associate, AWS Certified Solutions Architect Associate, AWS Certified Solutions Architect Professional, or AWS Certified DevOps Professional is a plus. 
Prior experience in client communications, being a self-starter, and the ability to deliver under critical timelines are desirable traits for this role.
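Identifying "pricing options like On-demand, Reserved/Savings Plan" opportunities, as the posting puts it, often comes down to a break-even calculation. A hedged sketch with illustrative numbers only (these are not real AWS prices):

```python
def ri_breakeven_months(on_demand_hourly: float, ri_hourly: float,
                        upfront: float) -> float:
    """Months until a partial-upfront reservation beats on-demand pricing,
    assuming 730 hours/month of steady usage."""
    hourly_saving = on_demand_hourly - ri_hourly
    if hourly_saving <= 0:
        return float("inf")  # reservation never pays off
    return upfront / (hourly_saving * 730)

# Illustrative figures: $0.10/hr on-demand, $0.04/hr reserved, $175.20 upfront.
months = ri_breakeven_months(on_demand_hourly=0.10, ri_hourly=0.04, upfront=175.2)
# 175.2 / (0.06 * 730) = 4.0 months to break even
```

A cost-optimization audit would run this kind of comparison per instance family against actual Cost and Usage Report data.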
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You are a skilled Database Engineer responsible for designing, building, and maintaining reliable database systems to support applications and data infrastructure. Your expertise in database architecture, data modeling, and performance tuning, coupled with hands-on experience in SQL and NoSQL systems, is crucial for this role.

Your primary responsibilities will include designing and implementing scalable, high-performing database architectures; optimizing complex queries, stored procedures, and indexing strategies; collaborating with backend engineers and data teams to model databases; performing data migrations, transformations, and integrations; and ensuring data consistency, integrity, and availability across distributed systems. You will also develop and maintain ETL pipelines, monitor database performance, automate repetitive tasks, deploy schema changes, and assist with database security practices.

To excel in this role, you must have strong experience in relational databases such as PostgreSQL, MySQL, MS SQL Server, or Oracle; proficiency in writing optimized SQL queries; experience with NoSQL databases like MongoDB, Cassandra, DynamoDB, or Redis; a solid understanding of database design principles; and expertise in Oracle and GoldenGate. Additionally, hands-on experience with ETL pipelines, data transformation, scripting, version control systems, DevOps tools, cloud database services, data backup, disaster recovery, and high-availability setups is essential.

This is a full-time position located in Indore, requiring a minimum of 4 years of relevant experience. If you are passionate about database engineering, data management, and system performance optimization, we encourage you to apply and be part of our dynamic team.

Please note that this job description is sourced from hirist.tech.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be joining our client's team as a Site Reliability Engineer, where your main responsibility will be to ensure the reliability and uptime of critical services. Your focus will include Kubernetes administration, CentOS servers, Java application support, incident management, and change management. The ideal candidate for this role will have strong experience with ArgoCD for Kubernetes management, Linux skills, basic scripting knowledge, and familiarity with modern monitoring, alerting, and automation tools. We are looking for a self-motivated individual with excellent communication skills, both oral and written, who can work effectively both independently and collaboratively. Your responsibilities will include monitoring, maintaining, and managing applications on CentOS servers to ensure high availability and performance. You will be conducting routine tasks for system and application maintenance and following SOPs to correct or prevent issues. Responding to and managing running incidents, including post-mortem meetings, root cause analysis, and timely resolution will also be part of your responsibilities. Additionally, you will be monitoring production systems, applications, and overall performance, using tools to detect abnormal behaviors in the software and collecting information to help developers understand the issues. Security checks, running meetings with business partners, writing and maintaining policy and procedure documents, writing scripts or code as necessary, and learning from post-mortems to prevent new incidents are also key aspects of the role. 
Technical skills required for this position include:
- 5+ years of experience in a SaaS and cloud environment
- Administration of Kubernetes clusters, including management of applications using ArgoCD
- Linux scripting to automate routine tasks and improve operational efficiency
- Experience with database systems like MySQL and DB2
- Experience as a Linux (CentOS/RHEL) administrator
- Understanding of change management procedures and enforcement of safe, compliant changes to production environments
- Knowledge of on-call responsibilities and maintaining on-call management tools
- Experience with managing deployments using Jenkins
- Prior experience with monitoring tools like New Relic, Splunk, and Nagios
- Experience with log aggregation tools such as Splunk, Loki, or Grafana
- Strong scripting knowledge in one of Python, Ruby, Bash, Java, or Go
- Experience with API programming and integrating tools like Jira, Slack, xMatters, or PagerDuty

If you are a dedicated professional who thrives in a high-pressure environment and enjoys working on critical services, this opportunity could be a great fit for you.
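The monitoring duty described above, "using tools to detect abnormal behaviors in the software", often starts with computing error rates from aggregated logs. As an illustration only (the `HH:MM:SS LEVEL message` line format is a made-up stand-in for real Splunk or Loki query output):

```python
from collections import Counter

def error_rate_by_minute(log_lines: list[str]) -> dict[str, float]:
    """Fraction of ERROR lines per minute bucket.

    Assumes each line starts with 'HH:MM:SS LEVEL ...'; a real pipeline
    would pull these from a log aggregator instead of a list.
    """
    totals, errors = Counter(), Counter()
    for line in log_lines:
        timestamp, level, *_ = line.split(" ", 2)
        minute = timestamp[:5]  # bucket by HH:MM
        totals[minute] += 1
        if level == "ERROR":
            errors[minute] += 1
    return {m: errors[m] / totals[m] for m in totals}

logs = [
    "10:00:01 INFO ok",
    "10:00:20 ERROR db timeout",
    "10:01:05 INFO ok",
    "10:01:40 INFO ok",
]
rates = error_rate_by_minute(logs)  # → {"10:00": 0.5, "10:01": 0.0}
```

An alerting rule would fire when a bucket's rate crosses a threshold, feeding a pager tool like xMatters or PagerDuty.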
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Haryana
On-site
Join our Privileged Access Management (PAM) team and you will have the opportunity to work in a collaborative and dynamic global environment. Our team is focused on enabling privileged access management tools and frameworks for our businesses.

At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 34 markets and with 55 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone, no matter what role, contributes ideas and drives outcomes.

You have a significant role to play in the organization's risk management by strengthening security controls. You will support the uplift of our Secrets Management platform with contemporary solutions for various workloads to manage Macquarie's privileged accounts, unmanaged secrets, and access. Overall, you will help deliver more sustainable and effective controls across the organization.

**What you offer:**
- 9-12 years of experience in Java development with strong technical knowledge of REST-based microservice architecture design.
- Proficiency in Java, Spring Boot, Hibernate, React.js, Go, Python, Bash, and SQL.
- Expertise in cloud technologies, preferably AWS, and CI/CD pipeline tools.
- Experience with HashiCorp Vault, Terraform OSS/Enterprise, and Camunda is advantageous.
- Strong troubleshooting skills with significant experience in DevOps culture.

We love hearing from anyone inspired to build a better future with us; if you're excited about the role or working at Macquarie, we encourage you to apply.

**About Technology:**
Technology enables every aspect of Macquarie, for our people, our customers, and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.
**Our commitment to diversity, equity, and inclusion:** Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have over 10 years of experience working in a large enterprise with diverse teams, including at least 6 years of expertise in APM and monitoring technologies and a minimum of 3 years of experience with ELK.

Your responsibilities will include designing and implementing efficient log shipping and data ingestion processes, collaborating with development and operations teams to enhance logging capabilities, and configuring components of the Elastic Stack such as Filebeat, Metricbeat, Winlogbeat, Logstash, and Kibana. Additionally, you will be required to create and maintain comprehensive documentation for Elastic Stack configurations, ensure seamless integration between the various components, enhance Kibana dashboards and visualizations, and manage on-premise Elasticsearch clusters.

Hands-on experience in scripting and programming languages like Python, Ansible, and Bash, as well as knowledge of security hardening, vulnerability/compliance, and CI/CD deployment pipelines, are essential for this role. You should also have a strong understanding of performance monitoring, metrics, planning, and management, and the ability to apply systematic and creative problem-solving approaches. Experience in application onboarding, influencing other teams to adopt best practices, effective communication skills, and familiarity with tools like ServiceNow, Confluence, and JIRA are highly desirable. An understanding of SRE and DevOps principles is also crucial.

In terms of technical skills, you should be proficient in APM tools (ELK, AppDynamics, PagerDuty), programming languages such as Java, .NET, and Python, Linux and Windows operating systems, automation tools including GitLab and Ansible, container orchestration with Kubernetes, and cloud platforms like Microsoft Azure and AWS. If you meet these qualifications and are interested in this opportunity, please share your resume with gopinath.sonaiyan@flyerssoft.com.
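The data-ingestion side of this role revolves around Elasticsearch's `_bulk` API, which takes newline-delimited JSON: an action line naming the target index, then the document itself. A minimal Python sketch (the index name and log documents are invented for illustration):

```python
import json

def build_bulk_payload(index, docs):
    """Build an NDJSON body for Elasticsearch's _bulk API.

    Each document is preceded by an action line naming the target index,
    and the body must end with a trailing newline.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = build_bulk_payload("app-logs", [
    {"level": "ERROR", "message": "disk full", "host": "web-01"},
    {"level": "INFO", "message": "rotated log", "host": "web-02"},
])
```

The same payload shape is what Logstash and the Beats shippers produce under the hood when they flush batches to the cluster.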
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Field Bioinformatics Scientist at MGI, you will play a crucial role in designing, developing, optimizing, and maintaining in-house bioinformatics pipelines and various genomics applications. Your responsibilities will include understanding customer research questions, providing tailored solutions, and addressing technical issues from global sales and customers. You will collaborate with various teams such as FAS, FSE, Product Managers, and R&D to provide feedback, develop customized solutions, and support solution provision.

Your role will also involve staying updated on sequencing, bioinformatics, MGI products, and industry trends, evaluating the suitability of new software and tools for different applications, and applying best-practice statistical and computational methods. Additionally, you will be responsible for providing technical pre-sales and post-sales support services, conducting training sessions and webinars for customers and the front-end sales team, and managing documentation such as training materials, FAQs, slides, videos, SOPs, and knowledge base content.

The ideal candidate for this position should hold a Bachelor's degree or higher with at least 3 years of experience in academic or industrial settings in Bioinformatics, Computational Biology, Biostatistics, or Computer Science. You should have expertise in developing next-generation sequencing data analysis methods for DNA, RNA, and long-read analysis, along with proficiency in scripting languages like Python, Bash, Perl, C++, and R. Excellent communication skills, the ability to work independently and collaboratively, manage tasks efficiently, and adapt to a fast-paced environment are essential for this role. Experience with version control systems such as Git, a wide range of bioinformatics tools, and proficiency in Linux operating systems like Ubuntu, Fedora, and CentOS will be advantageous.
At MGI, we believe in promoting advanced life science tools for future healthcare and transforming lives for the better. We encourage innovation, bold decision-making, and a commitment to improving the world we live in. Join us in leading life science innovation and contributing to a healthier and longer life for everyone. With a focus on transparency, fairness, and a friendly environment, we value our employees as partners and prioritize their physical and mental well-being. Embrace our agile management approach, enjoy independence with guidance, and foster a balanced life-work culture as we strive towards a brighter and more equal future together. #Omicsforall
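One small example of the scripting this role calls for: parsing FASTA records and computing GC content, a routine QC metric in sequencing pipelines. A self-contained Python sketch (the sequence data is invented for illustration):

```python
def parse_fasta(text):
    """Parse FASTA-formatted text into a {header: sequence} dict."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:], []
        else:
            chunks.append(line)
    if header is not None:
        records[header] = "".join(chunks)
    return records

def gc_content(seq):
    """Fraction of G/C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

demo = parse_fasta(">chr_demo\nATGC\nGGCC\n>read_1\nATAT\n")
```

A real pipeline would lean on established parsers (e.g. Biopython), but the record structure being handled is exactly this.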
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You are a motivated Trainee AWS Engineer with 0-6 months of experience, eager to start your career in cloud computing. You will receive hands-on training and mentorship to work on real-world projects, manage and optimize AWS services, support cloud infrastructure, and contribute to the deployment and maintenance of cloud-based solutions.

In this role, you will engage in on-the-job training to gain a deep understanding of AWS services, architecture, and best practices. Participation in training sessions and certification programs will help you build expertise in AWS. You will assist in monitoring and maintaining AWS infrastructure for optimal performance, troubleshooting issues under the guidance of senior engineers.

As a Trainee AWS Engineer, you will learn to deploy and configure AWS resources such as EC2 instances, S3 buckets, and RDS databases. Supporting automation of infrastructure tasks using AWS tools like CloudFormation, Lambda, and IAM will be a key aspect. You will also assist in implementing security best practices, ensuring compliance with organizational policies, and learning about identity and access management in AWS.

Documentation and reporting tasks will involve creating and maintaining documentation for AWS infrastructure, configurations, and procedures. You will help generate reports on system performance, cost management, and security audits. Collaboration with senior AWS engineers and other IT teams to understand business requirements and provide basic support to internal teams for AWS-related inquiries and issues is essential.

To qualify for this position, you should have 0-6 months of experience in cloud computing, preferably with AWS, and a basic understanding of cloud concepts and AWS services. Familiarity with AWS services like EC2, S3, RDS, and IAM is a plus, along with basic knowledge of networking, Linux, and scripting languages (e.g., Python, Bash). Your eagerness to learn and adapt to new technologies and tools will be beneficial.
Strong analytical and problem-solving abilities, good communication and teamwork skills, and the ability to follow instructions and work independently are essential soft skills for this role. Being detail-oriented with strong organizational skills will contribute to your success as a Trainee AWS Engineer.

This position is based in Indore, Madhya Pradesh, and the working model is on-site. You should be willing to work at various locations in India and globally, including the USA, UK, Australia, and the Middle East. Apply now if you are ready to kickstart your career in cloud computing and contribute to cutting-edge cloud solutions.
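A CloudFormation template is just a JSON (or YAML) document with a fixed anatomy: a format version, a `Resources` map, and per-resource `Type` and `Properties`. A Python sketch of assembling one, with an illustrative S3 bucket as the only resource (the logical ID and description are made up):

```python
import json

def make_template(bucket_logical_id="DemoBucket"):
    """Assemble a minimal CloudFormation template as a plain dict.

    The structure (AWSTemplateFormatVersion / Resources / Type / Properties)
    follows the documented template anatomy; the bucket itself is a
    placeholder resource for illustration.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Illustrative template: one S3 bucket with versioning.",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
    }

template_json = json.dumps(make_template(), indent=2)
```

Generating templates programmatically like this is the idea behind tools such as the AWS CDK and troposphere; for a trainee, writing the raw document first makes the structure easier to internalize.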
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Kochi, Kerala
On-site
As an AWS Cloud Engineer at our company based in Kerala, you will play a crucial role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure solutions on AWS. Your primary responsibility will be to collaborate closely with developers, DevOps engineers, and security teams to support cloud-native applications and business services.

Your key responsibilities will include designing, deploying, and maintaining cloud infrastructure using various AWS services such as EC2, S3, RDS, Lambda, and VPC. Additionally, you will be tasked with building and managing CI/CD pipelines, automating infrastructure provisioning using tools like Terraform or AWS CloudFormation, and monitoring and optimizing cloud resources through CloudWatch, CloudTrail, and other third-party tools.

Furthermore, you will be responsible for managing user permissions and security policies using IAM, ensuring compliance, implementing backup and disaster recovery plans, troubleshooting infrastructure issues, and responding to incidents promptly. It is essential that you stay updated with AWS best practices and new service releases to enhance our overall cloud infrastructure.

To be successful in this role, you should possess a minimum of 3 years of hands-on experience with AWS cloud services, a solid understanding of networking, security, and Linux system administration, as well as experience with DevOps practices and Infrastructure as Code (IaC). Proficiency in scripting languages such as Python and Bash, familiarity with containerization tools like Docker and Kubernetes (EKS preferred), and holding an AWS certification (e.g., AWS Solutions Architect Associate or higher) would be advantageous. It would be considered a plus if you have experience with multi-account AWS environments, exposure to serverless architecture (Lambda, API Gateway, Step Functions), and familiarity with cost optimization and the Well-Architected Framework.
Any previous experience in a fast-paced startup or SaaS environment would also be beneficial. Your expertise in AWS CloudFormation, Kubernetes (EKS), AWS services (EC2, S3, RDS, Lambda, VPC), CloudTrail, scripting (Python, Bash), CI/CD pipelines, CloudWatch, Docker, IAM, Terraform, and other cloud services will be invaluable in fulfilling the responsibilities of this role effectively.
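Troubleshooting transient infrastructure issues and responding to incidents often comes down to retrying with capped exponential backoff and jitter, the same strategy the AWS SDKs apply to throttled API calls. A standalone Python sketch (the `flaky` function simulates a service that fails twice before succeeding; all names are illustrative):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0,
                       retryable=(ConnectionError, TimeoutError),
                       sleep=time.sleep):
    """Retry fn() with capped exponential backoff and full jitter.

    The sleep function is injectable so the behavior can be tested
    without real delays.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))  # full jitter spreads retries out

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda _: None)
```

In production you would typically let boto3's built-in retry configuration handle this for AWS calls, reserving a helper like this for non-SDK dependencies.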
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a DevOps Engineer at Wabtec Corporation, you will play a crucial role in performing CI/CD and automation design/validation activities. Reporting to the Technical Project Manager and working closely with the software architect, you will be responsible for adhering to internal processes, including coding rules, and documenting implementations accurately. Your focus will be on meeting the Quality, Cost, and Time objectives set by the Technical Project Manager.

To qualify for this role, you should hold a Bachelor's or Master's degree in Engineering in Computer Science, IT, or a related field. You should have 6 to 10 years of hands-on experience as a DevOps Engineer and possess the following abilities:

- A good understanding of Linux systems and networking
- Proficiency in CI/CD tools like GitLab
- Knowledge of containerization technologies such as Docker
- Experience with scripting languages like Bash and Python
- Hands-on experience in setting up CI/CD pipelines and configuring virtual machines
- Familiarity with C/C++ build tools like CMake and Conan
- Expertise in setting up pipelines in GitLab for build, unit testing, and static analysis
- Experience with infrastructure-as-code tools like Terraform or Ansible
- Proficiency in monitoring and logging tools such as the ELK Stack or Prometheus/Grafana
- Strong problem-solving skills and the ability to troubleshoot production issues
- A passion for continuous learning and staying up-to-date with modern technologies and trends in the DevOps field
- Familiarity with project management and workflow tools like Jira, SPIRA, Teams Planner, and Polarion

In addition to technical skills, soft skills are also crucial for this role. You should have a good level of English proficiency, be autonomous, possess good interpersonal and communication skills, have strong synthesis skills, be a solid team player, and be able to handle multiple tasks efficiently.
At Wabtec, we are committed to embracing diversity and inclusion. We value the variety of experiences, expertise, and backgrounds that our employees bring and aim to create an inclusive environment where everyone belongs. By fostering a culture of leadership, diversity, and inclusion, we believe that we can harness the brightest minds to drive innovation and create limitless opportunities. If you are ready to join a global company that is revolutionizing the transportation industry and are passionate about driving exceptional results through continuous improvement, then we invite you to apply for the role of Lead/Engineer DevOps at Wabtec Corporation.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a highly skilled DevOps Lead with over 10 years of experience, responsible for leading a DevOps team and ensuring best practices in automation and infrastructure management. Your expertise includes Terraform, AWS, GitLab pipelines, and cloud cost optimization. You must possess strong leadership capabilities, hands-on technical skills, and the ability to drive automation, infrastructure as code (IaC), and efficient cloud cost management.

Your key responsibilities include leading and mentoring a team of DevOps engineers; designing, implementing, and maintaining Terraform infrastructure-as-code solutions for cloud deployments; managing and optimizing AWS infrastructure for high availability, scalability, and cost efficiency; maintaining CI/CD pipelines; supporting cloud cost optimization initiatives; collaborating with cross-functional teams; and ensuring system reliability and performance through monitoring, logging, and alerting solutions. You will also be required to build, deploy, automate, maintain, and support AWS cloud-based infrastructure, troubleshoot and resolve system issues, integrate with on-premises systems and third-party cloud applications, evaluate new cloud technology options, and design highly available BC/DR strategies for all cloud resources.

Your qualifications should include a Bachelor's degree in computer science or a related field, proficiency in AWS cloud services, expertise in infrastructure-as-code tools like Terraform, hands-on experience with scripting languages, a deep understanding of network architectures and cloud security protocols, knowledge of Linux and Windows server environments, and relevant certifications such as AWS Certified DevOps Engineer and Terraform Associate.
Critical competencies for your success in this role include technical proficiency, problem-solving skills, an innovation and continuous improvement mindset, effective communication skills, collaboration and teamwork abilities, adaptability to fast-paced environments, and a strong emphasis on security awareness within cloud-based infrastructures.
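Cloud cost optimization work of the kind described above often starts with rightsizing: flagging instances whose average CPU sits below a threshold and estimating the saving from moving them down a size. A toy Python sketch; the 20% threshold, the assumption that a smaller size costs half as much, and the fleet data are illustrative, not AWS figures:

```python
def rightsizing_savings(instances, cpu_threshold=20.0, downsize_factor=0.5):
    """Estimate monthly savings from downsizing underutilized instances.

    instances: list of dicts with 'name', 'avg_cpu' (percent), and
    'monthly_cost'. Any instance averaging below cpu_threshold is assumed
    to move to a size costing downsize_factor of the current one.
    """
    recommendations = []
    total = 0.0
    for inst in instances:
        if inst["avg_cpu"] < cpu_threshold:
            saving = inst["monthly_cost"] * (1 - downsize_factor)
            recommendations.append((inst["name"], round(saving, 2)))
            total += saving
    return recommendations, round(total, 2)

fleet = [
    {"name": "api-1", "avg_cpu": 12.0, "monthly_cost": 140.0},
    {"name": "db-1", "avg_cpu": 63.0, "monthly_cost": 410.0},
    {"name": "worker-1", "avg_cpu": 8.5, "monthly_cost": 70.0},
]
recs, monthly_saving = rightsizing_savings(fleet)
```

In practice the utilization numbers would come from CloudWatch metrics and the price deltas from actual instance pricing; the triage logic, though, looks much like this.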
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Delhi
On-site
As a DevOps Engineer at AuditorsDesk, you will be responsible for designing, deploying, and maintaining AWS infrastructure using Terraform for provisioning and configuration management. Your role will involve implementing and managing EC2 instances, application load balancers, and AWS WAF to ensure the security and efficiency of web applications. Collaborating with development and operations teams, you will integrate security practices throughout the software development lifecycle and automate testing and deployment processes using CI/CD pipelines.

You should have a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 5 to 10 years of experience working with AWS services and infrastructure. Proficiency in infrastructure as code (IaC) using Terraform, hands-on experience with load balancers, and knowledge of containerization technologies like Docker and Kubernetes are required. Additionally, familiarity with networking concepts, security protocols, scripting languages for automation, and troubleshooting skills are essential for this role.

Preferred qualifications include AWS certifications like AWS Certified Solutions Architect or AWS Certified DevOps Engineer, experience with infrastructure monitoring tools such as Prometheus, and knowledge of compliance frameworks like PCI-DSS and GDPR. Excellent communication skills and the ability to collaborate effectively with cross-functional teams are key attributes for success in this position.

This is a permanent, on-site position located in Delhi with compensation based on industry standards. If you are a proactive and detail-oriented professional with a passion for ensuring high availability and reliability of systems, we invite you to join our team at AuditorsDesk and contribute to making audit work paperless and efficient.
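AWS WAF's rate-based rules are, at heart, a rate limiter, and the classic token-bucket algorithm captures the idea: requests spend tokens, tokens refill steadily, and bursts beyond the bucket's capacity are rejected. A self-contained Python sketch with an injected clock so the behavior is deterministic (capacities and rates are illustrative):

```python
class TokenBucket:
    """Token-bucket rate limiter, the scheme behind rate-based WAF rules.

    capacity tokens are available in bursts; tokens refill at refill_rate
    per second. The clock is injected for testability.
    """
    def __init__(self, capacity, refill_rate, clock):
        self.capacity = float(capacity)
        self.refill_rate = float(refill_rate)
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

t = [0.0]
bucket = TokenBucket(capacity=3, refill_rate=1.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(4)]  # three pass, the fourth is rejected
t[0] = 2.0                                  # two seconds later: two tokens back
later = bucket.allow()
```

The managed WAF rule does this per source IP at the edge; the same structure also appears inside API gateways and reverse proxies.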
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Staff Engineer (SDE 4) at our esteemed company, you will be an integral part of our lean, high-impact team in Bangalore. We are looking for a seasoned and passionate individual dedicated to creating robust, scalable, and customer-centric products. Your role will involve owning significant technical and operational responsibilities, guiding projects from concept to delivery.

Your responsibilities will include collaborating closely with upper management, product, and engineering teams to gather and deeply understand feature requirements. You will be responsible for defining clear, scalable system designs and technical specifications that align with our product vision. Additionally, you will play a crucial role in breaking down complex tasks into manageable deliverables and driving their execution proactively.

Writing clean, maintainable, and scalable code will be a key part of your role, along with leading technical discussions, mentoring team members, and effectively communicating with stakeholders. Moreover, you will champion best practices in software development, testing, deployment, and system monitoring. Optimizing infrastructure for cost efficiency, stability, and performance will be another aspect of your responsibilities. This role serves as a direct pathway towards becoming an engineering leader, where you will be involved in recognizing, hiring, and grooming top engineering talent based on business needs.

To be successful in this role, you should have at least 5 years of full-stack engineering experience and proven experience as a tech lead in projects. Deep expertise in modern JavaScript, TypeScript, reactive frameworks, backend systems, and SQL databases is essential. You should also have familiarity with data stores, streaming services, Linux-based systems, containerization, orchestration tools, DevOps practices, cloud infrastructure providers, API design principles, and collaborative software development tools.
Furthermore, your passion for Agile methodologies, continuous learning, clean coding, and software design best practices will be valued in our fast-paced startup environment. You should be comfortable working from our office in Koramangala, Bangalore for at least 3 days per week. Join us at Hireginie, a prominent talent search company, where you will have the opportunity to make a significant impact and grow as a key member of our team.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Middleware Developer for Linux-based IVI Development with 3-5+ years of experience, you will be responsible for designing, developing, and integrating middleware components for our In-Vehicle Infotainment (IVI) system based on Linux. Your role will involve building and maintaining communication services, multimedia frameworks, and other platform services that connect the Board Support Package (BSP) with the Human-Machine Interface (HMI) layers.

Your key responsibilities will include developing and maintaining middleware components such as multimedia frameworks (GStreamer, PulseAudio), communication services (Bluetooth, Wi-Fi, GPS), and vehicle data interfaces. You will collaborate with BSP teams to ensure seamless integration of middleware with hardware and low-level software. Working with HMI developers, you will efficiently expose middleware services for UI consumption. Implementing inter-process communication (IPC) mechanisms and service discovery protocols will also be part of your tasks. Additionally, optimizing middleware performance and resource utilization on embedded Linux platforms, debugging and troubleshooting middleware issues, and participating in architectural discussions, code reviews, and documentation are essential aspects of your role. You will be responsible for ensuring that middleware complies with automotive standards and security best practices.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Software Engineering, or a related field, and have at least 3 years of experience in middleware development for embedded Linux systems. Strong knowledge of multimedia frameworks (GStreamer, PulseAudio) and networking protocols is required, along with experience in Bluetooth, Wi-Fi, GPS, and CAN bus communication protocols. Proficiency in C/C++ and scripting languages like Python or Bash is essential.
Familiarity with Linux IPC mechanisms (DBus, sockets), a good understanding of embedded Linux architecture and cross-layer integration, and strong problem-solving and collaborative skills are also necessary. Preferred skills for this role include experience in automotive IVI or embedded systems development, knowledge of Yocto Project or Buildroot build systems, familiarity with containerization (Docker) and CI/CD pipelines, understanding of automotive safety (ISO 26262) and cybersecurity requirements, and exposure to Agile development methodologies.
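The IPC plumbing between an HMI client and a middleware service can be illustrated with a toy request/response exchange over a socket pair, standing in for the UNIX-domain sockets or DBus a real IVI stack would use. In this Python sketch the speed value is hard-coded and all names are invented; a real service would read signals from the vehicle bus:

```python
import json
import socket
import threading

def vehicle_data_service(conn):
    """Toy middleware service: answers one JSON request over a socket."""
    request = json.loads(conn.recv(1024).decode())
    if request.get("get") == "speed":
        reply = {"speed_kmh": 72, "unit": "km/h"}  # hard-coded for illustration
    else:
        reply = {"error": "unknown signal"}
    conn.sendall(json.dumps(reply).encode())
    conn.close()

# socketpair gives two connected endpoints in one process, so the
# client/service exchange can be demonstrated without a real socket file.
server_end, client_end = socket.socketpair()
worker = threading.Thread(target=vehicle_data_service, args=(server_end,))
worker.start()
client_end.sendall(json.dumps({"get": "speed"}).encode())
response = json.loads(client_end.recv(1024).decode())
worker.join()
client_end.close()
```

Production middleware would add framing for messages larger than one read, a service-discovery step, and a schema for the signals, but the request/reply shape is the same.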
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Maharashtra
On-site
The ideal candidate for this role should have completed a Bachelor of Engineering in IT, Computers, Electronics, or Telecommunication. You must possess strong skills in object-oriented programming and relational database management concepts, as well as proficiency in Python. In addition, knowledge of databases such as PostgreSQL and MySQL, along with programming languages like PHP, Bash, C, and C++, would be considered advantageous. It is crucial that you have excellent verbal and written communication skills in English to effectively collaborate and convey ideas within the team.
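Object-oriented programming and relational database concepts meet naturally in a small repository class. A self-contained Python sketch using an in-memory SQLite database (the table, columns, and data are illustrative; a PostgreSQL or MySQL driver would slot in behind the same interface):

```python
import sqlite3

class EmployeeRepository:
    """Small repository class tying OOP design to relational storage."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
        )

    def add(self, name, dept):
        # Parameterized queries avoid SQL injection and quoting bugs.
        cur = self.conn.execute(
            "INSERT INTO employees (name, dept) VALUES (?, ?)", (name, dept)
        )
        self.conn.commit()
        return cur.lastrowid

    def by_department(self, dept):
        rows = self.conn.execute(
            "SELECT name FROM employees WHERE dept = ? ORDER BY name", (dept,)
        )
        return [name for (name,) in rows]

repo = EmployeeRepository()
repo.add("Asha", "engineering")
repo.add("Ravi", "engineering")
repo.add("Meera", "sales")
engineers = repo.by_department("engineering")
```

The repository pattern keeps SQL in one place, which is exactly the separation of concerns an interviewer probing both OOP and RDBMS fundamentals tends to look for.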
Posted 1 week ago