2.0 - 3.0 years
16 - 31 Lacs
Vapi
Work from Office
Role & responsibilities
- Monitor real-time metrics, dashboards, and alerts in Datadog to ensure platform uptime and performance.
- Investigate and respond to alerts using Datadog APM, infrastructure monitoring, log analysis, and custom monitors.
- Triage incoming support tickets from Odoo Helpdesk and Jira, resolving issues or escalating to the proper technical teams.
- Support backend services (.NET Core microservices on AWS Lambda) and Angular-based frontend applications.
- Analyze and troubleshoot logs and traces using OpenSearch and Datadog.
- Assist with user configuration tasks, permissions, access management, and integration support.
- Work with QA and developers to reproduce bugs, validate resolutions, and perform basic platform tests.
- Automate recurring support and diagnostic tasks via scripting (Bash, Python, PowerShell, etc.).
- Contribute to internal documentation in Confluence (how-tos, known issues, FAQs, playbooks).
- Collaborate with developers, DevOps, and QA teams to improve reliability and customer satisfaction.

Preferred candidate profile
- 2+ years of experience in technical support, DevOps support, or platform engineering.
- Proficiency in using Datadog for application monitoring, performance tracing, alerting, and dashboarding.
- Experience working with distributed systems and cloud-native applications, especially on AWS.
- Ability to analyze logs and metrics from OpenSearch or similar tools.
- Familiarity with APIs, HTTP troubleshooting, and debugging tools (Postman, browser dev tools).
- Strong problem-solving and communication skills.
- Familiarity with .NET Core, AWS Lambda, and serverless architecture.
- Exposure to Odoo, Jira, and Confluence.
- Understanding of Angular apps and basic frontend diagnostics.
- Experience automating investigations or support workflows.
- Knowledge of DevOps practices and CI/CD tools.
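The "automate recurring support and diagnostic tasks" bullet is the kind of work that usually starts as a small script. As a minimal, hypothetical Python sketch (the log format and endpoint names are invented for illustration, not the platform's actual logs):

```python
from collections import Counter

def summarize_errors(log_lines):
    """Count HTTP 5xx responses per endpoint from simple space-delimited
    log lines of the form "<METHOD> <PATH> <STATUS>" (hypothetical format)."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines rather than crash mid-triage
        method, path, status = parts
        if status.startswith("5"):
            counts[path] += 1
    return counts

logs = [
    "GET /api/orders 200",
    "POST /api/orders 502",
    "GET /api/users 500",
    "POST /api/orders 503",
]
print(summarize_errors(logs))  # Counter({'/api/orders': 2, '/api/users': 1})
```

A summary like this can feed a daily triage report or a custom Datadog monitor.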
Posted 19 hours ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You will be responsible for providing 1st line support for all Ticketmaster alerts and queries. Additionally, you will perform on-call duty as part of a global team monitoring the availability and performance of ticketing systems and APIs. Your role will involve resolving advanced issues, providing advanced troubleshooting for escalations, and offering Subject Matter Expertise to cross-functional teams on threat issues.

Key Responsibilities:
- Provide 1st line support for all Ticketmaster alerts and queries
- Perform on-call duty to monitor availability and performance of ticketing systems and APIs
- Resolve advanced issues and provide troubleshooting for escalations
- Provide Subject Matter Expertise to cross-functional teams on threat issues
- Drive continuous improvements to products, tools, APIs, and processes
- Independently learn new technologies and master Ticketmaster ticketing platforms

Qualifications Required:
- BA/BS degree in computer science or related field, or relevant work experience
- Experience with bot detection and blocking systems
- Troubleshooting skills ranging from diagnosing low-level request issues to large-scale issues
- Proficiency in Bash/Python/Go for operations scripts and text processing
- Working knowledge of the HTTP protocol, basic web systems, and analysis tools
- Experience working with a 24/7 shift-based team
- Experience in a global, fast-paced environment
- Strong English language communication skills
- Ability to collaborate closely with remote team members
- Embrace continuous learning and improvement

In addition to the above responsibilities and qualifications, you will work on automation to reduce toil. You should be passionate, motivated, resourceful, innovative, forward-thinking, and committed to adapting quickly in a fast-paced environment. Your ability to work autonomously while sharing knowledge with technology teams and embracing continuous learning will be essential for success in this role.
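Bot detection at its simplest is rate analysis over access logs. A minimal Python sketch of the idea (the input shape and thresholds are hypothetical, not Ticketmaster's actual tooling):

```python
from collections import defaultdict

def flag_bursty_clients(events, window=60, threshold=100):
    """Flag client IPs that exceed `threshold` requests within any
    `window`-second span. `events` is a list of (timestamp, ip) tuples,
    assumed sorted by timestamp (hypothetical input shape)."""
    seen = defaultdict(list)
    flagged = set()
    for ts, ip in events:
        times = seen[ip]
        times.append(ts)
        # drop timestamps that fell out of the sliding window
        while times and times[0] <= ts - window:
            times.pop(0)
        if len(times) > threshold:
            flagged.add(ip)
    return flagged

events = [(t, "10.0.0.1") for t in range(0, 120)]   # 120 requests in 120 s
events += [(t, "10.0.0.2") for t in (0, 30, 59)]    # 3 requests, benign
events.sort()
print(flag_bursty_clients(events, window=60, threshold=50))  # {'10.0.0.1'}
```

Real blocking systems layer many more signals (headers, TLS fingerprints, behavior), but the sliding-window rate check above is the common starting point.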
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior DevOps Engineer at BMC, you will play a crucial role in designing, developing, and implementing complex applications using the latest technologies. Your contributions will directly impact BMC's success and help shape your own career growth. Here's what you can expect in this exciting position:

- Participate in all aspects of SaaS product development, from requirements analysis to product release and sustaining.
- Drive the adoption of the DevOps process and tools across the organization.
- Learn and implement cutting-edge technologies and tools to build best-in-class enterprise SaaS solutions.
- Deliver high-quality enterprise SaaS offerings on schedule.
- Develop the Continuous Delivery pipeline.

Qualifications required for this role:
- Embrace, live, and breathe BMC values every day.
- Minimum 8-10 years of experience in a DevOps/SRE role.
- Implemented CI/CD pipelines with best practices.
- Experience in Kubernetes.
- Knowledge of AWS/Azure cloud implementation.
- Proficiency with Git repositories and JIRA.
- Passion for quality, demonstrating creativity and innovation in enhancing the product.
- Strong problem-solving skills and analytical abilities.
- Effective communication skills and a team player.

Nice-to-have skills that our team can help you develop:
- SRE practices.
- Familiarity with GitHub/Spinnaker/Jenkins/Maven/JIRA, etc.
- Automation playbooks using Ansible.
- Infrastructure as code (IaC) using Terraform/CloudFormation templates/ARM templates.
- Scripting in Bash/Python/Go.
- Experience with microservices, databases, and API implementation.
- Proficiency in monitoring tools such as Prometheus/Jaeger/Grafana/AppDynamics, Datadog, Nagios, etc.
- Understanding of Agile/Scrum processes.

At BMC, our culture revolves around our people. With over 6000 brilliant minds working together globally, we value authenticity and individuality. Your unique contributions will be celebrated, and we encourage diversity in experiences and backgrounds.
If you're excited about BMC and this opportunity but unsure whether you meet all the qualifications, we still encourage you to apply. We are committed to attracting talent from diverse backgrounds and experiences to foster innovation and collaboration. Additionally, BMC offers a comprehensive employee compensation package that goes beyond salary, including variable plans and country-specific benefits. We ensure fair and transparent compensation practices for all our employees. If you have had a break in your career, BMC welcomes candidates looking to re-enter the workforce. Visit [BMC Returnship](https://bmcrecruit.avature.net/returnship) to learn more about this opportunity and how to apply.
Posted 1 day ago
4.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Role Overview
We are seeking an experienced Senior Cybersecurity Engineer with 4+ years of expertise to lead our security initiatives and fortify our healthcare platform. In this role, you will design, architect, and implement robust security strategies for cloud environments and enterprise applications, ensuring the protection of sensitive healthcare data while enabling rapid business growth. Your mastery of incident response, threat detection, and comprehensive security operations will be critical in maintaining the trust and security of our user base.

Key Responsibilities
- Lead and execute advanced incident response, red team operations, and threat detection initiatives across critical security domains.
- Architect and implement secure cloud systems and network infrastructures for AWS, GCP, and Kubernetes environments using industry frameworks such as CIS, NIST, and ISO 27001.
- Develop and maintain robust vulnerability assessment, penetration testing, and threat hunting strategies to preemptively combat malicious activities.
- Design and enforce security automation and orchestration through scripting (Python, C#, JSON, shell scripting) and Infrastructure as Code tools like Terraform.
- Collaborate with cross-functional teams to integrate security posture monitoring, conduct regular audits, and ensure compliance with regulatory standards (ISO 27001, GDPR, SOC 2).

What We Look For
- Expert-level proficiency in cloud security architectures with deep knowledge of AWS and GCP security services.
- A minimum of 4 years of hands-on experience in cybersecurity roles within highly regulated industries.
- Advanced capabilities in incident response, forensics, and threat detection, including red team operations and vulnerability assessments.
- Strong programming and scripting skills in Python, Bash, and related technologies, combined with expertise in Infrastructure as Code.
- Proven experience with enterprise security tools and frameworks, along with a solid understanding of modern identity protocols and zero-trust architectures.

Qualifications
- Required experience: 4+ years in cybersecurity roles with a proven track record of securing production environments at scale.
- Bonus points: professional certifications such as AWS Certified Security Specialty, Google Professional Cloud Security Engineer, CISSP, CKS, CISM, CompTIA Security+, or CEH; experience in healthcare, fintech, or similarly regulated industries.

What We Offer
- Competitive compensation and benefits package
- Professional growth and career development opportunities
- Collaborative and innovative work environment
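Security automation of the sort described above can start as small as a file-integrity baseline. A minimal Python sketch using only the standard library (the {path: digest} baseline format is an assumption for illustration, not a specific tool's schema):

```python
import hashlib
import tempfile

def file_digest(path, chunk_size=65536):
    """Stream a file through SHA-256 so large files can be baselined
    without loading them fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_baseline(old, new):
    """Compare two {path: digest} baselines; report paths whose digest changed."""
    return sorted(p for p in old if p in new and old[p] != new[p])

# demo: hash a small temp file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
print(file_digest(path))  # full SHA-256 hex digest of b"hello"
```

Periodic digest comparisons like this feed tamper-detection alerts; production systems add signed baselines and exclusion rules on top.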
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Software Engineer at Nbyula, you will be responsible for training on production-level systems to build next-generation web and/or native solutions that are efficacious, highly extensible, and scalable. You will be part of a full-time terraformer position, where you are expected to quickly absorb functional domain skills, business and market domain knowledge, and Nbyula's core organizational values.

**Roles, Responsibilities & Expectations:**
- Define architecture considering deployment of code and libraries, and focus on scalability & extensibility
- Understand user requirements and use cases, translating them into tangible deliverables
- Participate in all development phases from design to implementation, unit testing, and release
- Write clean, maintainable code while iterating and shipping rapidly
- Ensure timely delivery of a fully functional product while maintaining high quality
- Take ownership of deliverables beyond technical expertise
- Prominently used technologies: cloud computing, OSS, schemaless DBs, machine learning, web crawling, data scraping, Progressive Web Apps, continuous integration & deployment, RDBMS, Nix

**Qualifications & Skills:**
- Open to candidates with B.Sc./BE/B.Tech/M.Tech/BCA/MCA specializing in Computer Science/Information Technology/Data Science/Cloud Computing/Artificial Intelligence/Cyber Security/Blockchain/Mobile Computing
- Hands-on experience building web/client-server systems using Python, Django/Flask, Nginx, MySQL/MongoDB/Redis/Neo4j, etc.
- Proficiency in HTML5, CSS, client-server architectures, asynchronous request handling, and partial page updates
- Product-oriented mindset focusing on solving problems for a large user base
- Good knowledge of Python, JavaScript, React or an equivalent JS framework, and Express
- Exceptional analytical skills and ability to meet time-bound deliverables
- Excitement to work in a fast-paced startup environment solving challenging technology problems
- Ability to script in Bash; experience with browser automation tools like iMacros or Selenium is a plus
- Knowledge of Go, TypeScript, and Java is a bonus

**About Us:**
Nbyula is a German technology brand headquartered in Berlin with a Research & Development Center in Bengaluru, Karnataka, operational since 2014. Nbyula aims to create an open world leveraging cutting-edge technologies for international work and studies, empowering "Skillizens without Borders".

**Job Perks:**
- Opportunity to work on groundbreaking projects in the Ed-tech space
- Gaming chairs, live music, access to books, snacks, and health coverage
- Long weekend breaks, birthday leave, and annual holiday break
- Company-aided accommodation, stock options, casual work attire policy
- Respect for logical and analytical minds with innovative ideas

Join Nbyula to shape the future of technology and education! [Learn more about us here](https://nbyula.com/about-us)

For any queries regarding this position or the application process, contact people@nbyula.com.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview:
You are a highly experienced and technically proficient Senior DevOps Engineer who will play a crucial role in driving the DevOps initiatives of the company. Your responsibilities will include ensuring the reliability, scalability, and security of the infrastructure and applications, mentoring junior engineers, leading complex projects, and contributing to the overall DevOps strategy.

Key Responsibilities:
- Provide technical leadership and guidance to junior DevOps engineers.
- Mentor team members on best practices and emerging technologies.
- Lead complex projects from inception to completion.
- Contribute to the development and execution of the overall DevOps strategy.
- Design and implement scalable, resilient, and secure infrastructure architectures.
- Evaluate and recommend new tools and technologies.
- Design, implement, and maintain infrastructure as code using tools like Terraform, CloudFormation, or Ansible.
- Optimize infrastructure for performance and cost-efficiency.
- Develop and enforce IaC standards and best practices.
- Lead the development and maintenance of CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or Azure DevOps.
- Implement advanced CI/CD techniques such as blue/green deployments, canary releases, and feature flags.
- Manage and automate system configurations using tools like Ansible, Chef, or Puppet.
- Develop and maintain configuration management policies and standards.
- Lead the implementation and management of containerized applications using Docker.
- Design and manage Kubernetes clusters for high availability and scalability.
- Design and implement comprehensive monitoring and logging solutions using tools like Prometheus, Grafana, the ELK stack (Elasticsearch, Logstash, Kibana), or Splunk.
- Develop and maintain dashboards and alerts to proactively identify and resolve system issues.
- Architect and manage cloud infrastructure on platforms like AWS, Azure, or Google Cloud Platform (GCP).
- Utilize cloud-native services to build scalable and resilient applications.
- Implement security best practices throughout the CI/CD pipeline and infrastructure.
- Conduct security audits and vulnerability assessments.
- Collaborate effectively with development, operations, and security teams.
- Troubleshoot and resolve complex system issues.
- Create and maintain clear and comprehensive documentation for all processes and systems.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in a DevOps or related role.
- Proven experience leading complex projects and mentoring junior engineers.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
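Canary releases and feature flags, mentioned above, commonly rely on deterministic user bucketing: hash the user into a 0-99 bucket and compare against the rollout percentage. A minimal Python sketch of the idea (not any specific vendor's flag API; names are illustrative):

```python
import hashlib

def in_canary(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket a user into a canary cohort: hash the
    (flag, user) pair into 0-99 and compare against the rollout percent.
    The same user always lands in the same bucket, so a 5% canary can be
    widened to 25% without reshuffling users already in the cohort."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# widening the rollout never ejects a user already in the cohort
users = [f"user-{i}" for i in range(1000)]
at_10 = {u for u in users if in_canary(u, "new-checkout", 10)}
at_50 = {u for u in users if in_canary(u, "new-checkout", 50)}
print(at_10 <= at_50)  # True (monotone rollout)
```

Hashing per flag (rather than per user alone) keeps cohorts independent across flags, so one user is not always the guinea pig for every experiment.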
Posted 1 day ago
5.0 - 10.0 years
5 - 11 Lacs
Vadodara
Work from Office
Role & responsibilities
- Configure, deploy, and maintain Azure, AWS, and Hyper-V Server environments, ensuring adherence to industry best practices and company standards.
- Administer and address user support tickets in alignment with ITIL principles, ensuring timely resolutions and maintaining superior levels of user satisfaction.
- Configure, deploy, and maintain Active Directory services, such as Domain Controllers, user/group management, and Group Policy administration.
- Administer and maintain a hybrid Active Directory environment, ensuring timely synchronization between on-premises and cloud infrastructure.
- Administer and maintain Microsoft 365 services, such as Exchange Online, SharePoint, Teams, and other tenant services.
- Collaborate closely with the IT Security team to implement security best practices and controls, ensuring the confidentiality, integrity, and availability of our infrastructure.
- Administer and maintain regular backups and disaster recovery strategies for all infrastructure services, ensuring business continuity.

Preferred candidate profile
- 5+ years of experience in system administration, with a focus on Windows Server environments.
- MCSE or equivalent certification.
- Proficiency managing on-premises/cloud Windows Server deployments in Hyper-V, Azure, and AWS.
- Proficiency in hybrid Active Directory environments.
- Proficiency in scripting languages (e.g., PowerShell, Bash).
- Excellent verbal, written, and interpersonal skills with a proven ability to communicate at various levels within the organization and with external parties.
- Ability and confidence to take calculated risks in uncertain or ambiguous situations.
- Excellent organizational skills and demonstrated ability to manage multiple competing priorities and assignments.
- Passion for delivering business value and willingness to perform other assigned tasks.
- Ability to deliver regular quick updates and system solutions, and to communicate issues to management.
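Scripting in a role like this often boils down to small health checks, for example verifying that backup jobs actually produced fresh artifacts. A hypothetical Python sketch (the *.bak layout is an invented example; in practice this might equally be written in PowerShell):

```python
import os
import tempfile
import time
from pathlib import Path

def stale_backups(backup_dir, max_age_hours=24, pattern="*.bak"):
    """Return names of backup files older than `max_age_hours`, a quick
    check that nightly jobs produced fresh artifacts. (The *.bak layout
    is a hypothetical example.)"""
    cutoff = time.time() - max_age_hours * 3600
    return sorted(
        p.name for p in Path(backup_dir).glob(pattern)
        if p.stat().st_mtime < cutoff
    )

# demo: one fresh and one week-old backup in a temp directory
d = tempfile.mkdtemp()
Path(d, "today.bak").touch()
old = Path(d, "lastweek.bak")
old.touch()
week_ago = time.time() - 7 * 24 * 3600
os.utime(old, (week_ago, week_ago))  # age the file artificially
print(stale_backups(d))  # ['lastweek.bak']
```

Wired into a scheduled task that raises a ticket when the list is non-empty, a check like this turns silent backup failures into visible ones.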
Posted 1 day ago
4.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Role Overview:
You will be responsible for managing and implementing infrastructure on Google Cloud Platform (GCP) with a focus on Terraform and Kubernetes. Your strong understanding of networking fundamentals and experience with AWS/GCP will be crucial in this role. Additionally, you will need to have proficiency in infrastructure as code (IaC) tools, Git-based workflows, and CI/CD pipelines.

Key Responsibilities:
- Hands-on experience in GCP, Terraform, Kubernetes, and DevOps
- Proficiency in infrastructure as code (IaC) tools, especially Terraform
- Experience with Git-based workflows, version control, and pull requests
- Familiarity with CI/CD pipelines and tools, with preference for Codefresh
- Knowledge of scripting languages such as Bash, Python, or Groovy
- Experience with containerization (Docker) and orchestration (Kubernetes - EKS/GKE)
- Familiarity with collaboration and ITSM tools like Jira, ServiceNow, and Confluence
- Strong troubleshooting and communication skills
- IAM management: ability to manage IAM roles, policies, and permissions for users, groups, and service accounts in AWS and GCP

Qualifications Required:
- Total experience of 7-9 years
- Relevant experience in GCP, Terraform, Kubernetes, and DevOps
- Proficiency in managing IAM roles, policies, and permissions for users, groups, and service accounts in AWS and GCP
- Experience with containerization (Docker) and orchestration (Kubernetes - EKS/GKE)
- Strong troubleshooting and communication skills

Please note that this is a contract position for a duration of 12 months. The work schedule is Monday to Friday and the work location is in person. Benefits include health insurance and provident fund.
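IAM management across AWS and GCP frequently means reconciling current bindings against a desired state. A simplified Python sketch of that reconciliation (the {role: members} shape is a stand-in for real policy documents, not an actual cloud API):

```python
def diff_bindings(current, desired):
    """Compare two IAM policies expressed as {role: set(members)} (a
    simplified stand-in for AWS/GCP policy documents) and report the
    grants to add and the grants to revoke to reach the desired state."""
    to_add, to_revoke = {}, {}
    for role in set(current) | set(desired):
        have = current.get(role, set())
        want = desired.get(role, set())
        if want - have:
            to_add[role] = sorted(want - have)
        if have - want:
            to_revoke[role] = sorted(have - want)
    return to_add, to_revoke

current = {"roles/viewer": {"alice", "bob"}, "roles/admin": {"carol"}}
desired = {"roles/viewer": {"alice"}, "roles/editor": {"bob"}}
add, revoke = diff_bindings(current, desired)
print(add)     # bob gains editor
print(revoke)  # bob loses viewer, carol loses admin
```

Terraform's plan/apply cycle does this diffing for declared resources; a script like this is useful for auditing drift in bindings created outside IaC.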
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an Observability Developer at GlobalLogic, you will play a crucial role in alert configuration, workflow automation, and AI-driven solutions within the observability stack. Your responsibilities will involve designing and implementing alerting rules, configuring alert routing and escalation policies, building workflow integrations, developing AI-based solutions, collaborating with teams, and automating alert lifecycle management. Here's a breakdown of your role:

**Role Overview:**
You will be a proactive and technically versatile Observability Developer responsible for ensuring actionable alerts, well-integrated workflows, and AI-enhanced issue resolution within the observability stack.

**Key Responsibilities:**
- Design and implement alerting rules for metrics, logs, and traces using tools like Grafana, Prometheus, or similar.
- Configure alert routing and escalation policies integrated with collaboration and incident management platforms (e.g., Slack, PagerDuty, ServiceNow, Opsgenie).
- Build and maintain workflow integrations between observability platforms and ticketing systems, CMDBs, and automation tools.
- Develop or integrate AI-based solutions for mapping telemetry signals, porting configurations, and reducing alert fatigue.
- Collaborate with DevOps, SRE, and development teams to contextualize alerts and ensure they are meaningful.
- Automate alert lifecycle management through CI/CD and GitOps pipelines.
- Maintain observability integration documentation and provide support to teams using alerting and workflows.

**Qualification Required:**
- 3+ years of experience in DevOps, SRE, Observability, or Integration Development roles.
- Hands-on experience with alert configuration in tools like Grafana, Prometheus, Alertmanager, or similar.
- Experience integrating alerts with operational tools such as Slack, PagerDuty, Opsgenie, or ServiceNow.
- Solid understanding of observability concepts (metrics, logs, traces).
- Scripting or development experience in Python, Bash, or similar.
- Experience with REST APIs and webhooks for creating workflow integrations.
- Familiarity with CI/CD and GitOps tooling (e.g., ArgoCD, GitHub Workflows).

GlobalLogic prioritizes a culture of caring, continuous learning and development, interesting and meaningful work, balance and flexibility, and integrity. As a part of the GlobalLogic team, you will have the opportunity to work on impactful projects and collaborate with forward-thinking companies to shape the digital landscape.
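Reducing alert fatigue, one of the responsibilities above, can start with time-window deduplication before alerts reach a ticketing webhook (Alertmanager's grouping does something similar natively). A minimal Python sketch of the mechanism (the alert schema is hypothetical):

```python
def dedupe_alerts(alerts, window=300):
    """Suppress repeats of the same (name, service) fingerprint arriving
    within `window` seconds of the last forwarded copy. `alerts` is a
    list of dicts with 'name', 'service', and 'ts' keys (hypothetical
    schema), sorted by timestamp."""
    last_sent = {}
    forwarded = []
    for a in alerts:
        key = (a["name"], a["service"])
        if key not in last_sent or a["ts"] - last_sent[key] >= window:
            forwarded.append(a)
            last_sent[key] = a["ts"]
    return forwarded

alerts = [
    {"name": "HighCPU", "service": "api", "ts": 0},
    {"name": "HighCPU", "service": "api", "ts": 60},   # repeat, suppressed
    {"name": "HighCPU", "service": "db",  "ts": 90},   # different service
    {"name": "HighCPU", "service": "api", "ts": 400},  # window elapsed
]
print(len(dedupe_alerts(alerts)))  # 3
```

The same fingerprinting idea extends to grouping related alerts into a single incident ticket rather than one ticket per firing rule.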
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Role Overview:
You will be a crucial part of our team as a DevOps Engineer with 3 to 5 years of hands-on experience. Your role will involve enhancing development and operations processes to ensure seamless integration and delivery of software solutions. If you are passionate about automation, scalability, and collaboration, we welcome you to join our dynamic and innovative team.

Key Responsibilities:
- Implement and maintain infrastructure automation using tools such as Terraform or CloudFormation to ensure efficient and scalable cloud environments.
- Design, implement, and maintain CI/CD pipelines to automate the software delivery process, enabling rapid and reliable releases.
- Utilize configuration management tools like Ansible, Puppet, or Chef to manage and automate server configurations, ensuring consistency and reliability across environments.
- Work with containerization technologies such as Docker to create, deploy, and manage containerized applications.
- Implement container orchestration solutions like Docker swarm mode or Kubernetes for managing the deployment, scaling, and operation of application containers.
- Set up and maintain monitoring and logging solutions (e.g., Prometheus, ELK stack) to proactively identify and address issues, ensuring system reliability and performance.
- Collaborate with development, operations, and QA teams to streamline processes, improve efficiency, and promote a culture of collaboration.
- Implement security best practices throughout the DevOps lifecycle, including code scanning, vulnerability assessments, and access controls.
- Work with API gateways and security services to manage and secure on-prem hosted services.
- Work with various cloud providers (AWS, Azure, GCP) to deploy and manage infrastructure and applications.
- Develop and maintain scripts (Bash, Python, etc.) for automation of routine tasks, system provisioning, and troubleshooting.
- Create and maintain comprehensive documentation for processes, configurations, and troubleshooting steps.
- Identify and implement optimizations to enhance system performance, scalability, and reliability.

Qualification Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven 3 to 5 years of experience as a DevOps Engineer.
- Strong expertise in scripting languages (Bash, Python, etc.).
- Hands-on experience with CI/CD tools (Jenkins, GitLab CI, etc.).
- Proficiency in containerization and orchestration tools (Docker swarm mode, Kubernetes).
- Experience with infrastructure as code tools (Ansible, Terraform, CloudFormation).
- Hands-on experience with API gateways (WSO2 knowledge preferred).
- Hands-on experience securing APIs using tools like Cloudflare.
- Hands-on experience working with different databases (SQL: MySQL; NoSQL: MongoDB, Elasticsearch).
- Solid understanding of cloud computing concepts and services.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Certifications such as AWS Certified DevOps Engineer or Certified Kubernetes Administrator (CKA) are a plus.
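Automation scripts against cloud APIs usually need retry logic, since provisioning calls can fail transiently. A small generic Python sketch of exponential backoff (delays shortened for illustration; real scripts would add jitter and catch narrower exception types):

```python
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Retry a flaky operation with exponentially growing delays
    (0.01 s, 0.02 s, 0.04 s, ...), re-raising the last error once the
    attempts are exhausted. Common when scripting against eventually
    consistent cloud APIs."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * (2 ** i))

# demo: an operation that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "ok"

print(retry(flaky), calls["n"])  # ok 3
```

Wrapped around provisioning or health-check calls, a helper like this keeps pipeline scripts from failing on the first transient error.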
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
Role Overview:
As a subject matter expert in technology, you will play a crucial role in modernizing the Cloud platform for Retail banking and integrated products at Finastra. Your primary focus will be on leveraging modern cloud technologies such as Azure, Azure DevOps, containers, Kubernetes, and service mesh to improve monitoring, logging, developer efficiency, and continuous integration and deployments. Additionally, you will mentor junior team members to help them grow as technologists and be responsible for designing and managing DevOps infrastructure services at scale.

Key Responsibilities:
- Design and manage highly scalable, reliable, and fault-tolerant CI/CD pipeline infrastructure and networking for the SaaS-based Retail Banking solution at Finastra.
- Enhance the monitoring and security posture of infrastructure/applications by implementing protective measures efficiently to achieve better ROI and TCO.
- Collaborate with Dev Engineering teams to automate application/infrastructure/network processes and meet long-term business needs.
- Research and evaluate parallel products, define and govern application/infrastructure baselines.
- Communicate and collaborate effectively across distributed teams in a global environment.
- Implement toolsets to facilitate developers' use of containers, Kubernetes, and service mesh.
- Develop tools for developers, operations, and release teams to utilize Kubernetes and service mesh seamlessly.
- Ensure platform security and monitoring using tools like Prometheus/Grafana and implement best practices.
- Deliver zero-defect and highly resilient code, exceeding availability and defect SLAs.
- Present technical solutions, capabilities, considerations, and features in business terms.
- Convert user stories into detailed development tasks.
- Communicate status, issues, and risks precisely and in a timely manner.

Qualifications Required:
- 8 to 10 years of hands-on experience in SaaS/IaaS with expertise in DevOps techniques and continuous integration solutions using Ansible, Bash, Docker, Git, and Maven.
- Proficiency in load balancing, rate limiting, traffic shaping, and managing connectivity between applications and networks.
- Deep knowledge of Linux, container technologies (e.g., Docker), Terraform, Kubernetes administration, and cluster orchestrators/schedulers (Kubernetes, Mesos).
- Strong programming and scripting fundamentals in languages and frameworks such as Spring, Python, Java, and Ruby.
- Understanding of distributed system fundamentals, the interactive application development paradigm, memory management, performance optimization, database interactions, network programming, and more.
- Working knowledge of Oracle, DB2, PostgreSQL, or MongoDB databases.
- Experience with microservices architecture, RESTful services, CI/CD, and at least one cloud service provider such as Azure AKS, AWS EKS, GCP GKE, OpenShift, or Rancher.
- Familiarity with Kubernetes controllers/operators and Docker; CKA or CKAD certification; and operational experience deploying and managing Kubernetes.
- AZ-400 certification is a plus.
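Rate limiting and traffic shaping, listed in the qualifications, are most often explained via the token-bucket algorithm: tokens refill at a steady rate up to a burst capacity, and each admitted request spends one. A minimal Python sketch (time is passed in explicitly to keep the demo deterministic):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens per second refill
    up to `capacity`; each allowed request spends one token. Capacity
    bounds the burst size while rate bounds the sustained throughput."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full: an initial burst is allowed
        self.last = now

    def allow(self, now):
        # refill proportionally to elapsed time, clamped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
# burst of two admitted, third rejected, then refill admits again
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 2.5)])  # [True, True, False, True]
```

Gateways and service meshes implement the same idea per client or per route; the parameters map directly onto "burst" and "requests per second" settings.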
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview:
Gruve, an innovative software services startup, is looking for an experienced Kubernetes Data Center Administrator to manage and maintain multiple infrastructure systems running Kubernetes across data centers. The ideal candidate will play a crucial role in creating, managing, and debugging Kubernetes clusters and services, ensuring operational excellence through collaboration with IT teams. This position requires deep technical expertise in Kubernetes, virtualization, and data center operations, as well as strong experience in ITSM platforms and compliance management.

Key Responsibilities:
- Design, deploy, and maintain multiple Kubernetes clusters across data center environments.
- Manage and troubleshoot Kubernetes services including MinIO (object storage), Prometheus (monitoring), Istio (service mesh), MongoDB, and PostgreSQL (databases).
- Collaborate with IT teams to support operational needs such as change management, patch and software update cycles, data protection, disaster recovery planning, DCIM systems, compliance audits, and reporting.
- Diagnose and resolve complex Kubernetes configuration issues.
- Modify platform components and scripts to enhance reliability and performance.
- Administer and integrate multiple ITSM platforms for asset management, change management, incident management, and problem management.
- Maintain detailed documentation of Kubernetes environments and operational procedures.
- Ensure systems meet regulatory and organizational compliance standards.

Qualifications:
- 8-10 years of experience in Kubernetes administration and virtualization technologies.
- Proven experience managing production-grade Kubernetes clusters and services.
- Strong understanding of data center operations and infrastructure systems.
- Hands-on experience with ITSM platforms (e.g., Jira Service Management).
- Proficiency in scripting (e.g., Bash, Python) and automation tools.
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana).
- Experience with disaster recovery planning and compliance audits.
- At least one CNCF Kubernetes certification (e.g., CKA, CKS, CKAD).
- Experience with container security and policy enforcement preferred.
- Familiarity with GitOps workflows and tools like ArgoCD or Flux preferred.
- Knowledge of infrastructure-as-code tools (e.g., Terraform, Ansible) preferred.
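Day-to-day cluster administration often involves scripting over `kubectl` output. A hedged Python sketch that flags not-ready pods from a `kubectl get pods -o json`-style dict (the fields follow the core/v1 Pod schema; error handling is trimmed for brevity):

```python
def not_ready_pods(pod_list):
    """Scan a `kubectl get pods -o json`-style dict and return the names
    of pods with any container not ready. Pods with no reported container
    statuses (e.g., still Pending) are also flagged."""
    bad = []
    for item in pod_list.get("items", []):
        statuses = item.get("status", {}).get("containerStatuses", [])
        if not statuses or not all(c.get("ready") for c in statuses):
            bad.append(item["metadata"]["name"])
    return bad

# trimmed sample resembling real kubectl JSON output
pods = {"items": [
    {"metadata": {"name": "api-0"},
     "status": {"containerStatuses": [{"ready": True}]}},
    {"metadata": {"name": "db-0"},
     "status": {"containerStatuses": [{"ready": True}, {"ready": False}]}},
]}
print(not_ready_pods(pods))  # ['db-0']
```

A check like this, fed from `kubectl get pods -A -o json`, is an easy building block for the reliability scripts and ITSM integrations described above.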
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Robotics Software Intern at Unbox Robotics, you will have the opportunity to be part of a team that is revolutionizing warehouses and distribution centers with cutting-edge mobile robotics systems. Unbox Robotics is known for developing the world's most compact, powerful, and flexible AI-powered parcel sorting robotic system, aimed at increasing efficiency and productivity in warehousing. Founded in 2019 and backed by marquee investors and angels, we are seeking individuals who are passionate about innovation and eager to shape the future of on-demand robotics logistics solutions at our Pune, India office.

**Roles & Responsibilities:**
- Collaborate with the team to design, develop, and debug software systems.
- Deploy software solutions in coordination with the product development team.
- Integrate existing/new software into the principal architecture, meeting performance metrics and complexity requirements.
- Evaluate technical solutions, develop POCs to assess feasibility, and provide alternatives and recommendations.
- Develop efficient tools and evaluation pipelines for the Software System Modules.

**Requirements:**
- Excellent knowledge of Data Structures and Algorithms with a strong understanding of OOPs concepts.
- Proficiency in C++ and familiarity with scripting languages like Python, Bash, etc.
- Experience with the Linux development environment and build mechanisms such as CMake.
- Familiarity with robotics frameworks like ROS, ROS2, and simulators including Gazebo, Stage, Webots, etc.
- Knowledge of SLAM-related algorithms like GMapping, Google Cartographer, RTAB-Map, GraphSLAM, etc.
- Understanding of path planning algorithms like A*, Dijkstra, RRTs, etc.
- Familiarity with communication protocols like TCP, MQTT, DDS, ZMQ, etc.
- Experience integrating sensors like IMUs, LIDAR, etc.
- Strong mathematical foundation and understanding of robot kinematics.

**Good to Have:**
- Experience working with autonomous mobile robots.
- Previous involvement with robotic systems and competitions like e-Yantra, WRO, etc.
- Contributions to open source projects.

**Eligible candidates:**
- BS or MS in Computer Science or a relevant engineering discipline (Applied Physics, Mechanical, Electrical, or Computer Engineering), or equivalent work experience.

**We Value:**
- Continuous learning mindset to become a Subject Matter Expert.
- Proven track record in a start-up environment focusing on innovations.
- Exposure to a high-paced working environment.
- Ability to conduct detailed procedures efficiently in a time-constrained environment.

Join us at Unbox Robotics in Pune and be part of a dynamic team that is shaping the future of warehousing and distribution centers with state-of-the-art robotics solutions. Your dedication and expertise will contribute to our mission of transforming the logistics industry.
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Development Engineer at Unbox Robotics, you will be part of a team creating the world's most compact, powerful, and flexible mobile robotics systems for warehouses and distribution centers. Your role will involve collaborating with the team to design, develop, and debug software systems; architecting, building, and deploying software solutions; integrating existing/new software into the principal architecture; evaluating technical solutions; and developing efficient tools and evaluation pipelines for the Software System Modules. You will also design, build, and maintain efficient, reusable, and reliable C++ code, implement performance and quality modules, and identify bottlenecks and bugs to devise solutions.

Key Responsibilities:
- Collaborate with the team to design, develop, and debug software systems.
- Architect, build, and deploy software solutions in coordination with the product development team.
- Integrate existing/new software into the principal architecture, meeting performance metrics and complexity requirements.
- Evaluate technical solutions, develop POCs, and provide alternatives and recommendations.
- Build efficient tools and evaluation pipelines for the Software System Modules.
- Design, build, and maintain efficient, reusable, and reliable C++ code.
- Implement performance and quality modules.
- Identify bottlenecks and bugs, and devise solutions to these problems.

Qualifications Required:
- Strong software design skills with expertise in debugging and performance analysis.
- Excellent knowledge of Data Structures, Algorithms, and OOP concepts.
- Proficiency in C++ and scripting languages such as Python and Bash.
- Experience with the Linux development environment and build mechanisms such as CMake.
- Familiarity with robotics frameworks such as ROS and ROS2, and simulators like Gazebo, Stage, and Webots.
- Knowledge of SLAM-related algorithms and motion-planning algorithms.
- Strong understanding of and experience with communication protocols and sensor integration.
- Experience with version control systems and unit-testing frameworks.
- Solid mathematical foundation and understanding of robot kinematics.

Good to Have:
- Experience in development using design patterns.
- Past relevant experience with SMACH, Behavior Trees, and Finite State Machines.
- Experience with AMRs, AGVs, multi-agent systems, fleet management, and robotics logistics solutions.
- Knowledge of perception algorithms, computer vision, testing frameworks, and CI/CD pipelines.
- Understanding of frameworks such as RESTful services, APIs, MySQL, MongoDB, and modular architectures.

Join Unbox Robotics in Pune, India, and be a part of our team of thinkers, innovators, and doers shaping the future of on-demand robotics logistics solutions. We value candidates who are constant learners, have a proven record in a startup environment, exposure to high-paced working environments, and the ability to carry out detailed procedures in a time-constrained environment.
Posted 2 days ago
4.0 - 8.0 years
12 - 18 Lacs
hyderabad
Work from Office
Responsibilities:
- Design, implement & optimize high-performance computing solutions using EDA tools, Python, Perl & Bash.
Posted 2 days ago
6.0 - 11.0 years
8 - 12 Lacs
noida
Work from Office
Mandatory Skills:
- 6+ years of strong hands-on experience in Ab Initio and Conduct>It technology.
- Very good knowledge of data warehouse and ETL concepts, Unix commands, and shell/bash scripting.
- Strong knowledge of Oracle databases, SQL queries, and query performance tuning.
- Good understanding of processing data using XML/JSON.
- Experience with the Agile (Scrum) methodology and software development approach.
- Must come from a financial services/capital markets/investment banking background.
- Great interpersonal and communication skills.
- Can work independently with minimum supervision.
- Good communication and analytical skills.
- Flexible and adaptable working style; able to work with multiple stakeholders.
- Open to working the UK shift (10 AM to 9 PM IST with 8 hours of productive work).

Desirable:
- Prior experience with Continuous Flows would be nice to have.
- Prior experience working with AWS, APIs, Kafka, and XML would be an added advantage.
- Certification in the banking domain.

Education Qualification: B.Tech or MCA

Mandatory Competencies:
- ETL - Ab Initio
- Operating System - Unix
- Database - Oracle - Database Design
- Database - Database Programming - SQL
- Agile - SCRUM
- Middleware - Message-Oriented Middleware - Messaging (JMS, ActiveMQ, RabbitMQ, Kafka, SQS, ASB, etc.)
- ETL - AWS Glue
- Beh - Communication and collaboration
Posted 2 days ago
8.0 - 13.0 years
9 - 14 Lacs
gurugram
Work from Office
Shift: 7 pm IST to 4 am IST

Responsibilities:
- Architect, engineer, implement, and administer Splunk solutions in highly available, redundant, distributed computing environments.
- Lead design and deployment of new Splunk environments, including clustered, multi-site, and large-scale configurations.
- Perform Splunk forwarder deployment, configuration, and troubleshooting across diverse platforms.
- Integrate, curate, and normalize diverse log sources into Splunk, ensuring CIM compliance and high data fidelity.
- Configure and maintain Splunk dashboards, searches, and alerts to meet PCI DSS logging requirements, and deliver evidentiary reports to auditors to support compliance verification.
- Develop advanced content for SIEM correlation, including custom correlation searches, dashboards, and alerts.
- Administer, maintain, and tune Splunk components (Indexers, Search Heads, Forwarders, Cluster Masters, Deployer, Deployment Server, and License Master).
- Proactively monitor platform health using internal logs, KPIs, and custom monitoring solutions to identify and address performance bottlenecks.
- Lead capacity planning, storage forecasting, and continuity of operations for large Splunk deployments.
- Optimize Splunk performance through configuration tuning, search optimization, and data model acceleration strategies.
- Troubleshoot complex ingestion, performance, and search-related issues, identifying root causes and implementing sustainable fixes or workarounds.
- Reproduce customer or internal issues, document findings, and work with Splunk Support or vendor engineers for resolution.
- Create, maintain, and enforce Splunk engineering documentation, including SOPs, design diagrams, architecture runbooks, and KB articles.
- Develop custom scripts and automation tools (e.g., Python, Bash, PowerShell) to improve Splunk administration, onboarding, and operational workflows.
- Utilize Splunk APIs for integration with enterprise tools and automation frameworks.
- Serve as a technical escalation point for Splunk Engineer I/II and Splunk Admin roles.
- Administer, tune, and troubleshoot Splunk Enterprise Security, maintaining data models, correlation searches, and the notable events pipeline.
- Configure and manage HEC (HTTP Event Collector) connections and onboard new data sources.
- Manage Splunk RBAC (Role-Based Access Control), including SAML and AD group integrations for search heads and API endpoints.
- Collaborate with security, infrastructure, application, and DevOps teams to ensure Splunk aligns with enterprise monitoring, compliance, and operational goals.
- Design and implement Splunk solutions supporting compliance frameworks (e.g., PCI DSS, HIPAA, SOX), including dashboard/report development and audit evidence.
- Research, evaluate, and implement new Splunk apps, add-ons, and integrations to enhance platform capabilities.
- Mentor junior Splunk engineers and guide cross-functional teams on Splunk best practices, search optimization, and data onboarding.

Requirements:
- 8+ years of IT experience in technical engineering, security operations, or infrastructure roles.
- 5+ years of direct, hands-on Splunk engineering and administration experience in large-scale, distributed environments.
- Expert-level knowledge of Splunk Enterprise and Splunk Enterprise Security, including architecture, clustering, and scaling strategies.
- Proficiency in Linux/Unix administration and shell scripting.
- Strong knowledge of Splunk APIs, including their use for automation and tool integrations.
- Expertise in regex, field extractions, and key-value parsing.
- Strong programming/scripting skills in one or more languages (Python, Bash, PowerShell, Perl, JavaScript).
- Experience with storage systems (DAS, SAN, object storage) and an understanding of their performance implications for Splunk indexing.
- Solid understanding of networking (switches, routers, firewalls, load balancers, DNS, SSL/TLS) and how it impacts Splunk architecture.
- Familiarity with enterprise management and automation tools.
- Experience with Splunk ITSI (preferred) and other premium Splunk apps.
- Strong knowledge of data formats including JSON, XML, and CSV.
- Demonstrated experience delivering Splunk-based compliance reporting and audit support.
- Strong communication skills for interacting with technical and non-technical stakeholders.
- Proven ability to lead projects, mentor team members, and provide architectural guidance.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Systems, or a related technical field (or equivalent experience).
- Splunk Certified Architect and/or Splunk Certified Consultant preferred.
- Additional certifications in security, cloud, or automation tools are a plus.
Posted 2 days ago
6.0 - 8.0 years
8 - 13 Lacs
noida
Work from Office
Responsibilities:
- Design and implement cloud and hybrid (mix of cloud and on-premises) infrastructure solutions that include both virtualized compute and storage.
- Design and implement high-availability and disaster-recovery solutions that span the cloud and on-premises.
- Design and implement VPCs to deploy mission-critical production applications using AWS infrastructure services.
- Act as a Technical Lead for customer onboarding projects and work with customers to establish G2C connectivity.
- Install, configure, implement, and support Windows/Linux servers, including management of user/group accounts and policies and integration with Entra ID and MEDS.
- Manage the patching regime of all systems.
- Manage the global hosting asset inventory and perform lifecycle planning and execution.
- Manage monitoring systems such as Prometheus, Nagios, or equivalent.
- Manage other hosting-related technologies such as proxy/web filtering, load balancers, WAF, and backup and replication solutions (e.g., DRBD, GlusterFS, NetApp ONTAP).
- Provide L3 support for the cloud team on Azure, storage, networking, and Linux platforms.
- Create and review monthly operations reports.
- Define SOPs and participate in audits and DR activities.
- Ensure all solutions comply with corporate security policies and standards.
- Gather design requirements from stakeholders (e.g., business and application groups) and translate them into functional and technical requirements.
- Automate repeatable operations activities to optimize delivery time.
- Perform capacity planning on customer environments on Managed Cloud and provide recommendations for improving availability and cost optimization.
- Manage the lifecycle of all requests and incidents that arise, driving to the root cause of problems to prevent incidents from recurring.
- Participate in after-hours support (on call) and scheduled implementation activities as required.

Technical Skills/Experience:
- 6+ years of experience working on Azure infrastructure services (architecture/administration/operations).
- Strong working knowledge of Azure services: AKS, Storage Accounts, Load Balancers, Virtual Network, DNS, IAM, Vault, VPN, Private Link, ExpressRoute, WAF, Entra ID, Billing, Trusted Advisor, SSO, Monitor, Backup.
- Proficiency in Terraform, Python, Ruby, Bash, and PowerShell.
- Proficiency in Linux/Unix OS is a must.
- Good working knowledge of networking concepts and tools: routing, switching, firewall security, proxy, reverse proxy, HAProxy, Nginx.
- Experience using the tools Git, Jenkins, Puppet, and Ansible.
- Experience with system-hardening guidelines, e.g., CIS, NIST.
- Experience with infrastructure monitoring tools is required.
- Strong knowledge of the Kubernetes platform.
- Experience working with ticketing tools such as Salesforce, ServiceNow, and Jira is preferred.
- Requires minimal supervision and works well in a Production Operations team.
- Ability to work efficiently under pressure on non-routine and highly complex tasks.
- Team player with strong interpersonal, written, and verbal communication skills.
- Ability to work in a multicultural team spread across the globe (Europe, India, NA).
- Must have a customer-first mindset.
- Azure Solutions Architect Expert certification is a plus.
- Knowledge of AWS Cloud is a plus.

Education: Bachelor's or Master's degree in CS/CE or equivalent
Total Experience Expected: 06-08 years
Qualifications: B.E./B.Tech./MCA
Posted 2 days ago
6.0 - 8.0 years
8 - 10 Lacs
noida
Work from Office
1) Technical
Mandatory:
- Expert on any of the Axway products or any other competitor B2B product.
- Knowledge of the EDI domain and standards.
- Familiar with databases such as MySQL, DB2, and Oracle.
- Good working experience in Linux and Windows environments.
- Familiar with various network protocols, e.g., HTTP, FTP, SFTP, SMTP.
- Knowledge of SSL, SSH, LDAP/firewalls, and certificates.
- Knowledge of SOAP/REST APIs.
- Well versed in designing, developing, modifying, and debugging various technical issues.

Desirable:
- Familiar with Linux (RHEL/SUSE).
- Familiar with automation and pipeline tools for CI/CD (Ansible/Puppet, Git) usage and execution.
- Good to have experience with Bash and Python/Terraform.
- Familiar with any cloud (AWS or Azure).
- Familiarity with networking concepts is an add-on.

2) Operational
- 24x7x365 shift (with night shifts and India festivals/public holidays). On-call support from home.

3) Soft Skills / Experience
Mandatory:
- Good communication skills, both written and verbal.
- Experience on upgrade and migration projects.
- Client engagement experience.
- Knowledge of ITIL processes, including Incident/Problem/Change Management experience.

Desirable:
- Strong ability to map customer requirements to a solution approach.
- Excellent customer service skills, both verbal and written.
- Strong analytical skills and incisive thinking ability.
- Excellent client-interfacing and customer-coordination skills.
- Knowledge of product architecture.
- Autonomous with one programming/scripting language.

Total Experience Expected: 06-08 years
Qualifications: B.Tech(CS)/BE(CS)/MCA
Posted 2 days ago
10.0 - 15.0 years
10 - 14 Lacs
pune
Work from Office
We are seeking a skilled AWS DevOps Engineer with hands-on experience in Infrastructure as Code (IaC) using CloudFormation and Terraform. The ideal candidate will be responsible for designing, implementing, and maintaining scalable cloud infrastructure and CI/CD pipelines, ensuring high availability, security, and performance across environments.

Key Responsibilities:
- Design and manage AWS infrastructure using CloudFormation and Terraform.
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI.
- Automate deployment processes and infrastructure provisioning.
- Monitor and troubleshoot cloud environments to ensure optimal performance.
- Collaborate with development and operations teams to streamline workflows.
- Implement security best practices and compliance standards.
- Document infrastructure setups, processes, and configurations.

Required Skills & Experience:
- 10-15 years of experience in DevOps roles with a focus on AWS.
- Strong expertise in AWS services (EC2, ECS, Lambda, VPC, IAM, etc.).
- Hands-on experience with CloudFormation and Terraform for IaC.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with monitoring tools (CloudWatch, Prometheus, etc.).
- Familiarity with version control systems (Git) and agile methodologies.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certification.
- Experience in regulated industries (e.g., finance, healthcare).
- Knowledge of containerization (Docker, Kubernetes).
Posted 2 days ago
10.0 - 15.0 years
15 - 20 Lacs
pune
Work from Office
We are seeking a skilled AWS DevOps Engineer with hands-on experience in Infrastructure as Code (IaC) using CloudFormation and Terraform. The ideal candidate will be responsible for designing, implementing, and maintaining scalable cloud infrastructure and CI/CD pipelines, ensuring high availability, security, and performance across environments.

Key Responsibilities:
- Design and manage AWS infrastructure using CloudFormation and Terraform.
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI.
- Automate deployment processes and infrastructure provisioning.
- Monitor and troubleshoot cloud environments to ensure optimal performance.
- Collaborate with development and operations teams to streamline workflows.
- Implement security best practices and compliance standards.
- Document infrastructure setups, processes, and configurations.

Required Skills & Experience:
- 10-15 years of experience in DevOps roles with a focus on AWS.
- Strong expertise in AWS services (EC2, ECS, Lambda, VPC, IAM, etc.).
- Hands-on experience with CloudFormation and Terraform for IaC.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with monitoring tools (CloudWatch, Prometheus, etc.).
- Familiarity with version control systems (Git) and agile methodologies.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certification.
- Experience in regulated industries (e.g., finance, healthcare).
- Knowledge of containerization (Docker, Kubernetes).
Posted 2 days ago
8.0 - 10.0 years
7 - 11 Lacs
pune
Work from Office
Responsibilities:
- Analyse, design, engineer, deploy, and maintain global cyber security systems.
- Work closely with Project Managers, Technical Architects, 2nd-level support, and IT Business Analysts.
- Provide technical consultancy to the project team.
- Maintain technical documentation relevant to operations (operational manual, installation guide, etc.).

Job Requirements:
- Excellent knowledge of Red Hat Linux environments in a large enterprise.
- Bash scripting and Python programming skills (or equivalent programming experience).
- Skills to design, plan, and deliver solutions in a large-scale enterprise environment.
- Experience with change control using Git.
- Fluency in English.
- Technical communication and documentation skills.
- Experience with Microsoft Azure system development.
- Experience working in a large organization is a plus.
- Experience with SOAR, SIEM, and/or incident management systems is a plus.
Posted 2 days ago
6.0 - 12.0 years
8 - 12 Lacs
pune
Work from Office
- Experience in CI/CD with Azure DevOps & GitLab.
- Experience with Azure Cloud Platform foundational resources (AKS, Azure Databases, Virtual Networks, Function App, Web App, etc.).
- Experience implementing Azure Data Lake with Azure Data Factory, Azure Databricks (Python/Spark), ADLS, and Azure SQL Database.
- Sound knowledge of Azure Data Lake Storage, Storage Accounts & File Shares.
- Experience with Azure PowerShell and Azure CLI commands, and knowledge of ARM templates to create resources.
- Configured, deployed, and managed centrally provided common cloud services, e.g., virtual machines, Azure Data Lake services, Azure Data Factory, SQL, Azure Key Vault.
- Worked on synchronizing version updates between GitLab and Azure DevOps using pipelines.
- Experience with containers and orchestration technologies (Kubernetes, Helm charts, GitLab).
- Excellent Linux and Windows system administration skills, including Bash and PowerShell (or Python).
Posted 2 days ago
2.0 - 6.0 years
2 - 6 Lacs
hyderabad
Work from Office
Job Purpose
The Systems Operations Analyst is part of a support organization responsible for the daily operations of multiple industry-leading trading exchanges. This is a customer-facing position, providing immediate assistance to ICE/NYSE exchanges, back office, support personnel, and IT staff in an effort to achieve the highest customer satisfaction and minimize the impact of IT-related problems. This is a critical support role within the overall architecture of ICE/NYSE exchanges, divisions, and infrastructure. This is a 24x7 environment, and the position requires shift rotation and/or weekend work.

Responsibilities

Monitoring and Incident Management
- Monitor systems and applications within the production environment.
- Diagnose and fix incidents raised through monitoring tools, conference bridges, and chats.
- Work with and escalate to internal and external teams to implement incident fixes, workarounds, and data recovery.
- Open and update production incident tickets according to company standards.

Problem Management
- Investigate and update incident tickets with root cause and incident description, ensuring appropriate corrective-action follow-up tickets are assigned.
- Manage incident tickets to closure, ensuring incident details are complete and accurate and all corrective actions have been completed.

System and Application Production Readiness
- Work with internal and external teams to expand and maintain operational runbooks and other documentation.
- Check application and infrastructure availability and tasks at scheduled times.
- Configure monitoring tools and alarms.

Deployment Management
- Approve and execute production deployment tasks.
- Participate in disaster recovery, business continuity, and workplace recovery events.
- Participate in continuous improvement programs, such as trend analysis of recurring issues.
- Provide and report on performance metrics of the environment.
- Follow the documented handover process to bring the next shift up to speed and highlight priority items or issues.

Knowledge and Experience
- Bachelor's degree (IT-based) or experience within IT systems support and/or operational support of applications and databases in a Linux/Unix OS environment.
- Proficiency in Bash and working knowledge of a broad range of Linux core utilities and scripting.
- Working knowledge of networking, specifically TCP and UDP.
- Strong communication skills.
- High level of general IT skills with email and MS Office applications.
- Able to think logically and critically.
- Analytical problem-solving skills with an ability to identify root cause(s).
- Able to work as a team player across the organization.
- Able to build and maintain effective relationships with individuals and the team as a whole.
- Organized and decisive while under pressure.
- Excellent time management skills.
- Able to manage priorities and multi-task.
- Self-confident and assertive.
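The "Proficiency in Bash and working knowledge of a broad range of Linux core utilities" requirement above is the kind of skill best shown by example. Here is a minimal, purely hypothetical sketch of incident triage with awk and sort; the log format ("TIME LEVEL COMPONENT message") and component names are invented for illustration, not taken from the listing:

```shell
#!/usr/bin/env bash
# Sketch: count ERROR lines per component in a hypothetical
# "TIME LEVEL COMPONENT message" log, busiest component first.
set -euo pipefail

triage() {                       # $1 = path to log file
  awk '$2 == "ERROR" {count[$3]++}
       END {for (c in count) print count[c], c}' "$1" |
    sort -rn                     # numeric, descending
}

# Self-contained sample data so the sketch runs as-is.
log=$(mktemp)
cat > "$log" <<'EOF'
09:00:01 INFO gateway up
09:00:02 ERROR matcher timeout
09:00:03 ERROR matcher retry
09:00:04 ERROR feed gap
EOF
triage "$log"    # prints "2 matcher" then "1 feed"
rm -f "$log"
```

A one-liner like this narrows an incident to the noisiest subsystem before escalation, which is exactly the triage loop the role describes.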
Posted 2 days ago
5.0 - 8.0 years
15 - 30 Lacs
mumbai, bengaluru
Hybrid
About the Role
- Part of the Core DB Platform Operations team managing large-scale Greenplum & Postgres environments.
- Responsible for installation, patching, upgrades, automation, performance tuning, and troubleshooting.
- Collaborate with global engineering teams to ensure high availability and plant stability.

Key Responsibilities
- Install, configure, upgrade, and maintain Postgres (v12+) and Greenplum (v6+) databases.
- Perform performance tuning, SQL query optimization, and routine health checks.
- Plan and execute backup/restore, disaster recovery, and capacity planning.
- Implement HA/DR and replication, and manage clusters (e.g., Patroni/GoldenGate).
- Provide 24x7 production support with rotational on-call coverage.
- Diagnose and resolve complex issues across the application, network, database, and storage layers.
- Automate repetitive tasks using Ansible/Shell/Python/Perl scripting.
- Create best practices, documentation, and training for L1/L2 support teams.
- Coordinate hardware changes with Infra teams and support cloud migration projects.

Required Skills & Experience
- Greenplum DBA experience (v5/v6 or higher).
- PostgreSQL DBA experience (v12 or higher).
- Strong SQL performance tuning & troubleshooting skills.
- Proficiency in Linux/Unix OS, kernel/OS tuning, and system security.
- Expertise in automation tools: Ansible, Shell, Python, or Perl.
- Familiarity with cloud platforms (AWS/Azure/GCP) and data warehousing (Snowflake a plus).
- Knowledge of disaster recovery, high availability, and security best practices.
- Excellent communication, organizational, and problem-solving abilities.

Qualifications
- BE/BTech/MCA or equivalent in Computer Science/IT.
- 5+ years as a Database Engineer/DBA in production environments.

Why Join Us
- Work on mission-critical, global database platforms.
- Opportunity to lead automation and modernization initiatives.
- Competitive salary, performance bonuses, and a flexible, collaborative culture.
Posted 2 days ago
The bash job market in India is thriving with numerous opportunities for professionals who have expertise in bash scripting. Organizations across various industries are actively seeking individuals with these skills to streamline their operations and automate repetitive tasks. If you are a job seeker looking to explore bash jobs in India, read on to learn more about the job market, salary range, career progression, related skills, and interview questions.
India's major tech hubs are known for their vibrant tech industries and offer a plethora of opportunities for bash professionals.
The average salary range for bash professionals in India varies based on experience level. Entry-level positions may start at around INR 3-4 lakhs per annum, while experienced professionals can earn up to INR 12-15 lakhs per annum.
In the field of bash scripting, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving up to a Tech Lead role. With experience and expertise, individuals can also explore roles such as DevOps Engineer or Systems Administrator.
In addition to bash scripting, professionals in this field are often expected to have knowledge of:
- the Linux operating system
- shell scripting
- automation tools like Ansible
- version control systems like Git
- networking concepts
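To illustrate how these skills combine in practice, here is a small hypothetical automation sketch, the kind of glue script such roles routinely involve. The task (rotating old application logs), file names, and retention window are invented for illustration:

```shell
#!/usr/bin/env bash
# Illustrative glue script: archive logs older than a threshold
# using core utilities (find, gzip) -- a typical automation task.
set -euo pipefail

archive_old_logs() {             # $1 = log dir, $2 = age threshold in days
  local dir="$1" days="$2"
  mkdir -p "$dir/archive"
  # find selects *.log files not modified in the last $days days;
  # -print0 / read -d '' handles file names with spaces safely.
  find "$dir" -maxdepth 1 -name '*.log' -mtime +"$days" -print0 |
    while IFS= read -r -d '' f; do
      gzip -c "$f" > "$dir/archive/$(basename "$f").gz"
      rm -f "$f"
    done
}

# Self-contained demo on a temporary directory.
demo=$(mktemp -d)
printf 'line\n' > "$demo/app.log"
touch -d '10 days ago' "$demo/app.log"   # backdate so it qualifies
archive_old_logs "$demo" 7
ls "$demo/archive"                       # prints: app.log.gz
```

In a real deployment a script like this would typically be scheduled via cron and kept under Git version control, tying together several of the skills listed above.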
Here are some common bash interview questions:
- What is the purpose of the `chmod` command in bash? (basic)
- Explain the difference between the `grep` and `awk` commands. (medium)
- What is the significance of `#!` (shebang) at the beginning of a bash script? (basic)
- What is the use of the `cut` command in bash? (basic)
- What is `cron`? (medium)
- Explain the use of `$1` and `$@` in bash scripting. (medium)
- What is the `exec` command in bash? (advanced)
- What is the use of the `tr` command in bash? (basic)
- Explain the `export` command in bash. (medium)

As you prepare for bash job interviews in India, remember to showcase your expertise in bash scripting, along with related skills and experience. By mastering these interview questions and demonstrating your passion for automation and scripting, you can confidently pursue rewarding opportunities in the field. Good luck with your job search!
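Several of these topics can be demonstrated hands-on in one short script. The sketch below is entirely illustrative (the function and sample data are our own); it exercises the `#!` shebang, positional parameters `$1` and `$@`, and a `grep`/`awk`/`tr`/`cut` text-processing pipeline:

```shell
#!/usr/bin/env bash
# Demonstrates interview topics: the shebang, $1/$@, and a
# grep | awk | tr | cut pipeline.
set -euo pipefail

# A function receives its own positional parameters, like a script.
first_fields() {
  local pattern="$1"; shift      # $1: first argument
  local f
  for f in "$@"; do              # $@: all remaining arguments
    grep -- "$pattern" "$f" |    # keep matching lines
      awk '{print $2}' |         # print the second field
      tr 'a-z' 'A-Z' |           # upper-case it
      cut -c1-10                 # keep at most 10 characters
  done
}

# Sample data so the demo is self-contained.
log=$(mktemp)
printf 'ERROR disk full\nINFO ok\nERROR network timeout\n' > "$log"
first_fields ERROR "$log"        # prints DISK then NETWORK
rm -f "$log"
```

Being able to explain what each stage of such a pipeline does, and what `$1` versus `$@` expand to, covers a good share of the basic and medium questions above.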