
478 Bash Scripting Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Trantor is a leading provider of enterprise technology solutions and CaptiveCoE, specializing in digital transformations for global enterprises since 2012. With expertise in cloud strategy, cloud-native development, AI/ML, and security, Trantor empowers clients to harness the AWS cloud for innovation and operational excellence. The company is dedicated to delivering intelligent automation solutions and has extensive experience across Fintech, Martech, E-commerce, and Captive Centers.

We are seeking a full-time Senior Professional Services Engineer for a hybrid position based in Noida, offering flexibility for remote work. In this role, you will troubleshoot technical issues, provide professional services, ensure network security, offer technical support, and implement cybersecurity measures.

Responsibilities:
- Collaborate directly with customers to resolve complex post-sales issues, requiring thorough analysis and evaluation.
- Engage in technical discussions with multi-functional teams to foster transparency, leading to improved products, work environments, and cybersecurity.
- Provide timely technical assistance, especially in high-pressure situations.
- Deliver implementation and technical support to customers and partners.
- Offer configuration guidance, troubleshooting advice, and best practices.
- Manage support cases to ensure issues are documented, tracked, and resolved promptly.
- Conduct fault isolation and root cause analysis for technical issues.
- Publish Technical Support Bulletins and maintain technical documentation in the Knowledge Base.
- Review technical documentation for training materials, marketing collateral, manuals, and troubleshooting guides.

Shift: APAC/EMEA/EST

Candidate Profile
Experience: 6-10 years in a Professional Services and/or Technical Support environment.
Technical Skills:
- Proficient in Remote Access VPN solutions, IPsec, PKI & SSL, TCP/IP, and authentication protocols (e.g., LDAP, RADIUS).
- Mandatory hands-on experience with at least one of the following SD-WAN/ZTNA technologies: Cisco Viptela, Palo Alto Prisma Access, Zscaler, Fortinet, Cato Networks, or SilverPeak, along with expertise in managing multi-vendor WAN/SD-WAN routing (NHRP, EIGRP, BGP, OSPF).
- Strong understanding of SD-WAN principles and advanced troubleshooting skills across diverse tools and systems.
- In-depth knowledge of networking and network security, with experience in multi-vendor networking devices (routers, switches, firewalls, etc.).
- Familiarity with Palo Alto, Cisco, Checkpoint, Juniper, and Fortinet products is advantageous.
- Experience with firewall central management systems and multi-factor authentication systems (tokens, certificates, CAC cards, etc.).
- Solid understanding of security services (IDS/IPS, firewalls, etc.).
- Strong ability to independently debug complex networks with mixed media and protocols.
- Virtualization experience (AWS, Azure, VMware, OpenStack) is a plus.
- Proficiency in Windows and macOS, including debugging and registry editing; Linux knowledge is beneficial.
- Working knowledge of Python and Bash scripting.

Communication Skills: Excellent written and verbal communication skills.
Certifications: Candidates with CCIE Routing and Switching certification are preferred.
Education: BE/B.Tech in Computer Engineering, Electronics & Communications Engineering, or a related field.

Posted 4 days ago

Apply

3.0 - 7.0 years

3 - 8 Lacs

Ahmedabad

Work from Office

Job purpose: Design & implement the best-engineered technical solutions using the latest technologies and tools.

Who you are:
- Lead our DevOps team and drive the adoption of DevOps best practices within the organization.
- Excellent communication and leadership skills, with the ability to lead and inspire a team.
- Strong problem-solving and troubleshooting skills.
- Knowledge of DevOps tools and of deploying and managing infrastructure with automation.
- Proficient knowledge of networking and firewalls.
- Strong proficiency in Linux, open-source, web-based, and cloud-based environments (ability to use open-source technologies and tools).
- Hands-on experience with Apache, Nginx, Git, and GNU tools.
- Must have exposure to AWS, Bitbucket Pipelines, Docker, npm, Maven, and Snyk.
- Experience working with shell scripting.
- Hands-on with SonarQube, Jenkins, and Kubernetes.
- Strong understanding of and experience with logging and logging frameworks.
- API-related skills (REST, and others such as Google, AWS, Atlassian).
- DevOps (Ansible, Apache, Python).
- Web services / REST (JSON).
- Good team player who can help and support the team.

Good to have:
- Knowledge of all DevOps tools.
- Exposure to PCI DSS environments.
- Experience with any programming language.
- Knowledge of third-party integration with automation.
- FinTech domain experience.

What will excite us: Previous experience delivering solutions from idea to implementation. Strong ability to communicate solutions and troubleshoot issues. A mature temperament for handling urgent or critical situations.

What will excite you: The opportunity to build enterprise-grade applications. Complete ownership and independence of execution. Innovation is rewarded. Learn from accomplished UI tech architects. A great and rewarding work environment.

Job location: Ahmedabad

Posted 4 days ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Work from Office

At Meazure Learning, we believe in transforming learning and assessment experiences to unlock human potential. As a global leader in online testing and exam services, we support credentialing, licensure, workforce education, and higher education through purpose-built solutions that are secure, accessible, and deeply human-centered. With a global footprint across the U.S., Canada, India, and the U.K., our team is united by a passion for innovation and a commitment to integrity, quality, and learner success.

About the Role
We are looking for a seasoned Senior DevOps Engineer to help us scale, secure, and optimize our infrastructure and deployment processes. This role is critical to enabling fast, reliable, and high-quality software delivery across our global engineering teams. You'll be responsible for designing and maintaining cloud-based systems, automating operational workflows, and collaborating across teams to improve performance, observability, and uptime. The ideal candidate is hands-on, proactive, and passionate about creating resilient systems that support product innovation and business growth.

Join Us and You'll
- Help define and elevate the user experience for learners and professionals around the world
- Collaborate with talented, mission-driven colleagues across regions
- Work in a culture that values trust, innovation, and transparency
- Have the opportunity to grow, lead, and make your mark in a high-impact, global organization

Key Responsibilities
- Design, implement, and maintain scalable, secure, and reliable CI/CD pipelines
- Manage and optimize cloud infrastructure (e.g., AWS, Azure) and container orchestration (e.g., Kubernetes)
- Drive automation across infrastructure and development workflows
- Build and maintain monitoring, alerting, and logging systems to ensure reliability and observability
- Collaborate with Engineering, QA, and Security teams to deliver high-performing, compliant solutions
- Troubleshoot complex system issues in staging and production environments
- Guide and mentor junior engineers and contribute to DevOps best practices

Desired Attributes: Key Skills
- 5+ years of experience in a DevOps or Site Reliability Engineering role
- Deep knowledge of AWS cloud infrastructure, with a focus on Elastic Container Service
- Proficiency with containerization (Docker, Kubernetes) and Infrastructure as Code tools (Terraform, CloudFormation)
- Hands-on experience with CI/CD platforms (Jenkins, GitHub Actions, or similar)
- Strong scripting capabilities (Bash, Python, or PowerShell)
- Familiarity with monitoring and logging tools (Prometheus, Grafana, OpenSearch, or Datadog)
- A problem-solver with excellent communication and collaboration skills

The Total Rewards - The Benefits: Competitive pay, healthy work culture, career growth opportunities, learning and development opportunities, company-sponsored health insurance, referral award program, company-provided IT equipment (for remote team members), transportation program (on-site team members), company-provided meals (on-site team members), 14 company-provided holidays, and a generous leave program.

Posted 4 days ago

Apply

5.0 - 10.0 years

35 - 40 Lacs

Bengaluru

Work from Office

As a Senior Data Engineer, you will proactively design and implement data solutions that support our business needs while adhering to data protection and privacy standards. In addition, you will manage the technical delivery of the project, lead the overall development effort, and ensure timely, quality delivery.

Responsibilities:
- Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration.
- SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality.
- Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines.
- Data Protection: Ensure compliance with data protection and privacy requirements, including regulations like GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information.
- Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio, so that stakeholders can easily communicate and understand technical specifications.
- API Integration and Data Formats: Work with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs.

Key Requirements:
- 5+ years of experience as a Data Engineer, focusing on ETL development.
- 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation.
- 3+ years of experience in Linux command-line and bash scripting.
- Familiarity with data modelling in analytical databases.
- Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 parquet/CSV).
- Experience with RESTful APIs and AWS services like S3, Glue, and Lambda.
- Experience using Confluence for tracking documentation.
- Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels.
- Ability to work independently and manage multiple tasks and priorities in a dynamic environment.
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

Good to Have:
- Experience with Spark.
- Understanding of data visualization tools, particularly Tableau.
- Knowledge of data clean room techniques and integration methodologies.
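For a sense of the command-line data processing this listing describes, here is a minimal Bash sketch that filters a raw CSV extract with grep and summarizes it with awk; the file paths, column layout, and cron schedule are hypothetical placeholders, not details from the posting.

#!/usr/bin/env bash
# Minimal sketch: pull yesterday's rows out of a raw CSV extract and append per-account totals.
# Paths and column positions (date,account,amount) are assumed for illustration only.
set -eu

RAW_FILE="/data/raw/transactions.csv"
OUT_FILE="/data/curated/daily_totals.csv"
yesterday=$(date -d "yesterday" +%F)   # GNU date, as found on typical Linux hosts

grep "^${yesterday}," "$RAW_FILE" \
  | awk -F',' -v day="$yesterday" '
      { total[$2] += $3 }
      END { for (acct in total) printf "%s,%s,%.2f\n", day, acct, total[acct] }
    ' >> "$OUT_FILE"

# A matching crontab entry could run this nightly, for example:
# 30 1 * * * /opt/scripts/daily_totals.sh >> /var/log/daily_totals.log 2>&1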

Posted 4 days ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Hardware Engineering General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Hardware Engineer, you will plan, design, optimize, verify, and test electronic systems, bring-up yield, circuits, mechanical systems, Digital/Analog/RF/optical systems, equipment and packaging, test systems, FPGA, and/or DSP systems that launch cutting-edge, world class products. Qualcomm Hardware Engineers collaborate with cross-functional teams to develop solutions and meet performance requirements. Minimum Qualifications: Bachelor's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or related field and 4+ years of Hardware Engineering or related work experience. OR Master's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or related field and 3+ years of Hardware Engineering or related work experience. OR PhD in Computer Science, Electrical/Electronics Engineering, Engineering, or related field and 2+ years of Hardware Engineering or related work experience. About The Role We are seeking a skilled LSF Cluster Support Engineer to join our Infrastructure team at Qualcomm. This role involves both supporting and developing for our LSF clusters. The ideal candidate will have extensive experience with LSF, Python, Bash, and Linux systems. You will be responsible for troubleshooting, maintaining, and optimizing our LSF environments, while also contributing to development projects. Key Responsibilities Provide expert support for LSF cluster environments. Troubleshoot and resolve issues related to job scheduling and cluster performance. Collaborate with development teams to enhance LSF functionalities. Develop and maintain scripts and tools using Python and Bash. Ensure high availability and reliability of cluster operations. Preferred Qualifications Proven experience of 5+ years with LSF cluster support and administration, including user-facing support Strong proficiency in Python, Bash scripting, and Linux systems. Excellent problem-solving skills and ability to work under pressure. Experience in development and support roles, ideally in a high-performance computing environment. Experience with additional cluster management tools like PBS, NC, etc. Experience in containers and Docker. Experience & Knowledge of other programming languages. Principal Duties and Responsibilities Support internal users with grid job issues Act as the first point of contact for internal teams facing problems with job submission, monitoring, or execution on the grid; provide timely troubleshooting and guidance. Diagnose and resolve grid-related failures Investigate job failures, queue delays, and resource allocation issues; work with users to resolve errors and optimize job configurations. Configure and maintain job submission environments Assist users in setting up and adjusting job submission tools, scripts, and environment variables to ensure compatibility with grid requirements. Collaborate on internal tooling Work with developers to improve, test, and support internal tools that interface with the grid, such as job submission portals or automation scripts. Document processes and common issues Create and maintain internal documentation covering grid usage, troubleshooting steps, and best practices to streamline support and reduce recurring issues. 
Level of Responsibility
- Works independently with minimal supervision.
- Provides supervision/guidance to other team members.
- Decision-making is significant in nature and affects work beyond the immediate work group.
- Requires verbal and written communication skills to convey complex information. May require negotiation, influence, tact, etc.
- Has a moderate amount of influence over key organizational decisions.
- Tasks do not have defined steps; planning, problem-solving, and prioritization must occur to complete the tasks effectively.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.) Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
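As a rough illustration of the user-facing LSF triage this posting describes, here is a minimal Bash sketch built on standard LSF client commands (bjobs, bqueues, bhosts); the job ID comes from the user, and the script is an assumption-laden starting point rather than anything from Qualcomm's tooling.

#!/usr/bin/env bash
# Minimal sketch: first-pass triage for a user-reported LSF job.
# Assumes the LSF client tools (bjobs, bqueues, bhosts) are on PATH.
set -euo pipefail

job_id="${1:?usage: $0 <lsf-job-id>}"

echo "== Job record (state, pending reasons, resource requests) =="
bjobs -l "$job_id"

echo "== Queue backlog =="
bqueues

echo "== Hosts not in the ok state =="
bhosts | awk 'NR == 1 || $2 != "ok"'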

Posted 4 days ago

Apply

6.0 - 8.0 years

0 - 0 Lacs

Noida

On-site

Production Control Analyst - Full-Time - Noida (Remote) - Shift Timing: PST with IST/PST overlap

Job Title: Production Control Analyst
Job Type: Full-Time
Location: Noida (Remote)
Experience: 6-8 Years
Shift Timing: PST with IST/PST overlap

Job Description: Seeking a Production Control Security Analyst with strong technical knowledge of IBM Security Verify Access, IBM Security Verify Governance Identity Manager, and IBM mainframe z/OS RACF products to provide development and support.

Major Responsibilities:
- Provide support for day-to-day operations
- Support access provisioning/de-provisioning and resolve all access issues on mainframe, web security, SAP, and other platforms
- Support COTS products as they relate to the Production Control Security Analyst role
- Must have a good understanding of Windows, Linux, and mainframe systems
- 3 years of AD management and automation with PowerShell
- Provide support to developers to analyze and resolve production issues
- Provide 24/7 on-call support for production processing
- Review and resolve assigned tickets
- Good documentation and communication skills
- Provide cross-training support for other team members
- 1-2 years of Perl, Python, DOS, and Bash scripting experience
- 1-2 years of Java and JavaScript development expertise as it relates to TDI (IBM Security Directory Integrator)
- 3-5 years of operational and maintenance activities

IBM Security Verify Governance Identity Manager (ISVGIM) Version 10.0.1.x - Operational and Administration Activities:
- Assist with and troubleshoot manual creation of users
- Manual reconciliation or adoption of any target users
- Management of provisioning and password issues
- Creation and management of roles and role-to-entitlement mapping
- Management of role owner approvers
- Creation and management of user recertification campaigns
- Automate manual processes wherever possible
- Work with technical and business stakeholders to troubleshoot
- Ongoing maintenance of the existing IBM environment
- Monitoring the reconciliation of the target system and monitoring the user feed files
- Add, modify, and update provisioning policies wherever necessary in the application
- Create, modify, and update SDI (TDI) assembly lines to keep the data current
- Troubleshooting any of the user provisioning and password-related processes
- Troubleshooting role and access provisioning and revocation
- Troubleshooting any application component failures
- Coordinating the application of fix packs to IBM Security Verify Governance Identity Manager and its components as required
- Coordinating production change and migration activities
- IBM (Tivoli) Security Directory Integrator development and code maintenance

IBM Security Verify Access and Federation Module - Operational and Administration Activities:
- Create and maintain junctions, groups, ACLs, object space, dynamic URLs, and configuration files
- Provide support for the application team to troubleshoot web proxy related issues
- Set up and troubleshoot SSO federation with SAML 2.0, WS-Federation, and OpenID Connect protocols for federated access
- Handle federation certificate updates with the clients
- Set up and maintain Advanced Access Control (AAC) policies and enable Mobile Multi-Factor Authentication (MMFA) data for all user types
- Ongoing maintenance of the existing environment
- Troubleshooting any single sign-on and federation related incidents
- Troubleshooting any application component failures
- Coordinating the application of fix packs to IBM Security Verify Access and its components as required
- Coordinating production change and migration activities
- Troubleshooting access related issues with the application team whenever necessary

Skills that are needed:
- IBM Security Verify Access (v10.0.3 and above) administration for user and access provisioning
- IBM Security Verify Governance Identity Manager (v10.0.1 and above) administration for user and access provisioning
- IBM Security (Tivoli) Directory Integrator (ISDI/TDI)
- IBM Security Directory Server (SDS)
- IBM Security Verify Access Federation Module
- WebSphere Application Server Network Deployment (WAS-ND)
- IBM DB2 Database Server Version 11.5 and above (DB2), GSKit, certificate updates
- Mainframe RACF administration for user and access provisioning
- Oracle EBS user provisioning
- SAP ECC 6.0 user security administration
- Knowledge of certificate generation
- Key skills include user account management, group policy management, knowledge of Federation Services (ADFS), LDAP queries, PowerShell scripting, backup and recovery processes, security management, DNS management, and troubleshooting user authentication issues

Skills that would be nice to have:
- A good understanding of DOS and Bash scripting
- Support for PGP encryption and secure file transfer using Serv-U
- Microsoft Power Apps, Microsoft Power BI
- SAP HANA user provisioning
- Knowledge of DNS, load balancer, and TCP/IP concepts

Posted 4 days ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

SRE (Site Reliability Engineer 2)
We are looking for engineers who are passionate about reliability, performance, and efficiency, and who have experience building tools, services, and automation to manage and improve production services.
- Work across systems internals/security, Linux, networking, and monitoring to improve the reliability and performance of the next generation of distributed systems and containerized deployments.
- Diagnose and troubleshoot complex distributed systems handling millions of queries per second.
- Knowledge of Linux cloud services using KVM/QEMU/LVM.
- Knowledge of containerization technologies like Docker, and of deploying and troubleshooting containers.
- Understanding of the Azure cloud platform; ability to set up, configure, monitor, and troubleshoot various PaaS components such as firewalls, VPN gateways, load balancers, storage accounts, networks, and others.
- In-depth knowledge of Perl/GoLang/Python to automate tasks with minimal intervention.
- Day-to-day work is heavily command-line driven, which requires a strong understanding of Linux.
- Troubleshoot issues across the entire stack: hardware, software, application, and network.
- Knowledge of database technologies, specifically MySQL/NoSQL, is good to have.
- Participate in 24x7 on-call rotations.
- Design, build, and maintain core infrastructure that enables PhonePe to scale to support hundreds of thousands of concurrent users.
- Actively take part in analysis and system improvement plans.
- Drive performance testing, capacity planning, and high-availability practices.
- Own implementations of new technologies while ensuring proper testing and documentation.
- Proactively monitor, identify, and solve issues that could potentially impact our infrastructure.
- Natural team player with a resourceful attitude.
- Buddy new team members and get them production ready.
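Since the posting stresses that day-to-day work is heavily command-line driven, here is a minimal Bash sketch of the kind of first-response health snapshot an SRE might run on a Linux host; the application port and the disk-usage threshold are hypothetical values chosen for illustration.

#!/usr/bin/env bash
# Minimal sketch: quick health snapshot of a Linux host during an incident.
# The app port (8080) and the 80% disk threshold are assumptions, not values from the posting.
set -u

echo "== Load and uptime =="; uptime
echo "== Memory =="; free -h
echo "== Filesystems over 80% full =="
df -hP | awk 'NR > 1 && int($5) > 80 { print $6, $5 }'
echo "== Recent kernel messages =="
journalctl -k --since "15 min ago" --no-pager | tail -n 20
echo "== Listeners on the assumed app port =="
ss -ltn | grep ':8080' || echo "nothing listening on 8080"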

Posted 5 days ago

Apply

2.0 - 5.0 years

5 - 10 Lacs

Noida, Delhi / NCR

Hybrid

PREFERENCE:
1. Early joiners preferred
2. Work mode: Hybrid (combination of in-office and remote work)
3. Working days: 5 per week
4. Prior experience in a startup or fast-paced environment
5. Immediate availability for interviews
6. Strong communication skills and team fit

Key Responsibilities
- Manage and maintain CI/CD pipelines using Jenkins and Git
- Monitor system performance across all environments
- Troubleshoot issues related to builds, deployments, and environments
- Automate deployments and environment setup using Python, Bash, or Groovy
- Collaborate with development, QA, and release teams
- Ensure uptime and reliability of infotainment software services

Required Skills
- Strong experience with CI/CD operations (Jenkins, Git)
- Proficiency in scripting (Python, Bash, Groovy)
- Good understanding of system monitoring and troubleshooting
- Familiarity with Agile/Scrum workflows
- Excellent communication and incident-handling skills

Good to Have
- Experience in infotainment or embedded systems
- Knowledge of cloud platforms, Docker, or Kubernetes
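As an illustration of the deployment automation this role mentions (Python, Bash, or Groovy), here is a minimal Bash sketch a Jenkins job might call; the artifact path, target host, service name, and health URL are all hypothetical placeholders rather than details of the actual environment.

#!/usr/bin/env bash
# Minimal sketch: push a build artifact to a host, restart the service, and verify health.
# Hostnames, paths, and the systemd unit name are assumptions for illustration.
set -euo pipefail

ARTIFACT="build/app.tar.gz"
TARGET_HOST="infotainment-stage-01"
HEALTH_URL="http://${TARGET_HOST}:8080/health"

scp "$ARTIFACT" "deploy@${TARGET_HOST}:/opt/app/releases/"
ssh "deploy@${TARGET_HOST}" 'tar -xzf /opt/app/releases/app.tar.gz -C /opt/app/current && sudo systemctl restart app.service'

# Fail the pipeline if the service does not come back healthy within roughly a minute.
for attempt in {1..12}; do
  if curl -fsS "$HEALTH_URL" > /dev/null; then
    echo "deployment healthy"
    exit 0
  fi
  sleep 5
done
echo "health check failed" >&2
exit 1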

Posted 5 days ago

Apply

3.0 - 6.0 years

10 - 20 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Join our Data & Technology Services team as a DevOps Engineer and play a pivotal role in deploying and managing infrastructure that supports advanced AI & ML engagements for the investment banking sector. You will be instrumental in automating and optimizing our operations and development processes, ensuring seamless, scalable, and efficient delivery of services.

Requirements
- 2-5 years of experience in DevOps, Azure, Kubernetes, CI/CD, Jenkins, GitHub, Docker, and Terraform. Good to have: Helm, ArgoCD, GitOps.
- Experience in a DevOps role, preferably within the financial services industry.
- Strong experience with cloud platforms, particularly Azure.
- Proficiency in scripting and automation using Python, Bash, or PowerShell.
- Experience with CI/CD tools such as Azure DevOps.
- Experience with Azure Kubernetes Service, Azure Storage, Azure Redis, and other Azure cloud-related technologies.
- Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
- Strong understanding of networking, security, and infrastructure management, e.g., Terraform.
- Implement backup and disaster recovery strategies, participate in annual DR tests, and assist with executing the DR test plan.
- Good understanding of high-availability options in Azure.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and oral, with business and technical aptitude.

Additional desired skills:
- Familiarity with Spark technologies (e.g., PySpark).
- Data analytics and AI/ML experience or skills.
- Knowledge of financial securities and asset classes.
- Strong experience with cloud platforms, particularly Snowflake or Databricks.

Key Responsibilities
- Deploying and managing cloud infrastructure to enable and support data storage, processing, and analytics.
- Developing CI/CD pipelines to automate software delivery and infrastructure changes.
- Managing code repositories to ensure proper version control and collaboration using tools such as Git.
- Setting up monitoring tools to track performance and troubleshoot issues, ensuring availability and scalability.
- Collaborating with data teams, working closely with data scientists and data engineers to automate data pipelines and integrate machine learning models into production environments.
- Implementing security best practices and ensuring compliance with industry standards.
- Creating detailed documentation of processes and providing training to internal teams on DevOps practices.
- Managing client relationships, effectively communicating with stakeholders, and ensuring the delivery of high-quality solutions.

Posted 5 days ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

We are seeking a skilled Performance Tester with expertise in the public safety domain to join our team. The ideal candidate will have a strong background in performance testing and experience working with public safety applications and systems. Telecom domain experience is a must.

Key Responsibilities:
- 6+ years of previous hands-on experience as a performance tester in the telecom and public safety domain.
- Strong experience in performance test planning, test estimation, script development, test execution, and test results analysis.
- Executing and analyzing performance tests routinely and recording the history of the results.
- Programming skills in languages and tools such as Bash scripting, Perl, Java, Python, Groovy, the ELK stack, Grafana, Jenkins, Justle programming, Node.js, and Linux.
- Expertise in configuring performance counters using the PerfMon tool to identify bottlenecks in CPU, memory, disk I/O, and network.
- Experience with database testing and tuning.
- Understanding of networking concepts at all layers.
- Experience with the New Relic monitoring tool.
- Experience with log monitoring tools.

Let's Work Together

Posted 5 days ago

Apply

2.0 - 5.0 years

5 - 10 Lacs

Noida, Navi Mumbai, Delhi / NCR

Work from Office

Red Hat OpenShift DevOps Engineer

Job Title: Red Hat OpenShift DevOps Engineer
Experience: Minimum 2 years
Location: Navi Mumbai
Employment Type: Full-Time / Permanent

Job Summary: We are seeking a passionate and detail-oriented OpenShift DevOps Engineer with 2 years of hands-on experience to join our growing DevOps team. The ideal candidate will have a solid understanding of CI/CD pipelines and containerization, and experience managing applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, manage, and monitor containerized applications on Red Hat OpenShift Container Platform
- Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or Tekton
- Implement and manage infrastructure as code using Terraform or Ansible
- Support Docker containerization and orchestration using Kubernetes/OpenShift
- Automate operational tasks and improve system reliability and scalability
- Perform regular system monitoring, optimization, and tuning
- Work closely with development and QA teams to support release cycles
- Maintain proper documentation of deployment processes, environments, and configurations

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 2 years of professional experience in DevOps practices
- Hands-on experience with Red Hat OpenShift and Kubernetes
- Proficiency in Linux administration and Bash/shell scripting
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI, Tekton)
- Familiarity with Docker, Helm, and container lifecycle management
- Working knowledge of Git and version control workflows
- Understanding of networking, load balancers, and monitoring tools (e.g., Prometheus, Grafana)
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus

Soft Skills:
- Strong analytical and troubleshooting skills
- Effective communication and team collaboration
- Ability to work in a fast-paced, agile environment

Good to have:
- Red Hat Certified Specialist in OpenShift Administration (or similar certification)
- Experience in Agile/Scrum methodologies
- Exposure to security best practices and compliance
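For a flavor of the day-to-day OpenShift work described above, here is a minimal Bash sketch that rolls out a new image tag with the oc CLI and waits for the rollout to settle; the project, deployment, image, and label names are hypothetical, and it assumes an existing oc login session.

#!/usr/bin/env bash
# Minimal sketch: update a Deployment's image on OpenShift and confirm the rollout.
# Project, deployment, registry, and label values are placeholders for illustration.
set -euo pipefail

PROJECT="payments-dev"
DEPLOYMENT="payments-api"
IMAGE="registry.example.com/payments/payments-api:1.4.2"

oc project "$PROJECT"
oc set image "deployment/${DEPLOYMENT}" "${DEPLOYMENT}=${IMAGE}"
oc rollout status "deployment/${DEPLOYMENT}" --timeout=120s

# Quick sanity check on the pods that came up.
oc get pods -l "app=${DEPLOYMENT}" -o wide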

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The role of this team is to provide 3rd line technical support for GTT's customer support organizations. As a highly intelligent, flexible, and efficient technical support team, it collaborates with other parts of GTT to deliver a world-class customer experience. The team is accountable for maintaining the stability of GTT's next-generation native IPv4/IPv6 IP network and delivers advanced technical support for the full range of GTT IP products, working closely with Tier 1 and Tier 2 teams on customer and core issues.

Key Responsibilities include:
- Providing 3rd line break/fix support for customer services across the multinational and multivendor GTT network.
- Taking ownership of incidents, resolving them swiftly, and ensuring final fixes are implemented, collaborating with customers and other internal teams as necessary.
- Escalating to vendor support and other internal teams to ensure prompt and satisfactory issue resolution.
- Offering concise and relevant action plans for teams to efficiently resolve network and customer issues.
- Providing work instructions, guidelines, and training sessions to junior engineers when needed.
- Communicating and documenting customer and service-specific support information to the 1st line teams.
- Proactively managing problems based on technical and trend analysis to enhance network performance.
- Conducting technical reviews and overseeing key network management systems.
- Scoping new developments to support continuous improvement of network quality and resilience.
- Ensuring operational processes are followed to deliver best-in-class technical support while maintaining a stable and resilient network.
- Designing, developing, and implementing automated solutions to streamline network operations, enhance efficiency, and reduce manual intervention.
- Utilizing networking expertise along with programming and scripting skills to optimize the network structure for scalability, reliability, and security.
- Maintaining up-to-date documentation of automated processes, scripts, and network changes.

Required Experience/Qualifications:
- Minimum 5 years of experience within an IP Operations environment of a telco or large ISP.
- Minimum 3 years of work in an IP technical 2nd line support position.
- Excellent and demonstrable knowledge of IP, MPLS, and routing (IS-IS, BGP, OSPF) in an ISP environment.
- Knowledge and understanding of SD-WAN and its components; working knowledge of Fortinet, VeloCloud, or Aruba preferred.
- Strong experience in the use of IP management tools, both commercial and open source.
- CCNP or JNCIP certification is preferred.
- Proficiency in Ansible, Python, and Bash scripting is desirable.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Software Engineer III (Python Developer) at Panasonic Avionics Corporation, you will play a crucial role in designing, developing, and maintaining backend services for the Companion App Ground Service. Your responsibilities will include collaborating with cross-functional teams to ensure high performance, scalability, and reliability of the application. You will implement, test, and debug Python-based services in a cloud-based environment using AWS. Additionally, you will develop and maintain Bash scripts for automation and system management tasks, troubleshoot and optimize existing code for improved performance and functionality, and ensure that security and compliance standards are met in all deliverables.

To excel in this role, you must possess strong expertise in C++ programming, hands-on experience with AWS services and cloud architecture, proficiency in Bash scripting for system automation, and familiarity with the Red Hat Package Manager (RPM) for package building and management. Knowledge of C++ for potential cross-platform integrations and experience with Nginx for web server configuration and optimization will be advantageous. Ideally, you should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field, and have at least 5 years of software development experience.

At Panasonic Avionics Corporation, we value principles such as Contribution to Society, Fairness & Honesty, Cooperation & Team Spirit, Untiring Effort for Improvement, Courtesy & Humility, Adaptability, and Gratitude. Join us and be a part of an innovative company with a rich history of over 40 years. Experience stability, career growth opportunities, and the chance to collaborate with industry experts. Visit www.panasonic.aero to learn more about us and explore our open job opportunities at www.panasonic.aero/join-us/. We offer a highly competitive, comprehensive, and flexible benefits program because we understand that our employees are the key to our success.
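As a small illustration of the Bash plus RPM packaging work mentioned above, here is a minimal sketch that stages sources and builds a package with rpmbuild; the package name, version, directory layout, and spec file are hypothetical placeholders, not details of the actual Ground Service.

#!/usr/bin/env bash
# Minimal sketch: stage a source tarball and build binary/source RPMs from a spec file.
# "ground-service" and all paths are assumed names used only for illustration.
set -euo pipefail

TOPDIR="$HOME/rpmbuild"
SPEC="ground-service.spec"
VERSION="1.0.0"

mkdir -p "$TOPDIR"/{SOURCES,SPECS,BUILD,RPMS,SRPMS}
tar -czf "$TOPDIR/SOURCES/ground-service-${VERSION}.tar.gz" ground-service/
cp "$SPEC" "$TOPDIR/SPECS/"

# Build both binary and source RPMs under the chosen top directory.
rpmbuild --define "_topdir $TOPDIR" -ba "$TOPDIR/SPECS/$SPEC"

ls "$TOPDIR"/RPMS/*/*.rpm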

Posted 6 days ago

Apply

7.0 - 11.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As an AWS Cloud Consultant with 7 years of experience, you will work remotely during UK shift timings. Your expertise in CloudFormation, Jenkins, Bash scripting, YAML, JSON, AWS services, AWS security, Prisma Cloud, and Splunk will be crucial for the successful deployment of various projects. Your primary responsibilities will include working on AWS services, focusing on migration tasks, and being proficient in PagerDuty. Documentation and adherence to best practices are essential aspects of this role.

In terms of design and requirements, you will conduct workshops with MA application teams to align on design, requirements, and any necessary enhancements. Implementation activities will involve updating the AWS architecture, reviewing it with MA teams, and enhancing artifacts based on feedback. Your role will also involve updating the service catalog product, network connectivity, and endpoints for applications.

Documentation plays a key role in this position, requiring you to develop and update MA application documentation, document the AWS architecture for business continuity and disaster recovery, and conduct knowledge transfer sessions with teams. Overall, your expertise in AWS technologies, migration processes, documentation, and collaboration with cross-functional teams will be instrumental in the success of our projects.

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a passionate and customer-obsessed AWS Solutions Architect looking to join Workmates, the fastest growing partner to the world's major cloud provider, AWS. Your role will involve driving innovation, building differentiated solutions, and defining new customer experiences to help customers maximize their AWS potential in their cloud journey. Working alongside industry specialist organizations and technology groups, you will play a key role in leading our customers towards native cloud transformation.

Choosing Workmates and the AWS Practice will enable you to elevate your AWS experience and skills in an innovative and collaborative environment. At Workmates, you will have the opportunity to lead the world's fastest growing AWS partner in pioneering cloud transformation and be at the forefront of cloud advancements. Join Workmates in delivering innovative work as part of your extraordinary career. People are considered the biggest assets at Workmates, and together we aim to achieve best-in-class cloud native operations. Be part of our mission to drive innovations across Cloud Management, Media, DevOps, Automation, IoT, Security, and more, where independence and ownership are valued, allowing you to thrive and contribute your best.

Responsibilities:
- Building and maintaining cloud infrastructure environments
- Ensuring availability, performance, security, and scalability of production systems
- Collaborating with application teams to implement DevOps practices
- Creating solution prototypes and conducting proofs of concept for new tools
- Designing repeatable, automated, and scalable processes to enhance efficiency
- Automating and streamlining operations and processes
- Troubleshooting and diagnosing issues/outages and providing operational support
- Engaging in incident handling and supporting a culture of post-mortems and knowledge sharing

Requirements:
- 2+ years of hands-on experience in building and supporting large-scale environments
- Strong architecting and implementation experience with AWS Cloud
- Proficiency in AWS CloudFormation and Terraform
- Experience with Docker containers and container environment deployment
- Good understanding of and work experience in Kubernetes and EKS
- Sysadmin and infrastructure background (Linux internals, filesystems, networking)
- Proficiency in scripting, particularly writing Bash scripts
- Familiarity with CI/CD pipeline build and release
- Experience with CI/CD tools like Jenkins/GitLab/Travis CI
- Hands-on experience with AWS developer tools such as AWS CodePipeline, CodeBuild, CodeDeploy, AWS Lambda, AWS Step Functions, etc.
- Experience with log management solutions (ELK/EFK or similar)
- Experience with configuration management tools like Ansible or similar
- Proficiency in modern monitoring and alerting tools like CloudWatch, Prometheus, Grafana, Opsgenie, etc.
- Strong passion for automating routine tasks and solving production issues
- Experience in automation testing, script generation, and integration with CI/CD
- Familiarity with AWS security features (IAM, Security Groups, KMS, etc.)
- Good to have: experience with database technologies (MongoDB/MySQL, etc.)

Desired Skills:
- AWS Professional certifications
- CKA/CKAD certifications
- Knowledge of Python/Go
- Experience with service mesh and distributed tracing
- Familiarity with Scrum/Agile methodology

Join Workmates and be part of a team that values innovation, collaboration, and continuous improvement in the cloud technology landscape. Your expertise and skills will play a crucial role in driving customer success and shaping the future of cloud solutions.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The client is a global leader in delivering cutting-edge inflight entertainment and connectivity (IFEC) solutions. As a developer in this role, you will be responsible for building user interfaces using Flutter, React.js, or similar frontend frameworks. You will also develop backend services and APIs using Python, ensuring smooth data flow between the frontend and backend by working with REST APIs. Additionally, you will use the Linux terminal and Bash scripting for basic automation tasks, manage code using Git, and set up CI/CD pipelines using tools like GitLab CI/CD. Deployment and management of services on AWS (CloudFormation, Lambda, API Gateway, ECS, VPC, etc.) will be part of your responsibilities. It is essential to write clean, testable, and well-documented code while collaborating with other developers, designers, and product teams.

Requirements:
- Minimum 3 years of frontend software development experience
- Proficiency in GUI development using Flutter or other frontend stacks (e.g., React.js)
- 3+ years of Python development experience
- Experience with Python for backend and API servers
- Proficiency with the Linux terminal and Bash scripting
- Familiarity with GitLab CI/CD or other CI/CD tools
- AWS experience including CloudFormation, API Gateway, ECS, Lambda, and VPC
- Bonus: data science skills with experience in the pandas library
- Bonus: experience with the development of recommendation systems and LLM-based applications

If you find this opportunity intriguing and aligned with your expertise, please share your updated CV and relevant details with pearlin.hannah@antal.com.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

You should have 7-9+ years of experience in software build engineering, release, and deployment. Your technical skills should include expertise in Embedded Linux build environment development, Buildroot, Yocto, CMake, custom Makefiles, Docker, and C/C++. You must possess excellent knowledge of build tools like Yocto, Buildroot, and Makefiles, and have strong scripting skills in Bash, Perl, or similar scripting languages. Hands-on experience in building Linux-based OS images using industry standard build tools, and in testing based on standard test frameworks, is required. Additionally, you should have in-depth knowledge of user administration and file system management activities, and experience in Linux OS hardening, performance, and kernel tuning.

Experience with standard software development tools and CI/CD pipelines such as Git, Subversion, Docker containerized environments, and Linux commands is essential. Configuration, build, deployment, and release management skills are a must-have for this role. You should also be capable of preparing installation guide documentation.

In terms of qualifications, you should hold a graduate degree in Computer Science, Electronic Engineering, or a related discipline. Your educational background combined with your extensive experience in software build engineering, release, and deployment will make you a suitable candidate for this position.
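To make the image-build responsibility above concrete, here is a minimal Bash sketch of a Yocto build step as it might run in a CI job; the Poky checkout path, build directory, MACHINE, and image target are assumptions, and MACHINE would normally live in conf/local.conf if it is not passed through the environment.

#!/usr/bin/env bash
# Minimal sketch: build a Yocto image non-interactively, e.g. from a CI job.
# All paths and the qemux86-64 / core-image-minimal targets are placeholders.
set -e

POKY_DIR="$HOME/poky"
BUILD_DIR="$HOME/builds/nightly"
export MACHINE="qemux86-64"     # normally configured in conf/local.conf

cd "$POKY_DIR"
# oe-init-build-env must be sourced (not executed) so that bitbake ends up on PATH.
source oe-init-build-env "$BUILD_DIR"

bitbake core-image-minimal
ls tmp/deploy/images/"$MACHINE"/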

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 16 Lacs

Pune

Work from Office

Dear Consultant,

Greetings from SRS Infoway. We have openings with a top MNC company.

Skill: Security Analyst - Imperva Web Application Firewall (WAF)
Experience: 4+ years
Job location: Pune
This is a contract-to-hire opportunity. Duration: 1+ year (extendable).

Detailed JD (Roles and Responsibilities): We are seeking a highly skilled Senior Security Engineer with deep expertise in Imperva Web Application Firewall (WAF). The ideal candidate will be responsible for designing, implementing, and managing Imperva WAF rules via scripts, as well as troubleshooting and resolving complex WAF-related issues across production and non-production environments.

Key Responsibilities / Mandatory skills: Imperva Web Application Firewall (WAF).
Desired/secondary skills: scripting skills in Python, Bash, or PowerShell to automate WAF configurations and monitoring.

Posted 1 week ago

Apply

7.0 - 9.0 years

27 - 37 Lacs

Pune

Hybrid

Responsibilities may include the following, and other duties may be assigned:
- Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
- Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
- Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
- Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling.
- Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines.
- Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
- Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
- Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers.
- Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
- Coordinate cloud software releases, patching schedules, and vulnerability remediations using Systems Manager Patch Manager.
- Automate AWS housekeeping and operational tasks such as: cleanup of unused EBS volumes, snapshots, and old AMIs; rotation of secrets and credentials using Secrets Manager; and log retention enforcement using S3 lifecycle policies and CloudWatch log groups (a minimal Bash sketch of this kind of housekeeping follows the listing).
- Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
- Collaborate with cross-functional teams, including data scientists, data engineers, and other stakeholders, to gather and implement the infrastructure and data requirements.

Required Knowledge and Experience:
- 8+ years of experience in DataOps / CloudOps / DevOps roles, with a strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
- Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring, and backups.
- Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token tracking, and post-processing.
- Deep hands-on experience with AWS services, including: Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC; Data services: Athena, Glue, MSK, Redshift; Security: KMS, IAM, Config, CloudTrail, Secrets Manager; Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform; Machine Learning/AI: Bedrock, SageMaker, OpenSearch Serverless.
- Working knowledge of Databricks, including cluster and workspace management, job orchestration, and integration with AWS storage and identity (IAM passthrough).
- Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
- Strong understanding of cloud networking, including VPC peering, Transit Gateway, security groups, and PrivateLink setup.
- Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services.
- Strong understanding of data modeling, data warehousing concepts, and the AI/ML lifecycle.
Knowledge of cost optimization strategies across compute, storage, and network layers. Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC2, HIPAA, GDPR) Bonus: Exposure to LangChain, Prompt Engineering frameworks, Retrieval Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.) Preferred Qualifications: AWS Certified Solutions Architect, DevOps Engineer or SysOps Administrator certifications. Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS. Experience with infrastructure cost management tools like AWS Cost Explorer, or FinOps dashboards. Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps Utilities. Prior experience in supporting high-availability production environments with disaster recovery and failover architectures. Understanding of Zero Trust architecture and security best practices in cloud-native environments. Experience with automated cloud resources cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel. Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance. Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams. If interested, please share below details on ashwini.ukekar@medtronic.com Name: Total Experience: Relevant Experience: Current CTC: Expected CTC: Notice Period: Current Company: Current Designation: Current Location: Regards , Ashwini Ukekar Sourcing Specialist
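The housekeeping responsibility above (unused EBS volumes, secret rotation, log retention) is the kind of task that lends itself to a small AWS CLI wrapper. Here is a minimal Bash sketch for the volume-cleanup part; the region and the opt-in --delete flag are hypothetical choices, and a real script would add tagging checks and logging.

#!/usr/bin/env bash
# Minimal sketch: report (and optionally delete) unattached EBS volumes in one region.
# Pass "--delete" as the first argument to actually remove them; otherwise it only reports.
set -euo pipefail

REGION="us-east-1"        # assumed region for illustration
DELETE="${1:-}"

# Unattached volumes carry the "available" status.
volumes=$(aws ec2 describe-volumes \
  --region "$REGION" \
  --filters Name=status,Values=available \
  --query 'Volumes[].VolumeId' \
  --output text)

if [ -z "$volumes" ]; then
  echo "no unattached volumes found"
  exit 0
fi

for vol in $volumes; do
  echo "unattached volume: $vol"
  if [ "$DELETE" = "--delete" ]; then
    aws ec2 delete-volume --region "$REGION" --volume-id "$vol"
  fi
done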

Posted 1 week ago

Apply

8.0 - 12.0 years

25 - 30 Lacs

Bengaluru

Work from Office

TQUKE0658_4503 - Cloud Engineer

Roles & Responsibilities (Mandatory):
- 8-12 years of professional work experience in a relevant field.
- Proficient in Azure cloud services, Azure Web Apps, Terraform, Azure DevOps, Azure Entra ID integration with applications, Key Vault, Storage, Networking, Private Endpoints, Kubernetes, and API Management.
- Experience in Bash scripting, Docker, Python, and FastAPI.
- Lead a technical team of 4-6 resources.
- Good knowledge of SQL and Python for data manipulation, transformation, and analysis; knowledge of Power BI would be beneficial.
- Understand business requirements to set functional specifications for reporting applications.

Additional Information:
Skill set: Azure cloud services, Terraform, Azure DevOps, Azure Entra ID integration with applications, Key Vault, Storage, Networking, Private Endpoints, Kubernetes, API Management, Bash scripting, Docker, Python, FastAPI.
Good to have: Azure DevOps Engineer certification.
Apply now; only MS Word and PDF formats under 10 MB are accepted.

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Vadodara

Work from Office

Role & responsibilities:
- Proactively maintain and develop all Linux infrastructure technology to maintain a 24x7x365 uptime service.
- Perform day-to-day system administration tasks, including server and workstation setup, configuration, maintenance, and troubleshooting.
- Monitor system performance and proactively address issues to ensure uninterrupted operations.
- Implement and maintain security measures to protect systems from threats and vulnerabilities.
- Develop and maintain shell scripts to automate routine tasks and streamline administrative processes.
- Engineer systems-administration-related solutions for various project and operational needs.
- Maintain best practices for managing systems and services across all environments.
- Fault finding, analysis, and logging of information for reporting on performance exceptions.
- Proactively monitor system performance and conduct capacity planning.
- Manage, coordinate, and implement software upgrades, patches, and hot fixes on servers, workstations, and network hardware.
- Create and modify scripts or applications to perform tasks.
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment.
- Collaborate with other teams and team members to develop automation strategies and deployment processes using tools that include Puppet and Chef.
- Provide technical support and guidance to end users regarding Linux environment issues.
- Participate in the department's 24x7 on-call rotation to provide Tier 4 level support during and after regular business hours.

Qualifications:
- A minimum of two years of experience working as a Linux administrator.
- Bachelor's degree in computer science, information technology, engineering, or a technical discipline.
- RHCE/RHCSA certified with 2+ years of experience.
- In-depth knowledge of Linux: Debian, Red Hat, etc.
- Experience with VMware vSphere 8.0 and Windows Server 2016 and up.
- Experience with VMware Site Recovery Manager.
- Knowledge of existing virtual machine and cloud technologies, including principles and methods of scalability to match workloads.
- Experience with Ninja, Ansible, Chef, Puppet.
- Basic knowledge of networking, specifically Cisco devices, including Nexus and Catalyst switches and routers.
- Experience with enterprise-level Windows Server operating environments, including Windows Active Directory and Group Policy.
- Experience with HPE Nimble Storage.
- Strong English communication, collaboration, research, and problem-solving capabilities.
- Ability to provide clear instructions to IT partners, explain how the software works to the customer, and be available to answer any questions that may arise.
- Use of analysis and critical-thinking skills to determine and assess the customer's needs and meet or exceed their expectations.
- Ability to manage multiple projects and rapidly changing priorities.
- Keen attention to detail and organization to work on numerous parts of a system or application at the same time while being accurate and thorough.
- Excellent time management, decision-making, interpersonal, and organizational skills.
- Desire to provide superior customer service.
- Ability to prioritize, coordinate, and complete tasks to meet deadlines and within company quality standards.
- Strong experience with Shell, Perl, and/or Python scripting.
- Strong knowledge of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Knowledge of multiple specialties such as operating systems, database platforms, and storage technologies, including knowledge of administering operating systems like Red Hat Enterprise Linux and Microsoft Server.
- Demonstrated experience working on remote systems via the command line.
- Hands-on experience integrating Linux servers into Active Directory (or using LDAP/Kerberos).
- Experience installing and configuring basic services including httpd/Apache, vsftpd, and sshd.
- Ability to arrange things or actions in an order or pattern according to a specific rule or set of dependencies.
- Experience with operating system update services such as WSUS, YUM, or DNF.
- Knowledge of the following technologies: Ansible modules, PowerShell scripting, the ELK stack, Sophos Anti-Virus, and integrating Linux with Windows Active Directory.
- Ability to work well as a member of a team and independently.
- Strong problem-solving and communication skills.
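As a small example of the routine-task automation listed in the responsibilities, here is a minimal Bash sketch that warns when a filesystem crosses a usage threshold; the threshold, recipient address, and reliance on a configured mailx are assumptions rather than details from the posting.

#!/usr/bin/env bash
# Minimal sketch: alert when any filesystem exceeds the assumed usage threshold.
# Falls back to syslog via logger if mailx is not set up on the host.
set -euo pipefail

THRESHOLD=85
ALERT_TO="sysadmin@example.com"   # hypothetical recipient

report=$(df -hP | awk -v limit="$THRESHOLD" 'NR > 1 && int($5) > limit { print $6 " is at " $5 }')

if [ -n "$report" ]; then
  echo "$report" | mailx -s "Disk usage alert on $(hostname)" "$ALERT_TO" \
    || logger -t disk-alert "$report"
fi

# Example crontab entry to run the check hourly:
# 0 * * * * /opt/scripts/disk_alert.sh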

Posted 1 week ago

Apply

2.0 - 5.0 years

3 - 4 Lacs

Chennai

Work from Office

We are looking for an experienced Application Security Engineer with 2-3 years of hands-on experience in security testing across web, mobile, API, and cloud environments. You will perform in-depth manual and automated testing, identify vulnerabilities using frameworks like OWASP and NIST, and provide actionable remediation guidance with clear PoCs. This role involves close collaboration with development and DevOps teams to integrate security into the SDLC, support secure coding practices, and contribute to threat simulations and R&D efforts. Strong knowledge of CVSS and MITRE ATT&CK and scripting skills (Python, Bash) are essential, along with the ability to clearly communicate security findings to both technical and non-technical stakeholders.

Key Responsibilities:
- Conduct hands-on security testing of web applications, mobile apps, cloud environments, and APIs, identifying security vulnerabilities based on industry-standard methodologies (e.g., OWASP, SANS, NIST).
- Evaluate the risk and severity of discovered vulnerabilities using frameworks such as CVSS, and document findings with clear proofs of concept (PoCs), highlighting real-world business impact and custom remediation guidance.
- Collaborate with development teams to explain vulnerabilities, answer technical queries, and recommend secure coding practices and mitigation strategies.
- Participate in research and development (R&D) initiatives, including the discovery of new attack vectors, tooling improvements, and security automation.
- Contribute to secure SDLC processes, including secure design reviews and code reviews alongside DevOps and architecture teams.
- Assist in conducting threat simulations, adversary emulation, and red team exercises when required.
- Maintain awareness of emerging threats, CVEs, and vulnerability trends affecting web, mobile, and cloud technologies.

Required Skills & Tools:
- 2-3 years of hands-on experience in security testing or penetration testing across web, mobile, API, and/or network layers.
- Bachelor's degree in Computer Science or a related technical field (or equivalent experience).
- Published CVEs are considered a strong advantage.
- Solid knowledge of the OWASP Top 10, MITRE ATT&CK, and secure coding guidelines.
- Strong understanding of manual testing approaches, not just tool-assisted scans.
- Hands-on experience with reporting, PoC generation, and remediation consulting.
- Scripting or automation skills in Python or Bash for creating custom tools.
- Effective communication skills to interact with both technical and non-technical stakeholders.

Posted 1 week ago

Apply

5.0 - 10.0 years

35 - 40 Lacs

Bengaluru

Work from Office

As a Senior Data Engineer, you will proactively design and implement data solutions that support our business needs while adhering to data protection and privacy standards. You will also manage the technical delivery of the project, lead the overall development effort, and ensure timely, quality delivery.

Responsibilities:
- Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration.
- SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality.
- Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines (see the illustrative sketch below this listing).
- Data Protection: Ensure compliance with data protection and privacy requirements, including regulations such as GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information.
- Documentation: Create and maintain clear documentation of designs and workflows using tools such as Confluence and Visio, so that stakeholders can easily communicate and understand technical specifications.
- API Integration and Data Formats: Work with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs.

Key Requirements:
- 5+ years of experience as a Data Engineer, focusing on ETL development.
- 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation.
- 3+ years of experience with the Linux command line and bash scripting.
- Familiarity with data modelling in analytical databases.
- Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 Parquet/CSV).
- Experience with RESTful APIs and AWS services such as S3, Glue, and Lambda.
- Experience using Confluence for tracking documentation.
- Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels.
- Ability to work independently and manage multiple tasks and priorities in a dynamic environment.
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

Good to Have:
- Experience with Spark.
- Understanding of data visualization tools, particularly Tableau.
- Knowledge of data clean room techniques and integration methodologies.
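Not part of the posting: a minimal sketch of a cron-driven bash/awk pre-load check of the sort the Linux command-line bullet describes. The file path, bucket name, and AWS CLI usage are illustrative assumptions.

    #!/usr/bin/env bash
    # Minimal sketch: nightly check that a raw CSV drop has data rows before it is
    # copied to a curated S3 prefix. Paths, bucket, and AWS CLI use are assumptions.
    set -euo pipefail

    RAW="/data/raw/orders_$(date +%F).csv"    # hypothetical drop location
    BUCKET="s3://example-curated/orders/"     # hypothetical bucket

    # Count non-blank rows after the header line.
    rows=$(awk 'NR > 1 && NF > 0 {n++} END {print n + 0}' "$RAW")

    if [ "$rows" -eq 0 ]; then
        echo "ERROR: $RAW contains no data rows" >&2
        exit 1
    fi

    aws s3 cp "$RAW" "$BUCKET"                # assumes the AWS CLI is configured
    echo "Loaded $rows data rows from $RAW"

Scheduled from cron, a guard like this keeps an empty or truncated extract from propagating into the curated layer.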

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Linux Admin L2 at MKS Instruments, you will collaborate with the IT Infrastructure Organization to perform operations support activities under the guidance of the Sr. Manager of IT Infrastructure. Your primary responsibilities will include incident response in accordance with SLA requirements, strict adherence to change control guidelines, and delivering exceptional customer service. You will also be instrumental in fostering a culture of teamwork within the organization.

Your core duties will involve the installation, configuration, and maintenance of Linux operating systems and software applications. You will manage user accounts, permissions, and access rights while working in close coordination with other IT teams, including backup and application teams, to troubleshoot technical issues and deploy solutions for Linux servers.

A minimum of 5 years of experience with Red Hat, AWS Linux, Ubuntu, VMware, and hypervisor platforms is essential. Experience supporting Linux servers with Oracle and SAP installations will be advantageous. Proficiency in virtualization environments and Bash scripting for automation tasks will be considered a strong asset (see the illustrative sketch below this listing).

Preferred, though not mandatory, skills include the ability to work both independently and collaboratively as part of a team, excellent problem-solving capabilities, and strong communication skills. This position does not entail any supervisory scope, reporting relationships, or financial responsibilities, and it involves minimal to no travel.
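Not part of the posting: a minimal sketch of a routine account-management task automated in Bash, in the spirit of the role's scripting requirement. The input file and default group are hypothetical.

    #!/usr/bin/env bash
    # Minimal sketch: create local accounts listed one username per line in a file.
    # The input file and default group are hypothetical; run with root privileges.
    set -euo pipefail

    USER_LIST="${1:?usage: $0 userlist.txt}"
    DEFAULT_GROUP="engineering"               # hypothetical group

    while read -r user; do
        [ -z "$user" ] && continue            # skip blank lines
        if id "$user" > /dev/null 2>&1; then
            echo "exists:  $user"
        else
            useradd -m -g "$DEFAULT_GROUP" "$user"
            echo "created: $user"
        fi
    done < "$USER_LIST"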

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

About Us: Ensono is an expert technology adviser and managed service provider dedicated to accelerating clients' digital transformation for long-lasting business outcomes. With a team of over 3,500 associates globally, Ensono offers services such as consulting, mainframe and application modernization, public cloud migration, and cloud-native development. Certified in AWS, Azure, and Google Cloud, Ensono is recognized as the Microsoft Datacenter Transformation Partner of the Year and is headquartered in greater Chicago. We care deeply about your success, providing comprehensive strategic and managed services for mission-critical applications. Our Advisory and Consulting services assist in developing an application strategy and selecting the right cloud environment, including public, multi, or hybrid cloud, or mainframe. With expertise spanning all mission-critical platforms, Ensono is your relentless ally throughout your digital transformation journey, offering 24/7 support and ensuring you are continuously innovating and secure.

About the Role: Ensono is seeking a highly skilled and experienced Senior Manager of Observability to lead the observability strategy. The ideal candidate will have a strong background in systems engineering and observability platforms, enabling proactive performance monitoring, issue resolution, and scalability improvements. This leadership role requires technical expertise, collaboration, and the ability to work across teams to ensure an efficient observability experience.

Key Responsibilities:
- Lead Observability Strategy: Develop, implement, and refine the company's observability framework for robust visibility into all production systems and services.
- Team Leadership: Manage and mentor a team of engineers in building scalable observability solutions and fostering continuous learning.
- Incident Management: Oversee alerting and incident response processes for rapid issue identification, diagnosis, and resolution.
- Tool Selection and Integration: Evaluate and integrate observability tools aligned with business goals and infrastructure requirements.
- Collaboration: Work with engineering, DevOps, and IT teams to define KPIs and ensure system reliability and performance visibility.
- Data-Driven Decisions: Leverage observability data for proactive system optimization and feature improvements.
- Automation and Scalability: Ensure observability solutions scale with organizational needs through process automation and resource optimization.
- Reporting and Insights: Provide regular reports on system health, performance trends, and incident root cause analysis to leadership.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of technical leadership experience, with at least 5 years in an observability-focused role.
- Expertise in observability tools and understanding of distributed systems, microservices architecture, and cloud environments.
- Hands-on experience with monitoring, logging, and tracing systems, as well as coding and scripting for automation (see the illustrative sketch below this listing).
- Strong leadership and communication skills, with a track record of managing high-performing teams.

Preferred Qualifications:
- Experience with containerization and container orchestration platforms.
- Familiarity with CI/CD pipelines, DevOps best practices, chaos engineering, and resiliency testing.
- Certifications in cloud platforms such as AWS Certified Solutions Architect or Google Cloud Professional Cloud Architect.
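Not part of the posting: a minimal sketch of the kind of scripted health check an observability team might automate. The endpoint and webhook URLs are placeholders, not tools named by Ensono.

    #!/usr/bin/env bash
    # Minimal sketch: probe a health endpoint and notify a webhook when it is unhealthy.
    # ENDPOINT and WEBHOOK are placeholders, not tools named in the posting.
    set -euo pipefail

    ENDPOINT="https://service.example.com/healthz"   # hypothetical
    WEBHOOK="https://hooks.example.com/alerts"       # hypothetical

    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$ENDPOINT") || code="000"

    if [ "$code" != "200" ]; then
        curl -s -X POST -H 'Content-Type: application/json' \
             -d "{\"text\": \"healthz returned HTTP $code on $(hostname)\"}" \
             "$WEBHOOK" > /dev/null
    fi

In a mature observability stack this logic would live in the monitoring platform itself; the script only illustrates the scripting-for-automation skill the qualifications call out.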

Posted 1 week ago

Apply