8.0 - 12.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
* Lead end-to-end management of database operations across MSSQL, MySQL, and Oracle environments
* Own and enhance platform lifecycle management (PLM), including patching, upgrades, and performance tuning
* Design and implement automated, self-healing systems for proactive fault detection and recovery
* Build scalable automation for routine DBA tasks (backups, failovers, capacity planning, etc.)
* Ensure high availability, disaster recovery, and compliance of all data systems
* Collaborate with architects and engineering leads to define and evolve the data infrastructure roadmap
* Mentor and guide junior DBAs and data platform engineers, promoting best practices and continuous learning
* Establish and monitor KPIs for system reliability, performance, and platform health
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
* 10+ years of experience in database administration and operations (MSSQL, MySQL, Oracle)
* 3+ years in a leadership or managerial role with a solid track record of team development
* Experience with monitoring tools (e.g., Prometheus, Grafana, OEM, SolarWinds)
* Experience working in hybrid or cloud-native environments (Azure, AWS, or GCP)
* Deep understanding of PLM, capacity management, HA/DR, and database security
* Expertise in scripting (PowerShell, Bash, Python) and automation tools (Ansible, Terraform, etc.)
* Solid troubleshooting and performance tuning skills across DB platforms
* Familiarity with CI/CD practices and infrastructure automation

Preferred Qualifications
* Experience with containerized DB deployments (e.g., Docker, Kubernetes)
* Exposure to self-service data platforms and DevOps for data
* Knowledge of AI/ML-based alerting or anomaly detection in ops
* Certifications in MSSQL, Oracle, MySQL, or relevant cloud platforms

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
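The "self-healing systems" called out in the responsibilities above usually reduce to a loop: detect a fault, attempt a bounded number of automated repairs, then escalate to a human. A minimal Python sketch under invented assumptions (replica lag as the health signal, `restart_replication` as the repair action); this illustrates the pattern, not any specific Optum system:

```python
def check_replica_lag(get_lag_seconds, max_lag=30):
    """Return True if the hypothetical replica is healthy (lag within bounds)."""
    return get_lag_seconds() <= max_lag

def self_heal(get_lag_seconds, restart_replication, max_lag=30, retries=3):
    """Detect excessive lag and retry a recovery action before paging a human."""
    for attempt in range(1, retries + 1):
        if check_replica_lag(get_lag_seconds, max_lag):
            # Healthy: report how many repair attempts it took to get here.
            return f"healthy after {attempt - 1} repair(s)"
        restart_replication()  # recovery action; real systems would log and audit this
    return "escalate: manual intervention required"
```

The bounded retry count is the important design choice: automation that repairs forever can mask a real outage, so the loop hands off to an operator once its budget is spent.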
Posted 3 weeks ago
5.0 - 9.0 years
13 - 17 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
* Accountable for the data engineering lifecycle, including research, proof of concepts, architecture, design, development, test, deployment, and maintenance
* Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability
* Layer instrumentation into the development process so that data pipelines can be monitored to detect internal problems before they result in user-visible outages or data quality issues
* Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues
* Embrace continuous learning of engineering practices to ensure industry best practices and technology adoption, including DevOps, Cloud, and Agile thinking
* Reduce tech debt and drive tech transformation, including open source adoption, cloud adoption, and HCP assessment and adoption
* Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).

The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
* Undergraduate degree or equivalent experience
* Experience with data analytics tools like Tableau, Power BI, or similar
* Experience in optimizing data processing workflows for performance and cost-efficiency
* Proficient in design and documentation of data exchanges across various channels, including APIs, streams, and batch feeds
* Proficient in source-to-target mapping and gap analysis; applies data transformation rules based on an understanding of business rules and data structures
* Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR)
* Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks
* Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA
* Proven ability to develop and implement scripts to maintain and monitor performance tuning
* Proven ability to design scalable job scheduler solutions and advise on appropriate tools/technologies to use
* Proven ability to work across multiple domains to define and build data models
* Proven ability to understand all the connected technology services and their impacts
* Proven ability to assess designs and propose options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility
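The instrumentation responsibility above, catching pipeline problems before they become user-visible, often starts with something as simple as recording per-stage row counts and flagging abnormal drop-off. A minimal Python sketch; the stage names and the 50% yield threshold are invented for illustration:

```python
def run_stage(name, rows, transform, metrics, min_yield=0.5):
    """Run one pipeline stage, recording input/output counts so that a sudden
    drop in yield is caught by monitoring rather than by downstream users.
    All names here are illustrative, not part of any specific Optum system."""
    out = [t for t in (transform(r) for r in rows) if t is not None]
    metrics[name] = {"in": len(rows), "out": len(out)}
    if rows and len(out) / len(rows) < min_yield:
        # A real pipeline would emit this to a monitoring system, not a dict.
        metrics[name]["alert"] = "yield below threshold"
    return out
```

A scheduler or monitoring agent would then page on the `alert` field instead of waiting for a data-quality complaint.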
Posted 3 weeks ago
4.0 - 7.0 years
10 - 14 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
* Coordinate with the team to support 24x7 operations
* Act as Subject Matter Expert for day-to-day operations, process, and ticket queue management
* Perform team management along with managing process and operational escalations
* Leverage the latest technologies and analyze large volumes of data to solve complex problems facing the health care industry
* Develop, test, and support new and pre-existing programs related to data interfaces
* Support operations by identifying, researching, and resolving performance and production issues
* Participate in War Room activities to monitor status and coordinate with multiple groups to address production performance concerns, mitigate client risks, and communicate status
* Work with engineering teams to build tools/features necessary for production operations
* Build and improve standard operating procedures and troubleshooting documents
* Report on metrics to surface meaningful results and identify areas for efficiency gains
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
* Bachelor's or Master's degree in Computer Science or Information Technology, or equivalent work experience
* 3+ years of experience with UNIX shell scripting
* 2+ years of experience with RDBMSs such as Oracle and Postgres, writing queries in SQL and PL/SQL
* 2+ years of experience working with Production Operations processes and teams
* 2+ years of experience with server-side administration on OS flavors such as Red Hat or CentOS
* Experience in understanding performance metrics and developing them to measure progress against KPIs
* Ability to develop and manage multiple projects with minimal direction and supervision

Soft Skills
* Highly organized with strong analytical skills and excellent attention to detail
* Excellent time management and problem-solving skills, with the capacity to lead diverse talent, work cross-functionally, and build consensus on difficult issues
* Flexible to adjust to evolving business needs, with the ability to understand objectives and communicate with non-technical partners
* Solid organization skills; very detail-oriented, with careful attention to work processes
* Takes ownership of responsibilities and follows through on hand-offs to other groups
* Enjoys a fast-paced environment and the opportunity to learn new skills
* High-performing, motivated, and goal-driven

Preferred Qualifications
* Experience delegating tasks and providing timely feedback to the team to accomplish a task or solve a problem
* Experience in scripting languages like Perl, Bash/Shell, or Python
* Experience with Continuous Integration (CI) tools; Jenkins or Bamboo preferred
* Experience working in an agile environment
* US healthcare industry experience
* Experience working across teams and a proven track record of solution-focused problem solving
* Familiarity with cloud-based technologies
* Comfortable working in a rapidly changing environment where documentation for job execution isn't yet fully fleshed out
* Knowledgeable in building and/or leveraging operational reliability metrics to understand the health of the production support process
* An eye for improving technical and business processes, with proven experience creating standard operating procedures and other technical process documentation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 3 weeks ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
* Help manage and configure Linux servers remotely, working with on-site techs to bring up new hardware.
* Participate in writing, extending, and maintaining Ansible playbooks and roles to automate configuration and deployment.
* Contribute to automation workflows using Jenkins.
* Collaborate with other engineers to expand our infrastructure automation, reduce manual steps, and improve reliability.
* Assist in monitoring, testing, and refining automation to handle edge cases and failure conditions gracefully.
* Document systems, tools, and automation logic clearly and thoroughly.

Required education
Bachelor's Degree
Preferred education
Bachelor's Degree

Required technical and professional expertise
* 3-5 years of experience in a Linux systems, DevOps, or infrastructure-related role.
* 2-3 years of experience with configuration management tools like Ansible.
* 4-5 years of scripting knowledge (e.g., Python, Bash, or similar).
* Comfortable working with remote server management tools (e.g., IPMI, iLO, DRAC).
* Basic understanding of networking (DNS, DHCP, IP addressing).
* Strong desire to learn and improve automation systems.
* Good communication and collaboration skills.

Preferred technical and professional experience
* Familiarity with Jenkins preferred.
* Experience deploying and maintaining Tekton.
* Experience deploying and maintaining Kubernetes.
* Exposure to SuperMicro and/or Lenovo server hardware.
* Experience with Ubuntu Linux in production environments.
* Exposure to PXE booting and automated OS installation processes.
* Experience contributing to shared codebases or working with version control.
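The glue scripting such a role involves often looks like generating Ansible inputs from some source of truth, for example rendering a host database into INI inventory form so playbooks always run against current hardware. A hypothetical Python sketch (group and host names are made up):

```python
def build_inventory(groups):
    """Render a minimal Ansible INI inventory from a {group: [hosts]} mapping.
    Hosts are sorted so the output is stable across runs (diff-friendly)."""
    lines = []
    for group, hosts in groups.items():
        lines.append(f"[{group}]")
        lines.extend(sorted(hosts))
        lines.append("")  # blank line between groups
    return "\n".join(lines).rstrip() + "\n"
```

In practice the mapping would come from an asset database or the remote-management layer (IPMI/iLO), and the output would be written where `ansible-playbook -i` expects it.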
Posted 3 weeks ago
3.0 - 6.0 years
8 - 12 Lacs
Bengaluru
Work from Office
In this Site Reliability Engineer role, you will work closely with the entire IBM Cloud organization to maintain and operationally improve the IBM Cloud infrastructure. You will focus on the following key responsibilities:
* Respond promptly to production issues and alerts, 24x7
* Execute changes in the production environment through automation
* Implement and automate infrastructure solutions that support IBM Cloud products and services to reduce toil
* Partner with other SRE teams and program managers to deliver mission-critical services to IBM Cloud
* Build new tools to improve automated resolution of production issues
* Monitor and respond promptly to production alerts, and execute changes in production through automation
* Support the compliance and security integrity of the environment
* Continually improve systems and processes regarding automation and monitoring

Required education
Bachelor's Degree
Preferred education
Master's Degree

Required technical and professional expertise
* Excellent written and verbal communication skills
* Minimum 5+ years of experience handling large production systems environments
* Must be extremely comfortable using and navigating within a Linux environment
* Ability to do low-level debugging and problem analysis by examining logs and running Unix commands
* Must be efficient in writing and debugging scripts
* 3-5+ years of experience in virtualization technologies and automation/configuration management
* Automation and configuration management tools/solutions: Ansible, Python, Bash, Terraform, GoLang, etc. (at least one)
* Virtualization technologies: Citrix Xen Hypervisor (preferred), KVM (also preferred), libvirt, VMware vSphere, etc. (at least one)
* Monitoring technologies: Zabbix, Sysdig, Grafana, Nagios, Splunk, etc. (at least one)
* Working knowledge of container technologies: Kubernetes, Docker, etc.
* Flexibility to work shifts to handle production systems

Preferred technical and professional experience
* Good experience in public cloud platforms and Kubernetes clusters; strong Linux skills for managing services across a microservices platform; good SRE knowledge in cloud compute, storage, and network services
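The "examining logs and running Unix commands" expectation above is essentially log triage: scanning thousands of lines and tallying where errors cluster. A small Python sketch of that kind of throwaway tool; the `service: LEVEL message` record format is an assumption for illustration, since real log formats vary by system:

```python
import re
from collections import Counter

# Severity keywords treated as error-level; adjust per logging convention.
ERROR_RE = re.compile(r"\b(ERROR|CRIT)\b")

def error_counts(log_lines):
    """Tally error-level lines per service from 'service: LEVEL message' records,
    so an on-call engineer can see at a glance which component is failing."""
    counts = Counter()
    for line in log_lines:
        if ERROR_RE.search(line):
            service = line.split(":", 1)[0].strip()
            counts[service] += 1
    return dict(counts)
```

The same shape works as a pipe target (`journalctl | python triage.py`), which is why scripting fluency sits next to Unix command fluency in the requirements.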
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
As a DevOps Developer for the IBM Cloud Object Storage Service, you will play a pivotal role in enhancing the developer experience, productivity, and satisfaction within the organization. Your primary responsibilities will include:
* Collaborating with development teams to understand their needs and provide tailored solutions that align with the organization's goals and objectives
* Designing and implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools like Jenkins, Tekton, etc.
* Designing and implementing tools for automated deployment and monitoring of multiple environments, ensuring seamless integration and scalability
* Staying updated with the latest trends and best practices in DevOps and related technologies, and incorporating them into the development platform
* Ensuring security and compliance of the platforms, including patching, vulnerability detection, and threat mitigation
* Providing on-call IT support and monitoring technical operations to maintain the stability and reliability of the developer platform
* Collaborating with other teams to introduce best automation practices and tools, fostering a culture of innovation and continuous improvement
* Embracing an Agile culture and employing relevant fit-for-purpose methodologies and tools such as Trello, GitHub, Jira, etc.
* Maintaining good communication skills and the ability to lead global teams remotely, ensuring effective collaboration and knowledge sharing
* Implementing and automating infrastructure solutions that support IBM Cloud products and infrastructure
* Implementing and maintaining state-of-the-art CI/CD pipelines, ensuring full compliance with industry standards and regulatory frameworks
* Administering automated CI/CD systems and tools
* Partnering with other teams, managers, and program managers to develop alerting and monitoring for mission-critical services
* Providing technical escalation support for other Infrastructure Operations teams
* Maintaining highly scalable, secure cloud infrastructures leveraging industry-leading platforms such as AWS, Azure, or GCP
* Orchestrating and managing infrastructure as code (IaC) implementations using tools like Terraform

Required education
Bachelor's Degree
Preferred education
Master's Degree

Required technical and professional expertise
* Proven experience: demonstrated track record of success as a Site Reliability Engineer or in a similar role
* System monitoring and troubleshooting: strong skills in system monitoring, issue response, and troubleshooting for optimal system performance
* Automation proficiency: proficiency in automation for production environment changes, streamlining processes for efficiency
* Collaborative mindset: ability to partner seamlessly with cross-functional teams for shared success
* Effective communication skills: essential for effective integration planning and swift issue resolution
* Tech stack: Jenkins, Linux administration, Python, Ansible, Golang, Terraform

Preferred technical and professional experience
* Programming skills: scripting, Go, Python
* Proficiency in writing, debugging, and maintaining automation, scripts, and code (e.g., Bash, Ansible, Python, Java, or Golang)
* Ability to administer, configure, optimize, and monitor services and/or servers at scale
* Strong understanding of scalability, reliability, and performance principles
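The alerting and monitoring work described above usually starts with simple threshold rules on service health metrics before anything fancier. A minimal Python sketch of such a rule; the success-rate thresholds are invented for illustration, not IBM Cloud's:

```python
def severity(success_rate, warn=0.99, crit=0.95):
    """Classify a service's success-rate sample into an alert severity.
    Thresholds are illustrative; real SLO targets come from the service owner."""
    if success_rate < crit:
        return "critical"   # page the on-call engineer
    if success_rate < warn:
        return "warning"    # surface on a dashboard, no page
    return "ok"
```

Keeping the rule this small is deliberate: the thresholds become configuration, and the monitoring stack (Grafana, Sysdig, etc.) handles evaluation and routing.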
Posted 3 weeks ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Excellent hands-on experience with Terraform, Ansible, Docker and Kubernetes. Must have experience with AWS and Linux. Development/programming in Python is a must, as is experience with a scripting language (Python/Shell/Bash). Good debugging skills. Any experience in platform automation is a big plus; Kafka is nice to have as well. Primary Skills: Terraform, Ansible, Docker and Kubernetes, AWS and Linux.
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Job Summary
Synechron is seeking innovative and skilled GenAI Engineers focused on automating quarterly planning processes. This role involves developing a prototype using generative AI for planning automation, with a proactive approach to tracking progress. The project requires both a backend and frontend engineer to build a business orchestration layer with integrated business logic, leveraging tools like Atlassian agents and co-pilots. This role contributes to Synechron's strategic objectives by enhancing planning efficiency and demonstrating advanced AI capabilities.

Software Requirements
Required Software Skills:
* Proficiency in Python and FastAPI for backend development.
* Experience with AWS, including CloudFormation and cloud-native solutions.
* Familiarity with CI/CD pipelines and GitHub Actions.
* Knowledge of Bash scripting and DocumentDB for efficient data management.
Preferred Software Skills:
* Familiarity with generative AI frameworks and tools such as agentic frameworks, Langchain, and Semantic Kernel.
* Experience with vector, graph, and SQL databases for data manipulation and storage.

Overall Responsibilities
* Collaborate with cross-functional teams to understand technology requirements and design AI-driven solutions for business planning.
* Develop and implement technical specifications and documentation for the prototype.
* Conduct code reviews and ensure codebase quality and maintainability.
* Stay current with the latest technology trends and integrate relevant advancements into the project.
* Provide technical support and resolve issues to ensure smooth project execution.

Technical Skills (By Category)
Programming Languages:
* Required: Python
* Preferred: Familiarity with additional scripting languages as needed.
Databases/Data Management:
* Essential: Experience with DocumentDB and other NoSQL databases.
* Preferred: Knowledge of vector and graph databases.
Cloud Technologies:
* Essential: AWS cloud services and CloudFormation for deployment and integration.
Frameworks and Libraries:
* Essential: FastAPI for backend services.
* Preferred: Generative AI frameworks and tools like Langchain.
Development Tools and Methodologies:
* Required: CI/CD pipelines and GitHub Actions for version control and deployment.

Experience Requirements
* 7 to 10 years of experience in software development with a focus on cloud-native and generative AI technologies.
* Proven experience in developing and deploying solutions using AWS and related technologies.
* Experience working with cross-functional teams and contributing to AI-centric projects.

Day-to-Day Activities
* Participate in daily stand-up meetings and project planning sessions.
* Write, test, and deploy software solutions, ensuring timely delivery of project milestones.
* Conduct code reviews and provide constructive feedback to team members.
* Stay updated on technology trends and incorporate relevant advancements into the project.
* Collaborate with data science teams to integrate AI capabilities effectively.

Qualifications
* Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
* Relevant certifications in AWS or AI technologies are preferred.
* Commitment to continuous professional development and staying informed on industry trends.

Professional Competencies
* Strong critical thinking and problem-solving skills.
* Excellent leadership and teamwork abilities.
* Effective communication and stakeholder management skills.
* Adaptability and a strong learning orientation to embrace new technologies.
* Innovation mindset to drive creative solutions and improvements.
* Effective time and priority management skills.
Posted 3 weeks ago
5.0 - 10.0 years
17 - 22 Lacs
Chennai
Work from Office
Job Summary
Synechron is seeking a highly skilled Senior Developer specializing in ELK Stack & DevOps / SRE (Site Reliability Engineering) to join our dynamic issue management team. In this pivotal role, you will leverage your expertise in Site Reliability Engineering (SRE), DevOps practices, and monitoring solutions to ensure the stability, performance, and operational readiness of our applications and infrastructure. Your contributions will directly support our business objectives by enhancing system reliability, streamlining incident management, and fostering continuous improvement across technical domains.

Software Requirements
Required Skills:
* Proven proficiency with the ELK Stack (Elasticsearch, Logstash, Kibana), version 7.x or higher, with hands-on experience building dashboards and analytics
* Experience with CI/CD tools such as Jenkins, Ansible, or equivalent automation platforms
* Programming/scripting proficiency in Python and Bash
* Familiarity with monitoring and logging tools (ELK Stack essential, Splunk preferred)
* Cloud platform experience (AWS, Azure): practical knowledge of cloud services and deployment strategies
Preferred Skills:
* Experience with React, Node.js, and Java application logging and monitoring strategies
* Familiarity with additional DevOps tools and methodologies
* Knowledge of containerization and orchestration (e.g., Docker, Kubernetes)
* Experience with Infrastructure as Code (IaC) tools

Overall Responsibilities
* Collaborate with the issue management team to efficiently track, analyze, and resolve incidents and Operational Readiness Evaluations (OREs), ensuring minimal disruption and swift recovery.
* Develop, implement, and optimize monitoring and logging solutions utilizing the ELK Stack, creating actionable dashboards and performance metrics.
* Design and enforce effective logging strategies for applications built with React, Node.js, and Java to facilitate troubleshooting and performance analysis.
* Lead continuous improvement initiatives aimed at enhancing system reliability, performance, and operational efficiency.
* Work cross-functionally with development, infrastructure, and security teams to diagnose and address performance bottlenecks and reliability challenges.
* Document incident processes, resolution procedures, and best practices to promote knowledge sharing and team growth.

Technical Skills (By Category)
Programming Languages & Scripts (Essential):
* Python, Bash, or equivalent scripting languages
Monitoring & Logging Tools (Essential):
* ELK Stack (Elasticsearch, Logstash, Kibana), version 7.x or higher
* Splunk (preferred)
Cloud Technologies (Essential):
* AWS or Azure services such as EC2, S3, CloudWatch, or equivalent
Frameworks & Application Technologies:
* Experience monitoring React, Node.js, and Java applications, including implementation of logging and performance metrics
Development & Automation Tools:
* CI/CD pipelines (Jenkins, Ansible): setup, maintenance, and optimization
* Containerization (Docker, Kubernetes): knowledge preferred
Security & Protocols (if applicable):
* Basic understanding of security best practices for monitoring and logging

Experience Requirements
* Minimum of 8 years of professional experience in DevOps, Site Reliability Engineering, or related fields
* Demonstrated success in developing and maintaining comprehensive monitoring and logging solutions, particularly using the ELK Stack
* Proven experience implementing and refining logging strategies across diverse application stacks (React, Node.js, Java)
* Hands-on experience working within cloud environments such as AWS or Azure
* Experience working in large-scale, distributed systems and incident management processes

Day-to-Day Activities
* Proactively monitor system health and incident alerts, collaborating with the issue management team for swift resolution
* Design, configure, and enhance ELK Stack dashboards, visualizations, and analytics for operational insights
* Implement and refine logging strategies for web and backend applications to facilitate effective troubleshooting
* Participate in continuous improvement projects to boost application and infrastructure performance
* Engage in cross-team meetings to discuss incident trends, system bottlenecks, and reliability enhancements
* Document procedures, lessons learned, and best practices for ongoing knowledge sharing

Qualifications
* Bachelor's degree in Computer Science, Information Technology, or a related discipline; equivalent professional experience also considered
* Relevant certifications such as AWS Certified DevOps Engineer, Certified Kubernetes Administrator, or equivalent are preferred
* Ongoing professional development in DevOps, SRE practices, and monitoring technologies

Professional Competencies
* Strong analytical and problem-solving skills with a focus on system reliability and performance
* Effective communicator capable of conveying technical information clearly to diverse audiences
* Team-oriented collaborator with experience working across cross-functional groups
* Adaptable learner, eager to stay current with emerging technologies and best practices
* Demonstrates a proactive approach to incident management and continuous improvement
* Ability to manage multiple priorities efficiently while maintaining attention to detail
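A common first step in the logging strategies this role designs is emitting structured JSON, so Logstash and Elasticsearch can index fields directly instead of grok-parsing free text. A minimal stdlib sketch using Python's `logging` module; the field names are assumptions for illustration, not a Synechron standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record so the ELK pipeline can index
    fields (level, logger, message) without pattern matching."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }, sort_keys=True)
```

In a real deployment the formatter would be attached to a handler shipping to Logstash, and would typically add a timestamp and correlation/trace IDs so incidents can be followed across React, Node.js, and Java services.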
Posted 3 weeks ago
8.0 - 13.0 years
11 - 16 Lacs
Chennai
Work from Office
Job Summary Synechron is seeking an experienced and detail-oriented Senior Middleware Administrator to oversee the deployment, management, and automation of middleware environments. This role is pivotal in ensuring the stability, security, and performance of middleware systems across cloud and on-premises infrastructure. The successful candidate will lead automation initiatives, support containerization efforts, and mentor team members to optimize middleware operations, thereby contributing to the organization’s digital and operational excellence. Software Requirements Required: Linux platform proficiency (Red Hat Enterprise Linux or equivalent) Scripting languagesPython and/or Bash (intermediate to advanced) CI/CD toolsJenkins, Travis, Concourse or similar Configuration and automation toolsAnsible, Terraform ContainerizationDocker, Kubernetes Preferred: Middleware technologies such as WebSphere, JBoss, or similar Cloud environments (AWS, Azure, or GCP) familiarity HashiCorp CertifiedTerraform Associate (optional) HashiCorp CertifiedVault Associate (optional) Overall Responsibilities Develop, maintain, and enhance automation scripts and infrastructure as code using Ansible, Terraform, and related tools Build and deploy containerized versions of legacy applications, ensuring scalability and reliability Provide hands-on support and guidance to operations teams on automation, containerization, and middleware management Support provisioning, configuration, deployment, and disaster recovery processes for middleware environments Monitor the health, security, and performance of middleware systems, taking corrective actions as needed Troubleshoot and resolve middleware-related issues with a focus on stability and uptime Automate updates, patches, configurations, and environment provisioning processes Conduct training sessions and share knowledge to improve team capabilities in automation and middleware management Assist in planning and executing middleware upgrades and 
migrations aligned with organizational standards and best practices Performance outcomes: Consistent, automated provisioning and deployment processes Reduced manual intervention and increased system reliability Knowledge sharing that enhances team competency Secure, compliant, and well-maintained middleware environments Technical Skills (By Category) Programming Languages: Mandatory: Python, Bash Preferred: PowerShell, other scripting tools Databases/Data Management: Basic understanding of database connectivity for middleware applications (e.g., JDBC, SQL) Cloud Technologies: Experience with cloud environments (AWS, Azure, GCP) for middleware deployment and automation Frameworks and Libraries: Knowledge of container orchestration and management (Docker, Kubernetes) Development Tools & Methodologies: Jenkins, Travis CI, Concourse Infrastructure as CodeTerraform, Ansible Version ControlGit Security Protocols: Familiarity with securing middleware environments and implementing best practices in access control and encryption Experience Requirements 5-10 years of experience in IT operations, systems automation, or middleware administration Proven expertise in middleware tools such as WebSphere, JBoss, or similar products At least 2+ years of practical experience with Ansible, Terraform, Docker, and Kubernetes in cloud or hybrid environments Demonstrated experience in automation, scripting, and infrastructure provisioning Prior exposure to cloud-native deployment models and disaster recovery strategies Alternative experience pathways: Candidates with extensive hands-on middleware administration and automation experience, even if specific cloud environment experience is limited, will be considered. 
Day-to-Day Activities Monitor and maintain middleware system performance, security, and availability Develop, test, and deploy automation scripts and infrastructure code for provisioning and configuration management Containerize legacy applications to improve scalability and operational efficiency Collaborate with development, operations, and security teams on middleware-related initiatives Conduct troubleshooting and root cause analysis for middleware outages or issues Perform system upgrades, patches, configuration changes, and environment migrations Provide guidance and training to operational teams on automation tools and middleware best practices Document processes, configurations, and automation workflows for transparency and knowledge sharing Qualifications Master’s Degree in Computer Science, Computer Engineering, or related field; alternative professional experience considered Certification in Red Hat Enterprise Linux Automation with Ansible (RH294) is preferred Certified Kubernetes Application Developer (CKAD) or equivalent is highly desirable Additional certifications such as HashiCorp Certified: Terraform Associate or Vault Associate are advantageous Professional Competencies Critical thinker with strong problem-solving skills and analytical aptitude Effective communicator with the ability to articulate technical concepts clearly Team-oriented with a proven ability to collaborate across functions Results-driven, with attention to detail and quality in work outputs Adaptable and eager to learn new tools, processes, and technologies Demonstrates initiative in automation and process improvement efforts Skilled in managing multiple priorities and working under pressure
Posted 3 weeks ago
5.0 - 9.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking a skilled Automation Engineer to join our team, focusing on designing and implementing automated solutions that enhance efficiency and quality across various processes and systems. This role plays a crucial part in advancing Synechron’s strategic objectives by collaborating with cross-functional teams to identify and execute automation opportunities, ensuring our solutions meet the highest standards of reliability and performance. Software Requirements Required: Proficiency in automation tools and frameworks such as Selenium, Jenkins, and Ansible. Experience with scripting languages like Python, JavaScript, or Bash. Preferred: Familiarity with cloud technologies such as AWS or Azure. Understanding of CI/CD pipelines and related tools. Overall Responsibilities Design, develop, and implement automated solutions to improve efficiency and quality. Collaborate with software developers, QA engineers, and operations personnel to identify automation opportunities. Create and maintain automation frameworks and scripts for testing and deployment processes. Conduct thorough testing of automated solutions to ensure reliability and performance. Monitor automated systems for performance issues and troubleshoot problems. Document automation processes, workflows, and technical specifications for knowledge sharing and compliance. Stay updated on industry trends and emerging technologies in automation and software development. Provide training and support to team members on automation tools and practices. Participate in continuous improvement initiatives to enhance automation strategies. Technical Skills (By Category) Programming Languages: Required: Python, JavaScript, or Bash. Preferred: Java or Ruby. Development Tools and Methodologies: Required: Selenium, Jenkins, Ansible. Cloud Technologies: Preferred: AWS, Azure. Frameworks and Libraries: Required: Familiarity with automation frameworks.
Security Protocols: Preferred: Knowledge of security practices in automation. Experience Requirements Minimum of 6 years of experience in automation engineering roles. Experience in developing automated solutions within software development environments. Industry experience in technology, finance, or similar sectors is preferred. Alternative experience pathways include roles in software development or systems engineering with a focus on automation. Day-to-Day Activities Attend regular team meetings and engage in collaborative discussions with cross-functional teams. Develop and test automation scripts and frameworks. Monitor and troubleshoot automated systems to ensure optimal performance. Document processes and maintain clear records of all automation activities. Provide support and training on automation tools and methodologies to team members. Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Relevant certifications in automation tools or cloud technologies are preferred. Commitment to continuous professional development and staying abreast of industry trends. Professional Competencies Strong critical thinking and problem-solving capabilities. Effective communication and stakeholder management skills. Proven ability to work collaboratively in team settings. Adaptability to new technologies and changing requirements. Innovative mindset focused on improving processes and solutions. Excellent time and priority management skills.
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking talented GenAI Engineers to join our team for a 2-3 month project focused on automating quarterly planning through advanced generative AI solutions. This role involves developing a prototype using a business orchestration layer with built-in business logic, leveraging Atlassian agents and co-pilots. Our team, including experienced data scientists, aims to demonstrate this prototype in June. Software Requirements Required Software Skills: Proficiency in cloud-native solutions on AWS, including CloudFormation and AWS services. Experience with CI/CD pipelines and GitHub Actions. Strong programming skills in Python and FastAPI. Familiarity with bash scripting and DocumentDB. Preferred Software Skills: Knowledge of generative AI frameworks and tools such as Agentic Frameworks, Langchain, and Semantic Kernel. Experience with Vector, Graph, and SQL Databases. Overall Responsibilities Develop a backend and frontend UI for a prototype that automates business planning using generative AI. Apply architectural patterns and microservices architecture in solution deployment and integration. Collaborate with cross-functional teams, including data scientists, to ensure alignment with project goals. Stay current with industry trends and integrate new technologies into the solution. Conduct code reviews to ensure quality and maintainability. Technical Skills (By Category) Programming Languages: Required: Python Preferred: Familiarity with other scripting languages such as JavaScript for frontend development. Databases/Data Management: Essential: Experience with DocumentDB and other NoSQL databases. Preferred: Knowledge of Vector and Graph databases. Cloud Technologies: Essential: AWS cloud services and CloudFormation. Frameworks and Libraries: Essential: FastAPI for backend development. Preferred: Generative AI tools and frameworks like Langchain. Development Tools and Methodologies: Required: CI/CD pipelines, GitHub Actions, Agile methodologies.
Experience Requirements 7 to 10 years of experience in software development, with a focus on cloud-native and generative AI technologies. Proven experience with software development methodologies and tools such as Agile and Scrum. Experience in developing solutions with cross-functional teams, including participation in code reviews. Day-to-Day Activities Participate in daily stand-up meetings and project planning sessions. Collaborate with cross-functional teams to gather requirements and design solutions. Write, test, and deploy software solutions, ensuring timely delivery. Conduct code reviews and provide feedback to team members. Stay updated on the latest technology trends and incorporate them into solutions. Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Professional certifications in relevant technologies are a plus. Professional Competencies Strong critical thinking and problem-solving capabilities. Excellent leadership and teamwork abilities. Effective communication and stakeholder management skills. Adaptability and a strong learning orientation. Innovative mindset with a focus on driving creative solutions. Effective time and priority management skills.
Posted 3 weeks ago
6.0 - 11.0 years
6 - 16 Lacs
Pune
Hybrid
Job Title: Sr. Systems Software Engineer (Kernel, Filesystems, and Networking) Duration: Full time role Location: Hybrid (Pune/Bengaluru/Hyderabad/Mumbai/Chennai) Job Description: We are seeking a highly skilled and experienced Senior Systems Software Engineer with a strong background in Linux kernel development, file systems, and networking technologies relevant to modern data centers. The ideal candidate will have a deep understanding of low-level systems programming, networking protocols (with a focus on RDMA and CNI), and hands-on experience in designing, developing, and maintaining high-performance, scalable system components. Key Responsibilities: Design, develop, and maintain Linux kernel modules, with emphasis on performance, scalability, and security. Contribute to the development and enhancement of file systems, ensuring reliability, high throughput, and low latency. Build and optimize networking solutions for data center environments, including RDMA, DPDK, and container networking (CNI plugins). Collaborate with cross-functional teams including DevOps, SRE, and platform engineering to ensure seamless integration with infrastructure. Participate in code reviews, architecture discussions, and performance tuning sessions. Monitor and improve system performance, addressing bottlenecks and implementing robust diagnostics. Contribute to open-source communities and stay up to date with emerging kernel and networking trends. Required Skills and Qualifications: 6 to 8 years of hands-on experience in systems-level programming, preferably in C/C++ and scripting languages like Python or Bash. Proven expertise in Linux kernel development (process management, memory management, device drivers, etc.). In-depth knowledge of file system architecture, implementation, and debugging (e.g., ext4, XFS, Btrfs, etc.). 
Strong understanding of datacenter networking concepts: TCP/IP stack, RDMA, DPDK, SR-IOV, VLANs, VxLAN, etc. Experience with Container Networking Interface (CNI) and technologies like Kubernetes, Docker, or CRI-O. Familiarity with performance profiling tools (e.g., perf, ftrace, eBPF). Experience working in distributed systems and cloud-native infrastructure is a plus. Preferred Qualifications: Contributions to open-source kernel/file system/networking projects. Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and infrastructure as code tools (e.g., Terraform, Ansible).
Posted 3 weeks ago
5.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Remote
Key Responsibilities: Design, develop, and optimize relational (PostgreSQL, SQL Server, MySQL, Oracle) and NoSQL (MongoDB, Cassandra, Redis) databases. Write and optimize complex SQL queries, stored procedures, triggers, and functions. Develop and maintain ETL pipelines for data integration. Ensure database security, backups, and high-availability solutions. Collaborate with teams to support application development and troubleshoot performance issues. Maintain technical documentation and stay updated on database best practices. Required Skills: 5+ years of experience in database development. Strong expertise in PostgreSQL and proficiency in SQL Server, MySQL, or Oracle. Experience with query optimization, indexing, and partitioning. Familiarity with NoSQL databases and cloud DB solutions (AWS RDS, Azure SQL, etc.). Hands-on experience with ETL tools, data warehousing, and scripting (Python, PowerShell, Bash). Strong problem-solving and communication skills.
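The query optimization and indexing skills this role asks for boil down to one workflow: inspect the plan, add the index, confirm the scan became a search. A purely illustrative sketch (not from the posting) using Python's stdlib sqlite3 in place of PostgreSQL or SQL Server; the table and index names are invented, but the principle carries over.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer = 'cust7'"

# Before indexing: the planner must scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# After indexing: the planner searches the B-tree index instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)  # a SCAN over the whole table
print(plan_after)   # a SEARCH using idx_orders_customer
```

The same plan-driven loop applies on PostgreSQL with EXPLAIN ANALYZE when tuning the production queries the role describes.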
Posted 3 weeks ago
2.0 - 5.0 years
5 - 8 Lacs
Chennai
Remote
Notice Period : Immediate - 15 Days Job Description Overview : We are seeking a highly skilled Linux Systems Engineer to join our team. The ideal candidate will have a deep understanding of Linux operating systems, virtualization technologies (specifically VMware), and networking protocols. This role will involve deploying and managing products in Linux environments, troubleshooting complex issues, and ensuring optimal system performance. Responsibilities : Linux Administration : - Proficiently administer various Linux distributions (e.g., Red Hat, Ubuntu, CentOS). - Install, configure, and maintain Linux systems, including servers, workstations, and network devices. - Perform system hardening, security updates, and patch management. - Manage user accounts, permissions, and access controls. - Optimize system performance and resource utilization. Virtualization : - Deploy and manage virtual machines using VMware vSphere. - Create, configure, and maintain virtual networks and storage. - Perform VM migration, replication, and high availability tasks. - Troubleshoot virtualization-related issues. Networking : - Understand TCP/IP, UDP, SNMP protocols and their applications. - Configure and troubleshoot network interfaces, routing, and firewalls. - Work with network devices (switches, routers, load balancers). - Implement network security measures. Product Deployment : - Deploy and configure software products in Linux environments. - Integrate products with virtualization platforms and other systems. - Provide technical support and troubleshooting for deployed products. Troubleshooting : - Diagnose and resolve complex technical issues related to Linux, virtualization, and networking. - Analyze system logs and performance metrics to identify problems. - Implement effective troubleshooting strategies and best practices. Documentation : - Create and maintain clear and concise documentation for system configurations, procedures, and troubleshooting steps. 
Collaboration : - Work closely with other team members, including developers, network engineers, and IT operations staff. - Communicate effectively and collaborate on projects to achieve team goals. Qualifications : - Strong knowledge of Linux operating systems, including Red Hat Enterprise Linux. - Experience with VMware vSphere virtualization platform. - Understanding of networking protocols (TCP/IP, UDP, SNMP) and concepts. - Experience deploying and managing software products in Linux environments. - Excellent analytical and troubleshooting skills. - Excellent communication and interpersonal skills. - Certification in RHCSA or RHCE is a plus. - Knowledge of OpenStack is a bonus. - Familiarity with hardware platforms (EMC VNX, Unity Storage, HP blade servers, Brocade SAN switches, VC flex, HP switches) is beneficial. Additional Skills : - Scripting skills (e.g., Bash, Python). - Automation experience using tools like Ansible or Puppet. - Cloud computing knowledge (e.g., AWS, Azure, GCP).
Posted 3 weeks ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role : MLOps Engineer Location - Kochi Mode of Interview - In Person Date - 14th June 2025 (Saturday) Keywords - Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI PySpark, Azure Databricks MLFlow, KubeFlow, AirFlow, GitHub Actions, AWS CodePipeline Kubernetes, AKS, Terraform, FastAPI Responsibilities Model Deployment, Model Monitoring, Model Retraining Deployment pipeline, Inference pipeline, Monitoring pipeline, Retraining pipeline Drift Detection, Data Drift, Model Drift Experiment Tracking MLOps Architecture REST API publishing Job Responsibilities: Research and implement MLOps tools, frameworks and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile and automated approach to Data Science. Conduct internal training and presentations about MLOps tools’ benefits and usage. Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS Sagemaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and Unix command line toolkit. Experience in CI/CD/CT pipelines implementation. Experience with cloud platforms - preferably AWS - would be an advantage.
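Drift detection, listed among the responsibilities above, can be sketched minimally: flag data drift when a live feature's mean moves more than k standard errors from the training baseline. This is a toy illustration only; production pipelines typically use KS tests or PSI via dedicated tooling, and the function name and threshold here are assumptions.

```python
import random
import statistics

def drifted(baseline, live, k=3.0):
    # Flag drift when the live mean sits more than k standard errors
    # (of the live sample size) away from the baseline mean.
    mu, sd = statistics.mean(baseline), statistics.stdev(baseline)
    se = sd / len(live) ** 0.5
    return abs(statistics.mean(live) - mu) > k * se

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(1000)]   # "training" data
shifted = [random.gauss(1.0, 1) for _ in range(200)]   # live data, mean moved

print(drifted(baseline, baseline))  # False: identical data never drifts
print(drifted(baseline, shifted))   # True: mean shifted by ~1 sigma
```

In a real retraining pipeline the same check would gate whether the monitoring pipeline triggers the retraining pipeline named in the keyword list.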
Posted 3 weeks ago
4.0 - 7.0 years
20 - 25 Lacs
Pune
Work from Office
About the Role : - We are seeking a highly skilled and experienced Senior Cloud Infrastructure & DevOps Engineer to join our dynamic engineering team. - As a key member of our DevOps team, you will play a critical role in designing, implementing, and maintaining our cloud infrastructure and CI/CD pipelines. - You will be responsible for automating and streamlining our software delivery processes, ensuring the reliability, scalability, and security of our cloud environments. Key Responsibilities : - Provision, configure, and manage cloud infrastructure resources on platforms such as AWS, Azure, or GCP. - Implement and maintain infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation. - Ensure the security and compliance of cloud resources. - Optimize cloud resource utilization and minimize costs. - Design, implement, and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, or other CI/CD tools. - Automate build, test, and deployment processes for applications and infrastructure. - Integrate security and compliance checks into the CI/CD pipeline. - Experience with containerization platforms like Docker and Kubernetes. - Deploy, manage, and scale containerized applications. - Implement and manage Kubernetes clusters. - Implement and maintain monitoring and logging solutions (e.g., ELK stack, Prometheus, Grafana). - Monitor application and infrastructure performance, identify and troubleshoot issues proactively. - Collaborate effectively with development, operations, and security teams. - Communicate technical information clearly and concisely to both technical and non-technical audiences. - Participate in code reviews and contribute to the improvement of development processes. Qualifications : Essential : - 4-7 years of experience in DevOps engineering or a related field. - Strong experience with CI/CD tools (Jenkins, GitLab CI/CD). - Hands-on experience with containerization technologies (Docker, Kubernetes). 
- Proficiency with scripting languages (Bash, Python, Groovy). - Experience with Linux/Unix systems administration. - Experience with configuration management tools (Ansible, Puppet, Chef). - Strong understanding of networking concepts and security best practices. - Excellent problem-solving, analytical, and troubleshooting skills. - Strong communication and interpersonal skills. - Bachelor's degree in Computer Science, Engineering, or a related field.
Posted 3 weeks ago
8.0 - 13.0 years
8 - 12 Lacs
Mumbai, Chennai
Work from Office
Notice period : Immediate to 30 days max Responsibilities of Senior SRE : - The Site Reliability Engineering (SRE) team is responsible for the reliability, scalability, stability and performance of systems and services. - They work with cross-functional teams to design, build and maintain systems and they troubleshoot issues when they arise. They bridge the gap between development and operations teams. - They work closely with business teams to define Service Level Objectives (SLO) and agreements (SLA) of critical systems. They also monitor and maintain the uptime of these systems in-line with the defined SLO's and SLA's. - They deploy and manage monitoring tools to gain insights on system health and performance. - They analyze performance, identify bottlenecks and implement solutions to improve a system's scalability and latency durations. - They develop scripts, implement tools and automation frameworks to reduce the manual intervention efforts of deployment, monitoring and scaling. - They work with development teams for design and development of observability practices like logging, metrics, tracing, etc. They aim to diagnose and troubleshoot issues proactively. - They create actionable alerts on monitoring systems to ensure rapid response for potential production incidents. - They forecast resource needs and provision adequately for current and future demand. - They design and execute "chaos experiments" to test system's failure resiliency. - They own, define and implement the Disaster Recovery (DR) processes for systems. They also conduct planned and unplanned mock DR drills to test for response preparedness during production incidents. - They ensure that security best practices are followed and implemented during design and operations of systems. - They also own and maintain documentation of processes, playbooks, and systems. - They publish KPI reports and other system health updates on a regular basis to the business. 
Requirements : - Must-have - Bachelor's degree, preferably in CS or a related field, or equivalent experience - Must-have - 12+ years of overall IT experience - Must-have - 7+ years of proven work experience as a Senior Site Reliability Engineer or a similar position. - Must-have - 5+ years of AWS Cloud experience with AWS Certified DevOps Engineer or SysOps or Security etc. - Must-have - AWS experience - 3+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch & Monitoring Tools) to develop and maintain an Amazon AWS based cloud solution, with an emphasis on best practice cloud security. - Must-have - 2+ years of experience in CDN and/or Cache systems like Fastly, Akamai, CloudFront, etc. - Proven understanding & strong experience with cloud deployments (AWS / Docker / Kubernetes) - Knowledge of provisioning IaC tools like Terraform, Chef, Ansible, Shell, Groovy, Python, etc. - Experience with monitoring systems such as CloudWatch, NewRelic, Datadog/Splunk, ELK stack. - Experience managing cloud network resources (AWS Preferred) such as CloudWatch, VPC, URL proxies, private link, DNS, ACLs, firewalls, and C2S access points. - Platform or Application Engineering and Operational Knowledge in any of the CI/CD tooling like GitHub Actions, Jenkins, etc. - Experience in other tooling technologies like JIRA, Bitbucket, Jenkins, Fortify, SonarQube, Nexus, Nexus IQ - Experience with configuration automation tools like Puppet/Ansible/Chef/Salt - Scripting Skills : Strong scripting (e.g. Bash & Python) and automation skills. - Operating Systems : Windows and Linux system administration. - Problem Solving : Ability to analyze and resolve complex infrastructure resource and application deployment issues - Strong attention to detail. Excellent verbal and written communication skills. Strong documentation skills.
Good To Have : - Experience with Terraform/Ansible/Chef/Puppet - Experience with GitHub Actions - Experience with CloudFront, Fastly
Posted 3 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
We are seeking an experienced and skilled DevOps Engineer with a strong focus on Jenkins and AWS to join our dynamic team. The ideal candidate will have deep hands-on experience managing Jenkins environments, AWS infrastructure, and supporting DevOps processes through automation, CI/CD pipelines, and infrastructure-as-code practices. This role requires a strong understanding of cloud technologies, containerization, and configuration management, along with excellent problem-solving abilities and strong leadership skills. As a DevOps Engineer, you will: Collaborate with the WiFi development team to support and enhance development and deployment workflows. Design, implement, and maintain CI/CD pipelines using Jenkins and Groovy. Troubleshoot and resolve issues across development, infrastructure, and deployment layers. Manage and maintain the following critical systems: Jenkins CI servers Kubernetes (K8s) clusters Elastic Stack (ELK) Prometheus and Grafana monitoring stacks Ubuntu-based servers A server hosted in Microsoft Azure Automate and improve system operations, observability, and scalability. Use Ansible, Python, and Bash for automation and configuration management. Qualifications: Minimum Requirements: 5+ years of hands-on experience in DevOps. Expertise with Jenkins, including advanced pipeline scripting in Groovy, Job DSL, and Configuration as Code (JCasC). Solid experience with Docker and container orchestration using Kubernetes. Strong Linux skills, especially with Ubuntu environments. Proficient in scripting with Bash and Python. Experience with cloud platforms such as Azure or AWS. Solid understanding of Git workflows and version control practices. Nice to Have: Experience with generative AI frameworks such as CrewAI or LangChain. Experience with Ansible for configuration management. In-depth knowledge of Prometheus and Grafana for monitoring and alerting. Experience with the ELK Stack (Elasticsearch, Logstash, Kibana). Exposure to virtualization using QEMU or similar tools.
Soft Skills: Strong problem-solving and debugging capabilities. Fast learner with a proactive, self-driven mindset. Excellent communication and documentation skills. Ability to work both independently and within a collaborative team. Job Type: Experienced Hire Shift: Shift 1 (India) Primary Location: India, Bangalore Additional Locations: Business group: The Client Computing Group (CCG) is responsible for driving business strategy and product development for Intel's PC products and platforms, spanning form factors such as notebooks, desktops, 2 in 1s, all in ones. Working with our partners across the industry, we intend to deliver purposeful computing experiences that unlock people's potential - allowing each person to use our products to focus, create and connect in ways that matter most to them. As the largest business unit at Intel, CCG is investing more heavily in the PC, ramping its capabilities even more aggressively, and designing the PC experience even more deliberately, including delivering a predictable cadence of leadership products. As a result, we are able to fuel innovation across Intel, providing an important source of IP and scale, as well as help the company deliver on its purpose of enriching the lives of every person on earth. Work Model for this Role This role will require an on-site presence. * Job posting details (such as work model, location or time type) are subject to change.
Posted 3 weeks ago
5.0 - 8.0 years
25 - 32 Lacs
Chennai, Bengaluru
Hybrid
5 - 7 years of experience in a DevSecOps, Application Security, or DevOps Security role. Strong working knowledge of: Extensive experience in GitHub Enterprise and related security capabilities, especially security tool integrations and automations CI/CD pipeline integration of security tooling. Cloud platforms (AWS, Azure, GCP) and hands-on experience with CSPM solutions. Working experience in Application security tools (SAST, DAST, SCA, IaC) Sound working experience in scripting and programming languages Experience collaborating with software engineers, cloud teams, and SREs in a security capacity. Good understanding of OWASP Top 10, secure coding practices, and DevOps lifecycle. Proficient in scripting (e.g., Python, Bash) and automation (e.g., GitHub Actions, Terraform, Ansible).
Posted 3 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Lead development using React, TypeScript, Redux, and Webpack for the frontend. Build microservices and APIs using Java (Spring Boot, Vert.x) on the backend. Write YAML-based configuration files and leverage Python/Bash for automation and scripting. Required Candidate profile Mandatory Skills: frontend - React/TypeScript/Webpack/Redux backend - Java/Spring Boot/Vert.x/YAML/Python/Bash Minimum Relevant Experience: 5+ Years 5 days working from office
Posted 3 weeks ago
5.0 - 10.0 years
13 - 22 Lacs
Hyderabad
Work from Office
Hello Everyone, We are looking for the below requirement: Looking for candidates with banking domain project experience: Trading Platform, Investment Banking, or Capital Markets projects. Key Responsibilities: Maintain and enhance platform infrastructure across Linux and Windows environments Develop scripts to automate system monitoring, deployment, and recovery processes Troubleshoot and resolve environment-level issues impacting application performance Build and manage CI/CD pipelines using tools like Jenkins, Azure DevOps, or GitHub Actions Collaborate with development, support, and cloud teams to ensure high platform availability Support and automate tasks like patching, environment readiness, and DR test setups Work with DBAs, application teams, and product vendors to resolve infra-related performance bottlenecks Document processes, create knowledge articles, and ensure knowledge continuity Mandatory Skills: 5-9 years of experience in infrastructure/platform engineering Strong hands-on skills in Windows, Linux, Bash scripting and PowerShell Experience with CI/CD pipelines and deployment automation Proficiency in tools such as Ansible, Jenkins, Azure DevOps, Git Experience with log aggregation and monitoring (e.g., ELK, Grafana, Prometheus) Comfortable supporting enterprise-grade applications in a financial services environment Preferred Skills: Exposure to Cloud platforms like AWS (especially EC2, S3, IAM, CloudWatch) Familiarity with application support tools and release pipelines SQL knowledge and ability to work with DB teams for performance tuning Prior experience working with geographically distributed teams. Interested candidates or references please drop CV: sireesha.r@thehirewings.com/ careers@thehirewings.com/9346429928/6304852810
Posted 3 weeks ago
6.0 - 11.0 years
4 - 9 Lacs
Bengaluru
Work from Office
SUMMARY Job Role: Apache Kafka Admin Experience: 6+ years Location: Pune (Preferred), Bangalore, Mumbai Must-Have: The candidate should have 6 years of relevant experience in Apache Kafka Job Description: We are seeking a highly skilled and experienced Senior Kafka Administrator to join our team. The ideal candidate will have 6-9 years of hands-on experience in managing and optimizing Apache Kafka environments. As a Senior Kafka Administrator, you will play a critical role in designing, implementing, and maintaining Kafka clusters to support our organization's real-time data streaming and event-driven architecture initiatives. Responsibilities: Design, deploy, and manage Apache Kafka clusters, including installation, configuration, and optimization of Kafka brokers, topics, and partitions. Monitor Kafka cluster health, performance, and throughput metrics and implement proactive measures to ensure optimal performance and reliability. Troubleshoot and resolve issues related to Kafka message delivery, replication, and data consistency. Implement and manage Kafka security mechanisms, including SSL/TLS encryption, authentication, authorization, and ACLs. Configure and manage Kafka Connect connectors for integrating Kafka with various data sources and sinks. Collaborate with development teams to design and implement Kafka producers and consumers for building real-time data pipelines and streaming applications. Develop and maintain automation scripts and tools for Kafka cluster provisioning, deployment, and management. Implement backup, recovery, and disaster recovery strategies for Kafka clusters to ensure data durability and availability. Stay up-to-date with the latest Kafka features, best practices, and industry trends and provide recommendations for optimizing our Kafka infrastructure. Requirements: 6-9 years of experience as a Kafka Administrator or similar role, with a proven track record of managing Apache Kafka clusters in production environments. 
In-depth knowledge of Kafka architecture, components, and concepts, including brokers, topics, partitions, replication, and consumer groups. Hands-on experience with Kafka administration tasks, such as cluster setup, configuration, performance tuning, and monitoring. Experience with Kafka ecosystem tools and technologies, such as Kafka Connect, Kafka Streams, and Confluent Platform. Proficiency in scripting languages such as Python, Bash, or Java. Strong understanding of distributed systems, networking, and Linux operating systems. Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues. Strong communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
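The partition and consumer-group concepts in these requirements rest on one invariant: records with the same key always map to the same partition, which is what preserves per-key ordering. A toy sketch of that mapping (the real Kafka Java client uses murmur2 hashing; md5 here is a stand-in, and the function name is invented for illustration):

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    # Hash the record key and take it modulo the partition count,
    # mirroring the default keyed-partitioner idea (real Kafka
    # clients use murmur2, not md5).
    digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return digest % num_partitions

# Same key -> same partition, so the one consumer in the group that
# owns this partition sees all events for the key in order.
p1 = choose_partition(b"order-42", num_partitions=6)
p2 = choose_partition(b"order-42", num_partitions=6)
print(p1 == p2)  # True
```

This also shows why changing the partition count remaps keys, one reason topic and partition sizing is called out above as a design-time task rather than a casual change.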
Posted 3 weeks ago
10.0 - 14.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Overview
We are seeking a highly skilled and motivated Associate Manager, AWS Site Reliability Engineer (SRE) to join our team. As an Associate Manager, AWS SRE, you will play a critical role in designing, managing, and optimizing our cloud infrastructure to ensure high availability, reliability, and scalability of our services. You will collaborate with cross-functional teams to implement best practices, automate processes, and drive continuous improvements in our cloud environment.

Responsibilities
- Design and implement cloud infrastructure: Architect, deploy, and maintain AWS infrastructure using Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation.
- Monitor and optimize performance: Develop and implement monitoring, alerting, and logging solutions to ensure the performance and reliability of our systems.
- Ensure high availability: Design and implement strategies for achieving high availability and disaster recovery, including backup and failover mechanisms.
- Automate processes: Automate repetitive tasks and processes to improve efficiency and reduce human error using tools such as AWS Lambda, Jenkins, and Ansible.
- Incident response: Lead and participate in incident response activities, troubleshoot issues, and perform root cause analysis to prevent future occurrences.
- Security and compliance: Implement and maintain security best practices and ensure compliance with industry standards and regulations.
- Collaborate with development teams: Work closely with software development teams to ensure smooth deployment and operation of applications in the cloud environment.
- Capacity planning: Perform capacity planning and scalability assessments to ensure our infrastructure can handle growth and increased demand.
- Continuous improvement: Drive continuous improvement initiatives by identifying and implementing new tools, technologies, and processes.
Qualifications
- Experience: 10+ years of overall experience, including a minimum of 5 years in a Site Reliability Engineer (SRE) or DevOps role with a focus on AWS cloud infrastructure.
- Technical skills: Proficiency in AWS services such as EC2, S3, RDS, VPC, Lambda, CloudFormation, and CloudWatch.
- Automation tools: Experience with Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation, and configuration management tools like Ansible or Chef.
- Scripting: Strong scripting skills in languages such as Python, Bash, or PowerShell.
- Monitoring and logging: Experience with monitoring and logging tools such as Prometheus, Grafana, the ELK Stack, or CloudWatch.
- Problem-solving: Excellent troubleshooting and problem-solving skills, with a proactive and analytical approach.
- Communication: Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Certifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator are highly desirable.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
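Reliability targets like those above are commonly framed as SLOs with an associated error budget. As a hedged illustration (the 99.9% target and 30-day window are example numbers, not figures from this posting), a few lines of shell can convert an availability SLO into an allowed-downtime budget:

```shell
#!/usr/bin/env bash
# Illustrative sketch: translate an availability SLO into an allowed-downtime budget.
# The 99.9% target and 30-day window are hypothetical examples.
set -euo pipefail

slo_downtime_minutes() {
  # $1 = availability target in percent (e.g. 99.9)
  # $2 = window length in days (e.g. 30)
  awk -v slo="$1" -v days="$2" 'BEGIN {
    total = days * 24 * 60                  # minutes in the window
    printf "%.1f\n", total * (100 - slo) / 100
  }'
}

slo_downtime_minutes 99.9 30    # 0.1% of a 30-day month, in minutes
```

A 99.9% target over 30 days works out to 43.2 minutes of budgeted downtime; tightening the SLO to 99.95% halves that, which is why each extra "nine" is disproportionately expensive.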
10.0 - 15.0 years
20 - 30 Lacs
Gurugram
Hybrid
Position: Senior Network Engineer
Location: Gurugram, Haryana (direct hire)

Skills
- Firewall and SASE: Palo Alto, Prisma Access
- Load balancers and WAFs: F5 BIG-IP, Cloudflare, A10 Networks (optional)
- Networking: Cisco, Arista, Aruba; Silver Peak (SD-WAN)
- DDoS: Cloudflare and Radware
- Network observability: cPacket, Viavi, Wireshark, ThousandEyes, Grafana, Elasticsearch, Telegraf, Logstash
- Clouds: AWS, Azure
- Wireless: Cisco and Juniper Mist
- Networking protocols: BGP, MP-BGP, OSPF, Multicast, MLAG, vPC, MSTP, Rapid PVST+, LACP, mutual route redistribution, VXLAN, EVPN
- Programming and automation: Python, JSON, Jinja, Ansible, YAML

Role & responsibilities
- 10+ years of technical experience in networking, network security, and upgrades.
- Working understanding of open-standard networking protocols and the ability to identify and implement them at an enterprise level.
- Performs complex installations, upgrades, maintenance, and technical duties supporting the operation of internal and external networks.
- Assists in the development and design of network and security policies, standards, guidelines, and procedures relevant to IT infrastructure and architecture.
- Communicates with the client, the team, and the NOC on a day-to-day basis to ensure quick turnaround times and resolutions and to maintain a robust environment.
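Roles like this pair protocol knowledge with the scripting skills listed under automation. As a small, self-contained illustration (the helper name is hypothetical, not a tool named above), the usable-host count of an IPv4 prefix can be computed with plain bash arithmetic:

```shell
#!/usr/bin/env bash
# Illustrative sketch: number of usable host addresses in an IPv4 prefix.
# Purely arithmetic; usable_hosts is a hypothetical helper for illustration.
set -euo pipefail

usable_hosts() {
  # $1 = prefix length (0-32)
  local prefix=$1
  local total=$(( 1 << (32 - prefix) ))
  if (( prefix >= 31 )); then
    # /31 point-to-point links (RFC 3021) and /32 host routes have no
    # separate network/broadcast addresses to subtract
    echo "$total"
  else
    echo $(( total - 2 ))   # subtract network and broadcast addresses
  fi
}

usable_hosts 24   # prints 254
usable_hosts 30   # prints 2
```

The /31 special case matters in practice: point-to-point router links are often numbered from /31s precisely to avoid wasting two addresses per link.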
The bash job market in India is thriving with numerous opportunities for professionals who have expertise in bash scripting. Organizations across various industries are actively seeking individuals with these skills to streamline their operations and automate repetitive tasks. If you are a job seeker looking to explore bash jobs in India, read on to learn more about the job market, salary range, career progression, related skills, and interview questions.
These cities are known for their vibrant tech industries and offer a plethora of opportunities for bash professionals.
The average salary range for bash professionals in India varies based on experience level. Entry-level positions may start at around INR 3-4 lakhs per annum, while experienced professionals can earn up to INR 12-15 lakhs per annum.
In the field of bash scripting, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving up to a Tech Lead role. With experience and expertise, individuals can also explore roles such as DevOps Engineer or Systems Administrator.
In addition to bash scripting, professionals in this field are often expected to have knowledge of:
- Linux operating system
- Shell scripting
- Automation tools like Ansible
- Version control systems like Git
- Networking concepts
Here are some common bash interview questions, with the typical difficulty level noted:
- What is the purpose of the chmod command in bash? (basic)
- Explain the difference between the grep and awk commands. (medium)
- What is the significance of #! (shebang) at the beginning of a bash script? (basic)
- What is the use of the cut command in bash? (basic)
- What is cron? (medium)
- Explain the difference between $1 and $@ in bash scripting. (medium)
- What is the purpose of the exec command in bash? (advanced)
- What is the use of the tr command in bash? (basic)
- Explain the export command in bash. (medium)

As you prepare for bash job interviews in India, remember to showcase your expertise in bash scripting, along with related skills and experience. By mastering these interview questions and demonstrating your passion for automation and scripting, you can confidently pursue rewarding opportunities in the field. Good luck with your job search!
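A short, self-contained script is a good way to rehearse several of the commands that come up in these questions (chmod, grep, cut, tr, export, and the $1/$@ positional parameters). The filenames and values below are purely illustrative:

```shell
#!/usr/bin/env bash
# Demo of commands commonly asked about in bash interviews.
set -euo pipefail

# $1 and $@: first positional parameter vs. all positional parameters
greet() { echo "first=$1 all=$@"; }
greet alpha beta                      # prints: first=alpha all=alpha beta

# tr: translate characters (here, lowercase to uppercase)
echo "bash" | tr 'a-z' 'A-Z'          # prints: BASH

# cut: extract a delimited field
echo "user:x:1000" | cut -d: -f3      # prints: 1000

# grep: filter lines matching a pattern
printf 'error: disk\nok: cpu\n' | grep '^error'   # prints: error: disk

# export: make a variable visible to child processes
export GREETING="hello"
bash -c 'echo "$GREETING"'            # prints: hello

# chmod: change file permissions (shown on a temporary file)
tmp=$(mktemp)
chmod 600 "$tmp"                      # owner read/write only
rm -f "$tmp"
```

Being able to explain each line here, not just recall the command names, is what distinguishes the medium and advanced answers from the basic ones.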