Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
1.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
RoleSenior CAD Engineer Experience10+years LocationBangalore Notice PeriodMax 15days preferred Role Overview We are looking for a Senior CAD Engineer to deploy and support our front-end tools, to develop scripts to automate regression and debug flows, and to work along with our design, implementation and verification teams. What You'll Do Deploy and support front-end tools, such as, RTL simulators, low power tools, static RTL checkers such as Lint, CDC/RDC/SDC/DFT, and formal verification. Develop scripts to automate regression and debug flows, and to enable Continuous Integration/Continuous Delivery (CI/CD) Streamline utilization of compute infrastructure using load distribution tools Identify and prioritize needs of internal users and develop capabilities for them Proficiently use scripts to integrate tools, repos and compute infrastructure Configure and maintain project progress Dashboards. Interface with EDA vendors for license and tool installations Deploy tools and methodologies across geographies for global teams working together What You Need To Have Tech/B 10+ years of relevant experience in CAD or allied disciplines 4+ years in a CAD role for a several 100 million gate Silicon ASIC project Knowledge and understanding of ASIC flow Proficiency in python, bash, c, Makefiles Proficiency in administration of Linux systems (such as Redhat Enterprise) Proficiency in distributed version control such as Git and/or Mercurial (Hg) Eager to learn, fast pick up and timely execution Experience in working with the standard CAD tools that are prevalent in the industry Nice-to-haves Experience with Kubernetes or LSF Systems Experience with HW Design Flows, System Verilog, Verilog, EDA/CAD, and Flows Experience with Javascript, CSS, and Web development frameworks Show more Show less
Posted 3 weeks ago
1.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
RoleASIC CAD Lead Engineer Experience10+years LocationBangalore Notice PeriodMax 15days preferred Role Overview We are looking for a ASIC CAD Lead Engineer to deploy and support our front-end tools, to develop scripts to automate regression and debug flows, and to work along with our design, implementation and verification teams. What You'll Do Deploy and support front-end tools, such as, RTL simulators, low power tools, static RTL checkers such as Lint, CDC/RDC/SDC/DFT, and formal verification. Develop scripts to automate regression and debug flows, and to enable Continuous Integration/Continuous Delivery (CI/CD) Streamline utilization of compute infrastructure using load distribution tools Identify and prioritize needs of internal users and develop capabilities for them Proficiently use scripts to integrate tools, repos and compute infrastructure Configure and maintain project progress Dashboards. Interface with EDA vendors for license and tool installations Deploy tools and methodologies across geographies for global teams working together What You Need To Have Tech/B 10+ years of relevant experience in CAD or allied disciplines 4+ years in a CAD role for a several 100 million gate Silicon ASIC project Knowledge and understanding of ASIC flow Proficiency in python, bash, c, Makefiles Proficiency in administration of Linux systems (such as Redhat Enterprise) Proficiency in distributed version control such as Git and/or Mercurial (Hg) Eager to learn, fast pick up and timely execution Experience in working with the standard CAD tools that are prevalent in the industry Nice-to-haves Experience with Kubernetes or LSF Systems Experience with HW Design Flows, System Verilog, Verilog, EDA/CAD, and Flows Experience with Javascript, CSS, and Web development frameworks Show more Show less
Posted 3 weeks ago
1.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We're HiringDevOps Engineer! We are looking for a skilled and motivated DevOps Engineer to join our dynamic team in Bangalore Urban The ideal candidate will have a strong background in software development and systems administration, with a focus on automating and optimizing our development processes. “ LocationBangalore Urban, India Work ModeWork From Office ’ RoleDevOps Engineer What You'll Do Design and implement CI/CD pipelines for efficient development workflows ” Collaborate with development teams to ensure seamless integration of applications “ˆ Monitor system performance and troubleshoot issues proactively › Automate operational processes using scripting languages ”’ Enhance security protocols within the DevOps practices š" Drive continuous improvement initiatives across the infrastructure What Were Looking For Minimum 4 years of experience in DevOps or related field Proficient with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) Strong knowledge of scripting languages (Python, Bash) Experience with configuration management tools (Ansible, Chef, Puppet) Excellent problem-solving skills and attention to detail Strong communication skills and ability to work collaboratively Ready to take your career to the next levelš" Apply now and be a part of our innovative journey! Show more Show less
Posted 3 weeks ago
1.0 - 5.0 years
20 - 25 Lacs
Pune
Work from Office
Solution Architect Cloud & DevOps Role Summary: Codvo is seeking an experienced Solution Architect specializing in Cloud Infrastructure and DevOps practices to join our team In this role, you will be responsible for designing, implementing, and managing the cloud infrastructure and DevOps strategies that underpin our Generative AI platform and client deployments You will ensure our solutions are deployed securely, reliably, and efficiently, whether on public cloud platforms or client-specific private cloud environments You will work closely with the Generative AI Solution Architect, engineering teams, and operations to build scalable, automated, and robust infrastructure solutions. Key Responsibilities: Cloud Architecture DesignDesign scalable, resilient, and cost-effective cloud architectures (AWS, Azure, GCP, and private clouds) to host Codvo's Generative AI platform components and client solutions. DevOps Strategy & ImplementationDefine and implement CI/CD pipelines, Infrastructure as Code (IaC) practices (using tools like Terraform, CloudFormation, or ARM templates), configuration management, and automated deployment strategies. Deployment ArchitectureArchitect deployment solutions for containerized applications (Docker, Kubernetes) across different environments, including client private clouds, ensuring seamless integration and operation. Infrastructure for AI/MLDesign and manage infrastructure components specific to AI/ML workloads, such as compute resources for model tuning/inference, vector databases, message queues (NATS, Kafka), and data storage solutions. Monitoring, Logging & AlertingImplement comprehensive monitoring, logging, and alerting solutions (e.g., Prometheus, Grafana, ELK stack, CloudWatch, Azure Monitor) to ensure system health, performance, and availability. Security & ComplianceIntegrate security best practices throughout the cloud infrastructure and DevOps lifecycle (DevSecOps) Implement network security, identity and access management (IAM), vulnerability management, and ensure compliance with relevant standards. AutomationDrive automation initiatives across infrastructure provisioning, configuration, deployment, and operational tasks. CollaborationWork closely with the Generative AI Solution Architect to align infrastructure capabilities with application architecture needs Collaborate with development teams to optimize applications for cloud deployment and operational efficiency. Client Environment IntegrationPlan and execute the deployment and configuration of Codvo solutions within client-specific private cloud environments, addressing unique technical and security requirements. Performance & Cost OptimizationContinuously monitor and optimize cloud infrastructure for performance and cost-effectiveness. DocumentationCreate and maintain detailed documentation for infrastructure architecture, DevOps processes, deployment procedures, and operational runbooks. Required Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Proven experience (typically 10+ years) in IT infrastructure and operations, with significant experience (4+ years) in a Cloud Architect, DevOps Architect, or similar role. Deep expertise in designing, deploying, and managing solutions on at least one major cloud platform (AWS, Azure, or GCP) Experience with hybrid or private cloud environments is a strong plus. Strong hands-on experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation, ARM Templates). Proficiency with containerization technologies (Docker) and container orchestration platforms (Kubernetes). Experience implementing and managing CI/CD pipelines using tools like Jenkins, GitLab CI, Azure DevOps, or similar. Solid understanding of networking concepts (VPCs, subnets, firewalls, load balancers) and cloud security best practices. Experience with monitoring, logging, and alerting tools and frameworks. Scripting skills (e.g., Python, Bash, PowerShell). Excellent problem-solving and troubleshooting skills related to infrastructure and deployment issues. Strong communication and collaboration skills. Experience working in Agile/Scrum environments. Preferred Qualifications: Experience managing infrastructure for AI/ML workloads (e.g., setting up environments for model training/serving, managing vector databases). Experience deploying and managing applications in client-owned private cloud environments Relevant cloud certifications (e.g., AWS Certified Solutions Architect Professional, AWS Certified DevOps Engineer Professional, Azure Solutions Architect Expert, Google Cloud Certified Professional Cloud Architect/DevOps Engineer). Experience with GitOps practices. Understanding of Generative AI concepts and their infrastructure implications. Show more Show less
Posted 3 weeks ago
5.0 - 8.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Summary We are looking for an experienced AWS SysOps Engineer with 5 to 8 years of experience to manage, monitor, and optimize AWS cloud infrastructure. The ideal candidate should have a strong background in AWS services, automation, system administration, and security best practices to ensure the stability, scalability, and security of cloud-based environments. Years of experience needed 5+ years of experience as AWS sysOps engineer Technical Skills: 5-8 years of hands-on experience in AWS System Administration, Cloud Operations, or Infrastructure Management. Strong expertise in AWS core services: EC2, RDS, S3, IAM, Route 53, CloudWatch, CloudTrail, Auto Scaling, and ELB. Hands-on experience with Linux and Windows administration in AWS environments. Strong scripting skills in Python, Bash, or PowerShell for automation and monitoring. Experience with monitoring and logging solutions like AWS CloudWatch, ELK Stack, Prometheus, or Datadog. Understanding of AWS cost optimization and billing management. Experience in backup and disaster recovery planning using AWS services. Deploy, manage, and maintain AWS cloud infrastructure, ensuring high availability, performance, and security. Manage user access, roles, and security policies using AWS IAM, AWS Organizations, and AWS SSO. Implement and manage patching, system updates, and security hardening across AWS infrastructure. Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK is a plus Knowledge of AWS networking (VPC, VPN, Direct Connect, Security Groups, NACLs). Good To Have below: Familiarity with security best practices (IAM, KMS, GuardDuty, Security Hub, WAF). Experience with containerization (Docker, Kubernetes, ECS, EKS). Knowledge of DevOps and CI/CD tools (Jenkins, GitHub Actions, AWS CodeDeploy). Exposure to multi-cloud and hybrid cloud environments. Experience with AWS Lambda, Step Functions, and other serverless services. Familiarity with secrets management using AWS Secrets Manager, HashiCorp Vault, or CyberArk. Certifications Needed: AWS Certified SysOps Administrator Associate
Posted 3 weeks ago
4.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. As a DevOps Engineer, you will be responsible for designing, implementing, and maintaining our Azure-based infrastructure and deployment pipelines. You will collaborate closely with cross-functional teams to ensure smooth and efficient software delivery, automating repetitive tasks, and optimizing our cloud environment for scalability, security, and reliability. Primary Responsibilities Azure Infrastructure Management: Design, deploy, and manage Azure virtual machines, networking, and other Azure services Implement and maintain Azure Resource Manager (ARM) templates for infrastructure as code (IaC) Ensure high availability, scalability, and security of Azure resources CI/CD Pipeline Development: Create and maintain CI/CD pipelines using Azure DevOps, Jenkins, or similar tools Automate build, test, and deployment processes for applications and infrastructure Monitor and optimize CI/CD pipelines for efficiency Containerization and Orchestration: Utilize Docker and Kubernetes for containerization and orchestration of applications Manage Azure Kubernetes Service (AKS) clusters for containerized workloads Security and Compliance: Implement security best practices and compliance standards for Azure resources Perform regular security assessments and vulnerability scans Collaborate with the security team to remediate vulnerabilities Monitoring and Troubleshooting: Set up Azure Monitor and Application Insights for real-time monitoring Investigate and resolve issues related to performance, availability, and reliability Collaboration and Documentation: Work closely with development and operations teams to ensure alignment Maintain documentation for infrastructure, configurations, and processes Participate in knowledge sharing and mentoring activities Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent work experience) 10+ years of proven experience as a DevOps Engineer with a focus on Microsoft Azure Hands-on experience with Azure DevOps, Jenkins, or other CI/CD tools Knowledge of security best practices, compliance standards, and monitoring tools Solid knowledge of Azure services, VMs, Azure Kubernetes Service (AKS), Azure Functions, and Azure SQL Database Proficiency in scripting languages like PowerShell, Python, or Bash Familiarity with containerization and orchestration tools like Docker and Kubernetes Understanding of infrastructure as code (IaC) principles and tools (e.g., ARM templates, Terraform) Proven excellent problem-solving skills and attention to detail Proven solid communication and collaboration skills Preferred Qualifications Microsoft CertifiedAzure Administrator Associate Microsoft CertifiedAzure DevOps Engineer Expert Microsoft CertifiedAzure Solutions Architect Expert
Posted 3 weeks ago
10.0 - 15.0 years
20 - 25 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a highly skilled and experienced Senior Cloud Architect to join our Cloud Support team within the Infrastructure, Cloud, and Data Services team. You will be responsible for designing, implementing, and managing cloud-based solutions that meet the diverse needs of our organization. This role requires a deep understanding of cloud architecture, cloud networking, and cloud security. The Senior Cloud Architect will work closely with various teams to ensure that our cloud infrastructure is scalable, reliable, secure, and cost effective. This role is a hands-on technical position, responsible for both cloud solution design and implementation. Primary Responsibilities Design and implement cloud-based solutions across multiple cloud platforms (Azure, AWS, Google Cloud) using industry best practices Develop and maintain cloud architecture standards and guidelines Collaborate with IT teams and business partners to understand business requirements and translate them into cloud solutions Ensure the security, scalability, and reliability of cloud infrastructure Monitor and optimize cloud performance and costs Conduct regular cloud security assessments and audits Troubleshoot and resolve cloud solution operational issues Provide technical leadership and guidance to the cloud support team Stay up-to-date with the latest cloud technologies and trends Develop disaster recovery and business continuity plans for cloud infrastructure Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field 9+ years of experience in cloud architecture and cloud services Experience with cloud security, networking, and storage Experience identifying and remediating security vulnerabilities in cloud infrastructure Solid knowledge of cloud platforms such as AWS, Azure, or Google Cloud Proficiency in scripting and automation (Python, PowerShell, Bash, etc.) Proficiency in infrastructure as code (IaC) tools such as Ansible, Terraform, or CloudFormation Proven ability to manage multiple priorities and projects Proven excellent problem-solving and analytical skills Solid communication and collaboration skills Preferred Qualifications Certifications such as AWS Certified Solutions Architect, Microsoft CertifiedAzure Solutions Architect, or Google Cloud Professional Cloud Architect Experience with cloud and data center observability tools (Splunk, Application Insights, Azure Monitor, CloudWatch, etc.) Experience with Agile project management methodologies Experience with containerization and orchestration tools such as Docker and Kubernetes Experience with traditional data center and enterprise technologies Understanding of multi-cloud and hybrid environments Knowledge of FinOps principles for cloud cost optimization Knowledge of DevOps and DevSecOps practices and tools (GitHub Actions, etc.) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #Nic
Posted 3 weeks ago
8.0 - 12.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Lead end-to-end management of database operations across MSSQL, MySQL, and Oracle environments Own and enhance platform lifecycle management (PLM), including patching, upgrades, and performance tuning Design and implement automated, self-healing systems for proactive fault detection and recovery Build scalable automation for routine DBA tasks (backups, failovers, capacity planning, etc.) Ensure high availability, disaster recovery, and compliance of all data systems Collaborate with architects and engineering leads to define and evolve the data infrastructure roadmap Mentor and guide junior DBAs and data platform engineers, promoting best practices and continuous learning Establish and monitor KPIs for system reliability, performance, and platform health Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications 10+ years of experience in database administration and operations (MSSQL, MySQL, Oracle) 3+ years in a leadership or managerial role with a solid track record of team development Experience with monitoring tools (e.g., Prometheus, Grafana, OEM, SolarWinds) Experience working in hybrid or cloud-native environments (Azure, AWS, or GCP) Deep understanding of PLM, capacity management, HA/DR, and database security Expertise in scripting (PowerShell, Bash, Python) and automation tools (Ansible, Terraform, etc.) Solid troubleshooting and performance tuning skills across DB platforms Familiarity with CI/CD practices and infrastructure automation Preferred Qualifications Experience with containerized DB deployments (e.g., Docker, Kubernetes) Exposure to self-service data platforms and DevOps for data Knowledge of AI/ML-based alerting or anomaly detection in ops Certifications in MSSQL, Oracle, MySQL, or relevant cloud platforms At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 3 weeks ago
5.0 - 9.0 years
13 - 17 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Accountable for the data engineering lifecycle including research, proof of concepts, architecture, design, development, test, deployment, and maintenance Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability Layer in instrumentation in the development process so that data pipelines that can be monitored to detect internal problems before they result in user-visible outages or data quality issues Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues Embrace continuous learning of engineering practices to ensure industry best practices and technology adoption, including DevOps, Cloud, and Agile thinking Tech debt reduction/Tech transformation including open source adoption, cloud adoption, HCP assessment, and adoption Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Undergraduate degree or equivalent experience Experience with data analytics tools like Tableau, Power BI, or similar Experience in optimizing data processing workflows for performance and cost-efficiency Proficient in design and documentation of data exchanges across various channels including APIs, streams, batch feeds Proficient in source to target mapping, gap analysis and applies data transformation rules based on understanding of business rules, data structures Familiarity with healthcare regulations and data exchange standards (e.g. HL7, FHIR) Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA Proven ability to develop and implement scripts to maintain and monitor performance tuning Proven ability to design scalable job scheduler solutions and advises on appropriate tools/technologies to use Proven ability to work across multiple domains to define and build data models Proven ability to understand all the connected technology services and their impacts Proven ability to assess design and proposes options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility
Posted 3 weeks ago
4.0 - 7.0 years
10 - 14 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together Primary Responsibilities Co-ordinate with the team to support 24*7 operations Subject Matter Expert for Day to Day Operations, Process and ticket queue management Perform team management along with managing process and operational escalations Leverage latest technologies and analyze large volumes of data to solve complex problems facing health care industry Develop, test, and support new and preexisting programs related to data interfaces Support operations by identifying, researching and resolving performance and production issues Participate in War Room activities to monitor status and coordinate with multiple groups to address production performance concerns, mitigate client risks and communicate status Work with engineering teams to build tools/features necessary for production operations Build and improve standard operation procedures and troubleshooting documents Report on metrics to surface meaningful results and identify areas for efficiency gain Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor/master’s degree in computer science or Information Technology or equivalent work experience 3+ years of experience with UNIX Shell Scripting 2+ years of experience in RDBMS like Oracle and writing queries in SQL and Pl/SQL, Postgres 2+ years of experience working with Production Operations processes and team 2+ years of experience working with server-side Administration with OS flavors such as Redhat or CentOS Experience in understanding performance metrics and developing them to measure progress against KPI Ability to develop and manage multiple projects with minimal direction and supervision Soft Skills Highly organized with strong analytical skills and excellent attention to details Excellent time management and problems solving skills and capacity to lead diverse talent, work cross-functionally and build consensus on difficult issues Flexible to adjust to evolving business needs with ability to understand objectives and communicate with non-technical partners Solid organization skills, very detail oriented, with careful attention to work processes Takes ownership of responsibilities and follows through hand-offs to other groups Enjoys a fast-paced environment and the opportunity to learn new skills High-performing, motivated and goal-driven Preferred Qualifications Experience delegating tasks, providing timely feedback to team to accomplish a task or solve a problem Experience in scripting languages like PERL, Bash/Shell or Python Experience with Continuous Integration (CI) tools, Jenkins or Bamboo preferred Experience working in an agile environment US Healthcare industry experience Experience working across teams and proven track record in solution focused problem-solving skills Familiarity with Cloud Based Technologies Comfortable working in a rapidly changing environment where documentation for job execution aren’t yet fully fleshed out Knowledgeable in building and/or leveraging operational reliability metrics to understand the health of the production support process An eye for improving technical and business processes with proven experience creating standard operating procedures and other technical process documentation At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 3 weeks ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
* Help manage and configure Linux servers remotely, working with on-site techs to bring up new hardware. * Participate in writing, extending, and maintaining Ansible playbooks and roles to automate configuration and deployment. * Contribute to automation workflows using Jenkins. * Collaborate with other engineers to expand our infrastructure automation, reduce manual steps, and improve reliability. * Assist in monitoring, testing, and refining automation to handle edge cases and failure conditions gracefully. * Document systems, tools, and automation logic clearly and thoroughly. Required education Bachelor's Degree Preferred education Bachelor's Degree Required technical and professional expertise * 3–5 years of experience in a Linux systems, DevOps, or infrastructure-related role. * 2-3 years exxperience with configuration management tools like Ansible. * 4-5 years scripting knowledge (e.g., Python, Bash, or similar). * Comfortable working with remote server management tools (e.g., IPMI, iLO, DRAC). * Basic understanding of networking (DNS, DHCP, IP addressing). * Strong desire to learn and improve automation systems. * Good communication and collaboration skills. Preferred technical and professional experience * Familiarity with Jenkins preferred. * Experience deploying and maintaining Tekton * Experience deploying and maintaining Kubernetes. * Exposure to SuperMicro and/or Lenovo server hardware. * Experience with Ubuntu Linux in production environments. * Exposure to PXE booting and automated OS installation processes. * Experience contributing to shared codebases or working with version control.
Posted 3 weeks ago
3.0 - 6.0 years
8 - 12 Lacs
Bengaluru
Work from Office
In this Site Reliability Engineer role, you will work closely with entire IBM Cloud organization to maintain and operationally improve the IBM cloud infrastructure. You will focus on the following key responsibilities: Ability to respond promptly to production issues and alerts 24x7 Execute changes in the production environment through automation Implement and automate infrastructure solutions that support IBM Cloud products and services to reduce toil. Partner with other SRE teams and program managers to deliver mission-critical services to IBM Cloud Build new tools to improve automated resolution of production issues Monitor, respond promptly to production alerts, Execute changes in Production through automation Support the compliance and security integrity of the environment Continually improve systems and processes regarding automation and monitoring. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Excellent written and verbal communication skills. Minimum 5+ years experience in handling large production systems environment Must be extremely comfortable using and navigating within a Linux environment Ability to do low level debugging and problem analysis by examining logs and running Unix commands Must be efficient in writing and debugging scripts 3-5+ years of experience in Virtualization Technologies and Automation / Configuration Managements Automation and configuration management tools/solutionsAnsible, Python, bash, Terraform, GoLang etc. (at least one) Virtualization technologiesCitrix Xen Hypervisor (Preferred), KVM(also preferred), libvirt, VMware vSphere, etc. (at least one) Monitoring technologiesZabbix, Sysdig, Grafana, Nagios, Splunk, etc. (at least one) Working knowledge with Container technologiesKubernetes, Docker, etc. Flexibility to work on shifts to handle production systems Preferred technical and professional experience Good experience inPublic cloud platforms,Kubernetes clusters and Strong Linux skills for managing services across microservices platform, good SRE knowledge in Cloud Compute, Storage and Network services.
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
As a DevOps Developer for the IBM Cloud Object Storage Service, you will play a pivotal role in enhancing the developer experience, productivity, and satisfaction within the organization. Your primary responsibilities will include: Collaborating with development teams to understand their needs and provide tailored solutions that align with the organization's goals and objectives. Designing and implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools like Jenkins, Tekton, etc. Designing and implementing tools for automated deployment and monitoring of multiple environments, ensuring seamless integration and scalability. Staying updated with the latest trends and best practices in DevOps and related technologies, and incorporating them into the development platform. Ensuring security and compliance of the platforms, including patching, vulnerability detection, and threat mitigation. Providing on-call IT support and monitoring technical operations to maintain the stability and reliability of the developer platform. Collaborating with other teams to introduce best automation practices and tools, fostering a culture of innovation and continuous improvement. Embracing an Agile culture and employing relevant fit-for-purpose methodologies and tools such as Trello, GitHub, Jira, etc. Maintaining good communication skills and the ability to lead global teams remotely, ensuring effective collaboration and knowledge sharing. Implement and automate infrastructure solutions that support IBM Cloud products and infrastructure Implement, and maintain state-of-the-art CI/CD pipelines, ensuring full compliance with industryImplement, and maintain state-of-the-art CI/CD pipelines, ensuring full compliance with industry standards and regulatory frameworks. Administer automated CI/CD systems and tools Partner with other teams, managers and program managers to develop alerting and monitoring for mission-critical services Provide technical escalation support for other Infrastructure Operations team Maintain highly scalable, secure cloud infrastructures leveraging industry-leading platforms such as AWS, Azure, or GCP. Orchestrate and manage infrastructure as code (IaC) implementations using cutting-edge tools like Terraform Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proven Experience: Demonstrated track record of success as a Site Reliability Engineer or in a similar role. System Monitoring and Troubleshooting: Strong skills in system monitoring, issue response, and troubleshooting for optimal system performance. Automation Proficiency: Proficiency in automation for production environment changes, streamlining processes for efficiency. Collaborative Mindset: Collaborative mindset with the ability to partner seamlessly with cross-functional teams for shared success. Effective Communication Skills: Excellent communication skills, essential for effective integration planning and swift issue resolution. Tech Stack Jenkins LInux Administration Python Ansible Golang Terraform Preferred technical and professional experience Programming skills – scripting, Go, Python Must be proficient in writing, debugging, and maintaining automation,scripts, and code (ie, Bash, Ansible, and Python, Java or Golang Ability to administrate, configure, optimize and monitor services and/or servers at scale. Strong understanding of scalability, reliability, and performance principles
Posted 3 weeks ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Excellent hands-on experience on Terraform, Ansible, Docker & Kubernetes Must have experience on AWS & Linux . Dev / programming in Python is must. Must have experience on scripting language -Python/Shell/Bash. Good debugging skills. Any exp in platform automation is a big plus. Kafka is nice to have as well Primary Skills Terraform, Ansible, Docker & Kubernetes, AWS & Linux
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking innovative and skilled GenAI Engineers, focused on automating quarterly planning processes. This role involves developing a prototype using generative AI for planning automation, with a proactive approach to tracking progress. The project requires both a backend and frontend engineer to build a business orchestration layer with integrated business logic, leveraging tools like Atlassian agents and co-pilots. This role contributes to Synechrons strategic objectives by enhancing planning efficiency and demonstrating advanced AI capabilities. Software Requirements Required Software Skills: Proficiency in Python and FastAPI for backend development. Experience with AWS, including CloudFormation and cloud-native solutions. Familiarity with CI/CD pipelines and GitHub Actions. Knowledge of bash scripting and DocumentDB for efficient data management. Preferred Software Skills: Familiarity with generative AI frameworks and tools such as Agentic Frameworks, Langchain, and Semantic Kernel. Experience with Vector, Graph, and SQL databases for data manipulation and storage. Overall Responsibilities Collaborate with cross-functional teams to understand technology requirements and design AI-driven solutions for business planning. Develop and implement technical specifications and documentation for the prototype. Conduct code reviews and ensure codebase quality and maintainability. Stay current with the latest technology trends and integrate relevant advancements into the project. Provide technical support and resolve issues to ensure smooth project execution. Technical Skills (By Category) Programming Languages: RequiredPython PreferredFamiliarity with additional scripting languages as needed. Databases/Data Management: EssentialExperience with DocumentDB and other NoSQL databases. PreferredKnowledge of Vector and Graph databases. Cloud Technologies: EssentialAWS cloud services and CloudFormation for deployment and integration. Frameworks and Libraries: EssentialFastAPI for backend services. PreferredGenerative AI frameworks and tools like Langchain. Development Tools and Methodologies: RequiredCI/CD pipelines, GitHub Actions for version control and deployment. Experience Requirements 7 to 10 years of experience in software development with a focus on cloud-native and generative AI technologies. Proven experience in developing and deploying solutions using AWS and related technologies. Experience in working with cross-functional teams and contributing to AI-centric projects. Day-to-Day Activities Participate in daily stand-up meetings and project planning sessions. Write, test, and deploy software solutions, ensuring timely delivery of project milestones. Conduct code reviews and provide constructive feedback to team members. Stay updated on technology trends and incorporate relevant advancements into the project. Collaborate with data science teams to integrate AI capabilities effectively. Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Relevant certifications in AWS or AI technologies are preferred. Commitment to continuous professional development and staying informed on industry trends. Professional Competencies Strong critical thinking and problem-solving skills. Excellent leadership and teamwork abilities. Effective communication and stakeholder management skills. Adaptability and a strong learning orientation to embrace new technologies. Innovation mindset to drive creative solutions and improvements. Effective time and priority management skills.
Posted 3 weeks ago
5.0 - 10.0 years
17 - 22 Lacs
Chennai
Work from Office
Job Summary Synechron is seeking a highly skilled Senior Developer specializing in ELK Stack & DevOps / SRE (Site Reliability Engineering) to join our dynamic issue management team. In this pivotal role, you will leverage your expertise in Site Reliability Engineering (SRE), DevOps practices, and monitoring solutions to ensure the stability, performance, and operational readiness of our applications and infrastructure. Your contributions will directly support our business objectives by enhancing system reliability, streamlining incident management, and fostering continuous improvement across technical domains. Software Requirements Required Skills: Proven proficiency with ELK Stack (Elasticsearch, Logstash, Kibana) — version 7.x or higher, with hands-on experience in building dashboards and analytics Experience with CI/CD tools such as Jenkins, Ansible, or equivalent automation platforms Programming/scripting proficiency in Python and Bash Familiarity with monitoring and logging tools (ELK Stack essential, Splunk preferred) Cloud platform experience (AWS, Azure) — practical knowledge of cloud services and deployment strategies Preferred Skills: Experience with React, Node.js, and Java application logging and monitoring strategies Familiarity with additional DevOps tools and methodologies Knowledge of containerization and orchestration (e.g., Docker, Kubernetes) Experience with Infrastructure as Code (IaC) tools Overall Responsibilities Collaborate with the issue management team to efficiently track, analyze, and resolve incidents and Operational Readiness Evaluations (OREs), ensuring minimal disruption and swift recovery. Develop, implement, and optimize monitoring and logging solutions utilizing ELK Stack, creating actionable dashboards and performance metrics. Design and enforce effective logging strategies for applications built with React, Node.js, and Java to facilitate troubleshooting and performance analysis. Lead continuous improvement initiatives aimed at enhancing system reliability, performance, and operational efficiency. Work cross-functionally with development, infrastructure, and security teams to diagnose and address performance bottlenecks and reliability challenges. Document incident processes, resolution procedures, and best practices to promote knowledge sharing and team growth. Technical Skills (By Category) Programming Languages & Scripts (Essential): Python, Bash, or equivalent scripting languages Monitoring & Logging Tools (Essential): ELK Stack (Elasticsearch, Logstash, Kibana) — version 7.x or higher Splunk (preferred) Cloud Technologies (Essential): AWS or Azure services such as EC2, S3, CloudWatch, or equivalent Frameworks & Application Technologies: Experience in monitoring React, Node.js, and Java applications — implementation of logging and performance metrics Development & Automation Tools: CI/CD pipelines (Jenkins, Ansible) — setup, maintenance, and optimization Containerization (Docker, Kubernetes) — knowledge preferred Security & Protocols (if applicable): Basic understanding of best practices in security for monitoring and logging Experience Requirements Minimum of 8 years of professional experience in DevOps, Site Reliability Engineering, or related fields Demonstrated success in developing and maintaining comprehensive monitoring and logging solutions, particularly using ELK Stack Proven experience implementing and refining logging strategies across diverse application stacks (React, Node.js, Java) Hands-on experience working within cloud environments such as AWS or Azure Experience working in large-scale, distributed systems and incident management processes Day-to-Day Activities Proactively monitor system health and incident alerts, collaborating with the issue management team for swift resolution Design, configure, and enhance ELK Stack dashboards, visualizations, and analytics for operational insights Implement and refine logging strategies for web and backend applications to facilitate effective troubleshooting Participate in continuous improvement projects to boost application and infrastructure performance Engage in cross-team meetings to discuss incident trends, system bottlenecks, and reliability enhancements Document procedures, lessons learned, and best practices for ongoing knowledge sharing Qualifications Bachelor’s degree in Computer Science, Information Technology, or a related discipline; equivalent professional experience supported Relevant certifications such as AWS Certified DevOps Engineer, Certified Kubernetes Administrator, or equivalent are preferred Ongoing professional development in DevOps, SRE practices, and monitoring technologies Professional Competencies Strong analytical and problem-solving skills with a focus on system reliability and performance Effective communicator capable of conveying technical information clearly to diverse audiences Team-oriented collaborator with experience working across cross-functional groups Adaptable learner, eager to stay current with emerging technologies and best practices Demonstrates proactive approach to incident management and continuous improvement Ability to manage multiple priorities efficiently while maintaining attention to detail
Posted 3 weeks ago
8.0 - 13.0 years
11 - 16 Lacs
Chennai
Work from Office
Job Summary Synechron is seeking an experienced and detail-oriented Senior Middleware Administrator to oversee the deployment, management, and automation of middleware environments. This role is pivotal in ensuring the stability, security, and performance of middleware systems across cloud and on-premises infrastructure. The successful candidate will lead automation initiatives, support containerization efforts, and mentor team members to optimize middleware operations, thereby contributing to the organization’s digital and operational excellence. Software Requirements Required: Linux platform proficiency (Red Hat Enterprise Linux or equivalent) Scripting languagesPython and/or Bash (intermediate to advanced) CI/CD toolsJenkins, Travis, Concourse or similar Configuration and automation toolsAnsible, Terraform ContainerizationDocker, Kubernetes Preferred: Middleware technologies such as WebSphere, JBoss, or similar Cloud environments (AWS, Azure, or GCP) familiarity HashiCorp CertifiedTerraform Associate (optional) HashiCorp CertifiedVault Associate (optional) Overall Responsibilities Develop, maintain, and enhance automation scripts and infrastructure as code using Ansible, Terraform, and related tools Build and deploy containerized versions of legacy applications, ensuring scalability and reliability Provide hands-on support and guidance to operations teams on automation, containerization, and middleware management Support provisioning, configuration, deployment, and disaster recovery processes for middleware environments Monitor the health, security, and performance of middleware systems, taking corrective actions as needed Troubleshoot and resolve middleware-related issues with a focus on stability and uptime Automate updates, patches, configurations, and environment provisioning processes Conduct training sessions and share knowledge to improve team capabilities in automation and middleware management Assist in planning and executing middleware upgrades and migrations aligned with organizational standards and best practices Performance outcomes: Consistent, automated provisioning and deployment processes Reduced manual intervention and increased system reliability Knowledge sharing that enhances team competency Secure, compliant, and well-maintained middleware environments Technical Skills (By Category) Programming Languages: Mandatory: Python, Bash Preferred: PowerShell, other scripting tools Databases/Data Management: Basic understanding of database connectivity for middleware applications (e.g., JDBC, SQL) Cloud Technologies: Experience with cloud environments (AWS, Azure, GCP) for middleware deployment and automation Frameworks and Libraries: Knowledge of container orchestration and management (Docker, Kubernetes) Development Tools & Methodologies: Jenkins, Travis CI, Concourse Infrastructure as CodeTerraform, Ansible Version ControlGit Security Protocols: Familiarity with securing middleware environments and implementing best practices in access control and encryption Experience Requirements 5-10 years of experience in IT operations, systems automation, or middleware administration Proven expertise in middleware tools such as WebSphere, JBoss, or similar products At least 2+ years of practical experience with Ansible, Terraform, Docker, and Kubernetes in cloud or hybrid environments Demonstrated experience in automation, scripting, and infrastructure provisioning Prior exposure to cloud-native deployment models and disaster recovery strategies Alternative experience pathways: Candidates with extensive hands-on middleware administration and automation experience, even if specific cloud environment experience is limited, will be considered. Day-to-Day Activities Monitor and maintain middleware system performance, security, and availability Develop, test, and deploy automation scripts and infrastructure code for provisioning and configuration management Containerize legacy applications to improve scalability and operational efficiency Collaborate with development, operations, and security teams on middleware-related initiatives Conduct troubleshooting and root cause analysis for middleware outages or issues Perform system upgrades, patches, configuration changes, and environment migrations Provide guidance and training to operational teams on automation tools and middleware best practices Document processes, configurations, and automation workflows for transparency and knowledge sharing Qualifications Master’s Degree in Computer Science, Computer Engineering, or related field; alternative professional experience considered Certification in Red Hat Enterprise Linux Automation with Ansible (RH294) is preferred Certified Kubernetes Application Developer (CKAD) or equivalent is highly desirable Additional certifications such as HashiCorp CertifiedTerraform Associate or Vault Associate are advantageous Professional Competencies Critical thinker with strong problem-solving skills and analytical aptitude Effective communicator with the ability to articulate technical concepts clearly Team-oriented with a proven ability to collaborate across functions Results-driven, with attention to detail and quality in work outputs Adaptable and eager to learn new tools, processes, and technologies Demonstrates initiative in automation and process improvement efforts Skilled in managing multiple priorities and working under pressure
Posted 3 weeks ago
5.0 - 9.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking a skilled Automation Engineer to join our team, focusing on designing and implementing automated solutions that enhance efficiency and quality across various processes and systems. This role plays a crucial part in advancing Synechron’s strategic objectives by collaborating with cross-functional teams to identify and execute automation opportunities, ensuring our solutions meet the highest standards of reliability and performance. Software Requirements Required: Proficiency in automation tools and frameworks such as Selenium, Jenkins, and Ansible. Experience with scripting languages like Python, JavaScript, or Bash. Preferred: Familiarity with cloud technologies such as AWS or Azure. Understanding of CI/CD pipelines and related tools. Overall Responsibilities Design, develop, and implement automated solutions to improve efficiency and quality. Collaborate with software developers, QA engineers, and operations personnel to identify automation opportunities. Create and maintain automation frameworks and scripts for testing and deployment processes. Conduct thorough testing of automated solutions to ensure reliability and performance. Monitor automated systems for performance issues and troubleshoot problems. Document automation processes, workflows, and technical specifications for knowledge sharing and compliance. Stay updated on industry trends and emerging technologies in automation and software development. Provide training and support to team members on automation tools and practices. Participate in continuous improvement initiatives to enhance automation strategies. Technical Skills (By Category) Programming Languages: RequiredPython, JavaScript, or Bash. PreferredJava or Ruby. Development Tools and Methodologies: RequiredSelenium, Jenkins, Ansible. Cloud Technologies: PreferredAWS, Azure. Frameworks and Libraries: RequiredFamiliarity with automation frameworks. Security Protocols: PreferredKnowledge of security practices in automation. Experience Requirements Minimum 6+ years of experience in automation engineering roles. Experience in developing automated solutions within software development environments. Industry experience in technology, finance, or similar sectors is preferred. Alternative experience pathways include roles in software development or systems engineering with a focus on automation. Day-to-Day Activities Attend regular team meetings and engage in collaborative discussions with cross-functional teams. Develop and test automation scripts and frameworks. Monitor and troubleshoot automated systems to ensure optimal performance. Document processes and maintain clear records of all automation activities. Provide support and training on automation tools and methodologies to team members. Qualifications Bachelors or Master’s degree in Computer Science, Information Technology, or a related field. Relevant certifications in automation tools or cloud technologies are preferred. Commitment to continuous professional development and staying abreast of industry trends. Professional Competencies Strong critical thinking and problem-solving capabilities. Effective communication and stakeholder management skills. Proven ability to work collaboratively in team settings. Adaptability to new technologies and changing requirements. Innovative mindset focused on improving processes and solutions. Excellent time and priority management skills.
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking talented GenAI Engineers to join our team for a 2-3 month project focused on automating quarterly planning through advanced generative AI solutions. This role involves developing a prototype using a business orchestration layer with built-in business logic, leveraging Atlassian agents and co-pilots. Our team, including experienced data scientists, aims to demonstrate this prototype in June. Software Requirements Required Software Skills: Proficiency in cloud-native solutions on AWS, including CloudFormation and AWS services. Experience with CI/CD pipelines and GitHub Actions. Strong programming skills in Python and FastAPI. Familiarity with bash scripting and DocumentDB. Preferred Software Skills: Knowledge of generative AI frameworks and tools such as Agentic Frameworks, Langchain, and Semantic Kernel. Experience with Vector, Graph, and SQL Databases. Overall Responsibilities Develop a backend and frontend UI for a prototype that automates business planning using generative AI. Apply architectural patterns and microservices architecture in solution deployment and integration. Collaborate with cross-functional teams, including data scientists, to ensure alignment with project goals. Stay current with industry trends and integrate new technologies into the solution. Conduct code reviews to ensure quality and maintainability. Technical Skills (By Category) Programming Languages: RequiredPython PreferredFamiliarity with other scripting languages such as JavaScript for frontend development. Databases/Data Management: EssentialExperience with DocumentDB and other NoSQL databases. PreferredKnowledge of Vector and Graph databases. Cloud Technologies: EssentialAWS cloud services and CloudFormation. Frameworks and Libraries: EssentialFastAPI for backend development. PreferredGenerative AI tools and frameworks like Langchain. Development Tools and Methodologies: RequiredCI/CD pipelines, GitHub Actions, Agile methodologies. Experience Requirements 7 to 10 years of experience in software development, with a focus on cloud-native and generative AI technologies. Proven experience with software development methodologies and tools such as Agile and Scrum. Experience in developing solutions with cross-functional teams, including participation in code reviews. Day-to-Day Activities Participate in daily stand-up meetings and project planning sessions. Collaborate with cross-functional teams to gather requirements and design solutions. Write, test, and deploy software solutions, ensuring timely delivery. Conduct code reviews and provide feedback to team members. Stay updated on latest technology trends and incorporate them into solutions. Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Professional certifications in relevant technologies are a plus. Professional Competencies Strong critical thinking and problem-solving capabilities. Excellent leadership and teamwork abilities. Effective communication and stakeholder management skills. Adaptability and a strong learning orientation. Innovative mindset with a focus on driving creative solutions. Effective time and priority management skills.
Posted 3 weeks ago
6.0 - 11.0 years
6 - 16 Lacs
Pune
Hybrid
Job Title: Sr. Systems Software Engineer (Kernel, Filesystems, and Networking) Duration: Full time role Location: Hybrid (Pune/Bengaluru/Hyderabad/Mumbai/Chennai) Job Description: We are seeking a highly skilled and experienced Senior Systems Software Engineer with a strong background in Linux kernel development, file systems, and networking technologies relevant to modern data centers. The ideal candidate will have a deep understanding of low-level systems programming, networking protocols (with a focus on RDMA and CNI), and hands-on experience in designing, developing, and maintaining high-performance, scalable system components. Key Responsibilities: Design, develop, and maintain Linux kernel modules, with emphasis on performance, scalability, and security. Contribute to the development and enhancement of file systems, ensuring reliability, high throughput, and low latency. Build and optimize networking solutions for data center environments, including RDMA, DPDK, and container networking (CNI plugins). Collaborate with cross-functional teams including DevOps, SRE, and platform engineering to ensure seamless integration with infrastructure. Participate in code reviews, architecture discussions, and performance tuning sessions. Monitor and improve system performance, addressing bottlenecks and implementing robust diagnostics. Contribute to open-source communities and stay up to date with emerging kernel and networking trends. Required Skills and Qualifications: 6 to 8 years of hands-on experience in systems-level programming, preferably in C/C++ and scripting languages like Python or Bash. Proven expertise in Linux kernel development (process management, memory management, device drivers, etc.). In-depth knowledge of file system architecture, implementation, and debugging (e.g., ext4, XFS, Btrfs, etc.). Strong understanding of datacenter networking concepts: TCP/IP stack, RDMA, DPDK, SR-IOV, VLANs, VxLAN, etc., Experience with Container Networking Interface (CNI) and technologies like Kubernetes, Docker, or CRI-O. Familiarity with performance profiling tools (e.g., perf, ftrace, eBPF). Experience working in distributed systems and cloud-native infrastructure is a plus. Preferred Qualifications: Contributions to open-source kernel/file system/networking projects. Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and infrastructure as code tools (e.g., Terraform, Ansible).
Posted 3 weeks ago
5.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Remote
Key Responsibilities: Design, develop, and optimize relational (PostgreSQL, SQL Server, MySQL, Oracle) and NoSQL (MongoDB, Cassandra, Redis) databases. Write and optimize complex SQL queries, stored procedures, triggers, and functions. Develop and maintain ETL pipelines for data integration. Ensure database security, backups, and high-availability solutions. Collaborate with teams to support application development and troubleshoot performance issues. Maintain technical documentation and stay updated on database best practices. Required Skills: 5+ years of experience in database development. Strong expertise in PostgreSQL and proficiency in SQL Server, MySQL, or Oracle. Experience with query optimization, indexing, and partitioning. Familiarity with NoSQL databases and cloud DB solutions (AWS RDS, Azure SQL, etc.). Hands-on experience with ETL tools, data warehousing, and scripting (Python, PowerShell, Bash). Strong problem-solving and communication skills.
Posted 3 weeks ago
2.0 - 5.0 years
5 - 8 Lacs
Chennai
Remote
Notice Period : Immediate - 15 Days Job Description Overview : We are seeking a highly skilled Linux Systems Engineer to join our team. The ideal candidate will have a deep understanding of Linux operating systems, virtualization technologies (specifically VMware), and networking protocols. This role will involve deploying and managing products in Linux environments, troubleshooting complex issues, and ensuring optimal system performance. Responsibilities : Linux Administration : - Proficiently administer various Linux distributions (e.g., Red Hat, Ubuntu, CentOS). - Install, configure, and maintain Linux systems, including servers, workstations, and network devices. - Perform system hardening, security updates, and patch management. - Manage user accounts, permissions, and access controls. - Optimize system performance and resource utilization. Virtualization : - Deploy and manage virtual machines using VMware vSphere. - Create, configure, and maintain virtual networks and storage. - Perform VM migration, replication, and high availability tasks. - Troubleshoot virtualization-related issues. Networking : - Understand TCP/IP, UDP, SNMP protocols and their applications. - Configure and troubleshoot network interfaces, routing, and firewalls. - Work with network devices (switches, routers, load balancers). - Implement network security measures. Product Deployment : - Deploy and configure software products in Linux environments. - Integrate products with virtualization platforms and other systems. - Provide technical support and troubleshooting for deployed products. Troubleshooting : - Diagnose and resolve complex technical issues related to Linux, virtualization, and networking. - Analyze system logs and performance metrics to identify problems. - Implement effective troubleshooting strategies and best practices. Documentation : - Create and maintain clear and concise documentation for system configurations, procedures, and troubleshooting steps. Collaboration : - Work closely with other team members, including developers, network engineers, and IT operations staff. - Communicate effectively and collaborate on projects to achieve team goals. Qualifications : - Strong knowledge of Linux operating systems, including Red Hat Enterprise Linux. - Experience with VMware vSphere virtualization platform. - Understanding of networking protocols (TCP/IP, UDP, SNMP) and concepts. - Experience deploying and managing software products in Linux environments. - Excellent analytical and troubleshooting skills. - Excellent communication and interpersonal skills. - Certification in RHCSA or RHCE is a plus. - Knowledge of OpenStack is a bonus. - Familiarity with hardware platforms (EMC VNX, Unity Storage, HP blade servers, Brocade SAN switches, VC flex, HP switches) is beneficial. Additional Skills : - Scripting skills (e.g., Bash, Python). - Automation experience using tools like Ansible or Puppet. - Cloud computing knowledge (e.g., AWS, Azure, GCP).
Posted 3 weeks ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role : MLOps Engineer Location - Kochi Mode of Interview - In Person Data - 14th June 2025 (Saturday) Key words -Skillset AWS SageMaker, Azure ML Studio, GCP Vertex AI PySpark, Azure Databricks MLFlow, KubeFlow, AirFlow, Github Actions, AWS CodePipeline Kubernetes, AKS, Terraform, Fast API Responsibilities Model Deployment, Model Monitoring, Model Retraining Deployment pipeline, Inference pipeline, Monitoring pipeline, Retraining pipeline Drift Detection, Data Drift, Model Drift Experiment Tracking MLOps Architecture REST API publishing Job Responsibilities: Research and implement MLOps tools, frameworks and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile and automated approach to Data Science. Conduct internal training and presentations about MLOps tools’ benefits and usage. Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS Sagemaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and Unix command line toolkit. Experience in CI/CD/CT pipelines implementation. Experience with cloud platforms - preferably AWS - would be an advantage.
Posted 3 weeks ago
4.0 - 7.0 years
20 - 25 Lacs
Pune
Work from Office
About the Role : - We are seeking a highly skilled and experienced Senior Cloud Infrastructure & DevOps Engineer to join our dynamic engineering team. - As a key member of our DevOps team, you will play a critical role in designing, implementing, and maintaining our cloud infrastructure and CI/CD pipelines. - You will be responsible for automating and streamlining our software delivery processes, ensuring the reliability, scalability, and security of our cloud environments. Key Responsibilities : - Provision, configure, and manage cloud infrastructure resources on platforms such as AWS, Azure, or GCP. - Implement and maintain infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation. - Ensure the security and compliance of cloud resources. - Optimize cloud resource utilization and minimize costs. - Design, implement, and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, or other CI/CD tools. - Automate build, test, and deployment processes for applications and infrastructure. - Integrate security and compliance checks into the CI/CD pipeline. - Experience with containerization platforms like Docker and Kubernetes. - Deploy, manage, and scale containerized applications. - Implement and manage Kubernetes clusters. - Implement and maintain monitoring and logging solutions (e.g., ELK stack, Prometheus, Grafana). - Monitor application and infrastructure performance, identify and troubleshoot issues proactively. - Collaborate effectively with development, operations, and security teams. - Communicate technical information clearly and concisely to both technical and non-technical audiences. - Participate in code reviews and contribute to the improvement of development processes. Qualifications : Essential : - 4-7 years of experience in DevOps engineering or a related field. - Strong experience with CI/CD tools (Jenkins, GitLab CI/CD). - Hands-on experience with containerization technologies (Docker, Kubernetes). - Proficiency with scripting languages (Bash, Python, Groovy). - Experience with Linux/Unix systems administration. - Experience with configuration management tools (Ansible, Puppet, Chef). - Strong understanding of networking concepts and security best practices. - Excellent problem-solving, analytical, and troubleshooting skills. - Strong communication and interpersonal skills. - Bachelor's degree in Computer Science, Engineering, or a related field.
Posted 3 weeks ago
6.0 - 8.0 years
11 - 15 Lacs
Gurugram
Work from Office
Responsibilities : - Define and enforce SLOs, SLIs, and error budgets across microservices - Architect an observability stack (metrics, logs, traces) and drive operational insights - Automate toil and manual ops with robust tooling and runbooks - Own incident response lifecycle: detection, triage, RCA, and postmortems - Collaborate with product teams to build fault-tolerant systems - Champion performance tuning, capacity planning, and scalability testing - Optimise costs while maintaining the reliability of cloud infrastructure Must have Skills : - 6+ years in SRE/Infrastructure/Backend related roles using Cloud Native Technologies - 2+ years in SRE-specific capacity - Strong experience with monitoring/observability tools (Datadog, Prometheus, Grafana, ELK etc.) - Experience with infrastructure-as-code (Terraform/Ansible) - Proficiency in Kubernetes, service mesh (Istio/Linkerd), and container orchestration - Deep understanding of distributed systems, networking, and failure domains - Expertise in automation with Python, Bash, or Go - Proficient in incident management, SLAs/SLOs, and system tuning - Hands-on experience with GCP (preferred)/AWS/Azure and cloud cost optimisation - Participation in on-call rotations and running large-scale production systems Nice to have skills : - Familiarity with chaos engineering practices and tools (Gremlin, Litmus) - Background in performance testing and load simulation (Gatling, Locust, k6, JMeter)
Posted 3 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
The bash job market in India is thriving with numerous opportunities for professionals who have expertise in bash scripting. Organizations across various industries are actively seeking individuals with these skills to streamline their operations and automate repetitive tasks. If you are a job seeker looking to explore bash jobs in India, read on to learn more about the job market, salary range, career progression, related skills, and interview questions.
These cities are known for their vibrant tech industries and offer a plethora of opportunities for bash professionals.
The average salary range for bash professionals in India varies based on experience level. Entry-level positions may start at around INR 3-4 lakhs per annum, while experienced professionals can earn up to INR 12-15 lakhs per annum.
In the field of bash scripting, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving up to a Tech Lead role. With experience and expertise, individuals can also explore roles such as DevOps Engineer or Systems Administrator.
In addition to bash scripting, professionals in this field are often expected to have knowledge of: - Linux operating system - Shell scripting - Automation tools like Ansible - Version control systems like Git - Networking concepts
chmod
command in bash? (basic)grep
and awk
commands. (medium)#!
(shebang) at the beginning of a bash script? (basic)cut
command in bash? (basic)cron
? (medium)$1
and $@
in bash scripting. (medium)exec
command in bash? (advanced)tr
command in bash? (basic)export
command in bash. (medium)As you prepare for bash job interviews in India, remember to showcase your expertise in bash scripting, along with related skills and experience. By mastering these interview questions and demonstrating your passion for automation and scripting, you can confidently pursue rewarding opportunities in the field. Good luck with your job search!
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle,WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris,France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore,Karnataka
Accenture in India
4290 Jobs | Dublin 2