Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Job Description

Role: Digital Workplace (SD & EUC) Automation Solution Architect

Required Technical Skill Set:
- Sound knowledge of ServiceDesk operations processes, tools and analytics
- Extensive knowledge of digital ServiceDesk automation solutions using market-standard virtual agent platforms (ServiceNow VA, Azure Bot, Kore.AI, Amelia etc.), ServiceNow Orchestrator and ITSM, Avaya, AWS Connect, and digital workplace automation tools and platforms (Nexthink, Nanoheal etc.)
- Sound knowledge of Workplace or EUC processes, tools and analytics: remote support, desktop engineering, field support, touch services, messaging and collaboration
- Digital workplace automation tools and platforms (ignio™ DWS, Nexthink, Nanoheal, SysTrack etc.)
- Strong experience with at least one orchestration and automation platform (ServiceNow Orchestrator, Ansible, etc.)

Good-to-have Technical Skill Set:
- Exposure and experience with Aternity, 1E Tachyon
- Exposure to ITSM tools (e.g. ServiceNow, BMC Remedy)

Desired Experience Range: 5 to 15 years
Location of Requirement: PAN India

Desired Competencies (Technical/Behavioral)

Must-Have:
- 5+ years of experience in Workplace (EUC) & SD operations, or overall 5+ years of experience with automation solutions in ServiceDesk/wider IT Ops automation with exposure to Workplace (EUC) and IT operations
- At least one implementation of a workplace (EUC) automation solution and/or a ServiceDesk automation solution
- Implementation experience with APIs (REST/web service/SOAP/XML over HTTPS) for integration with monitoring tools and ITSM (ServiceNow/Cherwell/Remedy)
- Experience working with one virtual agent platform (Azure-based, AWS-based, ServiceNow VA)
- Excellent presentation skills and the ability to articulate automation solutions for SD and EUC automation deals in a simplified manner understood at various levels of the customer organization

Good-to-Have:
- Knowledge of chat/voice bot integration; experience with bots (e.g. RASA, Kore.AI)
- Ability to spot and mine automation opportunities in various sub-processes
- Good experience in MS Excel, PowerPoint and Word

Responsibilities / Expectations from the Role:
1. Author automation solutions for SD and EUC automation deals; defend/support the defence (including presentation and demo) of automation solutions with customers
2. Position and defend TCS IT Ops Automation offerings with internal stakeholders (Industry Vertical teams, IT Operation Services team)
3. Support Geo Sales & Solution teams with customer meeting preparation
4. Collaborate with offering teams within TCS to incorporate newer and up-to-date offerings and case studies into IT operations automation solution proposals
5. Develop and share reusable solution assets/frameworks with the rest of the solution team
6. Liaise with automation, AI and workplace automation product vendors for product solution collateral and pricing
7. Train, coach and guide generalist members of the team on the latest workplace automation platforms, chat/voice bots, and contact centre products
8. Create and present POVs on workplace automation, virtual agent and contact centre automation solutions for wider organization and industry consumption
9. Co-lead requirements-gathering workshops with internal business stakeholders to provide technical guidance and assist with capturing all business, functional and technical requirements necessary to design and implement the solution
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you’ll find your place here. We value winning together—while learning, having fun, and making a profound difference for the dreamers and builders in the world.

We are looking for an experienced database engineer with an operations background in building sustainable solutions for data storage and streaming platforms. Our team’s mission statement is to “provide tools and expertise to solve common operational problems, accelerating and simplifying product development.” As part of the team, you’ll be working with a variety of data-related technologies, including MySQL, Clickhouse, Kafka, and Redis, in an effort to transform these technologies into platform services.

NOTE: this is not a ‘data scientist’ role; rather, this role is to help design and build datastore-related platforms for internal stakeholder groups within DigitalOcean. See below for role expectations.

This is an opportunity to build the services and systems that will accelerate the development of DigitalOcean’s cloud features. Services will provide highly available, operationally elegant solutions that serve as a foundation for a growing product base and a global audience.
This is a high-impact role and you’ll be working with a large variety of product engineering teams across the company.

What You’ll Be Doing
- Administer, operate, and performance-tune Vitess-managed MySQL datastores, with a focus on large-scale, sharded environments
- Architect new Vitess-based MySQL database infrastructure on bare metal
- Deliver managed data platform solutions as a service that facilitate adoption and offer operational elegance
- Work closely with product engineering and infrastructure teams to drive adoption of services throughout the company
- Instrument and monitor services developed to ensure operational performance
- Create tooling and automation to reduce operational burdens
- Establish best practices for development, deployment, and operations
- Interact with developers and teams to resolve site and database issues

What You'll Add To DigitalOcean
- Experience supporting MySQL (ideally with Vitess or other sharding solutions) in a production environment, with in-depth knowledge of backups, high availability, sharding, and performance tuning
- A distinguished track record developing and automating platform solutions that serve the needs of other engineering teams
- Experience with other data technologies such as Kafka and Redis
- Fluency in SQL, Python, Bash, or other scripting languages
- Experience with Linux performance troubleshooting
- Experience with configuration management tooling such as Chef and Ansible

What We’d Love You To Have
- An understanding of ProxySQL and Kubernetes
- Familiarity with continuous integration tools such as Concourse and GitHub Actions
- Some familiarity with Go
- A passion for production engineering done in a resilient fashion
- A passion for not repeating yourself (DRY) by way of automation

What Will Not Be Expected From You
- Demonstrated expertise in being a ‘data scientist’ - this role has a much stronger production engineering focus
- Crunching mundane support tickets day over day - be the Automator
- Following a lengthy and strict product roadmap - engineers wear product hats as needed and help define what platform gets built

Why You’ll Like Working for DigitalOcean

We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.

We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.

We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to local employee meetups to a flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.

We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service. This role is located in Hyderabad, India.
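The Vitess role above centers on operating sharded MySQL, where each row is routed by hashing its sharding key into a keyspace ID that falls inside exactly one shard's range. A minimal sketch of that range-based routing idea, using a made-up four-shard layout and a one-byte keyspace ID (Vitess itself uses a 64-bit hash, so this illustrates the concept rather than Vitess's actual algorithm):

```python
import hashlib

# Hypothetical shard layout: each shard owns a half-open range of the
# keyspace-ID space, named Vitess-style ("-40" means the range [0x00, 0x40)).
SHARDS = [
    ("-40", 0x00, 0x40),
    ("40-80", 0x40, 0x80),
    ("80-c0", 0x80, 0xC0),
    ("c0-", 0xC0, 0x100),
]

def keyspace_id(sharding_key: int) -> int:
    """Hash the sharding key down to a byte in [0, 256).
    One byte is enough to illustrate range routing."""
    digest = hashlib.sha256(str(sharding_key).encode()).digest()
    return digest[0]

def pick_shard(sharding_key: int) -> str:
    """Return the name of the shard whose range contains the key's
    keyspace ID."""
    kid = keyspace_id(sharding_key)
    for name, lo, hi in SHARDS:
        if lo <= kid < hi:
            return name
    raise RuntimeError("unreachable: the ranges cover the whole keyspace")
```

Because routing is deterministic, the same key always lands on the same shard, and resharding becomes a matter of splitting a range in two rather than rehashing every row.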
Posted 2 days ago
20.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Rackspace Cyber Defence

Rackspace Cyber Defence is our next-generation cyber defence and security operations capability that builds on 20+ years of securing customer environments to deliver proactive, risk-based, threat-informed and intelligence-driven security services. Our purpose is to enable our customers to defend against the evolving threat landscape across on-premises, private cloud, public cloud and multi-cloud workloads. Our goal is to go beyond traditional security controls to deliver cloud-native, DevOps-centric and fully integrated 24x7x365 cyber defence capabilities that deliver a proactive, threat-informed, risk-based, intelligence-driven approach to detecting and responding to threats.

Our mission is to help our customers:
- Proactively detect and respond to cyber-attacks, 24x7x365.
- Defend against new and emerging risks that impact their business.
- Reduce their attack surface across private cloud, hybrid cloud, public cloud, and multi-cloud environments.
- Reduce their exposure to risks that impact their identity and brand.
- Develop operational resilience.
- Maintain compliance with legal, regulatory and compliance obligations.

What We’re Looking For

To support our continued success and deliver a Fanatical Experience™ to our customers, Rackspace Cyber Defence is looking for an India-based Security Operations Analyst (L2) to support Rackspace’s strategic customers. This role is particularly well suited to a self-starting, experienced and motivated SecOps analyst who has a proven record of accomplishment in the cloud security monitoring and incident detection domain. As a Security Operations Analyst (L2), you will be responsible for detecting, analysing, and responding to threats posed across customer on-premises, private cloud, public cloud, and multi-cloud environments. The primary focus will be on triaging alerts and events (incident detection), which may indicate malicious activity, and determining whether threats are real.
You will also be required to liaise closely with the customer’s key stakeholders, which may include incident response and disaster recovery teams as well as information security.

Key Accountabilities
- 4-7 years of experience in a SOC
- Ensure the customer’s operational and production environment remains secure at all times and any threats are raised and addressed in a timely manner
- Critical incident analysis and validation
- Platform management tasks such as checking health status and basic troubleshooting
- Create new runbooks, playbooks and knowledge-base documents
- Trend monitoring and analysis
- Threat and vulnerability impact analysis
- Reactive discovery of adversaries based on threat advisory or intelligence reports
- Compliance reporting
- Onboarding of log sources
- Rule and dashboard enhancements
- Basic threat hunting
- Create and manage watchlists
- Handle escalations from L1 analysts
- Review incidents handled by L1 and prepare individual scorecards
- Prepare and review weekly and monthly reports
- Coordinate with vendors for issue resolution
- Use threat intelligence platforms, such as OSINT, to understand the latest threats
- Research and analyse the latest threats to better understand an adversary’s tactics, techniques, and procedures (TTPs)
- Automate security processes and procedures to enhance and streamline monitoring capabilities
- Ensure all zero-day vulnerabilities reported by the L2 analyst team are resolved by the respective teams within agreed SLA (Service Level Agreement) periods
- Maintain close working relationships with relevant teams and key stakeholders, such as incident response and disaster recovery teams as well as information security
- Required to work in a 24/7 rotational shift

Skills & Experience
- Existing experience as a Security Operations Analyst, or equivalent
- Experience working in large-scale public cloud environments and with cloud-native security monitoring tools such as:
  - Microsoft Sentinel
  - Microsoft 365 Defender
  - Microsoft Defender for Cloud
- Endpoint Detection & Response (EDR) tools such as CrowdStrike, Microsoft Defender for Endpoint
- Firewalls and network security tools such as Palo Alto, Fortinet, Juniper, and Cisco
- Web Application Firewall (WAF) tools such as Cloudflare, Akamai and Azure WAF
- Email security tools such as Proofpoint, Mimecast and Microsoft Defender for Office
- Data Loss Prevention (DLP) tools such as Microsoft Purview, McAfee and Symantec

Nice-to-have skills/experience:
- Google Cloud Platform (GCP) security tools such as Chronicle and Security Command Center
- Amazon Web Services (AWS) security tools such as Security Hub, AWS GuardDuty, AWS Macie, AWS Config and AWS CloudTrail
- Experience analysing malware and email headers; skills in network security, intrusion detection and prevention systems, operating systems, risk identification and analysis, threat identification and analysis, and log analysis
- Experience with security controls, such as network access controls; identity, authentication, and access management (IAAM) controls; and intrusion detection and prevention controls
- Knowledge of security standards (good practice) such as NIST, ISO 27001, CIS (Center for Internet Security), OWASP and the Cloud Controls Matrix (CCM)
- Knowledge of scripting and coding with languages such as Terraform, Python, JavaScript, Go, Bash and/or PowerShell
- Knowledge of DevOps practices such as CI/CD, Azure DevOps, CircleCI, GitHub Actions, Ansible and/or Jenkins
- Computer science, engineering, or information technology related degree (although not a strict requirement)
- Holds one or more of the following certificates (or equivalent):
  - Certified Information Systems Security Professional (CISSP)
  - Microsoft Certified: Azure Security Engineer Associate (AZ-500)
  - Microsoft Certified: Security Operations Analyst Associate (SC-200)
  - CREST Practitioner Intrusion Analyst (CPIA)
  - CREST Registered Intrusion Analyst (CRIA)
  - CREST Certified Network Intrusion Analyst (CCNIA)
  - Systems Security Certified Practitioner (SSCP)
  - Certified Cloud Security Professional (CCSP)
  - GIAC Certified Incident Handler (GCIH)
  - GIAC Security Operations Certified (GSOC)
- A highly self-motivated and proactive individual who wants to learn and grow, with attention to detail
- A great analyser, troubleshooter and problem solver who understands security operations, programming languages and security architecture
- Highly organised and detail-oriented; able to prioritise, multitask and work under pressure
- An individual who shows a willingness to go beyond in delighting the customer
- A good communicator who can explain security concepts to both technical and non-technical audiences
Posted 2 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: DevOps Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Website: www.cyfuture.com
Experience: 3–4 Years
CTC: 6 LPA
Education: B.E./B.Tech/MCA (Computer Science, IT)

About Cyfuture
Cyfuture is a leading name in IT services and cloud infrastructure, offering cutting-edge data center solutions and managed services across platforms like AWS, Azure, and VMware. Our clientele spans enterprises and government institutions, and we’re rapidly growing in the areas of system integration and managed services. We have strategic partnerships with global OEMs such as VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Job Overview
We are looking for a skilled and motivated DevOps Engineer to join our growing infrastructure team. The ideal candidate will have hands-on experience with DevOps tools and methodologies and will be responsible for deploying, automating, maintaining, and managing production systems to ensure high availability and scalability.

Key Responsibilities
- Implement and manage CI/CD pipelines using Jenkins, GitLab, and GitLab Pipelines
- Automate infrastructure using tools like Ansible and shell scripting
- Containerize and orchestrate applications using Docker
- Manage source control using GitLab
- Integrate SonarQube for static code analysis and quality control
- Deploy and maintain applications on Red Hat Linux, Apache, Nginx, and Tomcat
- Troubleshoot and resolve infrastructure and application-related issues
- Collaborate with developers and operations teams to improve delivery pipelines

Required Skills & Tools
- CI/CD: Jenkins, GitLab, GitLab Pipelines
- Automation: Ansible, shell scripting
- Containers: Docker
- Build Tools: Maven, Gradle, etc.
- Monitoring & Code Quality: SonarQube
- Web Servers: Apache, Nginx, Tomcat
- Operating Systems: Red Hat Linux or other Linux distributions

Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
- Strong understanding of DevOps principles and Agile development
- Good communication skills and the ability to work in a team environment

Why Join Us?
- Work on large-scale infrastructure with enterprise-grade tools
- Cross-functional learning and exposure to cloud, security, and DevOps automation
- Strong career growth opportunities in a fast-growing IT company
- Collaborative work culture and a technology-first mindset

To Apply: Please send your updated resume to udisha.parashar@cyfuture.com. Immediate joiners will be preferred.
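A CI/CD pipeline like the ones this posting describes typically gates a deploy on a smoke test that tolerates slow service startup. A small sketch of such a gate with exponential backoff; the attempt count, delays, and the injected `check`/`sleep` callables are illustrative choices, not the behaviour of any specific tool named above:

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Poll `check()` (any callable returning True once the service is up)
    with exponential backoff between probes. Returns True on success,
    False if the pipeline should fail the stage and roll back."""
    for attempt in range(attempts):
        if check():
            return True
        # back off: 1s, 2s, 4s, ... before the next probe
        sleep(base_delay * (2 ** attempt))
    return False
```

Injecting `sleep` keeps the gate unit-testable, so the pipeline logic itself can be covered by the same CI it runs in.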
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Conglomerate IT is certified and a pioneer in providing premium end-to-end Global Workforce Solutions and IT Services to diverse clients across various domains. Visit us at https://www.conglomerateit.com/

Conglomerate IT’s mission is to establish global cross-culture human connections that further the careers of our employees and strengthen the businesses of our clients. We are driven to use the power of our global network to connect businesses with the right people without bias. We provide Global Workforce Solutions with affability.

About the job:
Job Title: DevOps Engineer
Location: Hyderabad/Chennai (Hybrid)
Experience Level: 5+ years

Job Summary: We are seeking a skilled DevOps Engineer with hands-on experience in infrastructure automation using Terraform or Ansible. The ideal candidate will have a strong background in managing DevOps activities across cloud platforms and possess a solid understanding of CI/CD processes.

Key Responsibilities:
- Develop and maintain infrastructure-as-code using Terraform or Ansible scripts.
- Manage and automate cloud infrastructure across any major cloud platform (AWS, Azure, GCP, etc.).
- Monitor, troubleshoot, and optimize DevOps pipelines and cloud environments.
- Collaborate with development and operations teams to ensure seamless CI/CD integration.
- Ensure security and compliance standards are followed in all infrastructure deployments.

Required Skills:
- Minimum 4+ years of experience with Terraform or Ansible scripting.
- Proven experience managing DevOps activities on any cloud platform.
- Strong knowledge of CI/CD tools and best practices.
- Experience with version control tools such as Git.

Good to Have:
- Basic understanding of or exposure to Kafka and event-driven architectures.

Nice to Have (Optional):
- Experience with containerization tools (Docker, Kubernetes).
- Familiarity with monitoring tools (Prometheus, Grafana, CloudWatch).
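Terraform, mentioned above for infrastructure-as-code, natively reads JSON-syntax configuration (`*.tf.json`) alongside HCL, so light Python tooling can generate configuration programmatically. A hedged sketch; the resource name and values are placeholders for illustration:

```python
import json

def make_instance_config(name: str, ami: str, instance_type: str) -> str:
    """Emit a minimal Terraform JSON-syntax document describing one
    aws_instance resource (Terraform loads *.tf.json files natively)."""
    doc = {
        "resource": {
            "aws_instance": {
                name: {"ami": ami, "instance_type": instance_type}
            }
        }
    }
    return json.dumps(doc, indent=2)

# Writing the result to e.g. main.tf.json makes it consumable by
# `terraform plan` like any hand-written configuration.
```

Generating JSON rather than templating HCL strings avoids quoting and escaping bugs, since `json.dumps` guarantees syntactically valid output.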
Posted 2 days ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Senior DevOps Engineer
Location: Ernakulam, Kerala
CTC: ₹15 LPA
Experience Required: 5–7 years
Notice Period: Immediate to 1 month
Primary Skills: AWS, Azure, Scripting, Linux, Windows

Role Overview
We are looking for a highly skilled Senior DevOps Engineer with proven expertise in AWS and Azure cloud platforms. This role involves designing, implementing, and managing advanced cloud solutions that are secure, scalable, and cost-efficient. The ideal candidate will also have strong scripting skills, solid Linux/Windows administration experience, and the ability to lead complex cloud projects.

Key Responsibilities
- Design and implement cloud solutions using AWS (EC2, S3, RDS, Lambda, CloudFormation) and Azure (VMs, Blob Storage, SQL Database, Functions, ARM templates).
- Architect and deploy hybrid and multi-cloud solutions, integrating on-premises environments with AWS/Azure using Direct Connect, ExpressRoute, and VPN gateways.
- Automate deployments using AWS CloudFormation, ARM, Terraform, and Ansible.
- Ensure security and compliance with IAM, Azure AD, KMS, Key Vault, and AWS Shield.
- Optimize cloud costs with AWS Cost Explorer, Azure Cost Management, Trusted Advisor, and Azure Advisor.
- Lead cloud migration projects using AWS Migration Services and Azure Migrate.
- Apply AWS Well-Architected and Azure Architecture Framework best practices.
- Develop scripts/tools for operational efficiency (Python, PowerShell, Bash).
- Implement and monitor production environments and create monitoring dashboards.
- Perform Linux/Windows administration and troubleshooting.

Qualifications & Skills
- 5+ years of hands-on experience with AWS & Azure cloud solutions.
- 5+ years of Linux & Windows administration experience.
- Strong scripting skills in Bash, Python, PowerShell.
- Cloud certifications (AWS, Azure, or equivalent) preferred.
- Expertise in networking and security (VPC, Route 53, Azure DNS, NSGs, Application Gateway).
- Experience with databases (Postgres, MariaDB, MySQL, MSSQL).
- Strong problem-solving, analytical, and communication skills.
- Ability to manage cross-functional teams and deliver complex cloud projects.

Skills: Linux, Terraform, PowerShell, networking, Windows, Python, scripting, Bash, Ansible, CloudFormation, AWS, databases, security, Azure
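Cost optimization with tools like AWS Cost Explorer, listed above, usually starts from a billing export. A small sketch that aggregates cost per service from CSV text; the `service`/`cost` column names are assumptions for illustration, since real billing exports use their own schemas:

```python
import csv
import io
from collections import defaultdict

def cost_by_service(csv_text: str) -> dict:
    """Sum the cost column per service from a billing CSV export.
    Column names here are illustrative, not a real export schema."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["service"]] += float(row["cost"])
    return dict(totals)
```

A report like this is often the first input to rightsizing decisions: the services with the largest totals are where reserved capacity or instance-type changes pay off fastest.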
Posted 2 days ago
7.5 years
0 Lacs
Kolkata, West Bengal, India
Remote
Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting.
Must-have skills: Dynatrace Administration
Good-to-have skills: NA
Minimum 7.5 years of experience is required
Educational Qualification: 15 years full-time education

Dynatrace L3
Primary skill: Dynatrace Administration
Secondary skills: Linux, GitLab, Ansible, Terraform

Key responsibilities:
- Expected to be an SME; work on build and solutioning activities for Dynatrace
- Responsible for managing the entire Dynatrace Grail solution
- Propose changes in Dynatrace and new extensions
- Provide solutions to problems related to Dynatrace
- Expected to have good knowledge of Linux, Ansible, Terraform and GitLab
- Expected to gather requirements from the client and provide suitable/optimized solutions for observability

Technical experience:
- Must-have skills: Proficiency in Dynatrace Administration
- Good-to-have skills: Experience in build and configuration management
- Strong understanding of server and network management
- Familiarity with incident management and ticketing systems
- Experience troubleshooting operating-system-level issues

Professional attributes:
- The candidate should have a minimum of 7.5 years of experience in Dynatrace Administration
- 15 years of full-time education is required
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Security Logging Support Specialist

Job Description:
We are seeking a detail-oriented and technically proficient Security Logging Support Specialist to join our team. In this role, you will be responsible for supporting the operational aspects of our security environment, Microsoft Azure Sentinel, primarily utilizing tools like Kusto Query Language (KQL) and Microsoft Azure services such as Logic Apps and Azure Functions for managing and automating security workflows. Your expertise in these tools, combined with your foundational understanding of IT infrastructure and Microsoft Azure, is necessary to help streamline the onboarding of new data sources and respond to incident tickets efficiently.

As a Security Logging Support Specialist, you will work closely with our security and IT teams, troubleshoot data onboarding issues, and ensure seamless integration of data sources across our IT infrastructure. This position is suitable for someone with 2-5 years of experience in IT operations or systems administration who has a strong interest in security, monitoring, and automation.

Key Responsibilities:
- Support the integration of new data sources from a variety of IT infrastructure devices (e.g., servers, firewalls, network devices, appliances).
- Ensure the proper configuration, troubleshooting, and maintenance of data onboarding processes.
- Address data collection issues and perform root-cause analysis for data discrepancies.
- Work closely with infrastructure teams to onboard new data sources, ensuring they are properly integrated into Microsoft Sentinel.
- Provide operational support to ensure data is accurately ingested and monitored across multiple platforms.
- Assist in the development and automation of security workflows using Logic Apps.
- Collaborate with other teams to define data management processes, policies, and standards.
- Write and maintain light scripts to automate data onboarding/management tasks (e.g., PowerShell, Python, Bash).
- Support and maintain data retention and archival processes to meet business and compliance needs.
- Document and report issues, resolutions, and improvements for internal knowledge sharing.
- Utilize Microsoft Azure services for security monitoring and automation.
- Develop and maintain KQL (Kusto Query Language) queries for data analysis and monitoring within Microsoft Sentinel.

Preferred Qualifications:
- 2-5 years of experience in an operational or support role focused on IT infrastructure or logging systems.
- Familiarity with security tools like Microsoft Sentinel and services that support data ingestion, including Logic Apps (data ingestion, monitoring, and configuration) and Azure Functions (Function Apps).
- Solid understanding of Microsoft Azure services and their application in security monitoring and automation.
- Experience with KQL (Kusto Query Language) for data analysis and monitoring.
- Solid understanding of Linux and Windows servers, with comfort navigating the Linux command line.
- Working knowledge of log management and monitoring applications, such as RSyslog, Cribl, Graylog, Syslog-ng or similar.
- Working knowledge of key IT concepts, including APIs, CIDR notation/subnets, RDP and SSH, security protocols (SSL, TCP/IP, proxy, IAM), load balancing and HA, virtualization, Ansible, Git, and SQL.
- Light scripting knowledge in at least two of the following languages: PowerShell, Python, Shell/Bash.
- Strong troubleshooting skills and the ability to resolve issues efficiently.
- Ability to collaborate with cross-functional teams and communicate technical details clearly.

Desired Skills:
- Strong problem-solving and analytical abilities.
- Knowledge of log aggregation, parsing, and searching techniques.
- Familiarity with log data normalization and correlation.
- Experience with automation and orchestration tools is a plus.
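Onboarding a log source, as described above, usually begins by normalizing raw lines into structured records before ingestion. A minimal sketch for an RSyslog-style line; the line-format regex is an assumption (real sources vary widely), but the split of the PRI value into facility and severity follows RFC 5424:

```python
import re

# Assumed line shape: "<PRI>HOSTNAME APP: message"
LINE_RE = re.compile(r"^<(?P<pri>\d+)>(?P<host>\S+) (?P<app>[^:]+): (?P<msg>.*)$")

def normalize(line):
    """Parse one raw line into a flat record, deriving syslog
    facility/severity from PRI (PRI = facility*8 + severity per
    RFC 5424). Returns None for lines that don't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    pri = int(m["pri"])
    return {
        "host": m["host"],
        "app": m["app"],
        "message": m["msg"],
        "facility": pri // 8,
        "severity": pri % 8,
    }
```

Returning None for unmatched lines lets the caller route them to a dead-letter queue for root-cause analysis instead of silently dropping them, which is exactly the data-discrepancy work the role describes.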
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About CONGLOMERATEIT: Conglomerate IT is certified and a pioneer in providing premium end-to-end Global Workforce Solutions and IT Services to diverse clients across various domains. Visit us at http://www.conglomerateit.com

Our mission is to establish global cross-culture human connections that further the careers of our employees and strengthen the businesses of our clients. We are driven to use the power of our global network to connect businesses with the right people without bias. We provide Global Workforce Solutions with affability.

About the Job
Job Title: DevOps Engineer
Location: Hyderabad/Chennai (Hybrid)
Experience Level: 5+ years

Job Description: We are seeking a skilled DevOps Engineer with hands-on experience in infrastructure automation using Terraform or Ansible. The ideal candidate will have a strong background in managing DevOps activities across cloud platforms and possess a solid understanding of CI/CD processes.

Key Responsibilities:
- Develop and maintain infrastructure-as-code using Terraform or Ansible scripts.
- Manage and automate cloud infrastructure across any major cloud platform (AWS, Azure, GCP, etc.).
- Monitor, troubleshoot, and optimize DevOps pipelines and cloud environments.
- Collaborate with development and operations teams to ensure seamless CI/CD integration.
- Ensure security and compliance standards are followed in all infrastructure deployments.

Required Skills:
- Minimum 4+ years of experience with Terraform or Ansible scripting.
- Proven experience managing DevOps activities on any cloud platform.
- Strong knowledge of CI/CD tools and best practices.
- Experience with version control tools such as Git.

Good to Have:
- Basic understanding of or exposure to Kafka and event-driven architectures.

Nice to Have (Optional):
- Experience with containerization tools (Docker, Kubernetes).
- Familiarity with monitoring tools (Prometheus, Grafana, CloudWatch).
Posted 2 days ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
About Us
Wayvida is a cutting-edge, all-in-one AI-powered teaching and learning platform created to empower coaches, educators, institutions, learners, and communities across the globe by helping them launch online course-selling platforms under their own brand. With a mission to transform education through technology, Wayvida eliminates barriers to teaching and learning, paving the way for professional and personal growth.

Wayvida equips you with the tools to create, manage, and market your courses effortlessly, without any technical expertise. From live classes and recorded sessions to AI-powered test creators, assessments, community engagement and marketing tools, our platform is designed to foster personalized teaching and learning experiences. Combining advanced AI with an intuitive design, Wayvida democratizes education, making it accessible to everyone, everywhere.

Job Description
- Design and manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions).
- Automate infrastructure provisioning (Terraform, Ansible, Pulumi).
- Monitor and optimize cloud environments (AWS, GCP, Azure).
- Implement containerization and orchestration (Docker, Kubernetes - EKS/GKE/AKS).
- Maintain logging, monitoring and alerting (ELK, Prometheus, Grafana, Datadog).
- Ensure system security, availability and performance tuning.
- Manage secrets and credentials (Vault, AWS Secrets Manager).
- Troubleshoot infrastructure and deployment issues.
- Implement blue-green and canary deployments.
- Collaborate with developers to enhance system reliability and productivity.

Requirements
- 7+ years in DevOps, SRE, or Infrastructure Engineering.
- Strong expertise in Azure cloud and Infrastructure-as-Code (Terraform, CloudFormation).
- Proficient in Docker and Kubernetes.
- Hands-on with CI/CD tools and scripting (Bash, Python, or Go).
- Strong knowledge of Linux, networking, and security best practices.
- Experience with monitoring and logging tools (ELK, Prometheus, Grafana).
- Familiarity with GitOps, Helm charts and automation.
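The blue-green and canary deployments listed above hinge on sending a fixed fraction of traffic to the new version. A common approach is deterministic hashing of a stable identifier, so each user consistently sees one version across requests; a sketch, with the ID scheme and percentage as illustrative assumptions:

```python
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Map a user to 'canary' or 'stable' by hashing the ID into one of
    100 buckets. Deterministic: the same user always gets the same
    version, avoiding a mixed experience mid-session."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Ramping the rollout is then just raising `canary_percent` (0 → 5 → 25 → 100) while watching error rates; rollback is setting it back to 0, with no per-user state to clean up.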
Posted 2 days ago
40.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
- Design and implement scalable and secure cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, and VPC.
- Automate the provisioning and management of AWS resources using Infrastructure as Code tools: Terraform, Ansible, and YAML.
- Collaborate with development teams to understand their requirements and translate them into cloud-based solutions.
- Develop and maintain Python and shell scripts to automate routine tasks, enhance efficiency, and ensure consistency.
- Containerize applications using Docker and deploy them in a Kubernetes cluster, ensuring high availability and scalability.
- Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins, GitLab, or AWS CodePipeline.
- Optimize cloud resources for performance and cost-efficiency, employing tools like AWS Cost Explorer and CloudWatch.
- Stay up-to-date with AWS security and compliance best practices, ensuring the application of necessary patches and updates.
- Troubleshoot and resolve complex technical issues across Unix-based systems and AWS services.
- Advocate for a No-Ops model, striving for console-less experiences and self-healing systems.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
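The cost-optimization duty above usually reduces to mapping utilisation telemetry, of the kind AWS Cost Explorer and CloudWatch surface, to right-sizing actions. A minimal, hypothetical sketch (the thresholds and function name are assumptions for the example, not Oracle tooling):

```python
# Illustrative right-sizing rule: translate average CPU utilisation into a
# coarse recommendation. Real analyses also weigh memory, network, burst
# behaviour, and pricing, all omitted here for brevity.

def rightsize(avg_cpu_pct: float, instance_type: str) -> str:
    """Map average CPU utilisation to a right-sizing recommendation."""
    if avg_cpu_pct < 20:
        return f"downsize {instance_type}"   # chronically idle
    if avg_cpu_pct > 80:
        return f"upsize {instance_type}"     # chronically saturated
    return f"keep {instance_type}"
```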
Posted 2 days ago
8.0 years
0 Lacs
Gurgaon Rural, Haryana, India
On-site
With 8 years of experience required, this opportunity in Gurgaon calls for strong AWS skills, specifically in CI/CD, Jenkins, Git, and AWS. The primary technical skills sought include:
- CI/CD tools: Jenkins, GitLab CI
- Cloud platforms: AWS, Azure, GCP
- Infrastructure as Code: Terraform, CloudFormation, ARM Templates
- Configuration management: Ansible, Puppet, Chef
- Containerization and orchestration: Docker, Kubernetes
- Monitoring and logging: Prometheus, Grafana, CloudWatch, ELK Stack
- Scripting languages: Python
This 6-month contract role entails working on-site with a focus on AWS expertise and proficiency in various essential tools and platforms. hashtag#aws hashtag#devops hashtag#azure hashtag#ansible hashtag#elkstack hashtag#docker hashtag#kubernetes hashtag#cloudwatch hashtag#contract hashtag#workfromoffice hashtag#gurgaon hashtag#immediatejoiners hashtag#immediatehiring To apply, please share resumes to aditi.duvvuri@appitsoftware.com.
Posted 2 days ago
3.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Title - DevOps Engineer Location - Surat (On-site) Experience - 2-3 years Job Summary: We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.
Roles & Responsibilities:
- Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
- Hands-on experience in Azure cloud services, including: Virtual Machines (VMs), Blob Storage, Virtual Network (VNet), Load Balancer & Application Gateway, Azure Resource Manager (ARM), Azure Key Vault, Azure Functions, Azure Kubernetes Service (AKS), Azure Monitor, Log Analytics, and Application Insights, Azure Container Registry (ACR) and Azure Container Instances (ACI), Azure Active Directory (AAD) and RBAC.
- Creative in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
- Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
- Develop automation and infrastructure-as-code (IaC) using Terraform, ARM Templates, or Bicep for managing and provisioning cloud resources.
- Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
- Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
- Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
- Configure and maintain CI/CD pipelines and integrate with quality and security tools for automated testing, compliance, and secure deployments.
- Deep knowledge of writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
- Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
- Proficient in using Maven and Artifactory for build management and writing POM.xml scripts for Java-based applications.
- Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
- Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
- Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.
Qualifications & Requirements:
- Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
- Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
- Proficiency in Python or Ruby for scripting and automation.
- Working knowledge of SQL and database management tools.
- Strong analytical and problem-solving skills with a collaborative and proactive mindset.
- Familiarity with Agile methodologies and ability to work in cross-functional teams.
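The Ansible playbook work this role describes often relies on rolling updates: Ansible's `serial` keyword runs a play in fixed-size host batches, and `max_fail_percentage` aborts the rollout when a batch fails too often. A hypothetical Python sketch of that pattern (host names and the `update` callable are invented for illustration):

```python
# Sketch of batch-wise rolling updates with a per-batch failure budget,
# mirroring Ansible's `serial` / `max_fail_percentage` behaviour.
from typing import Callable

def rolling_update(hosts: list, update: Callable[[str], bool],
                   batch_size: int = 2, max_fail_pct: float = 25.0) -> list:
    """Return the hosts updated before any batch breached the failure budget."""
    done = []
    for i in range(0, len(hosts), batch_size):
        batch = hosts[i:i + batch_size]
        failures = sum(0 if update(h) else 1 for h in batch)
        if 100.0 * failures / len(batch) > max_fail_pct:
            break  # halt the rollout, as Ansible skips the remaining batches
        done.extend(batch)
    return done
```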
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
The Applied R&D Engineer conducts target-oriented research to directly apply findings to the specification, design, further development, and incremental improvement of products, services, systems, tools, processes, etc. Integrates, verifies, tests, and modifies SW / HW / system components and capitalises on innovative solutions to meet particular requirements and specifications. You will design and develop software components based on cloud-native principles and leading PaaS platforms. You will implement scalable, secure, and resilient microservices and cloud-based applications. You will develop APIs and integrate with API gateways, message brokers (Kafka), and containerized environments. You will apply design patterns, domain-driven design (DDD), component-based architecture, and evolutionary architecture principles. You will lead the end-to-end development of features and EPICs, ensuring high performance and scalability. You will define and implement container management strategies, leveraging Kubernetes, OpenShift, and Helm. You have a Bachelor's or Master's degree in Electronics, Computer Science, Electrical Engineering, or a related field with 8+ years of work experience. Experience in cloud-native architecture, cloud security, and cloud design patterns. Expertise in container orchestration using Kubernetes, Helm, and OpenShift. Experience with API Gateway, Kafka messaging, and component life cycle management. Expertise in the Linux platform, including Linux Containers, Namespaces, and CGroups. Experience in scripting languages (Perl/Python) and CI/CD tools (Jenkins, Git, Helm, and Ansible). It would be nice if you also had familiarity with open-source PaaS environments like OpenShift, knowledge of the Elastic Stack, Keycloak authentication, and security best practices.
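The event-driven, Kafka-mediated architecture this role centres on reduces to a broker that decouples publishers from subscribers. A deliberately minimal in-process sketch (class, topic, and handler names are invented for illustration; a real system would use a Kafka client with partitions, offsets, and consumer groups):

```python
# Toy publish/subscribe broker illustrating the decoupling idea behind
# event-driven microservices: producers publish to a topic without knowing
# which consumers, if any, are listening.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler):
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict):
        # deliver the event to every handler subscribed to the topic
        for handler in self._subs[topic]:
            handler(event)
```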
Posted 2 days ago
3.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Introduction IBM Software infuses core business operations with intelligence—from machine learning to generative AI—to help make organizations more responsive, productive, and resilient. IBM Software helps clients put AI into action now to create real value with trust, speed, and confidence across digital labor, IT automation, application modernization, security, and sustainability. Critical to this is the ability to make use of all data, because AI is only as good as the data that fuels it. In most organizations data is spread across multiple clouds, on premises, in private datacenters, and at the edge. IBM’s AI and data platform scales and accelerates the impact of AI with trusted data, and provides leading capabilities to train, tune and deploy AI across business. IBM’s hybrid cloud platform is one of the most comprehensive and consistent approaches to development, security, and operations across hybrid environments—a flexible foundation for leveraging data, wherever it resides, to extend AI deep into a business. Wonder if IBM is the one for you? In a world where technology never stands still, we understand that dedication to our clients' success, innovation that matters, and trust and personal responsibility in all our relationships live in what we do as IBMers as we strive to be the catalyst that makes the world work better. Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background. Our IBMers are growth minded, always staying curious, open to feedback and learning new information and skills to constantly transform themselves and our company.
They are trusted to provide on-going feedback to help other IBMers grow, as well as collaborate with colleagues, keeping in mind a team-focused approach that includes different perspectives to drive exceptional outcomes for our customers. The courage our IBMers have to make critical decisions every day is essential to IBM becoming the catalyst for progress, always embracing challenges with the resources they have to hand, a can-do attitude, and an outcome-focused approach in everything that they do. Are you ready to be an IBMer? About IBM: IBM’s greatest invention is the IBMer. We believe that through the application of intelligence, reason and science, we can improve business, society and the human condition, bringing the power of an open hybrid cloud and AI strategy to life for our clients and partners around the world. Restlessly reinventing since 1911, we are not only one of the largest corporate organizations in the world, we’re also one of the biggest technology and consulting employers, with many of the Fortune 50 companies relying on the IBM Cloud to run their business. At IBM, we pride ourselves on being an early adopter of artificial intelligence, quantum computing and blockchain. Now it’s time for you to join us on our journey to being a responsible technology innovator and a force for good in the world Your Role And Responsibilities As a Full stack developer, you will be responsible for creating the tools for customers to build business automations and integrating intelligence into their business automations.
In our agile development model, designers/developers participate in small, autonomous but aligned teams where they learn and perform a variety of roles, including design, development, test, automation, and client interaction/support. As an IBM Software Developer, you'll be responsible for ensuring that our software components are expertly designed, developed, debugged, supported, verified, and ready for integration into IBM's best-of-breed solutions that help organisations improve their business outcomes in the global marketplace.
- Build and test cloud-based software using a host of technologies and methodologies.
- Skills in Docker and Kubernetes for cloud development.
- Skills in Ansible and Go.
- Build server-side software using a host of technologies and methodologies such as Java, Swagger, and SQL.
- Build and support REST API / GraphQL / Swagger / OpenAPI, etc.
- In our agile development model, participate in small, autonomous but aligned teams to learn and perform a variety of roles, including design, development, test, automation, and client interaction/support.
- Ensure that our software components are expertly designed, tested, debugged, verified, and ready for integration into IBM's best-of-breed solutions that help organizations improve their business outcomes in the global marketplace.
- Innovate and turn new ideas into reality.
- Responsible for creating and maintaining high-performance software, working closely with our teams as well as the broader organization.
- Take ownership of assignments and drive them to completion.
- Problem determination, debugging, and resolution.
- Participate in peer code reviews to maintain high code quality and share knowledge within the team.
If you are passionate about software development and quality, in addition to the opportunity to be part of a team that is developing next-generation digital business automation software, then this may be the opportunity for you.
Required Technical And Professional Expertise
- 3+ years of IT experience.
- Experienced in Kubernetes, Docker, OpenShift, ICP, and related cloud-native development technologies.
- Ansible and Go skills are a plus.
- Experience in software development using Core Java and J2EE on Unix platforms.
- Experience with consuming Web Services and RESTful Web Services (e.g., JSON, SOAP, XML), OpenAPI, and Swagger.
- Solid understanding of object-oriented design, programming languages, and databases.
- Solid understanding of the Agile methodology, including story point estimation, refinement, sprint planning, retrospectives, and sprint demos.
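The REST API work this posting describes centres on dispatching a method-and-path pair to a handler. A hypothetical, framework-free sketch (the route and payloads are invented for the example; production code would sit behind a framework and an OpenAPI/Swagger contract, as the posting implies):

```python
# Minimal method+path dispatch table, the core idea behind REST routing.

ROUTES = {}

def route(method: str, path: str):
    """Decorator registering a handler for a (method, path) pair."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

def dispatch(method: str, path: str):
    """Look up the handler for the request, or report a 404."""
    fn = ROUTES.get((method, path))
    return fn() if fn else {"error": 404}
```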
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
karnataka
On-site
Join GlobalLogic and be a valuable part of the team working on a significant software project for a world-class company providing M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. As part of our engagement, you will contribute to developing end-user modules' firmware, implementing new features, ensuring compatibility with the latest telecommunication and industry standards, and conducting analysis and estimations of customer requirements. As a Senior Developer in a Scrum team, you will play a key role in ensuring that engineering deliverables are on schedule and meet quality standards for the application/product features you are responsible for. Your responsibilities will include designing, developing, and testing critical and complex features, as well as mentoring junior developers in the team. You should hold a Bachelor's degree in computer programming, computer science, or a related field. Requirements: - 9+ years of experience in designing and architecting scalable and robust integration solutions. - Hands-on experience in Java, SpringBoot, Microservices, REACT, KAFKA. - Experience with architecture design for backend components of distributed systems. - Familiarity with container orchestration platforms like Kubernetes. - Proficiency in using AWS technologies such as Cloudformation, S3, ECS, EKS, and EC2. - Experience with event-driven architecture. - Proficiency in Linux/Unix environments. - Knowledge of Java, source control systems, continuous integration tools. - Experience with agile collaboration tools like JIRA and Confluence. - Familiarity with Web Services technologies including REST, SOAP, and WSDL. - Strong communication skills and the ability to explain complex technical issues to technical and non-technical audiences. Desirable Requirements: - Experience with Message brokers such as Service Bus, AWS SQS, AWS SNS, Kinesis, and Kafka.
- Familiarity with DevOps and build tools like Jenkins, Maven, and other CI/CD tools. - Knowledge of SQL, relational database systems, and ORM tools like Hibernate. - Experience in designing well-defined RESTful APIs, API documentation tools like OpenAPI, and testing tools like Postman and SOAP UI. - Familiarity with infrastructure automation tools such as Ansible and Terraform. As a Technical Architect, your role involves defining and reviewing solution architecture to build complex and scalable applications/products. You will be a critical member in defining the technology roadmap, identifying architectural bottlenecks, suggesting solutions, evaluating technology alternatives, and choosing tools. What we offer: - A culture of caring, prioritizing people first. - Learning and development opportunities to grow personally and professionally. - Interesting and meaningful work on impactful projects. - Balance and flexibility in work-life arrangements. - A high-trust organization built on integrity and ethical values. About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to leading companies worldwide. Since 2000, GlobalLogic has been instrumental in creating innovative digital products and experiences for clients, driving business transformations and redefining industries through intelligent solutions.
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India Job ID 771107 Join our Team About this opportunity: We are expanding our team to onboard a Cloud SI cum SA to support growing project demands. The role involves cloud deployment, upgrades, and integration—especially enhancing Ericsson CLOUD CNIS/NFVI in Telco Core networks. The team values collaboration, agility, flexibility across time zones, travel readiness, and technical excellence. The new hire will join the BCSS SD SDU SL CN CIP SDD A unit to deliver secure and efficient cloud solutions. What you will do: System integration and verification of the Ericsson CLOUD CNIS/NFVI solution in the Telco Core network. Develop proficiency in Ericsson NFVI products – HDS, SDI, SDN, CEE, SDS (VxSDS, Ceph & NexentaStor), ECCD, NFVI & CNIS – to be able to design and deploy for customer networks. Participate in PoC projects along with the Cloud PDU in emerging areas like 5G, IoT, Edge NFVI, Distributed Cloud, etc., working as a member of the Ericsson Integration and Verification team. You will bring: Good understanding of cloud concepts/virtualization. At least 6 years of total experience, with a minimum of 4 years of hands-on IP networking deployment/integration experience (Cisco/ Juniper/ Extreme/... switches). At least intermediate-level proficiency in Unix/Linux administration. At least intermediate-level proficiency in Software Defined Storage (e.g., VxSDS/Ceph/NexentaStor). Hands-on experience and certification in Red Hat OpenStack Platform (RHOSP) and Certified Kubernetes Administrator (CKA) are required. Red Hat OpenShift Container Platform (RHOCP) certification will be considered an added advantage. Hands-on CCD/Kubernetes cluster deployment experience in public/private cloud is an added plus. Exposure to the NFVI domain and the ability to quickly ramp up on Ericsson NFVI products.
Experience with system installation, integration, and verification, along with strong investigation and troubleshooting skills. Proficiency in Python, Ansible, and shell scripting is desirable. Experience with cloud automation tools such as Terraform, Puppet, or Chef, along with version control and CI/CD platforms like GitHub and Jenkins, is also required. What happens once you apply?
Posted 2 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh
Remote
Req ID: 336824 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Senior Linux Engineer to join our team in Noida, Uttar Pradesh (IN-UP), India (IN). Job Title: Senior Linux Engineer Location: [Insert Location] | Employment Type: Full-Time | Department: Infrastructure & Cloud Operations Job Summary: We are seeking a seasoned and motivated Senior Linux Engineer to manage and support our Linux infrastructure across on-premises and cloud environments. The ideal candidate will have extensive experience with Linux systems administration, cloud platforms (particularly AWS), configuration management using Ansible, and robust troubleshooting skills for both virtual machines and physical servers. This role also involves implementing and maintaining patching solutions for hybrid infrastructure. Key Responsibilities: Administer and maintain Linux servers in on-premises data centers and AWS cloud environments. Design, implement, and manage infrastructure automation using Ansible. Develop and manage patching strategies and solutions across hybrid environments. Perform regular system updates, security patches, and performance tuning. Troubleshoot system and VM issues, providing root cause analysis and permanent resolutions. Collaborate with cloud, networking, and application teams to support business requirements. Ensure system security and compliance with internal and external standards. Document system configurations, procedures, and change management records. Provide mentorship and technical guidance to junior engineers. Participate in on-call rotations and incident response efforts. Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. 7+ years of experience as a Linux Systems Administrator or Engineer. 
Strong knowledge of Linux distributions (Red Hat, CentOS, Ubuntu). Hands-on experience with AWS services, including EC2, S3, VPC, IAM, and CloudWatch. Proficiency in Ansible for configuration management and automation. Experience designing and managing patching solutions for on-prem and cloud environments. Strong troubleshooting skills across physical and virtual environments. Knowledge of networking concepts, firewalls, and security best practices. Excellent communication and collaboration skills. Certifications such as RHCE, AWS SysOps, or AWS Solutions Architect are a plus. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. 
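The patching-solution responsibility described above typically means promoting patches through environments in ordered waves, with lower environments absorbing breakage before production. A hypothetical sketch of that ordering (environment names and host records are invented for illustration; a real implementation would drive Ansible or a patch-management service):

```python
# Group a hybrid fleet into ordered patch waves: dev first, prod last.
# Within a wave, on-prem and cloud hosts can proceed together.

WAVE_ORDER = ["dev", "staging", "prod"]

def patch_waves(hosts: list) -> list:
    """Return host names grouped into waves by environment, in rollout order."""
    waves = []
    for env in WAVE_ORDER:
        wave = sorted(h["name"] for h in hosts if h["env"] == env)
        if wave:
            waves.append(wave)
    return waves
```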
At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As an SAP Cloud Architect with a vision for resilient and scalable cloud-native systems, you have a unique opportunity to architect SAP landscapes on Google Cloud Platform (GCP). Combining your HANA admin experience with strategic infrastructure design, you will play a crucial role in guiding the technical direction of a high-stakes cloud migration, making architecture choices that will significantly impact business outcomes. Your responsibilities in this role include designing and validating end-to-end SAP architecture on GCP, incorporating high availability/disaster recovery (HA/DR) and compliance considerations. You will be tasked with conducting cloud readiness assessments for legacy SAP environments, overseeing HANA installations, patch upgrades, and backup strategies, as well as defining disaster recovery procedures, RTO/RPO baselines, and disaster testing protocols. Additionally, you will lead architecture workshops with both business and tech stakeholders, optimizing cloud resource usage, and aligning with FinOps best practices. To excel in this role, you should have at least 4 years of experience in SAP HANA administration with a focus on HA/DR design. Strong knowledge of GCP Compute Engine, Cloud SQL, IAM, and networking is essential, along with prior experience in migrating SAP ERP or S/4HANA systems to the cloud. Familiarity with Ariba, BW, or FICO modules from an architectural perspective is also beneficial. Candidates with GCP certifications (such as Associate Cloud Engineer or Professional Cloud Architect) and proficiency in automation tools like Ansible and Terraform for SAP provisioning will be given bonus points. By joining this project, you will have the opportunity to influence enterprise cloud strategy in one of the most ambitious SAP transformation initiatives. The role offers a competitive contract rate with hybrid flexibility, allowing you to build deep expertise in SAP modernization and aligning with DevSecOps practices.,
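The RTO/RPO baseline work mentioned above starts from simple arithmetic: with periodic backups, the worst case for data loss (RPO) is one full backup interval plus the time the backup itself takes to complete and ship. A sketch under those assumptions (the figures are illustrative, not SAP-specific guidance):

```python
# Back-of-the-envelope RPO baseline check for a periodic-backup DR design.

def worst_case_rpo_minutes(backup_interval_min: float,
                           backup_duration_min: float) -> float:
    """Worst-case data loss: a failure just before the next backup lands."""
    return backup_interval_min + backup_duration_min

def meets_baseline(rpo_min: float, target_rpo_min: float) -> bool:
    """True if the computed RPO satisfies the agreed baseline."""
    return rpo_min <= target_rpo_min
```

For example, hourly backups that take 15 minutes to complete give a 75-minute worst-case RPO, which would fail a 60-minute baseline and argue for log shipping or HANA system replication instead.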
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Server Administrator at our organization, you will play a crucial role in the day-to-day management, maintenance, and troubleshooting of our Linux-based server infrastructure that caters to our US-based clients. Your responsibilities will include ensuring the stability, security, and performance of our systems, as well as automating tasks to enhance efficiency. Collaborating closely with team members, you will support both development and production environments to contribute to the overall improvement of our infrastructure. At our company, we hold certain corporate values in high regard. Respectful communication and cooperation are key aspects of our work culture, where every individual is treated with dignity and respect. We foster teamwork and employee participation by embracing diverse perspectives within our teams and in our interactions with customers. We value a work/life balance that accommodates the varying needs of our employees, recognizing its importance for our collective success. Additionally, we are committed to embracing and supporting the communities that nurture us, appreciating our employees' dedication to positive change. Diversity, inclusion, and belonging are fundamental aspects of our organizational culture. ePlus is dedicated to creating a work environment that celebrates diversity, promotes inclusion, and encourages employees to bring their authentic selves to work. In this role, your impact will be significant. Your responsibilities will include administering and troubleshooting Linux servers, automating server provisioning and infrastructure operations using Ansible, performing basic network and storage troubleshooting, managing and monitoring Nvidia GPUs on servers, maintaining server documentation, collaborating with other teams to resolve technical issues, and contributing to the continuous improvement of our infrastructure and processes.
To excel in this position, you should possess strong Linux administration skills, proficiency in using Ansible for automation, expertise in GitHub, a basic understanding of Nvidia GPU management, experience with container technologies, basic network and storage troubleshooting skills, excellent problem-solving and analytical capabilities, the ability to work independently and as part of a team (especially in a remote setting), and strong communication and documentation skills. Preferred skills for this role include experience with Dell and Supermicro servers, familiarity with the MAAS tool for GPU node systems management and provisioning, creating Ansible playbooks for bare metal systems management, scripting skills (e.g., Bash, Python), experience with monitoring tools (e.g., Nagios, Zabbix), and knowledge of virtualization technologies (e.g., KVM, VMware). As you carry out your duties, you may engage in both seated and occasional standing or walking activities. We provide reasonable accommodations, as required by relevant laws, to support your success in this position. By embracing our values and demonstrating your skills and expertise, you will contribute to our shared mission of making a positive impact within our organization and the broader community. Kindly note that this job description serves as a guide and is not an employment contract.,
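For the Nvidia GPU management duties above, monitoring glue often parses the CSV that `nvidia-smi --query-gpu=index,utilization.gpu --format=csv,noheader,nounits` emits. A hedged sketch of that parsing (the sample output and the 90% alert threshold are assumptions for the example):

```python
# Parse nvidia-smi CSV output ("index, utilization") and flag busy GPUs.

def busy_gpus(csv_text: str, threshold_pct: int = 90) -> list:
    """Return indices of GPUs whose utilisation meets or exceeds the threshold."""
    busy = []
    for line in csv_text.strip().splitlines():
        index, util = (field.strip() for field in line.split(","))
        if int(util) >= threshold_pct:
            busy.append(int(index))
    return busy
```

In practice the CSV would come from `subprocess.run(["nvidia-smi", ...])` and the result would feed an alerting tool such as Nagios or Zabbix, both mentioned in the posting.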
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Kerala
On-site
We are looking for a talented Network Engineer to become a valuable member of our dynamic infrastructure team. As a Network Engineer, you will have a crucial role in maintaining a high-availability network environment and supporting enterprise-level infrastructure at multiple locations. Your expertise with cutting-edge technologies will be instrumental in ensuring the robustness, security, and efficiency of network operations round the clock.

Your responsibilities will include proactively managing and maintaining all network devices to guarantee continuous service uptime. You will be involved in designing, documenting, and controlling network configurations and architecture. Additionally, you will participate in deployments and infrastructure changes during both business and non-business hours, conduct hardware verification and network testing, and keep an up-to-date inventory of all network assets. Collaboration with different business units to provide high-performance networking solutions, supporting network infrastructure at various enterprise locations, researching new technologies, and presenting integration analysis to stakeholders are also part of your role. You will be expected to build and manage highly reliable, multi-segment Ethernet networks across global sites and work closely with capacity planning teams to define network specifications and requirements.

The ideal candidate should possess a Bachelor's degree in IT, Networking, or a related field (or equivalent work experience) along with at least 3 years of hands-on experience as a Network Engineer. Strong communication skills, both verbal and written, are essential. Proficiency in Juniper JunOS, working knowledge of BGP4, OSPFv2/3, and MPLS, familiarity with VyOS, Cisco, and/or Fortinet (preferred), experience with Ansible (preferred), and Juniper certifications (JNCIS-ENT+, JNCIS-SP+) are desirable qualifications. A background in hosting companies or network service providers is advantageous.
Strong problem-solving abilities and a collaborative mindset are key attributes for success in this role.

Location: MindLabs Systems Pvt Ltd, 2A, Second Floor, Carnival Infopark Phase IV, Kakkanad, Kochi - 682042
Type: 24x7x365
Experience Level: 3+ Years

Candidates who can join immediately are preferred, and nearby candidates are given preference.
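The capacity-planning work described above (defining network specifications with capacity planning teams) often starts with simple address-space arithmetic. A minimal sketch using Python's standard ipaddress module; the 10.20.0.0/22 block and site names are invented examples, not from the posting.

```python
import ipaddress

def carve_subnets(block, new_prefix):
    """Split an aggregate block into equally sized subnets for site allocation."""
    network = ipaddress.ip_network(block)
    return [str(subnet) for subnet in network.subnets(new_prefix=new_prefix)]

if __name__ == "__main__":
    # Four /24 site networks carved out of one /22 aggregate.
    for site, subnet in zip(["kochi", "pune", "blr", "spare"],
                            carve_subnets("10.20.0.0/22", 24)):
        print(site, subnet)
```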
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
As a skilled and experienced backend developer with over 10 years of total experience, you will be responsible for utilizing Java 8 or higher, Spring Framework, Hibernate/JPA, and Microservices Architecture to build robust and scalable solutions. Your expertise in AWS services such as API Gateway, Fargate, S3, DynamoDB, and SNS will be crucial in developing cutting-edge applications. Additionally, your proficiency in SOAP, PostgreSQL, REST APIs, caching systems like Redis, and messaging systems like Kafka will play a key role in the successful implementation of projects.

Your role will involve working with Service-Oriented Architecture (SOA) and Web Services, as well as having hands-on experience with multithreading, cloud development, Data Structures and Algorithms, Unit Testing, and Object-Oriented Programming principles. Familiarity with DevOps tools like Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef, along with build automation tools such as Maven, Ant, and Gradle, will be essential in ensuring seamless deployment and operation of applications.

In addition to your technical prowess, you will be expected to collaborate effectively with cross-functional teams, communicate clearly and concisely, and demonstrate a passion for continuous learning and improvement. Your ability to simplify solutions, optimize processes, and resolve issues efficiently will be instrumental in delivering high-quality code and meeting project requirements. As part of your responsibilities, you will be involved in writing and reviewing code, analyzing functional requirements, defining technologies and frameworks for project realization, implementing design methodologies, and coordinating application development activities.
You will also be required to lead or support User Acceptance Testing (UAT) and production rollouts, validate estimates for tasks, provide constructive feedback to team members, troubleshoot and resolve complex bugs, and conduct Proof of Concepts (POCs) to ensure alignment with project requirements. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Your strong problem-solving skills, understanding of design patterns, and ability to keep abreast of industry trends will be instrumental in driving innovation and delivering exceptional solutions.
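Caching systems like Redis, mentioned in the skills above, are built around time-to-live (TTL) expiry. A toy sketch of that idea, written in Python for brevity even though the role is Java-centric; the class name and the default 60-second TTL are illustrative, not from the posting.

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire ttl seconds after being set."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]   # lazy eviction on read
            return default
        return value

if __name__ == "__main__":
    cache = TTLCache(ttl=0.05)
    cache.set("user:42", {"name": "test"})
    print(cache.get("user:42"))   # fresh: returns the stored dict
    time.sleep(0.06)
    print(cache.get("user:42"))   # expired: returns None
```

Production caches like Redis add shared storage, active eviction policies, and persistence on top of this basic expire-on-read behavior.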
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
The AVP, Cloud Engineering at Synchrony will be responsible for bridging the gap between software development and operations to create an efficient environment for development, testing, and deployment of software. Working closely with platform, infrastructure, and operations teams, you will ensure the smooth operation of systems and services.

Key responsibilities include collaborating with teams to enhance the IaC development process, implementing version control strategies, advocating security best practices, optimizing cloud environments, automating tasks using tools like Terraform, Ansible, or Chef, ensuring compliance with security and regulatory requirements, monitoring system performance, staying updated with industry trends, and providing high-level customer service support.

Required skills and knowledge include a Bachelor's degree in computer science or a related field with 5+ years of IT infrastructure experience, or 7+ years of experience in IT infrastructure without a degree. Additionally, you should have a minimum of 5 years of cloud experience, expertise in designing and securing network environments, proficiency in IaC tools like Terraform and Ansible, hands-on experience with CI/CD tools, and familiarity with cloud platforms like AWS, Azure, or GCP. AWS certifications such as AWS Certified SysOps Administrator - Associate, AWS Certified DevOps Engineer - Professional, and AWS Certified Solutions Architect - Associate are recommended.

This role requires working from 3:00 PM to 12:00 AM IST with flexibility to adjust timings based on US Eastern hours for meetings with global teams. Internal applicants must meet specific eligibility criteria and follow internal application procedures. Grade/Level for this role is 11 within the Information Technology job family group at Synchrony.
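Compliance work of the kind described above (security and regulatory requirements in IaC workflows) is often expressed as small policy functions run over parsed resource definitions. A hedged sketch: the required tag names and the resource dict shape are invented examples, not any tool's real schema.

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # assumed example policy

def untagged_resources(resources):
    """Return (name, missing-tags) pairs for resources violating the tag policy.

    `resources` is a list of dicts shaped like {"name": ..., "tags": {...}} --
    an illustrative format standing in for parsed IaC plan output.
    """
    violations = []
    for resource in resources:
        missing = REQUIRED_TAGS - set(resource.get("tags", {}))
        if missing:
            violations.append((resource["name"], sorted(missing)))
    return violations

if __name__ == "__main__":
    sample = [
        {"name": "web-sg", "tags": {"owner": "platform", "environment": "prod",
                                    "cost-center": "1234"}},
        {"name": "tmp-bucket", "tags": {"owner": "platform"}},
    ]
    print(untagged_resources(sample))
```

A check like this would typically run as a CI/CD gate before a Terraform or Ansible change is applied.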
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as a Mainframe Infrastructure Specialist, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Mainframe Infrastructure Specialist you should have experience with z/VM, including: mainframe terminology, CP, CMS, GDPS. Installation and support of the z/VM operating system. SSI/LGR. Installation, configuration and usage of Ops Manager, Backup/Restore Manager, RACF, DIRMAINT, RSCS. TCP/IP networks and management. IPL, device management, system configs, minidisks, SFS. XEDIT, CP Directory, virtual machines, privilege classes.

You should also have experience with z/Linux, including: installation of Red Hat (including kickstart). Patching of Linux servers (automated and manual). Understanding of how a Linux environment hangs together (directory structure etc.). Basic use and configuration of a Linux operating system. Ability to navigate and perform basic tasks in a Linux environment (create/copy/edit files, changing directories etc.). Management of application/system services using systemd. Understanding of LVM to administer storage (physical/logical volumes, volume groups etc.). Understanding of different file systems and their use cases (XFS/EXT4/GPFS). Understanding of Linux permissions and how they complement a system. Security-related subjects such as PAM, SUDO, SSL/TLS certificates, SSH, SELinux. Network fundamentals: IP, DNS, firewalls, NTP and Active Directory integration. YUM/DNF repository management. Timers (cron).

Some other highly valued skills may include experience in using Service First (ServiceNow) or other similar Change/Problem/Incident Management tools. Experience with JIRA or similar ticket-based systems.
Understanding of Git or other SCMs. Application experience (webservers, databases etc.). Experience with tools such as Ansible, CHEF, Ganglia and Nimbus.

This role is based in Pune.

Purpose of the role: To build and maintain infrastructure platforms and products that support applications and data systems, using hardware, software, networks, and cloud computing platforms as required, with the aim of ensuring that the infrastructure is reliable, scalable, and secure. Ensure the reliability, availability, and scalability of the systems, platforms, and technology through the application of software engineering techniques, automation, and best practices in incident response.

Accountabilities:
1. Build Engineering: Development, delivery, and maintenance of high-quality infrastructure solutions to fulfil business requirements, ensuring measurable reliability, performance, availability, and ease of use, including the identification of the appropriate technologies and solutions to meet business, optimisation, and resourcing requirements.
2. Incident Management: Monitoring of IT infrastructure and system performance to measure, identify, address, and resolve any potential issues, vulnerabilities, or outages. Use of data to drive down mean time to resolution.
3. Automation: Development and implementation of automated tasks and processes to improve efficiency and reduce manual intervention, utilising software scripting/coding disciplines.
4. Security: Implementation of a secure configuration and measures to protect infrastructure against cyber-attacks, vulnerabilities, and other security threats, including protection of hardware, software, and data from unauthorised access.
5. Teamwork: Cross-functional collaboration with product managers, architects, and other engineers to define IT infrastructure requirements, devise solutions, and ensure seamless integration and alignment with business objectives via a data-driven approach.
6. Learning: Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in their assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: Listen and be authentic, Energise and inspire, Align across the enterprise, and Develop others. Alternatively, an individual contributor develops technical expertise in their work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision-making within own area of expertise. Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset, to Empower, Challenge, and Drive, the operating manual for how we behave.
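One concrete slice of the Linux permissions knowledge listed in this role's skill set is translating symbolic mode strings (as shown by `ls -l`) to octal. A small illustrative sketch, not part of the posting; special bits (setuid/setgid/sticky) are deliberately ignored to keep it minimal.

```python
def symbolic_to_octal(mode):
    """Convert a 9-character symbolic mode like 'rwxr-xr--' to octal, e.g. '754'.

    Each rwx triad maps to r=4, w=2, x=1; special bits are not handled here.
    """
    if len(mode) != 9:
        raise ValueError("expected 9 characters, e.g. 'rwxr-xr--'")
    digits = []
    for i in range(0, 9, 3):
        triad = mode[i:i + 3]
        value = ((triad[0] == "r") * 4
                 + (triad[1] == "w") * 2
                 + (triad[2] == "x") * 1)
        digits.append(str(value))
    return "".join(digits)

if __name__ == "__main__":
    print(symbolic_to_octal("rwxr-xr--"))  # 754
    print(symbolic_to_octal("rw-r--r--"))  # 644
```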
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Patna, Bihar
On-site
Incepted in the year 2004, Mobineers has established its name in the IT world as a top IT services and engineering company. Since the Indian e-governance and IT sector has been witnessing radical changes over the past decade, we have set ourselves the aim of fulfilling the needs of this sector. Our IT solutions and services have become the need of the hour to bring about positive change when it comes to deregulation.

Responsibilities: Ensure the reliability, performance, and scalability of our database infrastructure. Work closely with application teams to ship solutions that integrate seamlessly with our database systems. Analyze solutions and implement best practices for supported data stores (primarily MySQL and PostgreSQL). Develop and enforce best practices for database security, backup, and recovery. Work on the observability of relevant database metrics and make sure we reach our database objectives. Provide database expertise to engineering teams (for example, through reviews of database migrations, queries, and performance optimizations). Work with peers (DevOps, Application Engineers) to roll out changes to our production environment and help mitigate database-related production incidents. Work on automation of database infrastructure and help engineering succeed by providing self-service tools. On-call support on rotation with the team. Support and debug database production issues across services and levels of the stack. Document every action so your learning turns into repeatable actions and then into automation. Perform regular system monitoring, troubleshooting, and capacity planning to ensure scalability. Create and maintain documentation on database configurations and processes.

Qualifications: Have at least 5 years of experience running MySQL/PostgreSQL databases in large environments. Awareness of cloud infrastructure (AWS/GCP). Have knowledge of the internals of MySQL/PostgreSQL.
Knowledge of load balancing solutions such as ProxySQL to distribute database traffic efficiently across multiple servers. Knowledge of tools and methods for monitoring database performance. Strong problem-solving skills and ability to work in a fast-paced environment. Excellent communication and collaboration skills to work effectively within cross-functional teams. Knowledge of caching (Redis/ElastiCache). Knowledge of scripting languages (Python). Knowledge of infrastructure automation (Terraform/Ansible). Familiarity with DevOps practices and CI/CD pipelines.
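Database performance monitoring of the kind described above commonly tracks a buffer/cache hit ratio. A minimal sketch of the computation; the counter names echo the style of PostgreSQL's pg_stat_database statistics but the function itself is illustrative, not from the posting.

```python
def cache_hit_ratio(blocks_hit, blocks_read):
    """Fraction of block requests served from cache rather than disk.

    blocks_hit: reads satisfied from the buffer cache
    blocks_read: reads that had to go to disk
    Returns None when there has been no traffic, since the ratio is undefined.
    """
    total = blocks_hit + blocks_read
    if total == 0:
        return None
    return blocks_hit / total

if __name__ == "__main__":
    # 9,900 cached reads vs 100 disk reads -> 0.99 hit ratio.
    print(cache_hit_ratio(9_900, 100))
```

A ratio trending down over time is a common early signal that the working set no longer fits in memory.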
Posted 2 days ago