
1311 YAML Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Title: AMS Service Delivery Manager
Location: Mumbai / Pune / Chennai / Hyderabad / Bangalore / Kolkata / Delhi / Noida / Coimbatore
Experience: 12-16 years

The AMS Service Delivery Manager will have a deep understanding of SDLC, distributed and open-source technologies, and maintenance/support project features, coupled with strong leadership and project management capabilities. This role involves overseeing end-to-end service delivery, ensuring high-quality standards, and maintaining customer satisfaction.

Responsibilities: Oversee the delivery of multiple projects, ensuring they are completed on time, within budget, and to the highest quality standards. Develop and implement service delivery strategies to enhance efficiency and customer satisfaction. Ensure compliance with service level agreements (SLAs) and manage service performance metrics. Provide technical leadership and guidance to the development team, ensuring best practices in coding, architecture, and design. Collaborate with stakeholders to define project scope, objectives, and deliverables. Monitor and report on service delivery performance, identifying areas for improvement. Adhere to Agile ways of working and ensure the team is aligned to the same mindset. Guide and mentor the team on SRE guidelines and continuous improvement. Encourage continuous learning in the team (cross-skilling and upskilling) and adoption of AI as part of the culture. Ensure customer feedback is collected, analyzed, and acted upon to improve service quality.

Mandatory Skills: Maintenance/support project experience and good knowledge of the SDLC phases on any one of the following distributed/open-source stacks: .NET full stack (+ React or Angular + PL/SQL), Java full stack (+ React or Angular + PL/SQL), or Mainframe (COBOL, DB2, AS/400, etc.). Basic knowledge of DevOps (CI/CD, SAST/DAST, branching strategy, Artifactory and packages, YAML, etc.). End-to-end incident management. Review of shift and roster plans. Good working experience in support/maintenance projects. Project management: project planning, timelines, resources, budgets, risks, mitigation plans, escalation management, change request creation, transition planning and management, etc. Excellent communication and persuasion skills. Excellent team collaboration.

Good To Have Skills: SRE concepts such as observability, resiliency, SLA/SLI/SLO, and MTTx. Knowledge of observability tools (Splunk/AppDynamics/Dynatrace/Prometheus/Grafana/ELK stack). Basic knowledge of chaos engineering, self-healing, and auto-scaling. Basic knowledge of one of the clouds (Azure, AWS, or GCP). Building use cases for automation and AI/agentic solutions. Knowledge of any one scripting language (Python/Bash/Shell).

This job is provided by Shine.com

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description: The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Ab Initio Data Engineer: We are looking for an Ab Initio Data Engineer able to design and build Ab Initio-based applications across the Data Integration, Governance & Quality domains for Compliance Risk programs. The individual will work with Technical Leads, Senior Solution Engineers, and prospective Application Managers to build applications, roll out and support production environments leveraging the Ab Initio tech stack, and ensure the overall success of their programs. The programs are high-visibility, fast-paced key initiatives that generally aim to acquire and curate data and metadata across internal and external sources, provide analytical insights, and integrate with other Citi systems.

Technical Stack: Ab Initio 4.0.x software suite (Co>Op, GDE, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center, Easy>Graph); Big Data (Cloudera Hadoop, Hive, Yarn); Databases (Oracle 11g/12c, Teradata, MongoDB, Snowflake); Others (JIRA, ServiceNow, Linux, SQL Developer, AutoSys, and Microsoft Office).

Responsibilities: Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate with the portfolio of Ab Initio software. Build web-service and RESTful graphs and create RAML or Swagger documentation. Complete understanding and analytical ability of the Metadata Hub metamodel. Strong hands-on multifile-system-level programming, debugging, and optimization skills. Hands-on experience in developing complex ETL applications. Good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyze data issues. Strong UNIX shell/Perl scripting. Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, AWS S3. Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, Data Quality Assessment. Build automation pipelines for continuous integration and delivery (CI/CD), leveraging the Testing Framework and JUnit modules and integrating with Jenkins, JIRA, and/or ServiceNow. Build Query>It data sources for cataloguing data from different sources. Parse XML, JSON, and YAML documents, including hierarchical models. Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components. Build AutoSys or Control Center jobs and schedules for process orchestration. Build BRE rulesets for reformat, rollup, and validation use cases. Build SQL scripts on the database, perform performance tuning and relational model analysis, and carry out data migrations. Identify performance bottlenecks in graphs and optimize them. Ensure the Ab Initio code base is appropriately engineered to maintain current functionality and that development adheres to performance optimization, interoperability standards and requirements, and compliance with client IT governance policies. Build regression test cases and functional test cases, and write user manuals for various projects. Conduct bug fixing, code reviews, and unit, functional, and integration testing. Participate in the agile development process, and document and communicate issues and bugs relative to data standards. Pair up with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids. Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment. Perform other duties and/or special projects as assigned.

Qualifications: Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 5 years of experience. Minimum 5 years of extensive experience in the design, build, and deployment of Ab Initio-based applications. Expertise in handling complex, large-scale data lake and warehouse environments. Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.

Education: Bachelor's degree / University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

Job Title: Junior Engineer, Technical Support, Tier 2

NIKSUN is the recognized worldwide leader in making the Unknown Known, using next-generation technology that revolutionizes the way networks and services are secured, protected, and managed. The company develops and deploys a complete range of award-winning forensics, compliance, security surveillance, and performance management solutions for applications ranging from core infrastructures to edge and branch environments.

Key responsibilities: Respond to customer incidents by conducting a preliminary assessment of product or system issues. Debug and track issues until resolution by analyzing the source of the issue and its impact on hardware, network, and storage resources. With your technical expertise you will support cloud solutions, app installs, release updates and upgrades, and the testing, deployment, maintenance, and enhancement of software solutions. You will address bugs and escalate them to engineering; check code and verify accuracy, testability, functionality, and efficiency. You should also have a good understanding of the different hardware components: processing, networking, and storage. NIKSUN Cloud Support Engineer is a customer-facing role in which you engage with NIKSUN's clients in live troubleshooting sessions via telephone, e-mail, and live chat to provide level 1/2 support across a vast array of NIKSUN products. Lead, manage, and drive incidents to resolution, whether reported reactively by customers or proactively through NIKSUN monitoring tools. Adhere to incident severity standards, escalating unresolved incidents to Tier 3 in due time. Work with different NIKSUN engineering departments and relay the technical message back to the customer in simple, non-technical language if required. Keep customers updated on incident resolution efforts at all times. NIKSUN Cloud Support operates 24x7x365; you will be required to work on a shift schedule including days, nights, weekends, and holidays.

Requirements: Must have 3 years of experience with DevOps support and experience in programming languages (shell scripting preferred). Must have 1 year of experience testing and maintaining software products. Must have 1 year of experience in cloud computing (Kubernetes, Docker, YAML files, etc.). Experience in providing technical support to global clients. Excellent problem-solving and communication skills. Ability to provide step-by-step technical help, both written and verbal. Hands-on experience with Windows/Linux OS environments. Additional certification in Linux, Cisco, Network and Information Security, or similar technologies is a plus.

Required skills and traits: Experience providing technical support to global clients. Excellent problem-solving and communication skills. NIKSUN encourages teamwork, collaboration, and knowledge sharing; mentor other team members and push the team for success.

Professional requirements: Bachelor's degree in computer science or an equivalent software engineering discipline.

Qualified applicants will receive consideration for employment without regard to age, race, creed, color, religion, sex, national origin, ancestry, marital status, affectional or sexual orientation, gender identity or expression, disability, nationality, or protected veteran status.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

12 - 16 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Project description: Join the exciting team that performs change work at one of the Big Four banks of Australia. We are looking for a highly skilled Python/Bash/Groovy developer with 3+ years of hands-on experience in designing, developing, and maintaining backend systems. You should also have hands-on experience with tools like Jenkins, GitHub/Git Actions, and Codefresh. You will manage source code with Git, and you should have knowledge of YAML and JSON. You will be required to work in squads under our customers' direction and have a level of familiarity with Agile methodologies.

Responsibilities: Design, develop, and maintain complex backend services and applications using Python/Bash/Groovy. Perform system administration tasks on Linux/Unix. Perform cloud-based development with services like AWS, Azure, or Google Cloud. Manage IaC tools like Terraform or CloudFormation. Familiarity with tools for monitoring and logging infrastructure. Understand security practices and tools such as HashiCorp Vault.

Skills - Must have: Overall, 7+ years of experience as a DevOps Engineer. Programming and scripting: proficiency in languages like Python, Bash, or Groovy for automation. CI/CD pipelines: understanding of continuous integration and continuous deployment processes, along with tools like Jenkins, GitHub/Git Actions, and Codefresh (see the YAML sketch after this description). Containerization: understanding of container technologies like Docker and orchestration tools like Kubernetes. Version control: experience with Git for managing source code. Configuration management: experience with formats like YAML and JSON.

Nice to have: Linux/UNIX fundamentals: understanding of Linux for system administration tasks. Cloud platforms: knowledge of cloud services like AWS, Azure, or Google Cloud. Infrastructure as Code (IaC): experience with IaC tools like Terraform or CloudFormation. Monitoring and logging: familiarity with tools for monitoring and logging infrastructure. Security: understanding of security practices and tools such as HashiCorp Vault.
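
For context on the YAML-based CI/CD knowledge this posting asks for, here is a minimal GitHub Actions workflow sketch. It is illustrative only and assumes a Python project with a requirements.txt and pytest suite; the workflow name, branch, and commands are assumptions, not part of the posting.

```yaml
# Hypothetical .github/workflows/ci.yml sketch (assumed Python project layout)
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # fetch the source managed in Git
      - uses: actions/setup-python@v5      # Python is one of the scripting languages the role lists
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run unit tests
        run: python -m pytest -q
```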

Posted 3 weeks ago

Apply

6.0 - 11.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Project description: Are you an enthusiastic technology professional? Are you excited about an enriching career working for a large Tier One bank? We are seeking a Scala engineer to join our development team within the bank for an exciting opportunity, building on top of existing technology to be implemented across other APAC regions. You will work on a new and challenging project to implement banking systems, helping to shape the future of the bank's business.

Responsibilities: Work closely with senior engineers to find the best possible technical solution for the project and the available requirements. Scala development to provide banking solutions. Experience using the Nexus repository software. Working against a ticketing system with different priorities. Reporting key metrics post go-live. Development of continuous-improvement themes such as automation and whitelisting. Improve the developer experience and make it easy to do the right thing. Challenge the team to follow best practices and eliminate process waste. Troubleshoot production/infrastructure issues. Be keen to expand current Scala/Java skills.

Skills - Must have: 6+ years building back-end systems. 4+ years developing in Scala or another functional programming language. Java, JS, and React. TDD. Distributed version control: Git or Mercurial. Strong written and verbal communication skills in English. Banking experience. Ability to work in a multicultural environment.

Nice to have: Actor systems (Akka), the HTTP stack, and building REST APIs. Functional programming with Cats or ScalaZ. ScalaTest and BDD. Continuous integration and deployment practices. Open-minded and able to quickly learn new technologies and paradigms. Kafka or another distributed messaging system. Distributed environments and multi-threading. Profiling and application tuning. Build tools: Gradle. Experience with YAML, JSON, and XML (XSD). Experience with the Unix shell and CLI tools. Search engines, e.g. Solr or Elasticsearch: "search" topics such as building queries, indexing, etc.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Project description: Our client is a global technology change and delivery organization comprising nearly 500 individuals located in Switzerland, Poland, Singapore, and India, providing global records and document processing, archiving, and retrieval solutions to all business divisions, with a focus on supporting Legal, Regulatory, and Operational functions.

Responsibilities: Design, implement, and manage data solutions on Azure. Develop and maintain data pipelines using Databricks. Ensure efficient data storage and retrieval using Azure Storage and Data Lake. Automate infrastructure and application deployments with Ansible. Write clean, maintainable code in C# core and SQL. Optimize code and applications for best performance. Use and promote state-of-the-art technologies, tools, and engineering practices. Collaborate with team members using Git and GitLab for version control and CI/CD. Share and contribute: support and guide less senior team members, contribute to team spirit and dynamic growth, and actively participate in the wider engineering group and product-wide activities.

Skills - Must have: 10+ years of software development experience in building and shipping production-grade software. Degree in Computer Science, Information Technology, or a related field. Proficient in deploying and managing services on Microsoft Azure. Understanding of Azure Storage concepts and best practices. Understanding of Microsoft Fabric concepts and best practices. Experience in designing, implementing, and managing data lakes and Databricks on Azure. Experience in using Ansible for automation and configuration management, with proficiency in YAML. Strong programming skills in C# core, SQL, MVC core/Blazor, JavaScript, HTML, CSS. Proficient in version control using Git and experience with GitLab for CI/CD pipelines. Strong cross-discipline and cross-group collaboration skills. Passion for delivering a high-quality, delightful user experience. Strong problem-solving, debugging, and troubleshooting skills. Ability to ramp up quickly on new technologies and adopt solutions from within the company or from the open-source community.

Nice to have: Experience in an Agile framework.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Project description: The Financial Markets Digital Channels team is driven to provide world-class technology to support the bank's Financial Markets business, working specifically on the bank's in-house-built pricing, execution, and trade processing platform. We bring a deep understanding of the domain, a scientific approach, and innovative solutions to bear on the challenges of best servicing our customers in a highly competitive environment. This is a rare opportunity to join an organization working with smart technologists globally in the financial markets domain. The culture in the team is open, intellectual, and fun. Learning opportunities are plentiful and career advancement is always waiting for high-energy talents willing and able to step up.

Responsibilities: Interact with product management, project management, and development teams to develop a strong understanding of the project and testing objectives. Coordinate between onshore and offshore teams. Participate in troubleshooting and triaging of issues with different teams to drive towards root-cause identification and resolution. Design and create test conditions, test data, and test scripts to address business and technical use cases. Use existing tools and techniques to execute test cases and build/script new tools for performing testing/validation functions. Develop and lead the automation strategy/effort and generate scripts to perform automated testing cycles using Selenium and Cucumber.

Skills - Must have: 6+ years of experience in software test design and testing methodologies. Solid knowledge of core Java, JavaScript, SQL, and shell scripting, and hands-on experience with the following automated build/testing tools: Cypress, Selenium, BDD/Cucumber, XML, Cypress API automation, Git/ADO Repos, ADO/Confluence, JMeter, ADO Boards/ADO Test Plans. Agile methodology exposure. Good exposure to database testing concepts. Familiarity with DevOps CI/CD (Jenkins/Groovy/YAML/ADO Pipelines); see the YAML sketch after this description. Good understanding of Java/JEE-stack-based applications. Integration technology expertise: API, FTP, Web Services, and Solace messaging. Banking or financial services industry experience.

Nice to have: -
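
As an illustration of the YAML/ADO pipeline familiarity this posting mentions, here is a minimal Azure DevOps pipeline sketch that runs a Cypress test suite. The trigger branch, Node version, scripts, and results path are assumptions for the example, not details from the posting.

```yaml
# Hypothetical azure-pipelines.yml sketch for an automated test stage
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0            # install Node.js for the Cypress test suite
    inputs:
      versionSpec: "20.x"
  - script: npm ci
    displayName: Install dependencies
  - script: npx cypress run
    displayName: Run Cypress tests
  - task: PublishTestResults@2  # surface JUnit-style results in ADO Test Plans
    inputs:
      testResultsFiles: "results/*.xml"
    condition: succeededOrFailed()
```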

Posted 3 weeks ago

Apply

13.0 - 18.0 years

14 - 19 Lacs

India, Bengaluru

Work from Office

Siemens Healthineers is a global leader in providing medical solutions that significantly enhance patient care and overall healthcare outcomes. We are actively seeking an exceptional and driven Software Architect to join our Magnetic Resonance team. As a Software Architect, you will play a pivotal role in shaping the architecture of the Magnetic Resonance platform and integrated software solutions.

Job Profile: Drive and guide the specification and implementation of a software (SW) architecture for a whole product or a subsystem across all its domain and technical aspects, and ensure the sustainability of the architecture vision. Technically conceptualize, design, and develop a whole product or a subsystem. Collaborate with different stakeholders to elaborate functional and non-functional requirements of the product and take decisions that are driven by a clear focus on the intended business and its associated requirements. Bring in maturity, vision, depth of domain and technical experience, and the ability to identify, analyze, and decide on relevant issues in time and with courage, even in the absence of complete information. Coach and motivate developers in your team over the entire lifetime of a project to enable, support, and enforce an appropriate implementation, maintenance, quality assurance, test, and evolution of the SW architecture, even in the face of challenging, changing, and evolving business cases, requirements, and realization technologies. Involve relevant stakeholders (project manager, product manager, test manager, etc.) in the decision-making process. From a business perspective, fully responsible for validating technology roadmaps, performing technology evaluations, and, if needed, driving make-or-buy decisions; shall provide support to evaluate suppliers and identify potential invention disclosures if needed. From a requirements-engineering perspective, fully responsible for deriving software requirements, defining external software interfaces, validating software requirements, and giving feasibility statements. From an architecting and design perspective, fully responsible for elaborating software architecture and design, design prototyping, documenting architecture and design rationale, establishing architecture traceability, and identifying and specifying interfaces; shall provide support for architecture management and monitoring of internal software quality and structural changes. From an implementation perspective, fully responsible for reviewing detailed software design and coaching for software development and deployment; shall provide support for the implementation of any critical component. From a testing and quality perspective, fully responsible for driving architecture quality; shall provide support for software integration sequence and concept, performing tests, and resolving defects. Ability to effectively communicate, influence, and interact with various stakeholders.

Desired Qualification and Experience: 13+ years of hands-on experience in design and development using multiple technologies, test frameworks, and programming languages, of which at least 3+ years working as a software architect. Proficient in .NET technologies; C++ knowledge is a bonus. Proficient in UML and UML modeling tools. Exposure to one or more source control tools, especially Git and TFVC. Strong fundamentals of Object-Oriented Analysis & Design (OOA/OOD). Strong understanding of and hands-on experience in handling NFRs. Sound knowledge of the software engineering process. Sound knowledge of requirements engineering. Experience in software estimation and scheduling. Good oral and written communication ability. Expertise in scripting with PowerShell and Bash. Hands-on experience with CI/CD pipeline tools and YAML knowledge. Familiarity with the healthcare domain is a plus. Familiarity with Agile practices is a plus. Ability to effectively communicate, influence, and interact with various stakeholders.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

14 - 19 Lacs

India, Bengaluru

Work from Office

Software architect responsible for the Syngo Via product in the Siemens Healthineers portfolio. Responsibilities include: architectural vision and/or technical concepts (concepts, interactions, dependencies, algorithms, technologies); operational and developmental qualities such as security, reliability, compatibility, portability, maintainability, and performance efficiency; architecture documentation including design decisions and reasoning at the corresponding level (system, etc.); architecture implementation (product code, test automation code, inline documentation, manuals); feasibility studies and prototypes; test concept and integration environment; project schedule; requirement clarification and specification; make-or-buy decisions; technical coaching of the implementation team; analysis of product risks and definition/implementation of risk mitigations.

Roles and responsibilities: Design and build scalable software systems and test frameworks. Manage two scrum teams with a total size of 10-12 team members and functionally guide them in design and development. Drive optimization around design, code, and test quality by working across the dev and test teams and driving best practices and approaches.

Qualification: Bachelor's/master's degree or equivalent experience in computer science, or comparable education with corresponding additional skills. Overall 10+ years of experience in software design and development, of which at least 1+ years as a software architect.

Knowledge & skills: Proficient in Microsoft .NET, C#, Visual Studio, NUnit or MS Unit, WCF/WPF, UML. Exposure to one or more source control tools, especially Git and TFVC. Strong fundamentals of Object-Oriented Analysis & Design (OOA/OOD). Strong understanding of and hands-on experience in handling NFRs. Sound knowledge of the software engineering process. Sound knowledge of requirements engineering. Experience in software estimation and scheduling. Good oral and written communication ability. Expertise in scripting with PowerShell and Bash. Hands-on experience with CI/CD pipeline tools and YAML knowledge. Familiarity with the healthcare domain is a plus. Familiarity with Agile practices is a plus. Ability to effectively communicate, influence, and interact with various stakeholders.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in DevSecOps at PwC will focus on minimising software threats by integrating development, operations, and security industry-leading practices in order to validate secure, consistent, and efficient delivery of software and applications. You will work to bridge the gap between these teams for seamless and secure application and software development.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and building awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills: Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self-awareness, enhance strengths, and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

We are seeking a skilled DevOps Engineer to join our team and play a critical role in managing and optimizing our Azure environments. The successful candidate will be responsible for developing and maintaining automated deployment scripts, managing Azure services, and ensuring the reliability and quality of software releases. This role requires close collaboration with development, QA, and operations teams to address operational issues and optimize processes. The ideal candidate will have 3-5 years or more of experience in application support and a strong foundation in cloud technologies, CI/CD pipelines, and troubleshooting techniques. You will work in coordination with US-based counterparts and will play a critical role in providing timely support, participating in knowledge transfer sessions, and ensuring continuity across shifts. The position also includes participation in an on-call rotation to provide support during off-hours when needed. We emphasize work-life balance and have a globally distributed support model to minimize overnight load.

Key Responsibilities:
Automation and scripting: Design and implement automation for continuous integration, delivery, and deployment. Develop and maintain automated deployment scripts and tooling to ensure reliable software releases on Azure. Be proficient in writing YAML, ARM, and Bicep templates for deploying infrastructure as code.
Azure services management: Configure and manage Azure services, including virtual machines, containers, databases, networking, and monitoring solutions. Optimize the performance, scalability, and cost efficiency of Azure resources.
Security and compliance: Ensure adherence to security and compliance requirements on Azure, such as Azure Policy. Implement security controls, secrets management, encryption, access management, and audit trails to protect sensitive data.
Deployment strategies: Collaborate with the development team to define deployment strategies, create release pipelines, and execute deployments. Collaborate with development teams to resolve build and release errors.
Collaboration and support: Work closely with development, Quality Assurance, and App Support teams to identify and address operational issues. Act as a bridge between different teams to foster collaboration and efficiency.
Project support: Provide technical assistance for pipeline failures, deployment errors, and disaster recovery incidents. Support application teams throughout the migration and post-migration phases.
Kubernetes expertise: Design, deploy, and maintain production Kubernetes clusters, ensuring high availability, scalability, and security. The ideal candidate has proven hands-on experience with container orchestration, cloud infrastructure, and CI/CD pipelines, and can demonstrate experience with monitoring and logging, cloud and Kubernetes networking configuration and troubleshooting, and resource management (see the Helm values sketch after this description).
SQL experience: Be able to run basic queries and know how to interpret a stored procedure. Understand database resiliency design features such as replication and geo-redundancy.
Datadog/App Insights experience: Understand how to read and interpret system performance metrics, identify performance issues (patterns), and troubleshoot monitors. Be able to build and manage monitors as systems evolve.

Qualifications: Proven experience in DevOps engineering, particularly with Azure environments. Hands-on experience deploying and troubleshooting Kubernetes clusters in production environments, ideally supported by a recognized certification such as the Certified Kubernetes Administrator (CKA). Hands-on experience with Helm for Kubernetes application packaging and deployment, with a strong understanding of Helm chart structure, templating, and lifecycle management, and the ability to troubleshoot and resolve issues related to Helm deployments and Kubernetes workloads. Proficiency in basic SQL scripting. Proficiency in PowerShell scripting. Strong skills in automation and scripting for CI/CD processes. Proficiency in YAML, ARM, and Bicep template creation and management. Experience with Azure services management and optimization. Proficiency in implementing monitoring and alerting solutions. Knowledge of security and compliance practices on Azure. Ability to manage competing priorities and deadlines. Proficiency with generative AI and prompt engineering. Excellent collaboration and communication skills.

Join us to leverage your DevOps expertise in a dynamic environment and contribute to the successful delivery of our Specialty portfolio projects.
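
To illustrate the Helm chart structure and templating knowledge this posting asks for, here is a minimal values.yaml sketch. The chart name, registry, image, and resource figures are assumptions for the example; chart templates would reference these values with expressions such as {{ .Values.image.repository }}.

```yaml
# Hypothetical values.yaml for a Helm chart of the kind this role manages
replicaCount: 2

image:
  repository: myregistry.azurecr.io/claims-api   # assumed ACR image, not from the posting
  tag: "1.4.2"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```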

Posted 3 weeks ago

Apply

20.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Business Information: Hitachi Energy India Development Centre (IDC) is a research and development facility with around 500 R&D engineers, specialists, and experts who focus on creating and sustaining digital solutions, new products, and technology. This includes product integration, testing, cybersecurity, and certification. The India Development Centre is situated in Chennai and Bangalore. IDC collaborates with the R&D and research centres of Hitachi Energy, which are spread across more than 15 locations in 12 countries. In the past 20 years, IDC has secured more than 200 international papers and 150+ patents.

Mission Statement: We are advancing the world's energy system to become more sustainable, flexible, and secure whilst balancing social, environmental, and economic value. Hitachi Energy has a proven record and an unparalleled installed base in more than 140 countries.

Your Responsibilities: Able to stay on scope and take responsibility for meeting milestones and deadlines. Proactive with suggestions for improvements; thinks laterally; receptive to new ideas. Work closely with a dynamic group of people in various time zones. You will work on improving our processes for continuous integration, continuous deployment, automated testing, and release management. You strive for the highest standards in terms of security. Developing, maintaining, and supporting Azure infrastructure and system software components. Using your understanding of the different Azure tech components to support developers with good advice on how to build solutions. Ownership of the overall architecture in Azure. Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design. Provide technical leadership for CI/CD process design, implementation, and process orchestration. Define and document best practices and strategies regarding application deployment and infrastructure maintenance. Monitor and report on compute/storage costs and forecasts. Manage and orchestrate deployment of a .NET microservices-based solution. Living Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.

Your Background: 3+ years of experience in Azure DevOps, CI/CD, configuration management, and test automation. 2+ years of experience in IaC, ARM, YAML, Azure PaaS, Azure Active Directory, Kubernetes, and Application Insights. 2+ years of experience in Bash. Solid experience working with a wide array of Azure components, including development in Azure with Function Apps, Web Apps, etc. Experience building and maintaining large-scale SaaS solutions. Wide database experience with SQL, PostgreSQL, NoSQL, and Redis databases. Experience in infrastructure-as-code automation (ARM, Terraform, or similar). Experience in application and system monitoring. Understanding of security, networking, virtualization, load balancers, storage, and databases. Understanding of security concepts and best practices and how to apply them, such as TLS/SSL and data encryption. Experience with Helm charts. Experience with docker-compose (see the sketch after this description). Expertise in building Docker-based services. Experience in Linux system management. Experience with logging and visualization tools such as the ELK stack, Prometheus, and Grafana. Experience in Azure Data Factory. Experience in at least one programming language (e.g., Python, C#). Experience with WAF. Experience with streaming data, e.g., Kafka. Knowledge of big data/analytics. Experience debugging complex, multi-server services in a high-availability production environment. Proficiency in both spoken and written English is required.

Hitachi Energy is a global technology leader that is advancing a sustainable energy future for all. We serve customers in the utility, industry, and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition towards a carbon-neutral future. We employ around 45,000 people in 90 countries who each day work with purpose and use their different backgrounds to challenge the status quo. We welcome you to apply today and be part of a global team that appreciates a simple truth: Diversity + Collaboration = Great Innovation.
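
As an illustration of the docker-compose experience the background section mentions, here is a minimal docker-compose.yml sketch for a small service with a database and cache. Service names, the build path, images, and credentials are assumptions for the example only.

```yaml
# Hypothetical docker-compose.yml sketch (assumed .NET service with Postgres and Redis)
services:
  api:
    build: ./src/Api              # assumed microservice project path
    ports:
      - "8080:8080"
    environment:
      ConnectionStrings__Main: "Host=db;Database=app;Username=app;Password=app"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  cache:
    image: redis:7
```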

Posted 3 weeks ago

Apply

6.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Job Summary: We are seeking an experienced Infra. Technology Specialist with 6 to 10 years of experience to join our dynamic team working with PCF. The ideal candidate will have expertise in YAML, Terraform, Nexus, Amazon Kubernetes Services, AWS services, shell scripting, Linux, Docker, and Pivotal Cloud Foundry. This role involves working in a hybrid model with rotational shifts, ensuring the smooth operation and management of our cloud infrastructure.

Responsibilities: Lead the design, implementation, and management of cloud infrastructure using AWS services. Oversee the deployment and management of containerized applications using Docker and Amazon Kubernetes Services. Provide expertise in writing and managing infrastructure as code using Terraform and YAML. Manage and maintain Nexus repositories for efficient artifact storage and retrieval. Develop and maintain shell scripts for automation of routine tasks and processes. Ensure the stability and security of Linux-based systems and environments. Implement and manage Pivotal Cloud Foundry (PCF) environments for scalable application deployment (see the manifest sketch after this description). Collaborate with cross-functional teams to ensure seamless integration and operation of cloud services. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Optimize cloud infrastructure for cost efficiency and performance. Document processes, configurations, and procedures for future reference and compliance. Stay updated with the latest industry trends and best practices in cloud technologies. Provide support during rotational shifts to ensure 24/7 availability of services.

Qualifications: Must have strong experience with YAML, Terraform, and Nexus. Must have expertise in Amazon Kubernetes Services and AWS services. Must have proficiency in shell scripting and Linux. Must have hands-on experience with Docker and Pivotal Cloud Foundry. Nice to have: experience with the PCF work model. Should possess excellent problem-solving and troubleshooting skills. Should have strong communication and collaboration abilities. Should be able to work effectively in a hybrid work model. Should be adaptable to rotational shifts. Should be committed to continuous learning and improvement. Should be detail-oriented and able to document processes accurately. Should have a proactive approach to identifying and addressing potential issues. Should have knowledge of and working experience with cloud services, ideally PaaS. Needs good knowledge of PCF administration; knowledge of Kubernetes is an added advantage. Manage PCF/open Cloud Foundry (upgrades, patching, and monitoring). Configure alerts in the PCF alert manager.
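
For context on the PCF application deployment work described above, here is a minimal Cloud Foundry application manifest sketch of the kind used with `cf push`. The application name, memory, buildpack, and bound service are assumptions for the example.

```yaml
# Hypothetical manifest.yml sketch for pushing an app to PCF
applications:
  - name: orders-service        # assumed app name
    memory: 1G
    instances: 2
    buildpacks:
      - java_buildpack
    env:
      SPRING_PROFILES_ACTIVE: prod
    services:
      - orders-db               # assumed pre-created service instance to bind
```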

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Thane, Maharashtra, India

On-site

DevOps Engineer - Kubernetes Specialist
Experience: 4-8 years | Salary: Competitive | Preferred Notice Period: Within 30 Days | Opportunity Type: Hybrid (Mumbai) | Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Kubernetes, CI/CD, Google Cloud

Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview: We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence.

Key Responsibilities & Required Skills

Kubernetes infrastructure & deployment - Responsibilities: Design, deploy, and manage production-grade Kubernetes clusters. Perform cluster upgrades, patching, and maintenance with minimal downtime. Deploy and manage multiple microservices with ingress controllers and networking. Configure storage solutions and persistent volumes for stateful applications. Required skills: 3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE). Strong experience with Google Cloud Platform (GCP) and GKE-specific features. Deep understanding of Docker, container orchestration, and GCP networking concepts. Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies.

CI/CD, monitoring & automation - Responsibilities: Design and implement robust CI/CD pipelines for Kubernetes deployments. Implement comprehensive monitoring, logging, and alerting solutions. Leverage AI tools and automation to improve team efficiency and task speed. Create dashboards and implement GitOps workflows. Required skills: Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms. Experience with Prometheus, Grafana, the ELK stack, or similar monitoring solutions. Knowledge of Infrastructure as Code tools (Terraform, Ansible). Familiarity with AI/ML tools for DevOps automation and efficiency improvements.

Cost optimization & application management - Responsibilities: Analyze and optimize resource utilization across Kubernetes workloads. Implement right-sizing strategies for services and batch jobs. Deploy and manage Java-based applications and MySQL databases. Configure horizontal/vertical pod autoscaling and resource management (see the autoscaling sketch after this description). Required skills: Experience with resource management, capacity planning, and cost optimization. Understanding of Java application deployment and MySQL database administration. Knowledge of database operators, StatefulSets, and backup/recovery solutions. Proficiency in scripting languages (Bash, Python, or Go).

Preferred Qualifications: Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build). Knowledge of GKE advanced features (Workload Identity, Binary Authorization, Config Connector). Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus. Knowledge of container security tools and chaos engineering. Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd). Familiarity with AI-powered monitoring and predictive analytics platforms.

Key Competencies: Strong problem-solving skills with an innovative mindset toward AI-driven solutions. Excellent communication and collaboration abilities. Ability to work in fast-paced, agile environments with attention to detail. Proactive approach to identifying issues using modern tools and AI assistance. Ability to mentor team members and promote AI adoption for team efficiency.

Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.

How to apply for this opportunity - easy 3-step process: 1. Click on Apply and register or log in on our portal. 2. Upload your updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Ripplehire is a recruitment SaaS for companies to identify the right candidates from employees' social networks and gamify the employee referral program with contests and referral bonuses to engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) Staff Pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is a tool available on a subscription-based pricing model.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
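
To illustrate the pod autoscaling configuration mentioned above, here is a minimal HorizontalPodAutoscaler manifest sketch. The workload name and thresholds are assumptions for the example; it would be applied with `kubectl apply -f hpa.yaml` against an existing Deployment.

```yaml
# Hypothetical HPA sketch targeting an assumed Deployment named web-api
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU use exceeds 70%
```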

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job description
Role: DevOps Specialist
Experience: 7+ years
Location: Hyderabad

Primary Skills: Strong experience with Docker and Kubernetes for container orchestration. Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows (see the manifest sketch after this description). Experience in microservices-based architecture design. Understanding of the SDLC, including CI and CD pipeline architecture. Experience with configuration management (Ansible). Experience with infrastructure as code (Terraform/Pulumi/CloudFormation). Experience with Git and version control systems.

Secondary Skills: Experience with CI/CD pipelines using Jenkins, AWS CodePipeline, or GitHub Actions. Experience with building and maintaining dev, staging, and production environments. Familiarity with scripting languages (e.g., Python, Bash) for automation. Monitoring and logging tools such as Prometheus and Grafana. Knowledge of Agile and DevOps methodologies. Incident management and root cause analysis. Excellent problem-solving and analytical skills. Excellent communication skills.

Mandatory work from office; 24x5 support.
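
For context on the YAML manifests this role maintains, here is a minimal Deployment plus Service sketch. The application name, image, replica count, and ports are assumptions for the example.

```yaml
# Hypothetical Kubernetes manifest sketch (names and image are assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.0.0   # assumed image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api
spec:
  selector:
    app: payments-api      # routes traffic to the pods labelled above
  ports:
    - port: 80
      targetPort: 8080
```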

Posted 3 weeks ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

🚀 Hiring: Lead DevOps Engineer – Azure + AWS 📍 Trivandrum / Kochi (Work from Office) 🕐 Immediate Joiners Only | 💰 CTC: Up to ₹20 LPA 📅 Experience: 6+ yrs total (Azure: 3+ yrs, AWS: 1+ yr) 🔧 Key Requirements: Azure DevOps (3+ yrs) & AWS (1+ yr – EC2, IAM, VPC, S3) CI/CD: Azure DevOps, GitLab/GitHub Actions IaC: Terraform, Ansible, ARM templates Docker (mandatory), Kubernetes (plus) Linux Admin, Shell/YAML scripting DevSecOps tools (SonarQube, Snyk – plus) Prior lead/team mentoring experience 👀 What We’re Looking For: We need a technically strong and leadership-ready DevOps Engineer who can work across both Azure and AWS environments, automate workflows, and lead infrastructure initiatives for enterprise-grade systems. #DevOpsJobs #AzureDevOps #AWS #ImmediateJoiners #HiringNow #KochiJobs #TrivandrumJobs #Terraform #Docker #CI_CD #LeadDevOps #Automation #Linux

Posted 3 weeks ago

Apply

6.0 years

18 - 22 Lacs

Kochi, Kerala, India

On-site

🚀 Hiring Now: Lead DevOps Engineer – Azure + AWS 📍 Location: Trivandrum / Kochi (Work from Office) 🕐 Notice Period: Immediate Joiners Only 💰 CTC: Up to ₹20 LPA 📅 Experience: 6+ years total (Relevant: Azure DevOps – 4–5 years, AWS – 1+ year) 🔧 Key Requirements ✅ 4–5 years of hands-on experience in Azure DevOps ✅ Minimum 1 year of working experience in AWS (EC2, IAM, VPC, S3, etc.) ✅ Strong expertise in CI/CD pipelines (Azure DevOps, GitLab/GitHub Actions) ✅ Good experience with IaC tools – Terraform, Ansible, ARM templates ✅ Docker experience is a must; Kubernetes is a plus ✅ Strong background in Linux system administration ✅ Hands-on with monitoring, automation, and scripting (Shell, YAML) ✅ Exposure to DevSecOps tools (SonarQube, Snyk, etc.) is a plus ✅ Prior experience in a lead role – mentoring or team ownership 👀 What We’re Looking For We need a technically strong and leadership-ready DevOps Engineer who can work across both Azure and AWS environments, automate workflows, and lead infrastructure initiatives for enterprise-grade systems. 💬 Interested? Drop your CV at aswathy@velodata.in 🔁 Please reshare or tag someone you know Skills: aws,docker,arm templates,ci/cd pipelines,ansible,devsecops tools,azure,cd,azure devops,yaml scripting,ci,kubernetes,linux system administration,terraform,shell scripting,devops

Posted 3 weeks ago

Apply

6.0 years

18 - 22 Lacs

Thiruvananthapuram Taluk, India

On-site

🚀 Hiring Now: Lead DevOps Engineer – Azure + AWS 📍 Location: Trivandrum / Kochi (Work from Office) 🕐 Notice Period: Immediate Joiners Only 💰 CTC: Up to ₹20 LPA 📅 Experience: 6+ years total (Relevant: Azure DevOps – 4–5 years, AWS – 1+ year) 🔧 Key Requirements ✅ 4–5 years of hands-on experience in Azure DevOps ✅ Minimum 1 year of working experience in AWS (EC2, IAM, VPC, S3, etc.) ✅ Strong expertise in CI/CD pipelines (Azure DevOps, GitLab/GitHub Actions) ✅ Good experience with IaC tools – Terraform, Ansible, ARM templates ✅ Docker experience is a must; Kubernetes is a plus ✅ Strong background in Linux system administration ✅ Hands-on with monitoring, automation, and scripting (Shell, YAML) ✅ Exposure to DevSecOps tools (SonarQube, Snyk, etc.) is a plus ✅ Prior experience in a lead role – mentoring or team ownership 👀 What We’re Looking For We need a technically strong and leadership-ready DevOps Engineer who can work across both Azure and AWS environments, automate workflows, and lead infrastructure initiatives for enterprise-grade systems. 💬 Interested? Drop your CV at aswathy@velodata.in 🔁 Please reshare or tag someone you know Skills: aws,docker,arm templates,ci/cd pipelines,ansible,devsecops tools,azure,cd,azure devops,yaml scripting,ci,kubernetes,linux system administration,terraform,shell scripting,devops

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Why Verifone

What you'll do: Provide subject matter expertise in the container platform to set up CI/CD pipelines. Deploy multi-tier applications on Kubernetes (K8s) using Jenkinsfiles and YAML files.

What qualifications will you need to be successful: Proficient with deploying multi-tier applications on Kubernetes/OpenShift using Jenkinsfiles and YAML files. Must be proficient in writing Kubernetes object YAML files, e.g. Service, Pod, Container, Route, Storage, and Volume template files, as well as Dockerfiles and Ansible playbooks (see the example after this description). Hands-on working knowledge of Rancher, OpenShift, or a similar Kubernetes platform, plus Ansible and Jenkins. Worked on at least one development project as a developer (e.g. Java, .NET, Angular). Completed at least one customer project implementation on a Kubernetes platform. Working knowledge of Bitbucket, Ansible, Artifactory, Groovy scripting, RHEL Linux, and networking fundamentals. Good to have TestOps knowledge. At least one year of working experience in any RDBMS, with knowledge of understanding and writing SQL queries and using Postman/SOAP requests. Configuring and maintaining certificates as needed. Managing dev and test environments and regular security patching and scans. Troubleshooting and problem solving across platform and application domains. Setting up pipelines and supporting supplementary software, e.g. Jenkins. Good knowledge of SonarQube.
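
As an illustration of the Kubernetes object YAML this posting asks for (Pod plus storage), here is a minimal sketch of a PersistentVolumeClaim and a Pod that mounts it. The names, image, and storage size are assumptions for the example.

```yaml
# Hypothetical PVC + Pod sketch (names, image, and size are assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # serve content from the claim
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```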

Posted 3 weeks ago

Apply

15.0 - 20.0 years

1 - 5 Lacs

Gurugram

Work from Office

About The Role
Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 / basic and intermediate level troubleshooting.
Must have skills: Splunk Enterprise Architecture and Design
Good to have skills: Python (programming language), Microsoft Azure DevOps, Ansible on Microsoft Azure
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: The individual should have proven experience of more than 5 years with Splunk and more than 8 years of experience overall; a strong ability to understand business requirements and convert them into solution designs; a quality orientation; a commitment to delivery; a pragmatic, solution-oriented approach; and, most importantly, be a team player with great communication skills.

Key Responsibilities: Strong automation experience with YAML/Python and build & release pipelines (see the playbook sketch after this description). Plan and strategize new, optimized solutions. Should have an SRE mindset, eliminating toil and targeting automation for any manual, repeatable task. Strong architecture knowledge (distributed search, index lifecycle, cluster bundles, managing apps, rolling operations, bucket management). Multi-site clustering (both search head and indexer stacks) with replication strategies. Index management. Summary indexes, DM acceleration, and scheduled searches. Maintain jobs and their performance. Splunk install/upgrade/uninstall. Create and develop monitoring jobs for alerts. Troubleshoot issues/errors via different logs. SPL (tune real-time searches). DMC and DB Connect exposure and knowledge. Maintain and develop documents (SOPs) for the whole platform.

Technical Experience: Strong experience working with YAML/Ansible for automation. Comfortable working with the UNIX platform. Basic networking skills. Strong Azure DevOps skills. Support a 24x7 environment with rotational shifts. Experience working in an Azure cloud environment. Strong skills in core Splunk Enterprise administration (Architect certification preferable). Skills/exposure to containerization technology, Kubernetes, AKS, etc. Skills/exposure to Observability (OTel).

Professional Attributes: Proven experience of more than 5 years with Splunk and more than 8 years overall; a strong ability to understand business requirements and convert them into solution designs; quality oriented; committed to delivery, pragmatic, and solution oriented; and, most importantly, a team player with great communication skills.

Educational Qualifications: Graduate / full-time degree; 15 years of full-time education.
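
To illustrate the YAML/Ansible automation this role describes, here is a minimal Ansible playbook sketch that pushes an app configuration to a group of Splunk forwarders and restarts the service. The inventory group, paths, file ownership, and service name are assumptions for the example, not a documented Splunk procedure.

```yaml
# Hypothetical playbook.yml sketch (host group, paths, and service name are assumed)
- name: Roll out a Splunk app configuration to forwarders
  hosts: splunk_forwarders
  become: true
  tasks:
    - name: Copy app configuration to the forwarder apps directory
      ansible.builtin.copy:
        src: files/myapp/
        dest: /opt/splunkforwarder/etc/apps/myapp/
        owner: splunk
        group: splunk
      notify: restart splunk
  handlers:
    - name: restart splunk
      ansible.builtin.service:
        name: SplunkForwarder      # assumed service name registered via boot-start
        state: restarted
```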

Posted 3 weeks ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Mumbai

Work from Office

Application Production Support - BNPP Cardif Taiwan

The Application Production Support role guarantees application production from both a technical and a functional standpoint. He/she ensures that the service is maintained in operational condition: that processes run smoothly (batch processing, transfers, etc.) and that the application functions correctly in the broad sense (relevant business data, functional results, correct execution of application workflows, etc.). In a context of accelerating change, he/she manages the various releases within his/her scope. He/she intervenes not only when incidents occur, to restore the service, but also preventively, using data analysis and automation to improve service quality and the user experience. He/she acts as an expert with ITOps, particularly on production and infrastructure requirements.

Responsibilities

Direct Responsibilities
Providing technical and functional support to users:
- Managing incidents within the scope of reporting
- Guaranteeing the eradication of problems
- Ensuring proper management of application access by following the recommendations applicable to the scope
Implementing monitoring and supervising the applications' technical and functional components:
- Implementing the system and parameters used to monitor the scope
- Monitoring the smooth execution of processing according to the production plan
- Checking the functional and technical quality of the interfaces/transfers between applications
- Monitoring the condition of the applications' technical components
- Checking the proper running of application backups and restoration
- Orchestrating and managing the various releases within the scope
- Creating and enriching the application document framework; maintaining operating procedures
- Measuring, analysing and improving the performance of the applications' technical and functional components in order to improve the user experience
- Carrying out the technical integration of applications based on existing products and services in order to reduce implementation times

Contributing Responsibilities
- Contributing to the processes belonging to the activity, observing the requirements and carrying out the necessary remediation
- Contributing to the construction of production dashboards and undertaking the actions defined for improving the service
- Contributing to continuous improvement actions

Technical & Behavioral Competencies

Primary skill sets (mandatory):
- Proficient in ITIL operations methodology, with strong analytical skills and proven work experience
- Operational excellence on Red Hat and Windows servers
- Vulnerability remediation for Linux and Windows servers
- Knowledge of configuration management, monitoring and continuous integration tools on UNIX and Windows platforms; Ansible tools setup
- Strong English communication skills

Secondary skill sets (good to have):
- Java middleware / Apache web server
- Container orchestration and deployment of new applications with Docker and Kubernetes clusters, using Deployments, Services, StatefulSets, DaemonSets, etc. (a minimal manifest sketch follows this posting)
- Automation of deployment, customization, upgrades and monitoring through DevOps tools
- Experience providing 24x7 support for critical servers: availability, performance, monitoring, incident response, preparation, change management and capacity management
- Experience working with the ServiceNow ticketing tool
- Self-starter with a proactive approach and a problem-solving mindset, capable of working on multiple projects simultaneously and meeting tight deadlines
- Japanese proficiency would be appreciated

Proven working experience with the technologies below:
- Scripting: YAML, Bash, PowerShell
- Monitoring: Dynatrace

Skills Referential
- Behavioural skills: Ability to collaborate / Teamwork; Resilience; Organizational skills; Client focused
- Transversal skills: Ability to develop and adapt a process; Analytical ability; Ability to set up relevant performance indicators; Ability to manage a project; Ability to inspire others and generate people's commitment
- Education level: Bachelor's Degree or equivalent
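A minimal sketch of the kind of Kubernetes Deployment manifest referenced in the secondary skills above, assuming a hypothetical application name, image and health-check endpoint (none taken from the posting):

```yaml
# Minimal Deployment sketch; names, image and probe endpoint are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cardif-batch-worker
  labels:
    app: cardif-batch-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cardif-batch-worker
  template:
    metadata:
      labels:
        app: cardif-batch-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/cardif/batch-worker:1.0.0   # assumed image
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          livenessProbe:                 # basic self-healing: restart the container on failed checks
            httpGet:
              path: /healthz             # assumed health endpoint
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 15
```

Applying it with kubectl apply -f deployment.yaml keeps two replicas running and restarts any container whose health check fails; StatefulSets and DaemonSets follow the same template structure.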

Posted 3 weeks ago

Apply

7.0 - 12.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Principal IT/OT Security Architect - Manufacturing Sector
Location: Bangalore
Division: Industrial Security, Automation & Network Modernization
Level: Principal Engineer

Secure What Moves the World, and Automate It
America's manufacturing sector isn't just under pressure from cyber threats; it's being pushed to modernize faster than ever. At Surya Technologies, we're not just defending factories. We're helping them transform, using automation, intelligent observability, and solutions-as-code to bring repeatability and resilience to industrial environments. We're hiring a Principal IT/OT Security Architect to lead the charge in converging cybersecurity, networking, automation, and compliance across mid-market and enterprise manufacturers in the Southeast. This isn't a clipboard-and-checklist kind of role. You'll build solutions that scale: powered by code, infused with AI-driven observability, and designed to run on the plant floor with the same reliability as a CNC machine.

What You'll Do
- Architect converged IT/OT security solutions that are scalable, secure, and automation-ready
- Use infrastructure-as-code (IaC) principles to define network, sensor, and control plane configurations that are versioned, peer-reviewed, and repeatable
- Integrate AI-powered observability platforms like Datadog or Defender for IoT for anomaly detection and visibility
- Lead zero trust architecture projects for OT, combining microsegmentation, NAC, and managed detection
- Build automated compliance frameworks for CMMC, NIST 800-171, and IEC 62443, transforming audits from nightmares into workflows (a minimal compliance-as-code sketch follows this posting)
- Develop standard deployment patterns that enable your work to scale across 50+ factories
- Partner with Surya's automation and platform engineering teams to build tools, APIs, and templates that replace manual configuration
- Represent Surya in technical conversations with CIOs, plant engineers, and auditors, helping each group understand the mission in their own language
- Serve as the technical backbone of the manufacturing practice, mentoring future engineers and shaping go-to-market solutions

What You Bring
- 7+ years in cybersecurity, industrial networking, or automation architecture
- Experience designing and deploying secure industrial networks: VLANs, firewalls, switches, segmented zones
- Proficiency in tools like Claroty, Defender for IoT, Nozomi, and Tenable.ot
- Comfortable writing and reviewing YAML, Terraform, or Ansible playbooks for infrastructure or security automation
- Familiar with using AI/ML tools or anomaly detection engines in monitoring pipelines
- Strong understanding of compliance frameworks (CMMC, NIST, ISO, IEC 62443) and how to translate them into codified technical controls
- Passion for turning security into a platform, not just a service; if you've ever turned an SOW into a repo, you belong here
- Comfortable in a hard hat and a hoodie: you can talk to both an OT technician and a CISO

Why This Role Matters
Manufacturing is entering its most vulnerable, and most transformative, decade. AI, automation, and security are colliding on the factory floor, and someone has to build the playbook for how it all works together. That someone is you. This role is your opportunity to be the architect behind secure, smart, and scalable factories, not just to protect the status quo. You'll design systems once and deploy them dozens of times, with repeatability, resilience, and intelligence baked in from day one.

Why Surya
Surya is a next-generation managed services firm built for industrial modernization. We help manufacturers go from legacy to leading edge, combining cloud platforms, security, observability, and AI-driven automation. We're growing rapidly in the Southeast and building a team of the best technical minds in the region. You won't be buried in bureaucracy here; you'll be building the standard others follow.

Join Us
If you've ever said, "There has to be a better way to secure and scale factory environments," you're right. Now help us build it. Apply now, and lead the future of manufacturing security, one intelligent deployment at a time.
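The compliance-as-code idea above can be made concrete with a small example. A minimal sketch, assuming a hypothetical inventory group and a generic mapping to an access-control requirement (the posting names YAML, Terraform and Ansible but no specific playbook):

```yaml
# Minimal compliance-as-code sketch: enforce one SSH hardening item so an audit
# becomes a re-runnable workflow. Host group and control mapping are illustrative assumptions.
- name: Enforce SSH hardening baseline on plant servers
  hosts: ot_dmz_servers                  # hypothetical inventory group
  become: true
  tasks:
    - name: Disable password-based SSH logins (maps to an access-control requirement)
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Because the play is idempotent, re-running it doubles as evidence gathering: a run that reports no changes indicates the control is still in place.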

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mulshi, Maharashtra, India

On-site

Summary / Role Purpose
The R&D DevOps Engineer II will support the design, implementation, and maintenance of all phases of software build management and supporting systems for the Meshing Development Unit. This role works closely with distributed, cross-functional teams developing next-generation engineering software products, and creates and maintains build scripts and integrates open-source and third-party tools. The DevOps Engineer II uses automation to minimize manual intervention and enhance system stability and reliability, and applies advanced technical and problem-solving skills to help the team tackle complex issues, satisfy customer requirements, and accomplish development objectives.

Key Duties And Responsibilities
- Perform DevOps activities, including the maintenance, monitoring, documenting, and testing of product builds and packaging, to provide quality production builds
- Configure and maintain tools for generating, deploying and monitoring ANSYS product builds on Windows and Linux platforms within cloud and on-premises hardware infrastructure
- Develop, implement, and maintain fully automated build chains using Continuous Integration and Continuous Delivery (CI/CD) tools (a minimal pipeline sketch follows this posting)
- Investigate, debug, and resolve platform-specific build failures and issues in development, testing and production environments to maintain high system reliability
- Collaborate with members of the software development, infrastructure and testing teams to brainstorm the best techniques to reduce and resolve complex infrastructure, build or packaging problems
- Measure and monitor metrics and alarms extensively to ensure the performance and reliability of systems
- Execute acceptance tests to ensure product build stability and conformance to company quality standards

Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or a related field
- 3 to 5 years of work experience in DevOps
- Experience building C/C++ programs on Linux or Windows operating systems
- Experience with build systems including CMake and SCons
- Experience with build project configuration and dependency management
- Experience with programming languages such as C/C++, C#, Fortran, Java
- Experience with IDEs such as Microsoft Visual Studio and compiler suites such as Intel and GNU
- Experience with scripting languages such as Python, JavaScript, Windows batch and Linux shell scripts
- Experience with source code version control systems such as Git
- Experience with Azure DevOps for managing source code repositories, CI/CD pipelines and agents
- Experience with pipeline configuration languages like YAML
- Passion for crafting robust and efficient automated build systems, with exceptional debugging and troubleshooting skills
- Very strong written and interpersonal communication skills

Preferred Qualifications And Skills
- Master's degree in Computer Science, Computer Engineering or a related field
- Experience working with open-source software, software development tools, compilers, and packaging software
- Experience working with GitHub
- Experience with dependency management software such as Conan and NuGet
- Experience with virtualization and cloud technology such as Docker and container orchestration tools like Kubernetes
- Knowledge of cloud security
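A minimal sketch of the YAML pipeline configuration this role calls for, assuming a hosted Ubuntu agent and a CMake-based project layout (the posting names Azure DevOps, CMake and YAML, but no specific project structure):

```yaml
# Minimal azure-pipelines.yml sketch for a CMake-based C++ build.
# Trigger branch, agent image and paths are illustrative assumptions.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest                 # hosted agent; a self-hosted pool could be used instead

steps:
  - checkout: self

  - script: |
      cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
      cmake --build build --parallel
    displayName: Configure and build with CMake

  - script: ctest --test-dir build --output-on-failure
    displayName: Run unit tests

  - publish: build/                      # keep the build output as a pipeline artifact
    artifact: build-output
```

A second job with a Windows image and an MSVC configure step could be added to cover both platforms named in the posting.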

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Qualifications
- 2+ years of experience
- B.E., B.Tech, M.Tech, MCA, M.Sc. or equivalent
- Telecom & monitoring domain knowledge (preferred)

Technical Skills
- Technologies required: Angular, Node.js, Perl, Postgres
- Strong knowledge of JavaScript
- Good understanding and hands-on experience of Node.js and Perl
- Good knowledge of the frontend stack with Angular
- Ability to build efficient, testable, and reusable modules with Node.js
- Database knowledge, including writing queries, joins, etc.
- Hands-on experience with unit testing frameworks such as Jest
- Hands-on experience with DevOps supporting tools: Git, K8s, Docker, YAML, etc. (a minimal stack sketch follows this posting)
- Agile & Scrum knowledge
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork abilities

Key Responsibilities
- Prioritize and manage workload and issues effectively, ensuring critical issues are addressed promptly
- Analyze complex technical problems, troubleshoot root causes, and implement effective solutions to prevent recurrence
- Collaborate with cross-functional teams, including developers, system administrators, and project managers, to escalate and resolve issues efficiently
- Troubleshoot and debug software problems
- Design, develop, and maintain web applications using Angular, Perl, and Node.js
- Manage and optimize PostgreSQL databases, ensuring data integrity and performance
- Support and improve deployment workflows using Git, Docker, and Kubernetes
- Write clean, efficient, and well-documented code following best practices
- Contribute to technical discussions and code reviews
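A minimal sketch of how the listed stack could be wired together for local development, assuming hypothetical service directories, ports and credentials (none of which come from the posting):

```yaml
# Minimal Docker Compose sketch for an Angular front end, Node.js API and Postgres database.
# Build contexts, ports and credentials are illustrative assumptions for local use only.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me       # placeholder secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  api:
    build: ./api                         # assumed Node.js service directory
    environment:
      DATABASE_URL: postgres://app:change-me@db:5432/appdb
    ports:
      - "3000:3000"
    depends_on:
      - db

  web:
    build: ./web                         # assumed Angular app directory
    ports:
      - "4200:80"
    depends_on:
      - api

volumes:
  pgdata:
```

docker compose up --build starts all three services, with the API reaching Postgres by service name over the internal network.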

Posted 3 weeks ago

Apply

0 years

3 - 8 Lacs

Hyderābād

On-site

Job Description:
- Ability to write Kubernetes YAML files from scratch to manage infrastructure on EKS (a minimal manifest sketch follows this posting)
- Experience writing Jenkins pipelines, both to set up new pipelines and to extend existing ones
- Create Docker images for new applications (e.g., Java, Node.js)
- Ability to set up backups for storage services on AWS and EKS
- Set up Splunk log aggregation for all existing applications
- Set up integration of our EKS, Lambda and CloudWatch with Grafana, Splunk, etc.
- Manage and set up DevOps/SRE tools independently for the existing stack and review them with the core engineering teams
- Independently manage the work stream for new DevOps and SRE features, with minimal day-to-day oversight of tasks and activities
- Deploy and leverage existing public-domain Helm charts for repetitive work and orchestration, and write Terraform/Pulumi code

Site Reliability Engineer (SRE) - Cloud Infrastructure & Data
- Ensure reliable, scalable and secure cloud-based data infrastructure
- Design, implement and maintain AWS infrastructure with a focus on data products
- Automate infrastructure management using Pulumi, Terraform and policy as code
- Monitor system health, optimize performance and manage Kubernetes (EKS) clusters
- Implement security measures, ensure compliance and mitigate risks
- Collaborate with development teams on deployment and operation of data applications
- Optimize data pipelines for efficiency and cost effectiveness
- Troubleshoot issues, participate in incident response and drive continuous improvement
- Experience with Kubernetes administration, data pipelines, and monitoring and observability tools
- In-depth coding and debugging skills in Python and Unix scripting
- Excellent communication and problem-solving skills
- Self-driven, highly motivated and able to work both independently and within a team
- Able to operate optimally in a fast-paced development environment with dynamic changes, tight deadlines and limited resources

Key Responsibilities:
- Set up sensible permission defaults for seamless access management of cloud resources, using services such as AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
- Understanding of best practices for security, access management, hybrid cloud, etc.

Technical Requirements:
- Should be able to write Bash scripts to monitor existing running infrastructure and report out
- Should be able to extend existing IaC code in Pulumi (TypeScript)
- Ability to debug and fix Kubernetes deployment failures, network connectivity, ingress and volume issues, etc. with kubectl
- Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
- Knowledge of monitoring tools like Splunk, CloudWatch and the Kubernetes dashboard, and of creating dashboards and alerts when and where needed
- Knowledge of AWS VPC, subnetting, ALB/NLB, egress/ingress
- Knowledge of disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc.
- AWS Backup, Amazon S3, Systems Manager

Additional Responsibilities:
- Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
- Templatise IaC with Pulumi and Terraform, using advanced techniques for modularisation
- Extend existing Helm charts for repetitive work and orchestration, and write Terraform/Pulumi code
- Use Ansible, Chef, etc. for complicated manual infrastructure setup
- Certifications: AWS Certified Advanced Networking - Specialty; AWS Certified DevOps Engineer - Professional (DOP-C02)

Preferred Skills:
- Technology -> Cloud Platform -> Amazon Web Services
- DevOps -> AWS DevOps
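A minimal sketch of a from-scratch Kubernetes manifest of the kind this role describes: a CronJob on EKS that exports a DynamoDB table to S3 on a schedule. The namespace, service account, image tag, table ARN and bucket are illustrative assumptions, not values from the posting.

```yaml
# Minimal CronJob sketch for a scheduled backup task on EKS.
# All names, ARNs and the image tag are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dynamodb-export
  namespace: platform-ops                      # hypothetical namespace
spec:
  schedule: "0 2 * * *"                        # daily at 02:00 UTC
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-runner    # assumed IRSA-enabled service account
          restartPolicy: OnFailure
          containers:
            - name: export
              image: public.ecr.aws/aws-cli/aws-cli:2.15.0        # assumed AWS CLI image tag
              args:                            # the image entrypoint is the aws CLI
                - dynamodb
                - export-table-to-point-in-time
                - --table-arn
                - arn:aws:dynamodb:us-east-1:111111111111:table/orders   # placeholder ARN
                - --s3-bucket
                - example-backup-bucket                                  # placeholder bucket
```

The same pattern (a small, versioned manifest plus an IAM-scoped service account) extends to the other backup and monitoring tasks in the description.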

Posted 3 weeks ago

Apply

2.0 - 5.0 years

10 - 15 Lacs

Gurugram

Work from Office

Qualifications:
- 2+ years of experience
- B.E., B.Tech, M.Tech, MCA, M.Sc. or equivalent
- Telecom & monitoring domain knowledge (preferred)

Technical Skills:
- Technologies required: Angular, Node.js, Perl, Postgres
- Strong knowledge of JavaScript
- Good understanding and hands-on experience of Node.js and Perl
- Good knowledge of the frontend stack with Angular
- Ability to build efficient, testable, and reusable modules with Node.js
- Database knowledge, including writing queries, joins, etc.
- Hands-on experience with unit testing frameworks such as Jest
- Hands-on experience with DevOps supporting tools: Git, K8s, Docker, YAML, etc.
- Agile & Scrum knowledge
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork abilities

Key Responsibilities:
- Prioritize and manage workload and issues effectively, ensuring critical issues are addressed promptly
- Analyze complex technical problems, troubleshoot root causes, and implement effective solutions to prevent recurrence
- Collaborate with cross-functional teams, including developers, system administrators, and project managers, to escalate and resolve issues efficiently
- Troubleshoot and debug software problems
- Design, develop, and maintain web applications using Angular, Perl, and Node.js
- Manage and optimize PostgreSQL databases, ensuring data integrity and performance
- Support and improve deployment workflows using Git, Docker, and Kubernetes
- Write clean, efficient, and well-documented code following best practices
- Contribute to technical discussions and code reviews

Posted 3 weeks ago

Apply