
1299 YAML Jobs - Page 7

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

15 - 20 Lacs

Kochi, Bengaluru

Work from Office

DevOps Engineer: Azure Stack (5+ Years Experience)
We are seeking a skilled DevOps Engineer with 5+ years of experience and strong expertise in Microsoft Azure and Azure Stack. The ideal candidate will be responsible for building, automating, and optimizing deployments in Azure environments, with a focus on scalability, security, and high availability.
Key Responsibilities:
• Design, implement, and maintain CI/CD pipelines using Azure DevOps
• Build and manage infrastructure as code (IaC) using ARM templates, Terraform, or Bicep
• Automate provisioning, deployment, and configuration in Azure and Azure Stack Hub
• Manage Azure resources: App Services, VNets, NSGs, Storage Accounts, Key Vault, etc.
• Monitor and troubleshoot deployments using Azure Monitor, Log Analytics, and Application Insights
• Implement security and compliance best practices in Azure pipelines and infrastructure
• Collaborate with development and operations teams to streamline release processes
• Automate with Terraform/Ansible
Required Skills:
• Strong hands-on experience with Azure DevOps, YAML pipelines, and Git
• Deep understanding of Azure services and Azure Stack architecture
• Proficiency in PowerShell, Bash, or Python for automation
• Experience with containerization and orchestration using Docker and AKS
• Familiarity with monitoring and logging tools within the Azure ecosystem
• Solid grasp of networking, firewall rules, identity/access management, and RBAC in Azure
Preferred:
• Exposure to hybrid cloud environments and on-premises integration via Azure Stack
• Experience implementing DevSecOps practices
• Azure certifications (e.g., AZ-400, AZ-104, or AZ-305) are a plus
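For context on the Azure DevOps YAML pipelines this role mentions, here is a minimal multi-stage pipeline sketch; the service connection, resource group, and job names are hypothetical placeholders, not anything from the posting.

```yaml
# azure-pipelines.yml - minimal multi-stage sketch (names below are placeholders)
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: echo "run build and unit tests here"
            displayName: Build and test

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployInfra
        steps:
          - task: AzureCLI@2
            displayName: Provision resource group
            inputs:
              azureSubscription: my-service-connection   # hypothetical service connection
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: |
                az group create --name rg-demo --location westeurope
```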

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Req ID: 333751
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Java Microservices Developer to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).
Role Title: Java Developer
Summary of Role: We are looking for a highly skilled Core Java developer who will write clean, reusable, modular, and maintainable code that is easy to understand and easy to change. In-depth knowledge of data structures and algorithms is required, so that you can apply them better in day-to-day tasks and know which one to choose over the other, especially when using frameworks like the collections framework. Java Developers need to compile detailed technical documentation and user assistance material, requiring excellent written communication.
What you will do: Design and development of RESTful APIs and Microservices. Collaborate with software and production engineers to design scalable services, plan feature roll-out, and ensure high reliability and performance of our products. Conduct code reviews, develop high-quality documentation, and build robust test suites for your products. Design and build complex systems that can scale rapidly with little maintenance. Design and implement effective service/product interfaces. Develop complex data models using common patterns like EAV, normal forms, append-only, event-sourced, or graphs. Able to lead and successfully complete software projects without major guidance from a manager/lead. Provide technical support for many applications within the technology portfolio. Respond to and troubleshoot complex problems quickly, efficiently, and effectively. Handle multiple competing priorities in an agile, fast-paced environment. Should be able to work with Scrum and other Agile processes. Communication, group dynamics, collaboration and continuous improvement are core – being best practice driven. Pro-active, quick learner, with the ability to work effectively in multi-cultural and multi-national teams. Positive attitude and ability to engage with different stakeholders, managing scope and expectations skilfully.
Experience and Skills Required: Mandatory 5+ years of hands-on experience in design and development of RESTful APIs and Microservices using technology stack: Java/J2EE, Spring framework, Spring Batch, AWS Elastic Kubernetes Service (EKS), RDS Oracle DB, Apigee/API Gateway. 5+ years of experience and expertise in frontend development using React JS, HTML5, CSS3 and responsive web application development. Must have experience in REST API integrations. Experience in API layer security (e.g., JWT, OAuth2), API logging, API testing, and creating REST API documentation using Swagger and YAML or similar tools desirable. Experience in TDD, writing unit test cases in JUnit. Unit test frameworks: Mockito (Java), JUnit (Java); nice to have exposure to end-to-end test frameworks: FitNesse/Test API, Protractor; functional testing: Cucumber; performance test tools: JMeter. Proficient in SQL and stored procedures, such as in RDS Oracle DB. Experience with Unix/Linux operating systems, preferably in an AWS environment.
Knowledge of Jira, Git/SVN, Jenkins, DevOps, CI/CD, and build tools - Maven, Jenkins. Knowledge of Spring framework (4.x), Hibernate ORM 4.x. Knowledge of database - MS SQL Server. Knowledge of SQL versioning tool - Flyway. Apache ActiveMQ. PDF generation libraries - iText, Flying Saucer, HTML, CSS (for PDF 1.x). UI - an understanding of core JavaScript is needed; other JavaScript frameworks and libraries can be learned. Skill sets and experience in open-source frontend frameworks such as React and Angular are highly desirable. Knowledge of JavaScript libraries and tools like TypeScript, Redux, RxJS, Lodash, Gulp, or webpack.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.
NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us . NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
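Since the role calls out REST API documentation with Swagger and YAML, here is a minimal OpenAPI 3.0 sketch with JWT bearer security; the endpoint, schema, and field names are hypothetical, not taken from the posting.

```yaml
# openapi.yaml - minimal sketch of a documented REST endpoint
openapi: 3.0.3
info:
  title: Orders API          # hypothetical service name
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      security:
        - bearerAuth: []      # JWT sent as a Bearer token
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested order
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
        '404':
          description: Order not found
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
  schemas:
    Order:
      type: object
      properties:
        id:
          type: string
        status:
          type: string
```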

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:
• Overall 10+ years of experience working within a large enterprise consisting of large and diverse teams.
• Minimum of 6 years of experience with APM and monitoring technologies.
• Minimum of 3 years of experience with ELK.
• Design and implement efficient log shipping and data ingestion processes.
• Collaborate with development and operations teams to enhance logging capabilities.
• Implement and configure components of the Elastic Stack, including Filebeat, Metricbeat, Winlogbeat, Logstash and Kibana.
• Create and maintain comprehensive documentation for Elastic Stack configurations and processes.
• Ensure seamless integration between various Elastic Stack components.
• Advanced Kibana dashboard and visualization modelling and deployment.
• Create and manage Elasticsearch clusters on premise, including configuration parameters, indexing, search and query performance tuning, RBAC security governance, and administration.
• Hands-on scripting and programming in Python, Ansible, Bash, data parsing (regex), etc.
• Experience with security hardening and vulnerability/compliance, OS patching, SSL/SSO/LDAP.
• Understanding of HA design, cross-site replication, local and global load balancers, etc.
• Data ingestion and enrichment from various sources, webhooks, and REST APIs with JSON/YAML/XML payloads, and testing with Postman, etc.
• CI/CD - deployment pipeline experience (Ansible, Git).
• Strong knowledge of performance monitoring, metrics, planning, and management.
• Ability to apply a systematic and creative approach to solve problems, out-of-the-box thinking with a sense of ownership and focus.
• Experience with application onboarding - capturing requirements, understanding data sources, architecture diagrams, application relationships, etc.
• Influencing other teams and engineering groups in adopting logging best practices.
• Effective communication skills with the ability to articulate technical details to different audiences.
• Familiarity with ServiceNow, Confluence and JIRA.
• Understanding of SRE and DevOps principles.
Technical Skills:
APM Tools – ELK, AppDynamics, PagerDuty
Programming Languages – Java/.Net, Python
Operating Systems – Linux and Windows
Automation – GitLab, Ansible
Container Orchestration – Kubernetes
Cloud – Microsoft Azure and AWS
Interested candidates, please share your resume with balaji.kumar@flyerssoft.com
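As a rough illustration of the Elastic Stack configuration work described above, a minimal Filebeat sketch that ships application logs to Logstash; the log paths and hostname are made-up placeholders.

```yaml
# filebeat.yml - minimal log-shipping sketch (paths and endpoint are placeholders)
filebeat.inputs:
  - type: filestream
    id: app-logs
    paths:
      - /var/log/myapp/*.log

processors:
  - add_host_metadata: ~     # enrich events with host fields

output.logstash:
  hosts: ["logstash.internal:5044"]   # hypothetical Logstash endpoint
```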

Posted 1 week ago

Apply

5.0 - 8.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Transport is at the core of modern society. Imagine using your expertise to shape sustainable transport and infrastructure solutions for the future. If you seek to make a difference on a global scale, working with next-gen technologies and the sharpest collaborative teams, then we could be a perfect match.
Position Description: As a Software Engineer, you need to actively cooperate and communicate with various stakeholders and team members; English language skills are required.
Role Overview: As a Senior Software Engineer, you will design, develop, implement and maintain applications using our tech stack (detailed below). In this role, you'll support the Volvo Group's digital transformation and product modernization initiatives, focusing on applications that provide information on substances of concern at the Part/BOM level within the product compliance and chemical & environmental compliance domain.
User Interface: C#, OOP, ASP.NET MVC, ASP.NET Core, Web API, AngularJS, jQuery, Bootstrap, HTML
Back-End: C#, NHibernate (5.0 and above), .NET Framework 4.8
Database: Oracle SQL, Microsoft SQL Server, DB2 and PostgreSQL (Linux/Windows)
DevSecOps: Azure ADO/Azure DevOps, YAML, PowerShell scripting; experience in SonarQube and security tools (DAST/SAST, OWASP)
Good-to-have skills: GitHub Copilot, Azure cloud, and skill in writing unit and integration tests with Moq
Ways of Working: Agile, with a continuous-improvement mindset
Must-Have Skills: Expertise in SOA and experience in modernizing IT applications. Experience in web application architecture and development, with hands-on expertise in delivering solutions using C#, MVC and SQL Server 2017. Analysis, design, implementation, testing and maintenance of web-based, Windows-based, client-server and N-tier architectures. Good knowledge of C#, OOP, ASP.NET MVC, ASP.NET Core, Web API, design patterns and SOLID principles. Experience with .NET Core, .NET Core MVC and .NET Core Web APIs. Experience with client-side technologies like HTML5, CSS3, jQuery and the Angular framework. Experience with Visual Studio Team Services (VSTS).
Good-to-Have Skills: Well versed in designing and building Azure solutions that include high availability, multi-region and multi-set architectures using virtual networks, availability sets and affinity groups. Azure AD experience; experience with the Azure portal and Azure Resource Manager in cloud deployments. Good experience with Microsoft Azure cloud services like Azure AD, App Service and DB services, Azure storage accounts, Azure VMs, Azure Key Vault, Azure Functions and Azure messaging. Good experience in microservice development and deployment. Good experience in configuring Azure DevOps pipelines for continuous integration and continuous deployment.
Database/Integration: Good database design skills (Microsoft SQL Server) and query optimization techniques. Good knowledge of IBM WebSphere MQ.
Ways of Working: Experience working in Agile and SAFe methodology. Experience in working with DevSecOps tools that secure DevOps workflows.
Primary Responsibilities: Responsible for end-to-end design and optimization of the solution. Understand user requirements, analyze them, and deliver design, development and unit-test services that support business requirements. Develop functional databases and servers to support websites on the back end. Develop efficient software according to business requirements within the given time frame. Ensure compliance with relevant IT Services processes, methods, tools and business processes. Ensure that the necessary and relevant reviews and audits are performed for the developed modules. Stay current and provide insight on cutting-edge software approaches and architectures. Ensure that non-functional requirements such as security, performance, maintainability, scalability, usability and reliability are considered when architecting solutions.
Personal attributes: Affiliative - you thrive when working as part of a larger team or organization, driven by a shared purpose. Sociable - you naturally seek out interactions and enjoy building meaningful connections with others. Influential - you have a knack for motivating, guiding and persuading your colleagues and stakeholders.
If you are passionate about working in the product compliance domain and want to join a dynamic team that is making a difference for the Volvo Group's different business units, we encourage you to apply. We value your data privacy and therefore do not accept applications via mail.
Who We Are And What We Believe In: We are committed to shaping the future landscape of efficient, safe, and sustainable transport solutions. Fulfilling our mission creates countless career opportunities for talents across the group's leading brands and entities. Applying to this job offers you the opportunity to join Volvo Group. Every day, you will be working with some of the sharpest and most creative brains in our field to be able to leave our society in better shape for the next generation. We are passionate about what we do, and we thrive on teamwork. We are almost 100,000 people united around the world by a culture of care, inclusiveness, and empowerment. Group Digital & IT is the hub for digital development within Volvo Group. Imagine yourself working with cutting-edge technologies in a global team, represented in more than 30 countries. We are dedicated to leading the way of tomorrow's transport solutions, guided by a strong customer mindset and a high level of curiosity, both as individuals and as a team. Here, you will thrive in your career in an environment where your voice is heard and your ideas matter.
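To illustrate the Azure DevOps YAML pipelines named in the DevSecOps stack above, a minimal build-and-test steps sketch for a .NET solution; the SDK version and project globs are assumptions, not the team's actual pipeline.

```yaml
# Build/test steps for a .NET solution in Azure Pipelines (paths are placeholders)
steps:
  - task: UseDotNet@2
    displayName: Install .NET SDK
    inputs:
      packageType: sdk
      version: 8.x           # hypothetical SDK version
  - task: DotNetCoreCLI@2
    displayName: Restore and build
    inputs:
      command: build
      projects: '**/*.sln'
      arguments: '--configuration Release'
  - task: DotNetCoreCLI@2
    displayName: Run unit tests
    inputs:
      command: test
      projects: '**/*Tests.csproj'
```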

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: DevOps Engineer
Location: Ahmedabad (On-site at office)
Working Days: 5.5 working days
Experience: 3 to 7 years of relevant experience
Purpose: We are looking for a highly skilled DevOps professional with 3 to 7 years of experience to work with us. The candidate will bring expertise in the GCP Platform, containerization & orchestration, SDLC, operating systems, version control, languages, scripting, CI/CD, infrastructure as code, and databases. Experience in the Azure Platform, in addition to the GCP platform, will be highly valued.
Experience:
• 3-7 years of experience in DevOps.
• Proven experience in implementing DevOps best practices and driving automation.
• Demonstrated ability to manage and optimize cloud-based infrastructure.
Roles and Responsibilities: The DevOps professional will be responsible for:
• Implementing and managing the GCP Platform, including Google Kubernetes Engine (GKE), CloudBuild and DevOps practices.
• Leading efforts in containerization and orchestration using Docker and Kubernetes.
• Optimizing and managing the Software Development Lifecycle (SDLC).
• Administering Linux and Windows Server environments proficiently.
• Managing version control using Git (BitBucket) and GitOps (preferred).
• Automating and configuring tasks using YAML and Python.
• Developing and maintaining Bash and PowerShell scripts.
• Designing and developing CI/CD pipelines using Jenkins and optionally CloudBuild.
• Implementing infrastructure as code through Terraform to optimize resource management.
• Managing CloudSQL and MySQL databases for reliable performance.
Education Qualification:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Master's degree in a relevant field (preferred).
Certifications Preferred:
• Professional certifications in GCP, Kubernetes, Docker, and DevOps methodologies.
• Additional certifications in CI/CD tools and infrastructure as code (preferred).
Behavioural Skills:
• Strong problem-solving abilities and keen attention to detail.
• Excellent communication and collaboration skills.
• Ability to adapt to a fast-paced and dynamic work environment.
• Strong leadership and team management capabilities.
Technical Skills:
• Proficiency in Google Kubernetes Engine (GKE), CloudBuild, and DevOps practices.
• Expertise in Docker and Kubernetes for containerization and orchestration.
• Deep understanding of the Software Development Lifecycle (SDLC).
• Proficiency in administering Linux and Windows Server environments.
• Experience with Git (BitBucket) and GitOps (preferred).
• Proficiency in YAML and Python for automation and configuration.
• Skills in Bash and PowerShell scripting.
• Strong ability to design and manage CI/CD pipelines using Jenkins and optionally CloudBuild.
• Experience with Terraform for infrastructure as code.
• Management of CloudSQL and MySQL databases.
Non-Negotiable Skills:
• GCP Platform: Familiarity with Google Kubernetes Engine (GKE), CloudBuild, and DevOps practices.
• Experience with Azure.
• Containerization & Orchestration: Expertise in Docker and Kubernetes.
• SDLC: Deep understanding of the Software Development Lifecycle.
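To give a sense of the CloudBuild and GKE work mentioned here, a minimal cloudbuild.yaml sketch that builds, pushes, and rolls out a container image; the image, deployment, cluster, and zone names are hypothetical placeholders.

```yaml
# cloudbuild.yaml - build an image, push it, and update a GKE deployment
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  - name: gcr.io/cloud-builders/kubectl
    args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
    env:
      - CLOUDSDK_COMPUTE_ZONE=asia-south1-a        # hypothetical zone
      - CLOUDSDK_CONTAINER_CLUSTER=my-gke-cluster  # hypothetical cluster
images:
  - gcr.io/$PROJECT_ID/my-app:$SHORT_SHA
```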

Posted 1 week ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Location: Kumbakonam or Coimbatore, Tamil Nadu (Onsite Only) Employment Type: Contract / Freelancing Experience Level: Intermediate (2–5 Years) Role Overview The AI Boundary Analyst is a core contributor to Meithee Tech’s AI Governance function. This role is responsible for identifying, defining, and formalizing the boundaries of what AI systems should do across various contexts and business scenarios. The individual will collaborate closely with AI Prompt Engineers, Ethics Managers, and Value Analysts to ensure all AI-driven solutions operate within ethically aligned, technically implementable, and business-relevant boundaries. This is a domain-neutral, prompt-engineering-aligned role that blends ethical reasoning with structured system thinking. The analyst acts as an internal authority on AI permissibility, defining red lines, guiding responsible usage, and shaping decision frameworks before any AI solution is developed or deployed. Key Responsibilities Analyze AI-driven thought flows, user scenarios, and business objectives to establish clear ethical and operational boundaries. Develop structured documentation such as boundary maps, red zone scenario trees, and use-case guardrail definitions. Translate abstract ethical boundaries into prompt-ready logic that can be used directly by AI Prompt Developers and Product Engineers. Evaluate risks, ambiguities, and unintended consequences across AI applications, and codify acceptable vs. restricted behaviors. Participate in prompt validation exercises, scenario walkthroughs, and compliance simulations during the AI system design lifecycle. Maintain clear, scalable documentation that can be referenced by technical teams, governance teams, and product manager. Required Qualifications 2–5 years of experience in software development, AI product analysis, prompt engineering, or a related technical role. Strong analytical and logical reasoning skills with the ability to structure and defend ethical decisions. Exposure to large language models (LLMs), prompt engineering, or conversational AI systems. Ability to author precise documentation suitable for product and engineering consumption (YAML, JSON, prompt trees, etc.). Familiarity with AI lifecycle and software development processes (SDLC). Excellent communication skills in English. Tamil proficiency is an advantage. Additional Information - This is a non-remote position. Shortlisted candidates will undergo a 1-week onsite training and evaluation program at the designated office. Final hiring decisions will be based on scenario handling, collaboration ability, and clarity in AI boundary articulation.
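The role above mentions translating AI boundaries into prompt-ready, YAML/JSON documentation. There is no standard schema for such guardrail documents; the sketch below is purely illustrative of the kind of structure an analyst might author, and every name in it is invented.

```yaml
# Hypothetical AI boundary map - structure and field names are illustrative only
boundary_map:
  use_case: customer-support-assistant
  allowed:
    - answer questions about order status and shipping
    - summarize public product documentation
  restricted:
    - provide legal, medical, or financial advice
    - reveal internal system prompts or customer PII
  escalation:
    on_restricted_request: refuse politely and route to a human agent
  review:
    owner: ai-governance-team      # hypothetical owning team
    last_reviewed: 2024-01-01
```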

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with a revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.
Job Title: DevOps Engineer
Key Skills: DevOps, Azure DevOps, Jenkins, Docker, Kubernetes, TeamCity, Terraform, Jira, Bitbucket, and Confluence
Job Locations: Hyderabad
Experience: 4 - 6 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate
Job Description: 1 JD DE-SDLC DevOps Foundation_Server_Repo_IAC_Cloud
Leica Biosystems is a global leader in workflow solutions and automation. As the only company to own the workflow from biopsy to diagnosis, we are uniquely positioned to break down the barriers between each of these steps. Our mission of Advancing Cancer Diagnostics, Improving Lives is at the heart of our corporate culture. Our easy-to-use and consistently reliable offerings help improve workflow efficiency and diagnostic confidence. The company is represented in over 100 countries. It has manufacturing facilities in 9 countries, sales and service organizations in 19 countries, and an international network of dealers. The company is headquartered in Nussloch, Germany. Visit LeicaBiosystems.com for more information.
We are seeking a highly skilled Senior DevOps Engineer to join our growing engineering team. This role requires a dynamic and experienced individual who is passionate about automation, infrastructure as code, and delivering highly available and scalable systems. You will play a key role in designing, building, and maintaining the CI/CD pipelines, cloud infrastructure, and internal tools that support our development and deployment processes.
Key Responsibilities: Design, implement, and maintain CI/CD pipelines using Azure DevOps, Jenkins, and TeamCity. Automate infrastructure and deployment processes using Terraform and scripting languages such as PowerShell, Groovy, YAML, PowerCLI, and InstallShield. Manage cloud infrastructure on Azure or AWS, including resource group creation and management, deployment automation, and Key Vault and certificate monitoring. Collaborate with development and QA teams to streamline software delivery. Integrate and maintain SonarQube for continuous code quality and static code analysis. Use Jira for tracking development tasks and Bitbucket for source control and code reviews. Maintain and contribute to documentation using Confluence. Monitor system performance and troubleshoot issues across environments.
Required Skills: Strong experience with CI/CD tools: Azure DevOps, Jenkins, TeamCity. Proficient in scripting: PowerShell, Groovy, YAML, PowerCLI, InstallShield. Hands-on experience with Terraform. Solid understanding of Azure or AWS cloud services. Experience with Jira, Bitbucket, and Confluence. Familiarity with DevOps best practices and agile methodologies. Familiarity with version control systems (Git) and branching strategies. Strong problem-solving and communication skills.
Preferred Qualifications: Certifications in Azure/AWS or Terraform. Experience with containerization (Docker, Kubernetes). Strong analytical and communication skills. Previous experience leading DevOps initiatives or managing infrastructure at scale.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Andhra Pradesh, India

On-site

Role: Lead Engineer
Experience: 10 years of experience in API design, development, and implementation; 3 years of experience with cloud platform services (preferably GCP). Hands-on experience in designing, implementing, and maintaining APIs that meet the highest standards of performance, security, and scalability. Hands-on experience designing, developing, and implementing microservices architectures and solutions using industry best practices and design patterns. Hands-on experience with cloud computing and services. Hands-on proficiency in programming languages like Java, Python, JavaScript, etc. Hands-on experience with API gateway and management tools like Apigee, Kong, API Gateway. Hands-on experience integrating APIs with a variety of systems, applications, microservices and infrastructure. Deployment experience in a cloud environment (preferably GCP). Experience in TDD/DDD and unit testing. Hands-on CI/CD experience in automating the build, test, and deployment processes to ensure rapid and reliable delivery of API updates.
Technical Skills:
Programming & Languages: Java, GraphQL, SQL; API gateway and management tools: Apigee, API Gateway
Database Tech: Oracle, Spanner, BigQuery, Cloud Storage
Operating Systems: Linux
Expert in API design principles, specifications and architectural styles like REST, GraphQL, and gRPC. Proficiency in API lifecycle management, advanced security measures, and performance optimization. Good knowledge of security best practices and compliance awareness. Good knowledge of messaging patterns and distributed systems. Well-versed in protocols and data formats. Strong development knowledge of microservice design, architectural patterns, frameworks and libraries. Knowledge of SQL and NoSQL databases, and how to interact with them through APIs. Good to have knowledge of data modeling and database management: designing database schemas that efficiently store and retrieve data. Scripting and configuration (e.g., YAML) knowledge. Strong testing and debugging skills: writing unit tests and familiarity with the tools and techniques to fix issues. DevOps knowledge: CI/CD practices and tools. Familiarity with monitoring and observability platforms for real-time insights into application performance. Understanding of version control systems like Git. Familiarity with API documentation standards such as OpenAPI. Problem-solving skills and the ability to work independently in a fast-paced environment. Effective communication: negotiate and communicate effectively with stakeholders to ensure API solutions meet the needs of both technical and non-technical stakeholders.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Factspan Overview: Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations and implement new processes that help them succeed. With offices in Seattle, Washington and Bengaluru, India, we use a global delivery model to service our customers. Our customers include industry leaders from the Retail, Financial Services, Hospitality, and Technology sectors.
Role Overview: We're looking for a Legacy Jenkins Engineer who will maintain, optimize, and eventually modernize legacy Jenkins-based CI pipelines for enterprise applications. The role includes managing freestyle and scripted Jenkins jobs, integrating with Git repositories, and contributing to the migration effort toward GitLab CI and YAML-based declarative pipelines. You will be a key contributor in ensuring stability while driving innovation as part of a cross-functional DevOps POD.
Key Responsibilities: Manage and enhance legacy Jenkins pipelines, including Freestyle, Pipeline (Groovy), and multi-branch jobs. Maintain scripted job logic and address performance/stability issues across multiple environments. Collaborate with SCM and application teams to manage job triggers, SCM polling, build chaining, and post-build actions. Support integration with Git, SonarQube, Artifactory, Maven, and uDeploy. Lead the migration of legacy Jenkins jobs to modern GitLab CI YAML templates and reusable pipeline modules. Create shared libraries for common CI workflows and enforce pipeline standardization. Write and maintain Groovy scripts for complex build automation and shared utilities. Implement logging, error tracking, and pipeline dashboards for visibility and troubleshooting. Ensure CI processes support code quality gates, secure scanning, and audit compliance. Work with the uDeploy team on downstream deployment orchestration and rollback strategies.
Key Skills: CI Tools: Jenkins (Freestyle, Groovy pipelines), GitLab CI (basic knowledge or migration interest). Scripting: Groovy, Bash, Shell. SCM & Build: Git, GitLab, Maven, Artifactory, SonarQube. Pipeline Management: shared libraries, folder structures, scripted condition handling. Monitoring & Debugging: Jenkins logs, job history analysis, failure pattern identification. Soft Skills: attention to legacy systems, documentation discipline, steady-state support experience.
Required Qualifications: Experience in CI pipeline rationalization/consolidation projects. Hands-on experience in GitLab CI migration and template design. Familiarity with containerization (Docker) and basic Kubernetes CI practices. Understanding of Retail app release cycles, especially seasonal traffic prep. Cross-team collaboration mindset with NOC, DevOps, and SRE teams.
If you are passionate about leveraging technology to drive business innovation, possess excellent problem-solving skills, and thrive in a dynamic environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of data analytics and making a meaningful impact in the industry.
Why Should You Apply?
People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
Grow with Us: Be part of a hyper-growth startup with ample opportunities to learn and innovate.
Buoyant Culture: Embark on an exciting journey with a team that innovates solutions every day, tackles challenges head-on and crafts a vibrant work environment.
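As an example of the GitLab CI YAML templates this migration targets, a minimal .gitlab-ci.yml sketch; the stages, build image, and Maven commands are illustrative assumptions rather than the team's actual pipeline.

```yaml
# .gitlab-ci.yml - declarative pipeline sketch replacing a freestyle Jenkins job
stages:
  - build
  - test

default:
  image: maven:3.9-eclipse-temurin-17   # hypothetical build image

build-job:
  stage: build
  script:
    - mvn -B package -DskipTests
  artifacts:
    paths:
      - target/*.jar

unit-tests:
  stage: test
  script:
    - mvn -B test
```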

Posted 1 week ago

Apply


2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Junior Engineer (Enterprise Automation and Orchestration) at Rainfall Inc., you will be part of a dynamic team located in Pune with the flexibility to work from home. You will be responsible for automating infrastructure and application deployment using various tools and platforms, with a focus on TypeScript, JavaScript, Bash, and Docker containerization. Your role will require a deep understanding of DevOps practices, AWS, infrastructure as code, cloud services, and Docker container creation pipelines. Collaboration with the Development Team, Platform Support team, and Operational teams is essential to ensure smooth deployment and operation of applications. Debugging and troubleshooting issues reported by internal teams or end users will also be part of your responsibilities. Additionally, you must possess excellent troubleshooting and problem-solving skills, proficiency with GitHub for version control, experience with microservices architecture, and strong documentation skills. A bachelor's degree in Engineering or a similar field is required, along with 2+ years of hands-on experience with automation and infrastructure-as-code tools. Your ability to design, implement, and deploy GitHub/n8n workflows will be crucial in this role. If you are a talented and experienced engineer with a passion for automation and orchestration, we encourage you to apply for this exciting opportunity at Rainfall Inc.
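Since the role involves designing and deploying GitHub workflows, here is a minimal GitHub Actions sketch for building a container image; the image name and smoke-test script are hypothetical and not part of the posting.

```yaml
# .github/workflows/ci.yml - minimal container build sketch
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Run smoke tests
        # hypothetical test script bundled in the image
        run: docker run --rm my-app:${{ github.sha }} ./scripts/smoke-test.sh
```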

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Duties & Responsibilities: Be a part of a Scrum team working on API and microservices development using open-source technologies such as Java and Spring Boot. Design APIs following RESTful API design principles and API-led architecture. Lead API development and integrations, working with other developers, architects, and product owners. Build consistent, reusable and secure APIs and microservices following all enterprise standards and best practices. Develop code that is highly scalable and has consistent performance even under high load on the application. Establish a strong culture of security awareness and ownership to embed DevSecOps practices within the API development lifecycle. Responsible for debugging within a complex environment that includes multiple connected systems. Responsible for operational reporting and health monitoring of the application. Ensure that the API-based architecture enables a best-in-class user experience and response time. Ensure the reusability of all the components developed.
Requirements: A minimum of 3 years' experience in IT, including a minimum of 2 years in API and microservices development, using leading methodologies and processes. 3+ years of experience in Java backend development and skills with the Spring Framework and Spring Boot. 2+ years' experience working in an Agile/Scrum model. 1+ years' experience working in the cloud (preferably AWS) and familiarity with cloud services. 2+ years' experience with Gradle. Experience with integrations with databases (SQL and NoSQL), SFTP servers, REST/SOAP/GraphQL APIs and other systems and platforms. Strong skills and expertise in unit testing frameworks like JUnit. Experience integrating with messaging platforms like IBM MQ, Kafka and NATS. Experience with architecture, design, development, deployment, testing, and integration of enterprise-wide applications. Experience with a wide variety of continuous integration and source control tools. Experience with designing API proxies and REST APIs using an API Management platform. Knowledge of API security, including OIDC and OAuth2.0 concepts. Experience designing API specifications in RAML/YAML/Swagger. Understand the fundamentals of DevSecOps CI/CD pipelines, with the ability to review and troubleshoot pipeline issues and collaborate with the DevOps team. Ability to work collaboratively in a team environment with a strong focus on customer service and solution ownership.

Posted 1 week ago

Apply

9.0 - 12.0 years

5 - 5 Lacs

Bengaluru

Work from Office

Key Responsibilities: Define and lead automation strategy for enterprise infrastructure. Design and build automated solutions for provisioning, monitoring, and operational tasks. Collaborate with DevOps, Cloud, Security, and App teams to find and implement automation opportunities. Use Infrastructure as Code (IaC) tools like Terraform, Ansible, CloudFormation, or ARM templates. Automate across multi-cloud (AWS, Azure, GCP) and hybrid environments. Build and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Create workflows using tools like StackStorm, n8n, VMware, ServiceNow, and monitoring tools (Zabbix, LogicMonitor, SolarWinds, Icinga). Set standards for version control, change management, and compliance. Mentor and guide the automation engineering team. Review code for scalability, security, and performance. Stay updated with new tools and technologies in automation.
Required Skills & Qualifications: 12+ years in IT infrastructure, with 7+ years in automation. Strong skills in Terraform, Ansible, PowerShell, Python, Bash, and YAML (including YAQL and Jinja). Solid knowledge of AWS, Azure, or GCP cloud platforms. Experience with CI/CD tools (Jenkins, GitLab, Azure DevOps). Hands-on with orchestration tools like StackStorm, n8n, or ServiceNow Orchestration. Knowledge of Docker and Kubernetes is a plus. Strong leadership, problem-solving, and communication skills.
Key Skills: Python scripting, RESTful APIs, Infrastructure as Code (IaC)
Required Skills: Python scripts, RESTful APIs, Infrastructure as Code
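To illustrate the Ansible/YAML automation this role centers on, a minimal playbook sketch; the host group, package, and service names are placeholders rather than anything specific to this employer.

```yaml
# playbook.yml - baseline configuration sketch (group and package names are made up)
- name: Baseline configuration for app servers
  hosts: app_servers
  become: true
  vars:
    ntp_package: chrony
  tasks:
    - name: Ensure the NTP package is present
      ansible.builtin.package:
        name: "{{ ntp_package }}"
        state: present

    - name: Ensure the time service is running and enabled
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```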

Posted 1 week ago

Apply

8.0 - 11.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Job Description: Develops and implements technical standards, procedures, and techniques for the resolution of enterprise IT system problems to ensure maximum application availability and performance. Develops proof-of-concept architecture for application and automation initiatives. Drives new ideas and innovative solutions to resolve problems. Engages with other engineering teams to improve the lifecycle of services on our platforms. Collaborates with other IT and non-IT professionals such as Developers, Architects, Project Managers, Business Analysts, and business leaders.
RSM has an opportunity for a highly motivated DevOps Engineer who has a passion for orchestrating site resiliency and DevOps standards/technologies. You will work alongside our top-notch IT professionals supporting RSM's modern information technology infrastructure. You'll be a leading member of a team that is responsible for the delivery and management of consumable technologies, processes, and integrations to bolster RSM lines of business and their respective portfolios. Outside of your team, you'll collaborate with Enterprise Solutions Developers, IT Architecture Engineers, line-of-business (LOB) professionals as well as other IT professionals to automate and streamline IT business operations and processes. You'll be challenged to create innovative solutions for legacy and cloud compute as well as new concepts, ideas, and continuous process improvement. You will demonstrate and maintain high standards while fostering a proactive, efficient, and service-oriented work environment. Communication and professionalism are paramount as you will be representing RSM Technical Services to effectively engage with technical and business leadership as well as external providers of IT services. You will also use all your abilities to explain solutions and complex issues while demonstrating the ability to lead and impart knowledge effectively to other team members.
Orchestrates legacy, public, and private cloud infrastructure utilizing automation while maintaining established change management procedures. Automates and accelerates the testing, release, and deployment cycles through authored scripts to automate configuration and provisioning. Achieves maximum system automation and integration through Infrastructure as Code (IaC), web services, and scripting technologies and tools. Develops and employs continuous delivery practices via cloud services and infrastructure. Executes and automates continuous integration pipelines for various development projects using a core suite of tools. Monitors, scales, and optimizes distributed services in the cloud infrastructure. Integrates closely with enterprise solution development teams on identifying, problem solving and resolving issues that impact software releases and service delivery. Provides direct support of enterprise infrastructure including cloud computing solutions, Single Sign-On, IIS servers, load balancing, backups, and antivirus platforms. Orchestrates legacy compute environments. Configures and integrates custom and 3rd-party applications and add-ons. Regular review of alerts, logs, and performance. Works with end users, Microsoft Support, and other vendors on resolution of support issues as needed. Participates in scheduled and unscheduled weekend/after-hours system maintenance and support. Performs rotational on-call duty.
EDUCATION/CERTIFICATIONS
Preferred: Bachelor's degree in Computer Science, Software Engineering, Information Systems, equivalent work history/experience, or working towards achieving a degree.
TECHNICAL SKILLS
Microsoft Windows and non-Windows OS (such as Linux) server administration. Troubleshooting of complex distributed environments. Management of public cloud offerings (IaaS, SaaS, PaaS, O365, ADO). Advanced scripting skills (PowerShell, Terraform and other IaC languages). Ability to use/implement automation tools and IaC. Strong performance tuning experience. Microsoft Internet Information Server (IIS) and the basic operation of websites, application pools, IIS administration, ports, SSL certificates. Citrix VDI administration. 3rd-party tools (Veeam, ServiceNow, WebEx, and MS SCOM).
EXPERIENCE
Strong knowledge of IT infrastructure, network and directory services required. Experience in the following areas: 5+ years administering Directory Services for MS Windows 2012-2019. 3+ years' experience with public cloud solutions (Azure/AWS/Google). Managing/deploying secure certificates (SSL). IaC technologies (Ansible, ADO, Pipelines, Git, Terraform, YAML). Managing infrastructure in a virtualized environment. Experience in orchestration and containerization using Kubernetes. Identity management, SSO/MFA. Agile methodology experience. Experience with site reliability engineering and the ITIL framework.
LEADERSHIP/SOFT SKILLS
Experience in team collaboration. Excellent written and oral presentation skills. Exceptional analytical and process development skills.

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Develop Python-based automation scripts for network configuration push and rollback across Cisco, Juniper, and Nokia routers. Integrate scripts with Rundeck (or a similar automation/workflow orchestration tool) for job scheduling, parameterization, and logging. Design Jinja2-based configuration templates and YAML/JSON variable structures. Create automated workflows for software upgrades, validation checks, and rollback. Integrate verification modules using NAPALM, pyATS/Genie, ncclient (NETCONF), and/or TextFSM. Collaborate with network SMEs to translate manual workflows into code-based logic. Maintain a Git repository for scripts and templates with proper version control and branching.
Grade Specific: Focus on Digital Continuity and Manufacturing. Develops competency in own area of expertise. Shares expertise and provides guidance and support to others. Interprets client needs. Completes own role independently or with minimum supervision. Identifies problems and relevant issues in straightforward situations and generates solutions. Contributes to teamwork and interacts with customers.
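As an illustration of the YAML variable structures that feed Jinja2 configuration templates in this kind of role, a sketch of a per-device variables file; the hostname, interfaces, and addresses are invented placeholders.

```yaml
# device_vars/edge-router-01.yml - example input for a Jinja2 interface template
device:
  hostname: edge-router-01
  vendor: cisco_iosxe
interfaces:
  - name: GigabitEthernet0/0/1
    description: Uplink to core
    ipv4: 10.10.1.1/30
    enabled: true
  - name: GigabitEthernet0/0/2
    description: Customer handoff
    ipv4: 192.0.2.1/30
    enabled: false
rollback:
  strategy: config-replace   # e.g., restore the last known-good snapshot
```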

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
Job Title: Backend Software Developer
Location: TechM Pune Hinjewadi
Years of Experience: 3-5 Years
We are seeking a skilled Backend Software Developer with a strong foundation in Java and experience in building robust applications. The ideal candidate will have hands-on experience with Spring Boot and a solid understanding of REST APIs, JWT-based authorization, and Neo4j. You will be responsible for developing high-quality software solutions, collaborating with cross-functional teams, and ensuring the implementation of best practices in coding and design.
Responsibilities: Design, develop, and maintain backend services and applications using Java and Spring Boot. Implement RESTful APIs and ensure secure authorization using JWT. Work with Neo4j to design and optimize graph database solutions. Write clean, maintainable, and testable code following industry best practices. Collaborate with frontend developers and other stakeholders to integrate user-facing elements with server-side logic. Participate in code reviews and contribute to team knowledge sharing. Utilize Apache Maven for project and dependency management. Implement CI/CD practices to streamline deployment processes. Stay updated with emerging technologies and industry trends.
Mandatory Skills: 3-5 years of backend development experience using Java (Java 17+ or 21 preferred). Strong hands-on experience with Spring Boot (version 3.x preferred). Proficient in Neo4j (graph DB) with proven experience in designing and querying graph databases. Experience with Apache Maven (v3.9+), JUnit 5, and Git. Solid understanding and implementation experience with REST APIs and JWT-based authorization. Experience with PostgreSQL and writing optimized queries. Good understanding of microservices architecture and integration.
Preferred Skills: Exposure to YANG and YAML. Familiarity with TMF APIs or other standard telecom API frameworks. Experience with containerization tools like Docker and Kubernetes. Understanding of CI/CD practices and tools like Jenkins and SonarQube.
Qualifications: Strong analytical and problem-solving skills. Proactive mindset with a strong sense of ownership. Excellent communication and collaboration skills. Adaptable and eager to learn emerging technologies.
If you are passionate about backend development and meet the above criteria, we encourage you to apply and join our dynamic team!
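For context on the YAML exposure mentioned above, a minimal Spring Boot application.yml sketch wiring PostgreSQL and Neo4j; the URLs and credentials are placeholders, and the exact property set depends on the Spring Boot version in use.

```yaml
# application.yml - minimal configuration sketch (endpoints and secrets are placeholders)
server:
  port: 8080

spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/appdb
    username: app_user
    password: ${DB_PASSWORD}       # injected from the environment
  neo4j:
    uri: bolt://localhost:7687
    authentication:
      username: neo4j
      password: ${NEO4J_PASSWORD}  # injected from the environment
```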

Posted 1 week ago

Apply

5.0 - 8.0 years

11 - 15 Lacs

Noida

Work from Office

1. C#, Microsoft SQL Server or Azure SQL, Azure Cosmos DB, Azure Service Bus, Azure Function Apps, Auth0, WebSockets
2. Strong development experience in C# and .NET Core technologies built up across a range of different projects
3. Experience developing APIs that conform as much as possible to REST principles in terms of resources, sub-resources, responses, and error handling
4. Experience of API design and documentation using OpenAPI 3.x YAML (Swagger)
5. Some familiarity with AWS, and especially Elasticsearch, would be beneficial but not mandatory
6. Azure certifications are an advantage
Mandatory Competencies: Programming Language - .Net Full Stack - Angular; Programming Language - .Net - .NET Core; Beh - Communication and collaboration; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Cloud - Azure - ServerLess (Function App, Logic App); Database - Sql Server - SQL Packages; Programming Language - Other Programming Language - C#; Middleware - API Middleware - Microservices; Middleware - API Middleware - API (SOAP, REST)

Posted 1 week ago

Apply

0 years

3 - 7 Lacs

Bengaluru

On-site

We help the world run better

At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What you'll do:
Within the Customer Services & Delivery (CS&D) Line of Business, SAP Enterprise Cloud Services (ECS) supports customers throughout their cloud transformation and SAP S/4HANA adoption journey. We run the Intelligent Enterprise so they can be an Intelligent Enterprise. Our portfolio of SAP Private Cloud and SAP Cloud Application Services turns SAP products into a solution-as-a-service on the customer's preferred infrastructure, including hyperscalers, as one SAP. The Enterprise Cloud Services (ECS) Delivery team acts as the bridge for fulfilling SAP's vision and ECS's mission of accelerating our customers' cloud transformation and delivering on RISE with SAP. We do this by supporting ECS portfolio adoption, providing state-of-the-art security compliance and governance and a technology platform, ensuring high availability, automation, and a best-in-class support experience, and encouraging our teams to grow, develop the necessary future skills, and build a strong culture.
ECS Delivery is responsible for running customer landscapes around the globe in a 24x7 operations model. We manage a wide variety of business-critical SAP systems and applications in a global delivery model and are responsible for building and operating customer systems for ECS. You will be part of the Monitoring CoE; so far the team has been responsible for application and database monitoring, and it is now also taking over the infrastructure area. You will work with other new team members to take over infrastructure monitoring, build synergies with application/database monitoring, and continuously improve our service.

Core Technical Expertise:
Automation, Virtualization, Containers: Expert in Linux, Ansible, Terraform, Python, and Bash. Strong proficiency with Docker, Kubernetes (including Helm), and VMware vCenter.
Monitoring & Observability: Expert in Prometheus and Grafana across all layers (hardware, hypervisor, OS, containers, applications). Hands-on experience with Promtail, Loki, OpenTelemetry, the ELK stack, and Jaeger (a minimal monitoring-check sketch follows this listing).
DevOps Tooling: Experienced with ArgoCD and Workflows, GitHub Actions, Jenkins, YAML and JSON, REST APIs, forward and reverse proxies, load balancers, Kubernetes Ingress, and MongoDB.
ITSM & Alerting Integration: Integration experience with ServiceNow, JIRA, Microsoft Teams, and PagerDuty.

Certifications (Must-Have):
AWS Certified (Cloud)
Red Hat Ansible Certified (Automation)
CNCF Certified Kubernetes Administrator (CKA)

Additional Role Requirements:
Deep technical understanding of server infrastructure
End-to-end, customer-oriented view and analytical skills
Excellent communication, with the ability to engage effectively with stakeholders, customers, and partners
Basic understanding of cloud-based processes and operations
Strong problem-solving and analytical skills, with a focus on continuous improvement and operational excellence
Fluency in English and the ability to work in global, multi-cultural teams
Flexibility, openness, reliability, and willingness to grow and adapt in a dynamic working environment

Meet your team:
SAP Enterprise Cloud Services enables our customers to focus on driving innovation for the future with a focus on flexibility and simplification. We provide customers with a scalable and adaptable operating model, secure and resilient best-in-class technology, and IT governance to ensure production availability across the application and infrastructure landscape. ECS Delivery is the operational organization, and our XDU Delivery unit, of which you will be part, is responsible for all cross-topics necessary to run the business smoothly. Our unit runs cross projects, represents the Process Managers, operates the Tools Operation Control Center (TOCC), and takes care of Monitoring as well as Data Quality Management. Most of your team members are currently located in Germany and India, but the whole of ECS, as a stakeholder, is distributed around the globe. #SAPECSCareers

Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and need accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com
For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.
EOE AA M/F/Vet/Disability:
Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.
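As an illustration of the Prometheus-centred monitoring stack named above, here is a minimal sketch in Python that asks a Prometheus server which scrape targets are currently down. The server URL is a hypothetical placeholder; the `/api/v1/query` instant-query endpoint and the `up` metric are standard Prometheus, but the query and reporting logic are illustrative only, not part of the job description.

```python
# Minimal sketch: query a Prometheus server for down scrape targets and report them.
# Assumes the standard Prometheus HTTP API; the server URL is hypothetical.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def find_down_targets(prometheus_url: str = PROMETHEUS_URL) -> list[dict]:
    """Return the label sets of scrape targets currently reporting up == 0."""
    resp = requests.get(
        f"{prometheus_url}/api/v1/query",
        params={"query": "up == 0"},  # instant vector of unhealthy targets
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {payload}")
    return [sample["metric"] for sample in payload["data"]["result"]]

if __name__ == "__main__":
    for target in find_down_targets():
        print(f"DOWN: job={target.get('job')} instance={target.get('instance')}")
```

In a setup like the one described, a check of this kind would normally be expressed as an alerting rule feeding the ServiceNow/PagerDuty integrations listed above; the script form only shows the API shape.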

Posted 1 week ago

Apply

5.0 years

25 Lacs

Bengaluru

On-site

Job Information
Date Opened: 21/07/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 25 LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description
About the Role:
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
Take full ownership of the migration process to AWS, including planning and execution.
Work closely with architects to define the best approach for migrating applications into Amazon EKS.
Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
Create and manage Helm charts for Kubernetes deployments across multiple environments.
Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise:
CI/CD & DevOps Tools:
Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
Shell scripting, Python (with YAML/JSON handling; see the sketch after this listing)
Git and version control best practices
Containers & Kubernetes:
Docker (multi-stage builds, non-root containers, troubleshooting)
Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
Cloud Infrastructure (AWS):
AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
Experience with IAM roles, workload identities, and secure AWS access patterns
Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
Databases:
PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting
Web & Security Concepts:
NGINX, web servers, reverse proxies, path-based/host-based routing
Session handling, load balancing (stateful vs stateless)
Security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
Explain best practices around CI/CD pipeline design and secure AWS integrations.
Demonstrate complex scripting solutions and data processing tasks in Bash and Python.
Describe container lifecycle, troubleshooting steps, and security hardening practices.
Detail Kubernetes architecture, Helm chart design, and access control configurations.
Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
AWS Certified DevOps Engineer or equivalent certifications.
Experience in FinTech, SaaS, or other regulated industries.
Knowledge of cost optimization strategies in cloud environments.
Familiarity with Agile/Scrum methodologies.
Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
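The "Python (with YAML/JSON handling)" and Helm items in this listing can be illustrated with a small, hedged sketch: it loads a base Helm values file, applies an environment override, and emits the merged result as JSON. The file names and keys are hypothetical examples; PyYAML (`yaml.safe_load`) and the standard-library `json` module do the work.

```python
# Minimal sketch: load Helm values files, apply an environment override, and
# emit the merged result as JSON for downstream tooling.
# File names and keys are hypothetical examples.
import json
import yaml  # PyYAML

def load_values(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as fh:
        return yaml.safe_load(fh) or {}

def merge(base: dict, override: dict) -> dict:
    """Shallow-merge override into base (nested dicts merged one level down)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged

if __name__ == "__main__":
    base_values = load_values("values.yaml")        # hypothetical file
    prod_values = load_values("values-prod.yaml")   # hypothetical file
    print(json.dumps(merge(base_values, prod_values), indent=2))
```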

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
About the Role:
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
Take full ownership of the migration process to AWS, including planning and execution.
Work closely with architects to define the best approach for migrating applications into Amazon EKS.
Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
Create and manage Helm charts for Kubernetes deployments across multiple environments.
Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise:
CI/CD & DevOps Tools:
Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
Shell scripting, Python (with YAML/JSON handling)
Git and version control best practices
Containers & Kubernetes:
Docker (multi-stage builds, non-root containers, troubleshooting)
Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
Cloud Infrastructure (AWS):
AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
Experience with IAM roles, workload identities, and secure AWS access patterns
Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
Databases:
PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting (a pg_dump/pg_restore sketch follows this listing)
Web & Security Concepts:
NGINX, web servers, reverse proxies, path-based/host-based routing
Session handling, load balancing (stateful vs stateless)
Security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
Explain best practices around CI/CD pipeline design and secure AWS integrations.
Demonstrate complex scripting solutions and data processing tasks in Bash and Python.
Describe container lifecycle, troubleshooting steps, and security hardening practices.
Detail Kubernetes architecture, Helm chart design, and access control configurations.
Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
AWS Certified DevOps Engineer or equivalent certifications.
Experience in FinTech, SaaS, or other regulated industries.
Knowledge of cost optimization strategies in cloud environments.
Familiarity with Agile/Scrum methodologies.
Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
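For the PostgreSQL migration skills in this listing (pg_dump/pg_restore with minimal downtime), here is a minimal sketch that dumps a source database in custom format and restores it into a target instance. Hostnames, user, database names, and the dump path are hypothetical, and credentials are assumed to come from the standard PGPASSWORD or ~/.pgpass mechanisms rather than being hard-coded.

```python
# Minimal sketch: dump a source PostgreSQL database in custom format and restore
# it into a target instance. Hosts, user, and DB names are hypothetical;
# authentication is assumed to come from PGPASSWORD or ~/.pgpass.
import subprocess

SOURCE = {"host": "source-db.example.internal", "user": "migrator", "db": "appdb"}
TARGET = {"host": "target-db.example.internal", "user": "migrator", "db": "appdb"}
DUMP_FILE = "/tmp/appdb.dump"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def migrate() -> None:
    # Custom-format dump (-Fc) so pg_restore can filter and parallelise later.
    run([
        "pg_dump", "-h", SOURCE["host"], "-U", SOURCE["user"],
        "-Fc", "-f", DUMP_FILE, SOURCE["db"],
    ])
    # Restore into the target, dropping ownership assumptions from the source.
    run([
        "pg_restore", "-h", TARGET["host"], "-U", TARGET["user"],
        "-d", TARGET["db"], "--no-owner", "--clean", "--if-exists", DUMP_FILE,
    ])

if __name__ == "__main__":
    migrate()
```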

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Date Opened: 21/07/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 25 LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description
About the Role:
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
Take full ownership of the migration process to AWS, including planning and execution.
Work closely with architects to define the best approach for migrating applications into Amazon EKS.
Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
Create and manage Helm charts for Kubernetes deployments across multiple environments.
Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime (an S3-to-S3 copy sketch follows this listing).
Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise:
CI/CD & DevOps Tools:
Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
Shell scripting, Python (with YAML/JSON handling)
Git and version control best practices
Containers & Kubernetes:
Docker (multi-stage builds, non-root containers, troubleshooting)
Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
Cloud Infrastructure (AWS):
AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
Experience with IAM roles, workload identities, and secure AWS access patterns
Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
Databases:
PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting
Web & Security Concepts:
NGINX, web servers, reverse proxies, path-based/host-based routing
Session handling, load balancing (stateful vs stateless)
Security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
Explain best practices around CI/CD pipeline design and secure AWS integrations.
Demonstrate complex scripting solutions and data processing tasks in Bash and Python.
Describe container lifecycle, troubleshooting steps, and security hardening practices.
Detail Kubernetes architecture, Helm chart design, and access control configurations.
Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
AWS Certified DevOps Engineer or equivalent certifications.
Experience in FinTech, SaaS, or other regulated industries.
Knowledge of cost optimization strategies in cloud environments.
Familiarity with Agile/Scrum methodologies.
Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
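The S3-to-S3 data-migration responsibility in this listing can be sketched with boto3. The bucket names and prefix are hypothetical; `list_objects_v2` pagination and the client's managed `copy` call are standard boto3, and the copy stays server-side so object bodies never transit the migration host.

```python
# Minimal sketch: server-side copy of objects from one S3 bucket to another.
# Bucket names and prefix are hypothetical; credentials come from the usual
# AWS credential chain (environment, profile, or instance role).
import boto3

SOURCE_BUCKET = "legacy-app-data"   # hypothetical
TARGET_BUCKET = "new-env-app-data"  # hypothetical
PREFIX = ""                         # optionally restrict to a key prefix

def copy_bucket(source: str, target: str, prefix: str = "") -> int:
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    copied = 0
    for page in paginator.paginate(Bucket=source, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            s3.copy({"Bucket": source, "Key": key}, target, key)  # server-side copy
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_bucket(SOURCE_BUCKET, TARGET_BUCKET, PREFIX)} objects")
```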

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Hyderabad, Telangana

On-site

Category: Application Development and Support
Location: Hyderabad, Telangana
Job family: Software Engineering
Shift: Day
Employee type: Regular Full-Time

Duties & Responsibilities:
Be a part of a Scrum team working on API and microservices development using open-source technologies like Java, Spring Boot, etc.
Design APIs following RESTful API design principles and API-led architecture.
Lead API development and integrations, working with other developers, architects, and product owners.
Build consistent, reusable, and secure APIs and microservices following all enterprise standards and best practices.
Develop code that is highly scalable and performs consistently even under high load on the application.
Establish a strong culture of security awareness and ownership to embed DevSecOps practices within the API development lifecycle.
Responsible for debugging within a complex environment that includes multiple connected systems.
Responsible for operational reporting and health monitoring of the application.
Ensure that the API-based architecture enables a best-in-class user experience and response time.
Ensure the reusability of all the components developed.

Requirements:
A minimum of 3 years' experience in IT, including a minimum of 2 years in API and microservices development, using leading methodologies and processes.
3+ years of experience in Java backend development, with skills in the Spring Framework and Spring Boot.
2+ years' experience working in an Agile/Scrum model.
1+ years' experience working in the cloud (preferably AWS) and familiarity with cloud services.
2+ years' experience with Gradle.
Experience with integrations with databases (SQL and NoSQL), SFTP servers, REST/SOAP/GraphQL APIs, and other systems and platforms.
Strong skills and expertise in unit testing frameworks like JUnit.
Experience integrating with messaging platforms like IBM MQ, Kafka, and NATS.
Experience with architecture, design, development, deployment, testing, and integration of enterprise-wide applications.
Experience with a wide variety of continuous integration and source control tools.
Experience with designing API proxies and REST APIs using an API Management platform.
Knowledge of API security, including OIDC and OAuth 2.0 concepts (a minimal token-based call sketch follows this listing).
Experience designing API specifications in RAML/YAML/Swagger.
Understanding of the fundamentals of DevSecOps CI/CD pipelines and the ability to review and troubleshoot pipeline issues and collaborate with the DevOps team.
Ability to work collaboratively in a team environment with a strong focus on customer service and solution ownership.
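To illustrate the OAuth 2.0 / REST-integration requirements above, here is a minimal client-side sketch (in Python for brevity, although the role itself is Java/Spring Boot). The token endpoint, API URL, environment-variable names, and scope are hypothetical placeholders; the client-credentials grant and bearer-token header are standard OAuth 2.0 usage.

```python
# Minimal sketch: obtain a client-credentials token from an OAuth 2.0 provider
# and call a protected REST API with the bearer token.
# Endpoints, env var names, and scope are hypothetical.
import os
import requests

TOKEN_URL = "https://idp.example.internal/oauth2/token"    # hypothetical
API_URL = "https://api.example.internal/v1/orders/12345"   # hypothetical

def get_access_token() -> str:
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": os.environ["CLIENT_ID"],
            "client_secret": os.environ["CLIENT_SECRET"],
            "scope": "orders.read",  # illustrative scope
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api() -> dict:
    token = get_access_token()
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(call_api())
```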

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: We are looking for a seasoned Azure DevOps candidate.
Skill: Azure DevOps
Job Position Title: Business Application Consulting - Business Applications Consulting Generalist – Senior Associate - CS - G

Responsibilities:
Azure Landing Zone using IaC
Azure (Compute, Storage, Networking, BCP, Identity, Security, Automation): a good grasp of at least 4 of these 7 areas is expected
Terraform (state management knowledge is a must, modules, provisioners, built-in functions, deployment through DevOps tools)
Containerization (Docker, K8s/AKS): either of them, covering identity, network, security, monitoring, and backup along with core concepts and Kubernetes architecture
DevOps (ADO, Jenkins, GitHub): including YAML-based pipelines, approval gates, credential management, the stage-job-steps-task hierarchy, job/task orchestration, and agent pools (a small pipeline-parsing sketch follows this listing)
Migrations: knowledge of migration planning and assessment would be ideal
Experience with different caching architectures
Knowledge of security compliance frameworks, such as SOC 2, PCI, HIPAA, ISO 27001
Knowledge of well-known open-source tools for monitoring, trending, and configuration management

Mandatory skill sets:
Azure Infra Design
CI/CD pipeline
Azure Migration
Terraform

Preferred skill sets:
Azure Infra Design
CI/CD pipeline
Azure Migration
Terraform

Years of experience required: 4 to 8 years
Education qualification: BE/B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Java
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
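The "YAML-based pipelines ... stage-job-steps-task hierarchy" requirement can be illustrated by parsing a deliberately simplified pipeline definition. The inline YAML is illustrative only (a real AzureCLI@2 task needs inputs such as the service connection); `stages`, `jobs`, `steps`, `script`, and `task` are standard Azure Pipelines keys, and PyYAML does the parsing.

```python
# Minimal sketch: parse a simplified, illustrative Azure DevOps YAML pipeline
# and print its stage -> job -> step hierarchy.
import yaml  # PyYAML

PIPELINE_YAML = """
trigger:
  - main
stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: terraform validate
            displayName: Validate Terraform
  - stage: Deploy
    jobs:
      - job: DeployJob
        steps:
          - task: AzureCLI@2
            displayName: Apply infrastructure
"""

def describe(pipeline_text: str) -> None:
    pipeline = yaml.safe_load(pipeline_text)
    for stage in pipeline.get("stages", []):
        print(f"stage: {stage['stage']}")
        for job in stage.get("jobs", []):
            print(f"  job: {job['job']}")
            for step in job.get("steps", []):
                name = step.get("displayName") or step.get("script") or step.get("task")
                print(f"    step: {name}")

if __name__ == "__main__":
    describe(PIPELINE_YAML)
```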

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Work On

1. Deep Learning & Computer Vision
Train models for image classification: binary/multi-class using CNNs, EfficientNet, or custom backbones (a minimal classifier-and-export sketch follows this listing).
Implement object detection using YOLOv5, Faster R-CNN, SSD; tune NMS and anchor boxes for medical contexts.
Work with semantic segmentation models (UNet, DeepLabV3+) for region-level diagnostics (e.g., cell, lesion, or nucleus boundaries).
Apply instance segmentation (e.g., Mask R-CNN) for microscopy image cell separation.
Use super-resolution and denoising networks (SRCNN, Real-ESRGAN) to enhance low-quality inputs.
Develop temporal comparison pipelines for changes across image sequences (e.g., disease progression).
Leverage data augmentation libraries (Albumentations, imgaug) for low-data domains.

2. Vision-Language Models (VLMs)
Fine-tune CLIP, BLIP, LLaVA, GPT-4V to generate explanations, labels, or descriptions from images.
Build image captioning models (Show-Attend-Tell, Transformer-based) using paired datasets.
Train or use VQA pipelines for image-question-answer triples.
Align text and image embeddings with contrastive loss (InfoNCE), cosine similarity, or projection heads.
Design prompt-based pipelines for zero-shot visual understanding.
Evaluate using metrics like BLEU, CIDEr, SPICE, Recall@K, etc.

3. Model Training, Evaluation & Interpretation
Use PyTorch (core), with support from HuggingFace, torchvision, timm, Lightning.
Track model performance with TensorBoard, Weights & Biases, MLflow.
Implement cross-validation, early stopping, LR schedulers, warm restarts.
Visualize model internals using GradCAM, SHAP, attention rollout, etc.
Evaluate metrics:
Classification: Accuracy, ROC-AUC, F1
Segmentation: IoU, Dice Coefficient
Detection: mAP
Captioning/VQA: BLEU, METEOR

4. Optimization & Deployment
Convert models to ONNX, TorchScript, or TFLite for portable inference.
Apply quantization-aware training, post-training quantization, and pruning.
Optimize for low-power inference using TensorRT or OpenVINO.
Build multi-threaded or asynchronous pipelines for batched inference.

5. Edge & Real-Time Systems
Deploy models on Jetson Nano/Xavier, Coral TPU.
Handle real-time camera inputs using OpenCV, GStreamer and apply streaming inference.
Handle multiple camera/image feeds for simultaneous diagnostics.

6. Regulatory-Ready AI Development
Maintain model lineage, performance logs, and validation trails for 21 CFR Part 11 and ISO 13485 readiness.
Contribute to validation reports, IQ/OQ/PQ, and reproducibility documentation.
Write SOPs and datasheets to support clinical validation of AI components.

7. DevOps, CI/CD & MLOps
Use Azure Boards + DevOps Pipelines (YAML) to track sprints, assign tasks, and maintain epics & user stories.
Trigger auto-validation pipelines (lint, unit tests, inference validation) on code push.
Integrate MLflow or custom logs for model lifecycle tracking.
Use GitHub Actions for cross-platform model validation across environments.

8. Bonus Skills (Preferred but Not Mandatory)
Experience in microscopy or pathology data (TIFF, NDPI, DICOM formats).
Knowledge of OCR + CV hybrid pipelines for slide/dataset annotation.
Experience with Streamlit, Gradio, or Flask for AI UX prototyping.
Understanding of active learning or semi-supervised learning in low-label settings.
Exposure to research publishing, IP filing, or open-source contributions.

9. Required Background
4–6 years in applied deep learning (post academia)
Strong foundation in: Python + PyTorch; CV workflows (classification, detection, segmentation); Transformer architectures & attention; VLMs or multimodal learning
Bachelor's or Master's degree in CS, AI, EE, Biomedical Engineering, or a related field

10. How to Apply
Send the following to info@sciverse.co.in
Subject: Application – AI Research Engineer (4–8 Yrs, CV + VLM)
Include:
Your updated CV
GitHub / Portfolio
A short write-up on a model or pipeline you built and why you're proud of it
Or apply directly via LinkedIn, but email applications get faster visibility.
Let's build AI that sees, understands, and impacts lives.
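Several items above (CNN-based classification in PyTorch and conversion to ONNX for portable inference) condense into a minimal sketch. The architecture, input size, batch, and class count are arbitrary illustrations, not the medical-imaging models described in the posting.

```python
# Minimal sketch: a tiny CNN classifier in PyTorch, one dummy training step,
# and export to ONNX for portable inference. Shapes and class count are arbitrary.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = TinyCNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    images = torch.randn(4, 3, 224, 224)   # dummy batch
    labels = torch.randint(0, 2, (4,))     # dummy binary labels
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    model.eval()
    torch.onnx.export(model, torch.randn(1, 3, 224, 224), "tiny_cnn.onnx",
                      input_names=["image"], output_names=["logits"])
    print(f"one training step done, loss={loss.item():.4f}; exported tiny_cnn.onnx")
```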

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Red Hat Customer Experience and Engagement (CEE) team is looking for an experienced engineer to join our Solutions Support team in Pune. In this role, you will become an expert in Red Hat's offerings and technologies, like Red Hat OpenShift, Red Hat Enterprise Linux (RHEL), and Red Hat Ansible Automation Platform. You'll provide skilful, direct technical support to a very small subset of our customers. You'll work closely with your customer, Red Hat's Global Support team, Critical Accounts team, and Engineering teams. You will interact with some of Red Hat's most strategic, critical, and innovative customers. You will be a part of Red Hat's unique culture that is enriched with Open Practices in management, decision-making, DEI, and associate growth. Red Hat consistently ranks as one of the best workplaces in technology due to our culture and our focus on associate growth, work/life balance, and associate opportunity. You'll be able to bring innovative solutions to complex problems. You will also have the opportunity to be a part of several Red Hat Recognition programs to connect, recognize, and celebrate success. As a Red Hat engineer, you can collaborate with international teams to improve open-source software.

Provide high-level technical support to your customers through web-based support and phone support.
Work with Red Hat enterprise customers across the globe on a 24x7 basis, which requires you to work in different shifts periodically.
Meet with your customers regularly to ensure that Red Hat is aligned with the customers' support priorities.
Collaborate with other Red Hat teams engaged with your customers.
Perform technical diagnostics and troubleshoot customer technical issues to develop solutions.
Exceed customer expectations with outstanding communication and customer service.
Consult with and develop relationships with Red Hat engineering teams to guide solutions and improve customer satisfaction.
Share your knowledge by contributing to the global Red Hat Knowledge Management System; present troubleshooting instructions and solutions to other engineers within Red Hat.

5+ years of relevant experience.
Ability to communicate clearly and effectively with your customer across technical and non-technical communications.
Excellent troubleshooting and debugging skills.
A passion for technical investigation and issue resolution.
Linux system administration experience, including system installation, configuration, and maintenance.
Basic knowledge of Linux containers.
Experience with container orchestration (Kubernetes) and cloud services such as AWS, Azure, and GCP; knowledge of Ansible and YAML (a small playbook-as-YAML sketch follows this listing); Linux scripting experience; understanding of typical change windows/change controls.
Prior Red Hat Certified Engineer (RHCE) or other Linux certifications; a successful associate in this role is expected to be able to pass the RHCE certification within 90 days.

Red Hat is the world's leading provider of enterprise open-source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Red Hat's culture is built on the open-source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. We empower people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We welcome and encourage applicants from all the beautiful dimensions that compose our global village.
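As a small illustration of the "Ansible and YAML" plus Linux-scripting requirements, the sketch below assembles a playbook as Python data and serialises it with PyYAML. The inventory group, package, and service names are hypothetical; `ansible.builtin.package` and `ansible.builtin.service` are real Ansible modules, but this is a sketch of the YAML structure, not an endorsed operational playbook.

```python
# Minimal sketch: build a small Ansible playbook as Python data structures and
# serialise it to YAML. Inventory group, package, and service names are hypothetical.
import yaml  # PyYAML

playbook = [
    {
        "name": "Ensure chrony is installed and running",
        "hosts": "rhel_servers",  # hypothetical inventory group
        "become": True,
        "tasks": [
            {
                "name": "Install chrony",
                "ansible.builtin.package": {"name": "chrony", "state": "present"},
            },
            {
                "name": "Enable and start chronyd",
                "ansible.builtin.service": {
                    "name": "chronyd", "state": "started", "enabled": True,
                },
            },
        ],
    }
]

rendered = yaml.safe_dump(playbook, sort_keys=False)
with open("site.yml", "w", encoding="utf-8") as fh:
    fh.write(rendered)
print(rendered)  # the YAML that `ansible-playbook site.yml` would consume
```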

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies